Sample records for parallel deterministic neutronics

  1. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.
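
    The two-level decomposition described above, over spatial domains and energy groups, might be sketched as follows. This is an illustrative model only, not AMTRAN's actual scheme; all names are hypothetical.

```python
def partition(n_domains, n_groups, n_ranks):
    """Assign each (spatial domain, energy group) work unit to a rank,
    round-robin, so each processor gets a nearly equal share."""
    assignment = {}
    for d in range(n_domains):
        for g in range(n_groups):
            assignment[(d, g)] = (d * n_groups + g) % n_ranks
    return assignment

# 8 spatial domains (as assumed by the load balancer), 16 energy groups,
# 32 processors: 128 work units, 4 per rank.
assignment = partition(8, 16, 32)
loads = [sum(1 for r in assignment.values() if r == k) for k in range(32)]
print(min(loads), max(loads))  # 4 4: perfectly balanced
```

    With 8 domains fixed, balance depends on the group count dividing evenly across ranks, which is one reason the load-balancing assumption matters.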

  2. Configurable Crossbar Switch for Deterministic, Low-latency Inter-blade Communications in a MicroTCA Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karamooz, Saeed; Breeding, John Eric; Justice, T Alan

    As MicroTCA expands into applications beyond the telecommunications industry from which it originated, it faces new challenges in the area of inter-blade communications. The ability to achieve deterministic, low-latency communications between blades is critical to realizing a scalable architecture. In the past, legacy bus architectures accomplished inter-blade communications using dedicated parallel buses across the backplane. Because of limited fabric resources on its backplane, MicroTCA uses the carrier hub (MCH) for this purpose. Unfortunately, MCH products from commercial vendors are limited to standard bus protocols such as PCI Express, Serial RapidIO and 10/40GbE. While these protocols have exceptional throughput capability, they are neither deterministic nor necessarily low-latency. To overcome this limitation, an MCH has been developed based on the Xilinx Virtex-7 690T FPGA. This MCH provides the system architect/developer complete flexibility in both the interface protocol and the routing of information between blades. In this paper, we present the application of this configurable MCH concept to the Machine Protection System under development for the Spallation Neutron Source's proton accelerator. Specifically, we demonstrate the use of the configurable MCH as a 12x4-lane crossbar switch using the Aurora protocol to achieve a deterministic, low-latency data link. In this configuration, the crossbar has an aggregate bandwidth of 48 GB/s.

  3. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
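
    The fencing semantics in this patent abstract can be illustrated with a toy model: transfers between two endpoints execute in order over a deterministic channel, so a FENCE needs no per-transfer accounting; draining everything queued before it guarantees all earlier DMA instructions have completed. This is a generic sketch, not the PAMI implementation; names are hypothetical.

```python
from collections import deque

class Endpoint:
    """Toy model of ordered, deterministic delivery between two endpoints."""
    def __init__(self):
        self.pending = deque()   # instructions execute strictly in order
        self.completed = []

    def dma_put(self, payload):
        self.pending.append(("DMA", payload))

    def fence(self):
        # No accounting of individual DMA transfers is kept: ordering
        # alone guarantees everything before the FENCE finishes first.
        self.pending.append(("FENCE", None))
        while self.pending:
            kind, payload = self.pending.popleft()
            if kind == "FENCE":
                return list(self.completed)
            self.completed.append(payload)

ep = Endpoint()
for i in range(3):
    ep.dma_put(i)
print(ep.fence())  # [0, 1, 2]: all prior DMAs completed before the fence
```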

  4. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  5. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and 9 ± 5 × 10⁴ relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  6. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and 9 ± 5 × 10⁴ relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  7. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis

    Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers, and the general conclusion is that, when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
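
    The group-collapse mechanism behind self-shielding can be sketched numerically: the multigroup value is the flux-weighted average σ_g = ∫σ(E)φ(E)dE / ∫φ(E)dE, and depressing the flux at resonance energies lowers that average. A toy one-group collapse with an artificial narrow resonance (illustrative numbers, not real nuclear data):

```python
import math

# Uniform energy grid from 1.0 to 2.0 (arbitrary units) with a narrow
# Gaussian "resonance" peaking at 100x the background cross section.
E = [1.0 + i * 0.0005 for i in range(2001)]
sigma = [1.0 + 100.0 * math.exp(-((e - 1.5) / 0.01) ** 2) for e in E]

phi_dilute = [1.0] * len(E)             # flat flux: infinitely dilute
phi_shield = [1.0 / s for s in sigma]   # narrow-resonance flux depression

def collapse(sig, phi):
    """Flux-weighted group average on a uniform grid."""
    return sum(s * p for s, p in zip(sig, phi)) / sum(phi)

dilute = collapse(sigma, phi_dilute)
shielded = collapse(sigma, phi_shield)
print(shielded < dilute)  # True: self-shielding lowers the group average
```

    The same effect, applied group by group with realistic resonances, is what multigroup libraries must capture and continuous-energy Monte Carlo avoids entirely.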

  8. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. 
Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude of the two methods agreed. For the volume-averaged photon flux within the detector, typically the five most intense lines agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered. 
The agreement in the code-to-code comparison cases demonstrates a proof-of-concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25 to 500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.

  9. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  10. Analysis of dosimetry from the H.B. Robinson unit 2 pressure vessel benchmark using RAPTOR-M3G and ALPAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, G.A.

    2011-07-01

    Document available in abstract form only. The dosimetry from the H. B. Robinson Unit 2 Pressure Vessel Benchmark is analyzed with a suite of Westinghouse-developed codes and data libraries. The radiation transport from the reactor core to the surveillance capsule and ex-vessel locations is performed by RAPTOR-M3G, a parallel deterministic radiation transport code that calculates high-resolution neutron flux information in three dimensions. The cross-section library used in this analysis is the ALPAN library, an Evaluated Nuclear Data File (ENDF)/B-VII.0-based library designed for reactor dosimetry and fluence analysis applications. Dosimetry is evaluated with the industry-standard SNLRML reactor dosimetry cross-section data library. (authors)

  11. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability and complex implementations and that lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
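
    The core of a balanced regional parallelization can be sketched as deterministic chunking: region boundaries are fixed up front, so every rerun produces identical work units and every worker gets an equal share. This is a generic illustration, not Churchill's actual algorithm; chromosome lengths and region size are made up.

```python
def make_regions(chrom_lengths, region_size):
    """Split each chromosome into fixed-size half-open regions.
    Deterministic: same inputs always yield the same work units."""
    regions = []
    for chrom, length in sorted(chrom_lengths.items()):
        start = 0
        while start < length:
            end = min(start + region_size, length)
            regions.append((chrom, start, end))
            start = end
    return regions

# Two toy chromosomes, 1 Mb regions: 3 + 2 = 5 work units that could be
# processed independently and merged in a fixed order.
regions = make_regions({"chr1": 2_500_000, "chr2": 1_200_000}, 1_000_000)
print(len(regions))  # 5
```

    Because the regions, not the reads, define the unit of work, results do not depend on how many workers run or in what order they finish.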

  12. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    PubMed

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture its behavior at a molecular level. Our research focuses on the development of a high performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
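
    The molecule-level stochastic treatment contrasted with deterministic models can be illustrated with Gillespie's direct method for a single first-order removal reaction (one calcium ion leaving the compartment at a time). This is a generic textbook sketch, not the NTW/NEURON implementation.

```python
import random

def gillespie_decay(n0, k, t_end, rng):
    """Direct-method SSA for a first-order removal reaction:
    each of n molecules is removed at rate k, one discrete event at a time."""
    t, n = 0.0, n0
    history = [(0.0, n0)]
    while n > 0 and t < t_end:
        propensity = k * n                 # total event rate
        t += rng.expovariate(propensity)   # exponential waiting time
        if t >= t_end:
            break
        n -= 1                             # fire one removal event
        history.append((t, n))
    return history

# 100 molecules, removal rate 0.1 per molecule, seeded for reproducibility
hist = gillespie_decay(n0=100, k=0.1, t_end=50.0, rng=random.Random(0))
counts = [n for _, n in hist]
print(counts[0], counts[-1])
```

    Unlike the deterministic exponential decay curve, each run is a jagged staircase of single-molecule events; in a tiny compartment those jumps are exactly the fluctuations a deterministic model averages away.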

  13. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of the graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
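
    The deterministic step in such a coupled scheme can be sketched as a one-group, 1-D finite-difference diffusion k-eigenvalue solve by power iteration, where the constants (D, Σa, νΣf) would in practice come from the prior Monte Carlo run. The numbers below are illustrative, not BN-600 data.

```python
def keff_slab(D, sig_a, nu_sig_f, width, n=50, outers=200, inners=60):
    """One-group 1-D diffusion k-eigenvalue, zero-flux boundaries,
    solved by power iteration with Gauss-Seidel inner sweeps."""
    h = width / n
    flux = [1.0] * n
    k = 1.0
    for _ in range(outers):
        src = [nu_sig_f * p / k for p in flux]   # fission source / k
        new = flux[:]
        for _ in range(inners):
            for i in range(n):
                left = new[i - 1] if i > 0 else 0.0
                right = new[i + 1] if i < n - 1 else 0.0
                # -D * d2(phi)/dx2 + sig_a * phi = src, discretized
                new[i] = (D * (left + right) / h ** 2 + src[i]) / (
                    2 * D / h ** 2 + sig_a)
        k *= sum(new) / sum(flux)  # fission-source ratio (uniform nu_sig_f)
        flux = new
    return k

# Illustrative one-group constants; analytically k ~ nu_sig_f / (sig_a + D*B^2)
k = keff_slab(D=1.0, sig_a=0.5, nu_sig_f=0.6, width=100.0)
print(k)
```

    For a 100 cm slab these constants give k just under 1.2, close to the buckling estimate, which is the kind of quick consistency check used when comparing diffusion results against a Monte Carlo reference.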

  14. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  15. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-08-11

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  16. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A; Mamidala, Amith R

    2014-02-11

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  17. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-30

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  18. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940's, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. 
These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh"), and V is the shape parameter in the Weibull model; (4) for X < or = Xo, where Xo is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected cases of death via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted using ln(2)·B·M[D]; where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded); and M[D] is the average (mean) value of D (averaged over the worker population at risk, for Site A, for the time period considered). These results can be used to facilitate a Phase II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
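
    The relations above can be sketched with a common form of the Weibull model consistent with the abstract's linearization: D = X^V above the threshold X0, risk = 1 - 2^(-D) (so risk is 1/2 at one LD50, X = 1), and risk ≈ ln(2)·D when D is small, which is what makes ln(2)·B·M[D] the expected number of deaths. Parameter values here are illustrative, not fitted Mayak values.

```python
import math

def hematopoietic_risk(X, V, X0):
    """Weibull normalized-dose model (common form, assumed here):
    D = X**V above threshold X0, exact risk = 1 - 2**(-D),
    low-risk linearization = ln(2) * D."""
    D = X ** V if X > X0 else 0.0
    exact = 1.0 - 2.0 ** (-D)
    linear = math.log(2.0) * D
    return D, exact, linear

# A worker at 0.4 of an LD50 with shape V = 6 and threshold X0 = 0.3:
D, exact, linear = hematopoietic_risk(X=0.4, V=6.0, X0=0.3)
print(0.0 < exact < linear)  # True: 1 - 2**(-D) < ln(2)*D for D > 0
```

    Because D is linear in risk at low doses, it can be averaged over a worker population, unlike absorbed dose fed through the nonlinear curve; that is exactly the property conclusion (5) relies on.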

  19. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE PAGES

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...

    2017-02-28

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results by the stochastic and deterministic approaches were compared by each party to investigate impacts of the deterministic approximation and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was estimated as the major cause of the discrepancy between the multiplication factors by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the µ-average of Na-23 are the major contributors to the difference in the multiplication factors.

  20. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group, within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by the two institutions show good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results of the stochastic and deterministic approaches were compared by each party to investigate the impact of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross section generation was estimated to be the major cause of the discrepancy between the multiplication factors obtained by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. The differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the µ-average of Na-23 are the major contributors to the difference in the multiplication factors.

  1. Figuring of plano-elliptical neutron focusing mirror by local wet etching.

    PubMed

    Yamamura, Kazuya; Nagano, Mikinori; Takai, Hiroyuki; Zettsu, Nobuyuki; Yamazaki, Dai; Maruyama, Ryuji; Soyama, Kazuhiko; Shimada, Shoichi

    2009-04-13

    A local wet etching technique was proposed to fabricate high-performance aspherical mirrors. In this process, only the limited area of the objective surface facing the small nozzle is removed by etching. The desired objective shape is fabricated deterministically by numerically controlled scanning of the nozzle head. Using this technique, a plano-elliptical mirror to focus a neutron beam was successfully fabricated, with a figure accuracy of better than 0.5 µm and a focusing gain of 6. The intense, finely focused neutron beam is expected to be a useful tool for the analysis of various material properties.

  2. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  3. 2011.2 Revision of the Evaluated Nuclear Data Library (ENDL2011.2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beck, B.; Descalles, M. A.; Mattoon, C.

    LLNL's Computational Nuclear Physics Group and Nuclear Theory and Modeling Group have collaborated to create the 2011.2 revised release of the Evaluated Nuclear Data Library (ENDL2011.2). ENDL2011.2 is designed to support LLNL's current and future nuclear data needs and will be employed in nuclear reactor, nuclear security, and stockpile stewardship simulations with ASC codes. This database is currently the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. This library was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE Office of Science's US Nuclear Data Program. This document lists the revisions made in ENDL2011.2 compared with the data existing in the original ENDL2011.0 release and the ENDL2011.1-rc4 release candidate of April 2015. These changes are made in parallel with some similar revisions for ENDL2009.2.

  4. Experimental validation of a coupled neutron-photon inverse radiation transport solver

    NASA Astrophysics Data System (ADS)

    Mattingly, John; Mitchell, Dean J.; Harding, Lee T.

    2011-10-01

    Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
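    The regression idea behind such an inverse solver can be illustrated with a deliberately simple stand-in forward model. The exponential-attenuation model and function name below are illustrative assumptions for this listing only, not Sandia's coupled neutron-photon transport model: fit I = I0·exp(-mu·x) to measured intensities by log-linear least squares.

```python
import math

def fit_attenuation(thicknesses, intensities):
    """Log-linear least-squares fit of the toy model I = I0 * exp(-mu * x):
    regress ln(I) on x; mu is minus the slope and I0 = exp(intercept)."""
    xs = thicknesses
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope  # (I0, mu)
```

The real solver replaces this two-parameter model with a deterministic transport calculation and fits source/geometry parameters to gamma spectra and neutron multiplicity data, but the fit-model-to-measurement structure is the same.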

  5. FY16 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2016-09-30

    The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools, such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.

  6. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
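    The core idea of CADIS is to set the Monte Carlo weight-window targets inversely proportional to a deterministically computed adjoint (importance) flux, so that particle weight times importance is roughly constant and each history contributes comparably to the tally. A minimal sketch under that assumption (the function name is illustrative, not an ADVANTG API):

```python
def cadis_target_weights(adjoint_flux, response_estimate):
    """CADIS-style weight targets: w_i proportional to R / phi_adjoint_i,
    so that weight * importance is constant across cells.  Particles in
    important regions (large adjoint flux) are split to low weights."""
    return [response_estimate / phi for phi in adjoint_flux]
```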

  7. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  8. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
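    The exponential-growth criterion described above can be seen in a few lines of code. This is a deliberately simplified generation-by-generation model with an illustrative function name, not the fissioncore program itself (which works from cross-sections, core radius, and internuclear distances rather than a prescribed multiplication factor):

```python
def neutron_population(k_eff, n0=1.0, generations=25):
    """Neutron count per fission generation for a fixed effective
    multiplication factor k_eff: supercritical populations (k_eff > 1)
    grow exponentially, subcritical ones (k_eff < 1) die away."""
    history = [n0]
    for _ in range(generations):
        history.append(history[-1] * k_eff)
    return history
```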

  9. Parallel Tracks as Quasi-steady States for the Magnetic Boundary Layers in Neutron-star Low-mass X-Ray Binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkut, M. Hakan; Çatmabacak, Onur, E-mail: mherkut@gmail.com

    The neutron stars in low-mass X-ray binaries (LMXBs) are usually thought to be weakly magnetized objects accreting matter from their low-mass companions in the form of a disk. Albeit weak compared to those in young neutron-star systems, the neutron-star magnetospheres in LMXBs can play an important role in determining the correlations between spectral and temporal properties. Parallel tracks appearing in the kilohertz (kHz) quasi-periodic oscillation (QPO) frequency versus X-ray flux plane can be used as a tool to study the magnetosphere–disk interaction in neutron-star LMXBs. For dynamically important weak fields, the formation of a non-Keplerian magnetic boundary layer at the innermost disk truncated near the surface of the neutron star is highly likely. Such a boundary region may harbor oscillatory modes of frequencies in the kHz range. We generate parallel tracks using the boundary region model of kHz QPOs. We also present the direct application of our model to the reproduction of the observed parallel tracks of individual sources such as 4U 1608–52, 4U 1636–53, and Aql X-1. We reveal how the radial width of the boundary layer must vary in the long-term flux evolution of each source to regenerate the parallel tracks. The run of the radial width looks similar for different sources and can be fitted by a generic model function describing the average steady behavior of the boundary region over the long term. The parallel tracks then correspond to the possible quasi-steady states the source can occupy around the average trend.

  10. Parallel Tracks as Quasi-steady States for the Magnetic Boundary Layers in Neutron-star Low-mass X-Ray Binaries

    NASA Astrophysics Data System (ADS)

    Erkut, M. Hakan; Çatmabacak, Onur

    2017-11-01

    The neutron stars in low-mass X-ray binaries (LMXBs) are usually thought to be weakly magnetized objects accreting matter from their low-mass companions in the form of a disk. Albeit weak compared to those in young neutron-star systems, the neutron-star magnetospheres in LMXBs can play an important role in determining the correlations between spectral and temporal properties. Parallel tracks appearing in the kilohertz (kHz) quasi-periodic oscillation (QPO) frequency versus X-ray flux plane can be used as a tool to study the magnetosphere-disk interaction in neutron-star LMXBs. For dynamically important weak fields, the formation of a non-Keplerian magnetic boundary layer at the innermost disk truncated near the surface of the neutron star is highly likely. Such a boundary region may harbor oscillatory modes of frequencies in the kHz range. We generate parallel tracks using the boundary region model of kHz QPOs. We also present the direct application of our model to the reproduction of the observed parallel tracks of individual sources such as 4U 1608-52, 4U 1636-53, and Aql X-1. We reveal how the radial width of the boundary layer must vary in the long-term flux evolution of each source to regenerate the parallel tracks. The run of the radial width looks similar for different sources and can be fitted by a generic model function describing the average steady behavior of the boundary region over the long term. The parallel tracks then correspond to the possible quasi-steady states the source can occupy around the average trend.

  11. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchini, G.; Burgio, N.; Carta, M.

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment, a modified layout of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified layout of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation, by deterministic (the French code ERANOS) and Monte Carlo (the US code MCNPX) calculations, of three reactivity measurement techniques, Slope (α-fitting), Area-ratio, and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The reactivity inferred, in dollar units, by the Area-ratio method shows an overall agreement between the deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
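    For the Area-ratio (Sjöstrand) method mentioned above, the subcritical reactivity in dollar units is minus the ratio of the prompt area to the delayed area of the detector response to a neutron pulse. A minimal sketch, with illustrative function names and the simplifying assumption that the delayed contribution is a flat background level:

```python
def area_ratio_reactivity_dollars(prompt_area, delayed_area):
    """Sjostrand area method: rho($) = -A_prompt / A_delayed."""
    return -prompt_area / delayed_area

def split_areas(counts, dt, delayed_level):
    """Split a time-binned pulse response into prompt and delayed areas,
    treating the delayed contribution as a flat background of height
    delayed_level over the whole counting window."""
    delayed_area = delayed_level * dt * len(counts)
    prompt_area = sum(c * dt for c in counts) - delayed_area
    return prompt_area, delayed_area
```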

  12. A Tutorial on Parallel and Concurrent Programming in Haskell

    NASA Astrophysics Data System (ADS)

    Peyton Jones, Simon; Singh, Satnam

    This practical tutorial introduces the features available in Haskell for writing parallel and concurrent programs. We first describe how to write semi-explicit parallel programs by using annotations to express opportunities for parallelism and to help control the granularity of parallelism for effective execution on modern operating systems and processors. We then describe the mechanisms provided by Haskell for writing explicitly parallel programs, with a focus on the use of software transactional memory to help share information between threads. Finally, we show how nested data parallelism can be used to write deterministically parallel programs, allowing programmers to use rich data types in data-parallel programs that are automatically transformed into flat data-parallel versions for efficient execution on multi-core processors.

  13. Architecture studies and system demonstrations for optical parallel processor for AI and NI

    NASA Astrophysics Data System (ADS)

    Lee, Sing H.

    1988-03-01

    In solving deterministic AI problems, the data search for matching the arguments of a PROLOG expression causes a serious bottleneck when implemented sequentially by electronic systems. To overcome this bottleneck we have developed the concepts for an optical expert system based on a matrix-algebraic formulation, which will be suitable for parallel optical implementation. The optical AI system based on the matrix-algebraic formulation will offer distinct advantages for parallel search, adult learning, etc.

  14. EBR-II Static Neutronic Calculations by PHISICS / MCNP6 codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paolo Balestra; Carlo Parisi; Andrea Alfonsi

    2016-02-01

    The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) on the Shutdown Heat Removal Tests (SHRT) performed in the 1980s at the Experimental Breeder Reactor II (EBR-II), USA. The scope of the CRP is to improve and validate simulation tools for the study and design of liquid-metal-cooled fast reactors. Moreover, training the next generation of fast reactor analysts is considered the other goal of the CRP. In this framework, a static neutronic model was developed using state-of-the-art neutron transport codes, SCALE/PHISICS (deterministic solution) and MCNP6 (stochastic solution). A comparison of the two solutions is briefly illustrated in this summary.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components, a rapidly varying amplitude equation and a slowly varying shape equation, and each is solved separately on a different time scale. The shape equation is solved using the 3D Monte Carlo transport code KENO from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so the method gives an accurate time-dependent solution without repeatedly solving the expensive transport problem. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components. One component is a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps). The other is a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
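    The two-timescale structure of the Improved Quasi-Static method can be sketched with one-delayed-group point kinetics standing in for the amplitude equation and a placeholder where the infrequent Monte Carlo shape solve would go. All parameter values below are generic illustrative numbers, not TREAT data:

```python
def amplitude_step(n, C, rho, beta=0.0065, lam=0.08, Lambda=1e-5, dt=1e-6):
    """One explicit Euler step of the one-delayed-group point-kinetics
    (amplitude) equations: the fast time scale of IQS."""
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    return n + dt * dn, C + dt * dC

def iqs_loop(rho, t_end=0.01, dt=1e-6, shape_every=1000):
    """Drive the amplitude with small steps; only every `shape_every`
    steps would the slowly varying shape be re-solved (KENO in TDKENO)."""
    n = 1.0
    C = 0.0065 / (0.08 * 1e-5)   # precursor equilibrium for n = 1
    for i in range(int(t_end / dt)):
        if i % shape_every == 0:
            pass  # placeholder: infrequent, expensive shape solve here
        n, C = amplitude_step(n, C, rho)
    return n
```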

  16. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I will discuss the implementation and performance of parallel genetic algorithms in SISAL.
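    The mutation, crossover, and fitness-evaluation steps named above can be sketched as a minimal generational GA. This sketch is in Python rather than SISAL, purely for illustration; all names and parameter values are chosen here, not taken from the paper:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40,
           crossover_rate=0.7, mutation_rate=0.01, seed=42):
    """Minimal generational GA over bit-string genomes: tournament
    selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return max(a, b, key=fitness)

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, genome_len)       # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [bit ^ (rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# One-max problem: fitness is simply the number of ones in the genome.
best = evolve(fitness=sum)
```

In a functional, inherently parallel language like SISAL, the fitness evaluations across the population (the inner loop here) are exactly the part that parallelizes naturally.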

  17. Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions

    DOE PAGES

    Talamo, A.; Gohar, Y.; Gabrielli, F.; ...

    2017-02-01

    This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against the experimental results from the KUCA facility of Japan by Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. To conclude, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained up to nanosecond precision using MCNP6, which allows obtaining the Rossi and Feynman distributions.
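    The Feynman distribution mentioned above is built from exactly such timestamped detector counts. A minimal sketch of the Feynman variance-to-mean statistic (the function name is illustrative): bin the timestamps into gates of a fixed width and compute Y = var/mean - 1, which vanishes for an uncorrelated (Poisson) source and grows with correlated fission chains.

```python
def feynman_y(timestamps, gate_width, t_total):
    """Feynman variance-to-mean statistic Y(gate_width) from a list of
    detector timestamps: bin counts into consecutive gates, then
    Y = variance/mean - 1 (zero for a Poisson source)."""
    n_gates = int(t_total / gate_width)
    counts = [0] * n_gates
    for ts in timestamps:
        i = int(ts / gate_width)
        if i < n_gates:
            counts[i] += 1
    mean = sum(counts) / n_gates
    var = sum((c - mean) ** 2 for c in counts) / n_gates
    return var / mean - 1.0
```

In practice Y is evaluated over a range of gate widths and fitted to extract the prompt decay constant; this sketch shows only the counting statistic itself.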

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, D.; Fabbris, O.

    Two different experiments performed in the 8 MWth MELUSINE experimental pool-type reactor aimed at analyzing 1 GWd/t spent fuel pellets doped with several actinides. The goal was to measure the averaged neutron-induced capture cross sections in two very different neutron spectra (a PWR-like one and an under-moderated one). This paper summarizes the combined deterministic (APOLLO2) and stochastic (TRIPOLI4) analysis using the JEFF-3.1.1 European nuclear data library. Very good agreement is observed for most neutron-induced capture cross sections of actinides, while a clear underestimation of the 241Am(n,γ) cross section and an accurate validation of its associated isomeric ratio are emphasized. Finally, a possible large resonant fluctuation (a factor of 2.7 for the l=0 resonance total orbital momenta) is suggested for the isomeric ratio. (authors)

  19. Development of Cross Section Library and Application Programming Interface (API)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Marin-Lafleche, A.; Smith, M. A.

    2014-04-09

    The goal of NEAMS neutronics is to develop a high-fidelity deterministic neutron transport code termed PROTEUS for use on all reactor types of interest, but focused primarily on sodium-cooled fast reactors. While PROTEUS-SN has demonstrated good accuracy for homogeneous and partially heterogeneous fast reactor problems, the simulation results were not satisfactory when applied to fully heterogeneous thermal problems like the Advanced Test Reactor (ATR). This is mainly attributed to the quality of cross section data for heterogeneous geometries, since the conventional cross section generation approach does not work accurately for such irregular and complex geometries. Therefore, one of the NEAMS neutronics tasks since FY12 has been the development of a procedure to generate appropriate cross sections for a heterogeneous-geometry core.

  20. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

    A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm3), aimed at testing and validating recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra, and doses were measured using different experimental techniques (e.g. activation foil techniques, an NE213 scintillator, and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges which most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at low rather than at high energy. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  1. Execution of parallel algorithms on a heterogeneous multicomputer

    NASA Astrophysics Data System (ADS)

    Isenstein, Barry S.; Greene, Jonathon

    1995-04-01

    Many aerospace/defense sensing and dual-use applications require high-performance computing, extensive high-bandwidth interconnect and realtime deterministic operation. This paper will describe the architecture of a scalable multicomputer that includes DSP and RISC processors. A single chassis implementation is capable of delivering in excess of 10 GFLOPS of DSP processing power with 2 Gbytes/s of realtime sensor I/O. A software approach to implementing parallel algorithms called the Parallel Application System (PAS) is also presented. An example of applying PAS to a DSP application is shown.

  2. Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Johnson, Seth R.; Remec, Igor

    2015-01-01

    Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep-penetration shielding and an unshielded beamline, and a desire to obtain well-converged, nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight-window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance-function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.

  3. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unique from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption functions to terminate simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model preemption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations as these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems from multi-core desktops to a supercomputer system) and package these for future modellers within a model-independent calibration software package called Ostrich as well as MATLAB versions. 
Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multi-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
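
    The asynchronous evaluate-as-you-go loop with deterministic pre-emption described above can be sketched in a few dozen lines. The sketch below is illustrative only, not the Ostrich or MATLAB implementation: the quadratic "calibration" objective, the bounds, the worker count, and the use of threads (rather than separate model processes) are all invented for the example; candidate perturbation follows the published DDS rule of perturbing a randomly chosen, shrinking subset of dimensions.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def sse_with_preemption(x, best_so_far):
    """Toy sum-of-squared-errors over a 100-step 'calibration period'.
    Deterministic pre-emption: abort as soon as the running (monotonically
    increasing) error already exceeds the best total found so far."""
    total = 0.0
    for _ in range(100):
        total += sum((xi - 1.0) ** 2 for xi in x) / 100.0
        if total > best_so_far:          # provably worse: stop simulating
            return float("inf")
    return total

def dds_perturb(x_best, k, max_iter, lo, hi, rng):
    """One DDS candidate: perturb a shrinking random subset of dimensions."""
    p = 1.0 - math.log(k) / math.log(max_iter)   # inclusion probability
    dims = [i for i in range(len(x_best)) if rng.random() < p]
    if not dims:
        dims = [rng.randrange(len(x_best))]
    x = list(x_best)
    for i in dims:
        x[i] += rng.gauss(0.0, 0.2 * (hi - lo))  # standard DDS step size
        x[i] = min(max(x[i], lo), hi)            # clamp (reflection is usual)
    return x

def parallel_dds(n_dim=5, max_iter=200, n_workers=4, seed=1):
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    x_best = [rng.uniform(lo, hi) for _ in range(n_dim)]
    f_best = sse_with_preemption(x_best, float("inf"))
    k = 1
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        pending = set()
        while k <= max_iter or pending:
            # keep every worker busy -- no population barrier (asynchronous)
            while k <= max_iter and len(pending) < n_workers:
                cand = dds_perturb(x_best, k, max_iter, lo, hi, rng)
                pending.add(pool.submit(
                    lambda c=cand: (c, sse_with_preemption(c, f_best))))
                k += 1
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:             # accept results as they arrive
                cand, f = fut.result()
                if f < f_best:
                    x_best, f_best = cand, f
    return x_best, f_best
```

    Because results are consumed with `FIRST_COMPLETED`, a fast (or pre-empted) evaluation immediately frees its worker for the next candidate; the pre-emption threshold tightens as `f_best` improves.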

  4. Efficient Parallel Algorithms on Restartable Fail-Stop Processors

    DTIC Science & Technology

    1991-01-01

    [The abstract text is garbled in the source scan. Recoverable fragments indicate a model of parallel computation with restartable fail-stop processors, shared memory as a resource, fault-tolerance arguments in a deterministic setting for failure and restart errors, and a discussion of granularity at the processor level.]

  5. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Meili; Cobb, John W; Hagen, Mark E

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national-scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. By parallelizing traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at statistical levels sufficient for instrument design. Following successful SNS commissioning, three of the five commissioned instruments in the SNS target station will be available for initial users by the end of 2007. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate SNS schedule, which poses further requirements on fast instrument simulation, such as flexibility and high runtime efficiency. PSoNI has been redesigned to meet these new challenges, and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design and the improved software structure. Further, it describes the new features realized in MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion of future work, targeted at fast simulation for automated experiment adjustment and at comparing models to data in analysis, is also presented.

  6. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous-energy Monte Carlo neutron transport code developed at UC Berkeley to execute efficiently on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed-source mode, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous-energy Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly built with GPU coprocessor cards in their nodes to increase computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
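
    The task-parallel versus data-parallel distinction is easiest to see in code. The following CPU-side NumPy sketch applies each event (free flight, absorption, scatter) to an entire bank of neutrons at once, which is the data-parallel style a GPU code like WARP maps onto threads; the one-group cross sections and the infinite 1D medium are invented for illustration and are not WARP's physics data or geometry.

```python
import numpy as np

# Event-based (data-parallel) Monte Carlo step, illustrative only.
rng = np.random.default_rng(0)
n = 100_000
sigma_t, sigma_a = 1.0, 0.3           # made-up total / absorption XS (1/cm)
x = np.zeros(n)                       # 1D positions (cm)
mu = rng.uniform(-1.0, 1.0, n)        # direction cosines
alive = np.ones(n, dtype=bool)

for _ in range(50):                   # fixed number of collision steps
    idx = np.flatnonzero(alive)
    if idx.size == 0:
        break
    # sample free flight for ALL live particles in one vectorized operation
    d = -np.log(rng.random(idx.size)) / sigma_t
    x[idx] += mu[idx] * d
    absorbed = rng.random(idx.size) < sigma_a / sigma_t
    alive[idx[absorbed]] = False                 # absorption ends the history
    scat = idx[~absorbed]
    mu[scat] = rng.uniform(-1.0, 1.0, scat.size)  # isotropic re-scatter

print(f"surviving fraction: {alive.mean():.3e}")
```

    Each loop iteration performs the same operation on every live particle, so the work vectorizes; a history-based (task-parallel) code would instead follow one neutron at a time to completion.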

  7. Study of secondary neutron interactions with 232Th, 129I, and 127I nuclei with the uranium assembly “QUINTA” at 2, 4, and 8 GeV deuteron beams of the JINR Nuclotron accelerator

    DOE PAGES

    Adam, J.; Chilap, V. V.; Furman, V. I.; ...

    2015-11-04

    The natural uranium assembly, “QUINTA”, was irradiated with 2, 4, and 8 GeV deuterons. The 232Th, 127I, and 129I samples have been exposed to secondary neutrons produced in the assembly at a 20-cm radial distance from the deuteron beam axis. The spectra of gamma rays emitted by the activated 232Th, 127I, and 129I samples have been analyzed and several tens of product nuclei have been identified. For each of those products, neutron-induced reaction rates have been determined. The transmutation power for the 129I samples is estimated. Furthermore, experimental results were compared to those calculated with well-known stochastic and deterministic codes.
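
    Neutron-induced reaction rates in such activation measurements are conventionally recovered from measured gamma-peak areas via the standard activation relation (constant beam, single-isotope decay). The sketch below states that textbook relation with invented numbers; it is not the authors' analysis chain.

```python
import math

def reaction_rate(counts, eff, i_gamma, n_atoms, half_life,
                  t_irr, t_cool, t_meas):
    """Reaction rate per target atom from a measured gamma peak:
    R = C * lambda / (N * eff * I_gamma * saturation * decay * counting)."""
    lam = math.log(2.0) / half_life
    saturation = 1.0 - math.exp(-lam * t_irr)   # build-up during irradiation
    decay = math.exp(-lam * t_cool)             # decay before counting starts
    counting = 1.0 - math.exp(-lam * t_meas)    # decay during counting
    return counts * lam / (n_atoms * eff * i_gamma *
                           saturation * decay * counting)

# Round-trip check with invented numbers: forward-model the peak counts from
# an assumed rate, then recover the rate.
R_true = 1.0e-28                 # reactions per atom per second (invented)
n_atoms, eff, i_gamma = 1.0e22, 0.05, 0.8
half_life, t_irr, t_cool, t_meas = 3600.0, 7200.0, 600.0, 1800.0
lam = math.log(2.0) / half_life
counts = (R_true * n_atoms * eff * i_gamma / lam *
          (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool) *
          (1.0 - math.exp(-lam * t_meas)))
R_est = reaction_rate(counts, eff, i_gamma, n_atoms, half_life,
                      t_irr, t_cool, t_meas)
```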

  8. Retrospective dosimetry analyses of reactor vessel cladding samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwood, L. R.; Soderquist, C. Z.; Fero, A. H.

    2011-07-01

    Reactor pressure vessel cladding samples for Ringhals Units 3 and 4 in Sweden were analyzed using retrospective reactor dosimetry techniques. The objective was to provide the best estimates of the neutron fluence for comparison with neutron transport calculations. A total of 51 stainless steel samples consisting of chips weighing approximately 100 to 200 mg were removed from selected locations around the pressure vessel and were sent to Pacific Northwest National Laboratory for analysis. The samples were fully characterized and analyzed for radioactive isotopes, with special interest in the presence of Nb-93m. The RPV cladding retrospective dosimetry results will be combined with a re-evaluation of the surveillance capsule dosimetry and with ex-vessel neutron dosimetry results to form a comprehensive 3D comparison of measurements to calculations performed with a 3D deterministic transport code. (authors)

  9. The use of the SRIM code for calculation of radiation damage induced by neutrons

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi

    2017-12-01

    Materials subjected to neutron irradiation undergo structural changes driven by displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing this PKA information. The software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) as well as the kinematics of the interactions. A deterministic method was used to verify the results of the combined MCNPX+AMTRACK calculation. The SRIM (formerly TRIM) code cannot compute neutron radiation damage directly, but the PKA information extracted by AMTRACK can be used as input to SRIM for systematic analysis of primary radiation damage. The radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.
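
    Once PKA damage energies are known, the conventional way to turn them into displacement counts is the Norgett-Robinson-Torrens (NRT) model, the standard against which codes such as SRIM are usually compared. A minimal version follows; the 40 eV threshold for iron is the customary default, not a value taken from this paper.

```python
def nrt_displacements(t_dam_ev, e_d_ev=40.0):
    """Norgett-Robinson-Torrens (NRT) displacements per PKA.
    t_dam_ev: damage energy of the PKA (eV);
    e_d_ev:   displacement threshold energy (40 eV is conventional for iron)."""
    if t_dam_ev < e_d_ev:
        return 0.0                        # below threshold: no stable defect
    if t_dam_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                        # single Frenkel-pair regime
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)  # cascade regime
```

    For example, a 10 keV damage-energy PKA in iron yields 0.8 × 10000 / 80 = 100 displacements.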

  10. Current and anticipated uses of thermalhydraulic and neutronic codes at PSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksan, S.N.; Zimmermann, M.A.; Yadigaroglu, G.

    1997-07-01

    The thermalhydraulic and/or neutronic codes in use at PSI mainly provide the capability to perform deterministic safety analysis for Swiss NPPs and also serve as analysis tools for experimental facilities for LWR and ALWR simulations. In relation to these applications, physical model development and improvements, and assessment of the codes are also essential components of the activities. In this paper, a brief overview is provided of the thermalhydraulic and/or neutronic codes used for safety analysis of LWRs at PSI, and also of some experiences and applications with these codes. Based on these experiences, additional assessment needs are indicated, together with some model improvement needs. The future needs that could be used to specify both the development of a new code and the improvement of available codes are summarized.

  11. NEUTRONIC REACTORS

    DOEpatents

    Wigner, E.P.

    1960-11-22

    A nuclear reactor is described wherein horizontal rods of thermal- neutron-fissionable material are disposed in a body of heavy water and extend through and are supported by spaced parallel walls of graphite.

  12. Signatures of asymmetry in neutron spectra and images predicted by three-dimensional radiation hydrodynamics simulations of indirect drive implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chittenden, J. P., E-mail: j.chittenden@imperial.ac.uk; Appelbe, B. D.; Manke, F.

    2016-05-15

    We present the results of 3D simulations of indirect drive inertial confinement fusion capsules driven by the “high-foot” radiation pulse on the National Ignition Facility. The results are post-processed using a semi-deterministic ray tracing model to generate synthetic deuterium-tritium (DT) and deuterium-deuterium (DD) neutron spectra as well as primary and down-scattered neutron images. Results with low-mode asymmetries are used to estimate the magnitude of anisotropy in the neutron spectra shift, width, and shape. Comparisons of primary and down-scattered images highlight the lack of alignment between the neutron sources, scatter sites, and detector plane, which limits the ability to infer the ρr of the fuel from a down-scattered ratio. Further calculations use high-bandwidth multi-mode perturbations to induce multiple short-scale-length flows in the hotspot. The results indicate that the effect of fluid velocity is to produce a DT neutron spectrum with an apparently higher temperature than that inferred from the DD spectrum, which is also higher than the temperature implied by the DT-to-DD yield ratio.

  13. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu

    2016-12-01

    A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower-order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  14. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE PAGES

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...

    2016-08-25

    We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower-order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  15. Coupled Neutron Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.

    2009-01-01

    Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  16. Deterministic and stochastic methods of calculation of polarization characteristics of radiation in natural environment

    NASA Astrophysics Data System (ADS)

    Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.

    2017-11-01

    This paper reviews Russian achievements of world standing in the theory of radiation transfer with allowance for polarization in natural media, and the scientific capability now developing in Russia, which provides the methodological basis for theoretical and computational studies of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, in which deterministic and stochastic methods can be combined.

  17. Elliptical quantum dots as on-demand single photons sources with deterministic polarization states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, Chu-Hsiang; Demory, Brandon; Ku, Pei-Cheng, E-mail: peicheng@umich.edu

    In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
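
    The degree of linear polarization reported in such measurements is conventionally defined from the maximum and minimum intensities transmitted through a rotating linear analyzer; the snippet below just states that standard definition (the intensity values are invented for illustration).

```python
def degree_of_linear_polarization(i_max, i_min):
    """DOLP = (I_max - I_min) / (I_max + I_min); 1.0 for fully linear light,
    0.0 for unpolarized light."""
    return (i_max - i_min) / (i_max + i_min)

# e.g. QD emission analyzed along vs. across the ellipse long axis
# (invented counts):
dolp = degree_of_linear_polarization(9.0, 1.0)
```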

  18. Some Notes on Neutron Up-Scattering and the Doppler-Broadening of High-Z Scattering Resonances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Donald Kent

    When neutrons are scattered by target nuclei at elevated temperatures, it is entirely possible that the neutron will actually gain energy (i.e., up-scatter) from the interaction. This phenomenon is in addition to the more usual case of the neutron losing energy (i.e., down-scatter). Furthermore, the motion of the target nuclei can also cause extended neutron down-scattering, i.e., the neutrons can and do scatter to energies lower than predicted by the simple asymptotic models. In recent years, more attention has been given to temperature-dependent scattering cross sections for materials in neutron-multiplying systems. This has led to the inclusion of neutron up-scatter in deterministic codes like Partisn and to free gas scattering models for material temperature effects in Monte Carlo codes like MCNP and cross section processing codes like NJOY. The free gas scattering models have the effect of Doppler broadening the scattering cross section output spectra in energy and angle. The current state of Doppler-broadening numerical techniques used at Los Alamos for scattering resonances will be reviewed, and suggestions will be made for further developments. The focus will be on the free gas scattering models currently in use and the development of new models to include high-Z resonance scattering effects. These models change the neutron up-scattering behavior.

  19. New Result for the β-decay Asymmetry Parameter A0 from the UCNA Experiment

    NASA Astrophysics Data System (ADS)

    Brown, M. A.-P.; UCNA Collaboration

    2017-09-01

    The UCNA Experiment at the Ultracold Neutron facility at LANL uses polarized ultracold neutrons (UCN) to determine the neutron β-decay asymmetry parameter A0, the angular correlation between the neutron spin and the decay electron's momentum. A0 further determines λ =gA /gV , which, when combined with the neutron lifetime, permits extraction of the CKM matrix element Vud solely from neutron decay. In the UCNA experiment, UCN are produced in a pulsed, spallation driven solid deuterium source, polarized using a 7 T magnetic field, and transported through an Adiabatic Fast Passage (AFP) spin flipper prior to storage within a 1 T solenoidal spectrometer housing electron detectors at each end. The spin-flipper allows one to form a super-ratio of decay rates for neutron spins aligned parallel and anti-parallel to the 1 T magnetic field, eliminating to first order errors due to variations in the decay rate and detector efficiencies. Leading systematics and analysis techniques from the most recent analysis of data collected from 2011-2013 will be presented. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award Number DE-SC-0014622.
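
    The super-ratio mentioned above combines the decay rates seen by the two detectors for the two spin states so that detector efficiencies and slow rate drifts cancel to first order. A sketch of that arithmetic follows; the rates, efficiencies, and the mapping of detectors to spin states are invented for illustration, and the real UCNA analysis additionally applies energy-dependent corrections.

```python
import math

def superratio_asymmetry(r1_up, r1_dn, r2_up, r2_dn):
    """Asymmetry from the super-ratio of decay rates in two opposing
    detectors (1, 2) for the two neutron spin states (up, dn):
    S = (r1_up * r2_dn) / (r1_dn * r2_up),  A = (1 - sqrt(S)) / (1 + sqrt(S)).
    Detector efficiencies cancel to first order."""
    s = (r1_up * r2_dn) / (r1_dn * r2_up)
    return (1.0 - math.sqrt(s)) / (1.0 + math.sqrt(s))

# Synthetic rates with a built-in asymmetry `a` and deliberately mismatched
# detector efficiencies; the super-ratio recovers `a` anyway.
a = 0.12
eff1, eff2 = 0.9, 1.3
r1_up, r1_dn = eff1 * (1.0 - a), eff1 * (1.0 + a)
r2_up, r2_dn = eff2 * (1.0 + a), eff2 * (1.0 - a)
a_est = superratio_asymmetry(r1_up, r1_dn, r2_up, r2_dn)
```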

  20. Ex-vessel neutron dosimetry analysis for Westinghouse 4-loop XL pressurized water reactor plant using the RadTrack™ Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Alpan, F. A.; Fischer, G.A.

    2011-07-01

    Traditional two-dimensional (2D)/one-dimensional (1D) synthesis methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to the reactor pressure vessel in the belt-line region. However, this methodology cannot be expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed. The calculated results show good agreement with measurement. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and at elevations above and below the active core than those calculated by a 2D/1D synthesis method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)

  1. Study of secondary neutron interactions with ²³²Th, ¹²⁹I, and ¹²⁷I nuclei with the uranium assembly "QUINTA" at 2, 4, and 8 GeV deuteron beams of the JINR Nuclotron accelerator.

    PubMed

    Adam, J; Chilap, V V; Furman, V I; Kadykov, M G; Khushvaktov, J; Pronskikh, V S; Solnyshkin, A A; Stegailov, V I; Suchopar, M; Tsoupko-Sitnikov, V M; Tyutyunnikov, S I; Vrzalova, J; Wagner, V; Zavorka, L

    2016-01-01

    The natural uranium assembly, "QUINTA", was irradiated with 2, 4, and 8 GeV deuterons. The ²³²Th, ¹²⁷I, and ¹²⁹I samples have been exposed to secondary neutrons produced in the assembly at a 20-cm radial distance from the deuteron beam axis. The spectra of gamma rays emitted by the activated ²³²Th, ¹²⁷I, and ¹²⁹I samples have been analyzed and several tens of product nuclei have been identified. For each of those products, neutron-induced reaction rates have been determined. The transmutation power for the ¹²⁹I samples is estimated. Experimental results were compared to those calculated with well-known stochastic and deterministic codes.

  2. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. The quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying each element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement to the tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. 
The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
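
    For contrast with the improved scheme, a bare analog tally of scattering moments scores the Legendre polynomial of each sampled scattering cosine, event by event, which is exactly the low-efficiency approach the work improves on. A miniature version with an isotropic (invented) cosine distribution, for which the l ≥ 1 moments should vanish:

```python
import random

def legendre(l, x):
    """P_l(x) via the Bonnet recurrence (n+1)P_{n+1} = (2n+1)xP_n - nP_{n-1}."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

# Analog tally: each scattering event with cosine mu scores P_l(mu) into
# moment l (unit weights here). For an isotropic distribution, moment 0 is
# exactly 1 and the higher moments converge to 0 only as 1/sqrt(N) --
# the poor efficiency the improved method addresses.
rng = random.Random(42)
n_events = 200_000
moments = [0.0, 0.0, 0.0]
for _ in range(n_events):
    mu = rng.uniform(-1.0, 1.0)       # isotropic cosine (illustrative)
    for l in range(3):
        moments[l] += legendre(l, mu)
moments = [m / n_events for m in moments]
```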

  3. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations caused by the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up the SDDR neutron MC calculation. Compared to the standard Forward-Weighted CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.
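
    Efficiency gains like the 69% quoted above are stated in terms of the standard Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the run time. A small worked example; the error values are invented, chosen so the ratio comes out to 1.69:

```python
def figure_of_merit(rel_error, cpu_time_s):
    """Monte Carlo figure of merit, FOM = 1 / (R^2 * T). Doubling the FOM
    means reaching the same relative error R in half the time T."""
    return 1.0 / (rel_error ** 2 * cpu_time_s)

# At equal run time, a 69% FOM gain corresponds to the relative error
# shrinking by a factor of 1 / sqrt(1.69) = 1 / 1.3 (invented numbers).
fom_fw = figure_of_merit(0.013, 1000.0)   # reference (FW-CADIS-like) run
fom_ms = figure_of_merit(0.010, 1000.0)   # improved (MS-CADIS-like) run
```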

  4. Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's

    NASA Astrophysics Data System (ADS)

    Hernandez-Solis, Augusto; Sjöstrand, Henrik; Helgesson, Petter

    2017-09-01

    The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus, it aims at providing for a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the different neutron-reactions (as well as angular distributions) covariances that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important safety parameters related to multi-physics, such as peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have in the deterministic safety analysis of novel nuclear reactors designs.
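
    The Total Monte Carlo idea is simply to re-run the model once per random realization of the nuclear data and read the spread of the outputs as the propagated uncertainty. A toy version with a one-line exponential-attenuation "model" and an invented 5% cross-section uncertainty, vastly simpler than the coupled neutronic/thermal-hydraulic models of the paper:

```python
import math
import random

rng = random.Random(7)
sigma_nominal, rel_unc, thickness = 2.0, 0.05, 1.5   # 1/cm, fraction, cm

def model(sigma):
    """Toy 'physics model': transmitted fraction through a slab."""
    return math.exp(-sigma * thickness)

# One model run per random "nuclear data file"; the output spread is the
# propagated uncertainty.
outputs = []
for _ in range(5000):
    sigma_k = rng.gauss(sigma_nominal, rel_unc * sigma_nominal)
    outputs.append(model(sigma_k))

mean = sum(outputs) / len(outputs)
std = math.sqrt(sum((y - mean) ** 2 for y in outputs) / (len(outputs) - 1))
```

    In the real TMC workflow each draw is a full random TENDL nuclear-data file and each "model run" a complete coupled simulation; the statistics step is the same.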

  5. EFFECT OF MASSIVE NEUTRON EXPOSURE ON THE DISTORTION OF REACTOR GRAPHITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helm, J.W.; Davidson, J.M.

    1963-05-28

    Distortion of reactor-grade graphites was studied at neutron exposures ranging up to 14 × 10²¹ neutrons per cm² (nvt) at irradiation temperatures ranging from 425 to 800 deg C. This exposure level corresponds to approximately 100,000 megawatt days per adjacent ton of fuel (MWd/At) in a graphite-moderated reactor. A conventional-coke graphite, CSF, and two needle-coke graphites, NC-7 and NC-8, were studied. At all irradiation temperatures the contraction rate of the samples cut parallel to the extrusion axis increased with increasing neutron exposure. For parallel samples the needle-coke graphites and the CSF graphite contracted approximately the same amount. In the transverse direction the rate of contraction at the higher irradiation temperatures appeared to be decreasing. Volume contractions derived from the linear contractions are discussed. (auth)

  6. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

    NASA Astrophysics Data System (ADS)

    Kussainov, Arman; White, Yelena

    We are studying the parallel implementations of Grover's closest match search algorithm for neutron monitor data analysis. This includes data formatting, and matching quantum parameters to a conventional structure of a chosen programming language and selected experimental data type. We have employed several workload distribution models based on acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan Grant #2532/GF3.
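
    For reference, Grover's search itself is easy to simulate classically at small N by tracking the state-vector amplitudes: an oracle phase-flip on the matching item followed by inversion about the mean, repeated about (π/4)√N times. The sketch below is a plain single-threaded simulation; the database size and target index are invented, and it says nothing about the authors' workload-distribution models.

```python
import math

# Classical amplitude simulation of Grover's algorithm for N = 64 items with
# a single "closest match" at a known index.
N, target = 64, 37
amps = [1.0 / math.sqrt(N)] * N                # uniform superposition
n_iter = int(math.pi / 4.0 * math.sqrt(N))     # 6 iterations for N = 64

for _ in range(n_iter):
    amps[target] = -amps[target]               # oracle: phase-flip the match
    mean = sum(amps) / N
    amps = [2.0 * mean - a for a in amps]      # diffusion: invert about mean

p_target = amps[target] ** 2                   # probability of measuring target
```

    After the ~(π/4)√N iterations nearly all probability sits on the target; running further iterations would rotate it back out again.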

  7. Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport

    NASA Technical Reports Server (NTRS)

    Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.

    2008-01-01

    Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.

  8. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    NASA Astrophysics Data System (ADS)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in search of possible new exotic spin-dependent neutron-atom interactions of nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom-density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm–μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the four quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  9. Reasons for 2011 Release of the Evaluated Nuclear Data Library (ENDL2011.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D.; Escher, J.; Hoffman, R.

    LLNL's Computational Nuclear Physics Group and Nuclear Theory and Modeling Group have collaborated to create the 2011 release of the Evaluated Nuclear Data Library (ENDL2011). ENDL2011 is designed to support LLNL's current and future nuclear data needs. This database is currently the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles, surpassing ENDL2009.0 [1]. The ENDL2011 release [2] contains 918 transport-ready evaluations in the neutron sub-library alone. ENDL2011 was assembled with strong support from the ASC program, leveraged with support from NNSA science campaigns and the DOE/Office of Science US Nuclear Data Program.

  10. Compatibility of photomultiplier tube operation with SQUIDs for a neutron EDM experiment

    NASA Astrophysics Data System (ADS)

    Libersky, Matthew; nEDM Collaboration

    2013-10-01

    An experiment at the Spallation Neutron Source at Oak Ridge National Laboratory with the goal of reducing the experimental limit on the electric dipole moment (EDM) of the neutron will measure the precession frequencies of neutrons when a strong electric field is applied parallel and anti-parallel to a weak magnetic field. A difference in these frequencies would indicate a nonzero neutron EDM. To correct for drifts of the magnetic field in the measurement volume, polarized 3He will be used as a co-magnetometer. In one of the two methods built into the apparatus, superconducting quantum interference devices (SQUIDs) will be used to read out the 3He magnetization. Photomultiplier tubes will be used concurrently to measure scintillation light from neutron capture by 3He. However, the simultaneous noise-sensitive magnetic field measurement by the SQUIDs makes conventional PMT operation problematic due to the alternating current involved in generating the high voltages needed. Tests were carried out at Los Alamos National Laboratory to study the compatibility of simultaneous SQUID and PMT operation, using a custom battery-powered high-voltage power supply developed by Meyer and Smith (NIM A 647.1) to operate the PMT. The results of these tests will be presented.

  11. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10⁴-10⁶ years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.
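
    The deterministic-versus-stochastic partition described above can be illustrated with a toy spectral calculation: the fraction of variance falling in a narrow band around an orbital frequency serves as a crude proxy for the deterministic share, with everything outside the band treated as stochastic. The 41 kyr cycle, the AR(1) noise model, and the band width below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def deterministic_fraction(x, dt, f0, half_width):
    """Fraction of variance within a narrow band around frequency f0.

    A crude proxy for the deterministic (orbital) share of climate
    variance; everything outside the band is treated as stochastic.
    """
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = np.abs(freqs - f0) <= half_width
    return power[band].sum() / power.sum()

rng = np.random.default_rng(0)
dt = 1.0                                   # kyr per sample
n = 4096
t = np.arange(n) * dt
obliquity = np.sin(2 * np.pi * t / 41.0)   # 41 kyr orbital cycle
red_noise = np.zeros(n)                    # AR(1) stochastic component
for i in range(1, n):
    red_noise[i] = 0.9 * red_noise[i - 1] + rng.standard_normal()

f0 = 1 / 41.0
frac_orbital = deterministic_fraction(obliquity + 0.5 * red_noise, dt, f0, 0.002)
frac_noise = deterministic_fraction(red_noise, dt, f0, 0.002)
print(frac_orbital, frac_noise)   # the mixed series is more "deterministic"
```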

  12. Construction and comparison of parallel implicit kinetic solvers in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Titarev, Vladimir; Dumbser, Michael; Utyuzhnikov, Sergey

    2014-01-01

    The paper is devoted to the further development and systematic performance evaluation of a recent deterministic framework Nesvetay-3D for modelling three-dimensional rarefied gas flows. Firstly, a review of the existing discretization and parallelization strategies for solving numerically the Boltzmann kinetic equation with various model collision integrals is carried out. Secondly, a new parallelization strategy for the implicit time evolution method is implemented which improves scaling on large CPU clusters. Accuracy and scalability of the methods are demonstrated on a pressure-driven rarefied gas flow through a finite-length circular pipe as well as an external supersonic flow over a three-dimensional re-entry geometry of complicated aerodynamic shape.

  13. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, R; Fallone, B; Cross Cancer Institute, Edmonton, AB

    Purpose: To develop a Graphic Processor Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first order LBTE was reformulated as a second order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix, which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code written in CUDA-C was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary condition application took 452±1 ms on GPU. Matrix assembly for 16 energy groups took 556±3 s on CPU, and 358±2 ms on GPU using the mappings developed. The CG solver took 93±1 s on CPU, and 468±2 ms on GPU.
    Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been formulated on GPU, resulting in two orders of magnitude speedup. Funding support from Natural Sciences and Engineering Research Council and Alberta Innovates Health Solutions. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
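
    The conjugate gradient iteration at the heart of the solver above is standard for symmetric positive-definite systems. A minimal CPU sketch (NumPy in place of CUDA-C, and a tiny hand-made SPD matrix in place of the assembled LBTE system) might look like:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system.

    The LSFEM reformulation yields exactly this kind of SPD system; on a
    GPU the matrix-vector product A @ p is the part that parallelizes.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system (a hypothetical stand-in for the assembled matrix)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))
```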

  14. Conceptual design and optimization of a plastic scintillator array for 2D tomography using a compact D-D fast neutron generator.

    PubMed

    Adams, Robert; Zboray, Robert; Cortesi, Marco; Prasser, Horst-Michael

    2014-04-01

    A conceptual design optimization of a fast neutron tomography system was performed. The system is based on a compact deuterium-deuterium fast neutron generator and an arc-shaped array of individual neutron detectors. The array functions as a position sensitive one-dimensional detector allowing tomographic reconstruction of a two-dimensional cross section of an object up to 10 cm across. Each individual detector is to be optically isolated and consists of a plastic scintillator and a Silicon Photomultiplier for measuring light produced by recoil protons. A deterministic geometry-based model and a series of Monte Carlo simulations were used to optimize the design geometry parameters affecting the reconstructed image resolution. From this, it is expected that with an array of 100 detectors a reconstructed image resolution of ~1.5 mm can be obtained. Other simulations were performed in order to optimize the scintillator depth (length along the neutron path) such that the best ratio of direct to scattered neutron counts is achieved. This resulted in a depth of 6-8 cm and an expected detection efficiency of 33-37%. Based on current operational capabilities of a prototype neutron generator being developed at the Paul Scherrer Institute, planned implementation of this detector array design should allow reconstructed tomograms to be obtained with exposure times on the order of a few hours. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A possible explanation of the parallel tracks in kilohertz quasi-periodic oscillations from low-mass X-ray binaries

    NASA Astrophysics Data System (ADS)

    Shi, Chang-Sheng; Zhang, Shuang-Nan; Li, Xiang-Dong

    2018-05-01

    We recalculate the modes of the magnetohydrodynamics (MHD) waves in the MHD model (Shi, Zhang & Li 2014) of the kilohertz quasi-periodic oscillations (kHz QPOs) in neutron-star low-mass X-ray binaries (NS-LMXBs), in which the compressed magnetosphere is considered. A point-by-point scanning method over every parameter of a normal LMXB is proposed to determine the wave number in an NS-LMXB. The dependence of the twin kHz QPO frequencies on the accretion rate (Ṁ) is then obtained with the wave number and magnetic field (B*) determined by our method. Based on the MHD model, a new explanation of the parallel tracks is presented: the slowly varying effective magnetic field leads to the shift of parallel tracks in a source. In this study, we obtain a simple power-law relation between the kHz QPO frequencies and Ṁ/B*² in those sources. Finally, we study the dependence of the kHz quasi-periodic oscillation frequencies on the spin, mass and radius of a neutron star. We find that the effective magnetic field, the spin, mass and radius of a neutron star lead to the parallel tracks in different sources.
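
    A power-law relation between QPO frequency and Ṁ/B*² of the kind quoted above can be recovered from data by a straight-line fit in log-log space. The sketch below uses synthetic values with an assumed index, not the MHD-model output:

```python
import numpy as np

# Recover a power-law index from nu = A * (Mdot/B^2)**k by a linear fit
# in log-log space; the data here are synthetic (assumed k and A), not
# the model frequencies from the paper.
rng = np.random.default_rng(3)
x = np.linspace(0.1, 1.0, 20)            # Mdot / B_*^2, arbitrary units
k_true, A_true = 0.4, 850.0
nu = A_true * x**k_true * (1 + 0.01 * rng.standard_normal(x.size))

k_fit, logA_fit = np.polyfit(np.log(x), np.log(nu), 1)
print(f"fitted index: {k_fit:.3f}")
```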

  16. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.
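
    The space/angle decomposition described for RAPTOR-M3G can be pictured as a mapping of (spatial subdomain, angular block) work units onto processors. The round-robin layout below is a toy stand-in for the code's actual allocation scheme, shown only to make the idea concrete:

```python
def assign_ranks(n_spatial, n_angular, n_ranks):
    """Map (spatial subdomain, angular block) pairs onto MPI ranks.

    A toy version of space/angle domain decomposition: work units are
    distributed round-robin so each processor holds only a slice of the
    spatial mesh and of the discrete ordinates, reducing per-processor
    memory as described in the abstract.
    """
    work = [(s, a) for s in range(n_spatial) for a in range(n_angular)]
    layout = {r: [] for r in range(n_ranks)}
    for i, unit in enumerate(work):
        layout[i % n_ranks].append(unit)
    return layout

layout = assign_ranks(n_spatial=4, n_angular=8, n_ranks=8)
sizes = [len(v) for v in layout.values()]
print(sizes)   # 32 work units spread evenly over 8 ranks
```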

  17. Improved Neutronics Treatment of Burnable Poisons for the Prismatic HTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. Wang; A. A. Bingham; J. Ortensi

    2012-10-01

    In prismatic block High Temperature Reactors (HTR), highly absorbing materials such as burnable poisons (BP) cause local flux depressions and large gradients in the flux across the blocks, which can be a challenge to capture accurately with traditional homogenization methods. The purpose of this paper is to quantify the error associated with spatial homogenization, spectral condensation and discretization, and to highlight what is needed for improved neutronics treatments of burnable poisons for the prismatic HTR. A new triangular based mesh is designed to separate the BP regions from the fuel assembly. A set of packages including Serpent (Monte Carlo), Xuthos (1st-order Sn), Pronghorn (diffusion), INSTANT (Pn) and RattleSnake (2nd-order Sn) is used for this study. The results from the deterministic calculations show that the cross sections generated directly in Serpent are not sufficient to accurately reproduce the reference Monte Carlo solution in all cases. The BP treatment produces good results, but this is mainly due to error cancellation. However, the Super Cell (SC) approach yields cross sections that are consistent with cross sections prepared on an “exact” full core calculation. In addition, very good agreement exists between the various deterministic transport and diffusion codes in both eigenvalue and power distributions. Future research will focus on improving the cross sections and quantifying the error cancellation.

  18. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procassini, R.J.

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  19. PEM Water Electrolysis: Preliminary Investigations Using Neutron Radiography

    NASA Astrophysics Data System (ADS)

    de Beer, Frikkie; van der Merwe, Jan-Hendrik; Bessarabov, Dmitri

    The quasi-dynamic water distribution and performance of a proton exchange membrane (PEM) electrolyzer at both a small fuel cell's anode and cathode was observed and quantitatively measured in the in-plane imaging geometry direction (neutron beam parallel to the membrane and with channels parallel to the beam) by applying the neutron radiography principle at the neutron imaging facility (NIF) of NIST, Gaithersburg, USA. The test section had 6 parallel channels with an active area of 5 cm², and in-situ neutron radiography observation entails the liquid water content along the total length of each of the channels. The acquisition was made with a neutron CMOS camera system with a performance of 10 sec per frame to achieve a relatively good pixel dynamic range, at a pixel resolution of 10 × 10 μm². A relatively high S/N ratio was achieved in the radiographs to observe in quasi real time the water management as well as quantification of water/gas within the channels. The water management was observed at increasing steps (0.2 A/cm²) of current density until a 2 V potential was achieved. These observations were made at 2 different water flow rates, at 3 temperatures for each flow rate, and repeated for both the vertical and horizontal electrolyzer orientation geometries. It is observed that there is water crossover from the anode through the membrane to the cathode. A first order quantification (neutron scattering correction not included) shows that the physical vertical and horizontal orientation of the fuel cell as well as the temperature of the system up to 80 °C has no significant influence on the percentage of water (∼18%) that crossed over into the cathode. Additionally, a higher water content was observed in the Gas Diffusion Layer at the position of the channels with respect to the lands.

  20. Proteus-MOC: A 3D deterministic solver incorporating 2D method of characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marin-Lafleche, A.; Smith, M. A.; Lee, C.

    2013-07-01

    A new transport solution methodology was developed by combining the two-dimensional method of characteristics with the discontinuous Galerkin method for the treatment of the axial variable. The method, which can be applied to arbitrary extruded geometries, was implemented in PROTEUS-MOC and includes parallelization in group, angle, plane, and space using a top level GMRES linear algebra solver. Verification tests were performed to show accuracy and stability of the method with the increased number of angular directions and mesh elements. Good scalability with parallelism in angle and axial planes is displayed. (authors)

  1. Determination of the fast-neutron-induced fission cross-section of ²⁴²Pu at nELBE

    NASA Astrophysics Data System (ADS)

    Kögler, Toni; Beyer, Roland; Junghans, Arnd R.; Schwengner, Ronald; Wagner, Andreas

    2018-03-01

    The fast-neutron-induced fission cross section of ²⁴²Pu was determined in the energy range of 0.5 MeV to 10 MeV at the neutron time-of-flight facility nELBE. Using a parallel-plate fission ionization chamber, this quantity was measured relative to ²³⁵U(n,f). The number of target nuclei was thereby calculated by means of measuring the spontaneous fission rate of ²⁴²Pu. An MCNP 6 neutron transport simulation was used to correct the relative cross section for neutron scattering. The determined results are in good agreement with current experimental and evaluated data sets.
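
    The ratio method used here reduces to simple arithmetic once counts and atom numbers are known: sigma_sample = sigma_ref × (C_sample/C_ref) × (N_ref/N_sample). All numbers in the sketch below are invented for illustration, not measured values from the experiment:

```python
def relative_cross_section(counts_sample, counts_ref, n_sample, n_ref, sigma_ref):
    """Fission cross section via the ratio method.

    sigma_sample = sigma_ref * (C_sample / C_ref) * (N_ref / N_sample).
    Counts, atom numbers, and the reference 235U(n,f) cross section used
    below are made-up illustrative values, not measured data.
    """
    return sigma_ref * (counts_sample / counts_ref) * (n_ref / n_sample)

sigma = relative_cross_section(
    counts_sample=1200.0,   # 242Pu(n,f) counts in one energy bin (assumed)
    counts_ref=5000.0,      # 235U(n,f) counts in the same bin (assumed)
    n_sample=2.0e18,        # 242Pu target nuclei, e.g. from the SF rate (assumed)
    n_ref=4.0e18,           # 235U target nuclei (assumed)
    sigma_ref=1.2,          # 235U(n,f) cross section in barns at this energy (assumed)
)
print(round(sigma, 3))   # barns
```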

  2. Flexible neutron shielding composite material of EPDM rubber with boron trioxide: Mechanical, thermal investigations and neutron shielding tests

    NASA Astrophysics Data System (ADS)

    Özdemir, T.; Güngör, A.; Reyhancan, İ. A.

    2017-02-01

    In this study, an EPDM and boron trioxide composite was produced, and mechanical, thermal and neutron shielding tests were performed. EPDM rubber (Ethylene Propylene Diene Monomer), having a considerably high hydrogen content, is an effective neutron shielding material. On the other hand, materials containing boron components have an effective thermal neutron absorption cross-section. A composite of EPDM and boron trioxide is therefore an attractive candidate, combining flexibility with shielding effectiveness. The flexible nature of EPDM would be a great asset for shielding purposes in the case of intervention in a radiation accident. The theoretical calculations and experimental neutron absorption tests have shown that the results were consistent and that effective neutron shielding has been achieved with the use of the developed composite material.
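
    For a narrow beam, the shielding performance of such a composite follows the usual exponential attenuation law T = exp(−Σt). The macroscopic cross section Σ used below is an assumed value for illustration, not a measured property of the EPDM/B₂O₃ material:

```python
import math

def transmission(sigma_macroscopic, thickness_cm):
    """Narrow-beam neutron transmission through a slab, T = exp(-Sigma * t).

    Sigma (cm^-1) is the macroscopic removal/absorption cross section of
    the composite; the value below is an illustrative assumption.
    """
    return math.exp(-sigma_macroscopic * thickness_cm)

sigma = 0.15      # cm^-1, assumed macroscopic cross section
for t in (1.0, 2.0, 5.0):
    print(f"{t:.0f} cm: T = {transmission(sigma, t):.3f}")
```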

  3. Nested Focusing Optics for Compact Neutron Sources

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2015-01-01

    NASA's Marshall Space Flight Center, the Massachusetts Institute of Technology (MIT), and the University of Alabama Huntsville (UAH) have developed novel neutron grazing incidence optics for use with small-scale portable neutron generators. The technology was developed to enable the use of commercially available neutron generators for applications requiring high flux densities, including high performance imaging and analysis. Nested grazing incidence mirror optics, with high collection efficiency, are used to produce divergent, parallel, or convergent neutron beams. Ray tracing simulations of the system (with source-object separation of 10 m for 5 meV neutrons) show nearly an order of magnitude neutron flux increase on a 1-mm diameter object. The technology is a result of joint development efforts between NASA and MIT researchers seeking to maximize neutron flux from diffuse sources for imaging and testing applications.
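
    Grazing-incidence neutron optics are governed by the critical angle for total external reflection, θc = λ√(SLD/π). The quick check below uses a textbook nickel scattering-length density and the 5 meV energy quoted above; the mirror coating and its SLD are assumptions, not details from the source:

```python
import math

# Critical grazing angle for total external reflection of neutrons,
# theta_c = lambda * sqrt(SLD / pi). The nickel SLD is a standard
# textbook value (assumed coating); 5 meV comes from the abstract.
E_meV = 5.0
wavelength_A = math.sqrt(81.81 / E_meV)   # lambda[Angstrom] = sqrt(81.81 / E[meV])
sld_ni = 9.4e-6                           # Ni scattering-length density, Angstrom^-2
theta_c_rad = wavelength_A * math.sqrt(sld_ni / math.pi)
print(f"lambda = {wavelength_A:.2f} A, theta_c = {math.degrees(theta_c_rad):.2f} deg")
```

This reproduces the familiar rule of thumb that nickel reflects at roughly 0.1° per Ångström of wavelength.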

  4. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr

    2014-12-15

    In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. The parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rods model, while the fine propagator is assumed to be a high order numerical approximation of the full model. The parallel implementation of our method provides a good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.

  5. Neutron capture and neutron-induced fission experiments on americium isotopes with DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jandel, M.; Bredeweg, T. A.; Fowler, M. M.

    2009-01-28

    Neutron capture cross section data on Am isotopes were measured using the Detector for Advanced Neutron Capture Experiments (DANCE) at Los Alamos National Laboratory. The neutron capture cross section was determined for ²⁴¹Am for neutron energies between thermal and 320 keV. Preliminary results were also obtained for ²⁴³Am for neutron energies between 10 eV and 250 keV. The results on concurrent neutron-induced fission and neutron-capture measurements on ²⁴²ᵐAm will be presented, where the fission events were actively triggered during the experiments. In these experiments, a Parallel-Plate Avalanche Counter (PPAC) detector that surrounds the target located in the center of the DANCE array was used as a fission-tagging detector to separate (n,γ) events from (n,f) events. The first direct observation of neutron capture on ²⁴²ᵐAm in the resonance region between 2 and 9 eV of neutron energy was obtained.

  6. Neutron capture and neutron-induced fission experiments on americium isotopes with DANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jandel, Marian

    2008-01-01

    Neutron capture cross section data on Am isotopes were measured using the Detector for Advanced Neutron Capture Experiments (DANCE) at Los Alamos National Laboratory. The neutron capture cross section was determined for ²⁴¹Am for neutron energies between thermal and 320 keV. Preliminary results were also obtained for ²⁴³Am for neutron energies between 35 eV and 200 keV. The results on concurrent neutron-induced fission and neutron-capture measurements on ²⁴²ᵐAm will be presented, where the fission events were actively triggered during the experiments. In these experiments, the Parallel-Plate Avalanche Counter (PPAC) detector that surrounds the target located in the center of the DANCE array was used as a fission-tagging detector to separate (n,γ) from (n,f) events. The first evidence of neutron capture on ²⁴²ᵐAm in the resonance region between 2 and 9 eV of neutron energy was obtained.

  7. Biological effectiveness of neutrons: Research needs

    NASA Astrophysics Data System (ADS)

    Casarett, G. W.; Braby, L. A.; Broerse, J. J.; Elkind, M. M.; Goodhead, D. T.; Oleinick, N. L.

    1994-02-01

    The goal of this report was to provide a conceptual plan for a research program that would provide a basis for determining more precisely the biological effectiveness of neutron radiation with emphasis on endpoints relevant to the protection of human health. This report presents the findings of the experts for seven particular categories of scientific information on neutron biological effectiveness. Chapter 2 examines the radiobiological mechanisms underlying the assumptions used to estimate human risk from neutrons and other radiations. Chapter 3 discusses the qualitative and quantitative models used to organize and evaluate experimental observations and to provide extrapolations where direct observations cannot be made. Chapter 4 discusses the physical principles governing the interaction of radiation with biological systems and the importance of accurate dosimetry in evaluating radiation risk and reducing the uncertainty in the biological data. Chapter 5 deals with the chemical and molecular changes underlying cellular responses and the LET dependence of these changes. Chapter 6, in turn, discusses those cellular and genetic changes which lead to mutation or neoplastic transformation. Chapters 7 and 8 examine deterministic and stochastic effects, respectively, and the data required for the prediction of such effects at different organizational levels and for the extrapolation from experimental results in animals to risks for man. Gaps and uncertainties in this data are examined relative to data required for establishing radiation protection standards for neutrons and procedures for the effective and safe use of neutron and other high-LET radiation therapy.

  8. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  9. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  10. Release of Continuous Representation for S(α,β) ACE Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent

    2014-03-20

    For low energy neutrons, the default free gas model for scattering cross sections is not always appropriate. Molecular effects or crystalline structure effects can affect the neutron scattering cross sections. These effects are included in the S(α,β) thermal neutron scattering data and are tabulated in file 7 of the ENDF-6 format files. S stands for scattering, α is a momentum-transfer variable, and β is an energy-transfer variable. The S(α,β) cross sections can include coherent elastic scattering (no E change for the neutron, but specific scattering angles), incoherent elastic scattering (no E change for the neutron, but continuous scattering angles), and inelastic scattering (E change for the neutron, and change in angle as well). Every S(α,β) material will have inelastic scattering and may have either coherent or incoherent elastic scattering (but not both). Coherent elastic scattering cross sections have distinctive jagged-looking Bragg edges, whereas the other cross sections are much smoother. The evaluated files from the NNDC are processed locally in the THERMR module of NJOY. Data can be produced either for continuous energy Monte Carlo codes (using ACER) or embedded in multi-group cross sections for deterministic (or even multi-group Monte Carlo) codes (using GROUPR). Currently, the S(α,β) files available for MCNP use discrete energy changes for inelastic scattering. That is, the scattered neutrons can only be emitted at specific energies, rather than across a continuous spectrum of energies. The discrete energies are chosen to preserve the average secondary neutron energy, i.e., in an integral sense, but the discrete treatment does not preserve any differential quantities in energy or angle.
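
    The integral-preserving discrete treatment can be illustrated by collapsing a continuous secondary-energy spectrum onto two discrete lines that keep the total probability and the mean emission energy. The spectrum shape and line energies below are invented for illustration, not taken from an actual S(α,β) evaluation:

```python
import numpy as np

# Collapse a toy continuous secondary-energy spectrum into two discrete
# lines that preserve total probability and mean emission energy -- the
# kind of integral-preserving discrete treatment described above.
E = np.linspace(0.001, 0.5, 1000)     # eV grid (assumed range)
dE = E[1] - E[0]
pdf = E * np.exp(-E / 0.05)           # hypothetical spectrum shape
pdf /= pdf.sum() * dE                 # normalize to unit probability
mean_E = (E * pdf).sum() * dE         # continuous mean emission energy

E_lo, E_hi = 0.02, 0.3                # chosen discrete energies (assumed)
# Solve p_lo*E_lo + p_hi*E_hi = mean_E with p_lo + p_hi = 1
p_hi = (mean_E - E_lo) / (E_hi - E_lo)
p_lo = 1.0 - p_hi
discrete_mean = p_lo * E_lo + p_hi * E_hi
print(abs(discrete_mean - mean_E) < 1e-12)   # integral quantity preserved
```

The differential shape of the spectrum is lost in this collapse, which is exactly the shortcoming the continuous representation addresses.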

  11. Using MCBEND for neutron or gamma-ray deterministic calculations

    NASA Astrophysics Data System (ADS)

    Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul

    2017-09-01

    MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with splitting/Russian roulette. MCBEND has a well-established automated tool to generate this importance map, commonly referred to as the MAGIC module, which uses a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and the new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  12. The NASA Neutron Star Grand Challenge: The coalescences of Neutron Star Binary System

    NASA Astrophysics Data System (ADS)

    Suen, Wai-Mo

    1998-04-01

    NASA funded a Grand Challenge Project (9/1996-1999) for the development of a multi-purpose numerical treatment for relativistic astrophysics and gravitational wave astronomy. The coalescence of binary neutron stars was chosen as the model problem for the code development. The institutions involved are Argonne National Laboratory, Lawrence Livermore National Laboratory, the Max Planck Institute at Potsdam, Stony Brook, the University of Illinois, and Washington University. We have recently succeeded in constructing a highly optimized parallel code capable of solving the full Einstein equations coupled with relativistic hydrodynamics, running at over 50 GFLOPS on a T3E (the second milestone of the project). We are presently working on head-on collisions of two neutron stars and on the inclusion of realistic equations of state into the code. The code will be released to the relativity and astrophysics community in April of 1998. With the full dynamics of the spacetime, relativistic hydrodynamics, and microphysics all combined into a unified 3D code for the first time, many interesting large scale calculations in general relativistic astrophysics can now be carried out on massively parallel computers.

  13. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion calculations by nearly an order of magnitude.
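    The correlated-sampling idea named above can be sketched in a few lines. This is an illustrative toy, not TRIPOLI-4's implementation: the same random histories score both a nominal and a perturbed problem, with each history reweighted by the ratio of the perturbed to the nominal density, so the two estimates are strongly correlated and their difference has far lower variance than two independent runs would give:

```python
import math
import random

random.seed(1)

# Illustrative toy, not TRIPOLI-4's scheme: score a nominal and a perturbed
# problem with the SAME histories, reweighting each history by p1(x)/p0(x).
lam0, lam1 = 1.0, 1.1          # nominal and perturbed exponential rates

def pdf(x, lam):               # exponential probability density
    return lam * math.exp(-lam * x)

n = 100_000
histories = [random.expovariate(lam0) for _ in range(n)]

score = lambda x: x * x        # some per-history tally
nominal = sum(score(x) for x in histories) / n
perturbed = sum(score(x) * pdf(x, lam1) / pdf(x, lam0) for x in histories) / n

# Exact values for comparison: E[x^2] = 2/lam^2 under an exponential law,
# so about 2.0 (nominal) and about 1.65 (perturbed).
print(nominal, perturbed)
```

    Because both tallies share the same histories, the statistical noise largely cancels in their difference, which is what makes the perturbation estimate cheap and precise.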

  14. Tunable Transmission and Deterministic Interface states in Double-zero-index Acoustic Metamaterials.

    PubMed

    Zhao, Wei; Yang, Yuting; Tao, Zhi; Hang, Zhi Hong

    2018-04-20

    Following the seminal work by Dubois et al. (Nat. Commun. 8, 14871 (2017)), we study a double-zero-index acoustic metamaterial with a triangular lattice. By varying the height and diameter of air scatterers inside a parallel-plate acoustic waveguide, the acoustic dispersion of the first-order waveguide mode can be manipulated and various interesting properties are explored. With accidental degeneracy of monopolar and dipolar modes, we numerically prove the double-zero-index properties of this novel acoustic metamaterial. Acoustic waveguides with tunable and asymmetric transmission are realized with this double-zero-index acoustic metamaterial embedded. Band inversion occurs when the bulk acoustic band diagram of this acoustic metamaterial is tuned. Deterministic interface states are found to exist at the interface between two acoustic metamaterials with inverted band diagrams.

  15. Measurements of energy dependence of average number of prompt neutrons from neutron-induced fission of 242Pu from 0.5 to 10 MeV

    NASA Astrophysics Data System (ADS)

    Khokhlov, Yurii A.; Ivanin, Igor A.; In'kov, Valerii I.; Danilin, Lev D.

    1998-10-01

    The results of energy dependence measurements of the average number of prompt neutrons from neutron-induced fission of 242Pu from 0.5 to 10 MeV are presented. The measurements were carried out with a neutron beam from the uranium target of the electron linac of the Russian Federal Nuclear Center, using the time-of-flight technique on a 28.5 m flight path. Neutrons from fission were detected by a liquid scintillator detector loaded with gadolinium, and fission events by a parallel-plate avalanche detector for fission fragments. A least-squares fit gives ν̄p(En) = (2.881 ± 0.033) + (0.141 ± 0.003)·En. The work was executed under ISTC project # 471-97.
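    The quoted result is a linear least-squares fit ν̄p(En) = a + b·En to the measured points. A minimal sketch of such a fit, using synthetic points generated from the reported line (the actual measured data and their uncertainties are not reproduced here):

```python
import numpy as np

# Synthetic (En, nu-bar_p) points generated from the reported line; the real
# analysis fits the measured data with their experimental uncertainties.
en = np.array([0.5, 2.0, 4.0, 6.0, 8.0, 10.0])   # incident energy, MeV
nubar = 2.881 + 0.141 * en                       # average prompt neutron number

b, a = np.polyfit(en, nubar, 1)                  # linear least-squares fit
print(f"nu_p(En) = {a:.3f} + {b:.3f}*En")
```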

  16. Optics for Advanced Neutron Imaging and Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moncton, David E.; Khaykovich, Boris

    2016-03-30

    During the report period, we continued the work as outlined in the original proposal. We have analyzed potential optical designs of Wolter mirrors for the neutron-imaging instrument VENUS, which is under construction at SNS. In parallel, we have conducted the initial polarized imaging experiment at Helmholtz Zentrum, Berlin, one of very few of currently available polarized-imaging facilities worldwide.

  17. Determination of rhenium in molybdenite by neutron-activation analysis.

    PubMed

    Terada, K; Yoshimura, Y; Osaki, S; Kiba, T

    1967-01-01

    A neutron-activation method is described for the determination of rhenium in molybdenite. Radiochemical separation by a carrier technique was carried out very rapidly by means of successive liquid-liquid extraction processes. The recovery of rhenium, which was determined by a spectrophotometric method, was about 93%. About 10 samples could be analysed within 6 hr in parallel runs.

  18. Separator assembly for use in spent nuclear fuel shipping cask

    DOEpatents

    Bucholz, James A.

    1983-01-01

    A separator assembly for use in a spent nuclear fuel shipping cask has a honeycomb-type wall structure defining parallel cavities for holding nuclear fuel assemblies. Tubes formed of an effective neutron-absorbing material are embedded in the wall structure around each of the cavities and provide neutron flux traps when filled with water.

  19. Nuclear and radiological terrorism: continuing education article.

    PubMed

    Anderson, Peter D; Bokor, Gyula

    2013-06-01

    Terrorism involving radioactive materials includes improvised nuclear devices, radiation exposure devices, contamination of food sources, radiation dispersal devices, or an attack on a nuclear power plant or a facility/vehicle that houses radioactive materials. Ionizing radiation removes electrons from atoms, changing their valence and enabling chemical reactions that normally do not occur. Ionizing radiation includes alpha rays, beta rays, gamma rays, and neutron radiation. The effects of radiation consist of stochastic and deterministic effects. Cancer is the typical example of a stochastic effect of radiation. Deterministic effects include acute radiation syndrome (ARS). The hallmarks of ARS are damage to the skin, gastrointestinal tract, hematopoietic tissue, and, in severe cases, the neurovascular structures. Radiation produces psychological effects in addition to physiological effects. Radioisotopes relevant to terrorism include tritium, americium 241, cesium 137, cobalt 60, iodine 131, plutonium 238, californium 252, iridium 192, uranium 235, and strontium 90. Medications used for treating a radiation exposure include antiemetics, colony-stimulating factors, antibiotics, electrolytes, potassium iodide, and chelating agents.

  20. Theory and Performance of AIMS for Active Interrogation

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza

    2014-06-01

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-materials), has been written that uses the pre-calculated values to output the gamma current for a source-detector assembly scanning across the cargo, and it takes significantly less time than a reference MCNP5 calculation.

  1. Scoping analysis of the Advanced Test Reactor using SN2ND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolters, E.; Smith, M.

    2012-07-26

    A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort from this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross section sets was generated, chosen to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach, and it clearly causes substantial errors in this work. This decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today.
    A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with an L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations were also performed to show axial mesh and eigenvalue convergence for a full core problem. Finally, a brief analysis was performed with different cross section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27-group, 2D ATR problem is ~340 million. This number increases to ~25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.

  2. Tsunamigenic scenarios for southern Peru and northern Chile seismic gap: Deterministic and probabilistic hybrid approach for hazard assessment

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.

    2017-12-01

    The definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along worldwide subduction zones has given clues about the critical parameters involved in near-field tsunami inundation processes, i.e. slip spatial distribution, shelf resonance of edge waves, and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17°S to 24°S. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model, and published interseismic coupling (ISC) distributions. As a result, we find four zones of high slip deficit, interpreted as major seismic asperities of the gap, which are used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with seismic magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen-Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, with the same magnitude range as the deterministic sources. All the scenarios are simulated with the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high resolution data on inundation depth, runup, coastal currents and sea level elevation.
    The probabilistic kinematic tsunamigenic scenarios yield more realistic slip patterns, similar in maximum slip to major past earthquakes. For all studied sites, the location of peak slip and shelf resonance are first-order controls on the observed coastal inundation depths.
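    The Karhunen-Loève step can be sketched as follows. This is an illustrative one-dimensional toy with an assumed exponential correlation kernel and assumed fault-length and correlation-length values, not the parameters of the study: random slip profiles are drawn by combining the leading eigenmodes of the covariance kernel with independent standard normal coefficients:

```python
import numpy as np

# Illustrative 1-D Karhunen-Loeve sketch with assumed parameters (fault
# length, correlation length), not those of the study.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 500.0, 128)                 # along-strike distance, km
corr_len = 80.0
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance kernel

vals, vecs = np.linalg.eigh(C)                   # eigen-decomposition of C
order = np.argsort(vals)[::-1]                   # sort modes by variance
vals, vecs = vals[order], vecs[:, order]

m = 20                                           # keep the m leading modes
z = rng.standard_normal(m)                       # independent N(0,1) coefficients
slip = vecs[:, :m] @ (np.sqrt(vals[:m]) * z)     # one zero-mean random profile
print(slip.shape)
```

    Drawing many such coefficient vectors z yields an ensemble of correlated random slip fields, the 1-D analogue of the 400 stochastic scenarios described above.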

  3. Compton Scattering Cross Sections in Strong Magnetic Fields: Advances for Neutron Star Applications

    NASA Astrophysics Data System (ADS)

    Eiles, Matthew; Gonthier, P. L.; Baring, M. G.; Wadiasingh, Z.

    2013-04-01

    Various telescopes including RXTE, INTEGRAL and Suzaku have detected non-thermal X-ray emission in the 10 - 200 keV band from strongly magnetic neutron stars. Inverse Compton scattering, a quantum-electrodynamical process, is believed to be a leading candidate for the production of this intense X-ray radiation. Magnetospheric conditions are such that electrons may well possess ultra-relativistic energies, which lead to attractive simplifications of the cross section. We have recently addressed such a case by developing compact analytic expressions using correct spin-dependent widths and Sokolov & Ternov (ST) basis states, focusing specifically on ground state-to-ground state scattering. However, inverse Compton scattering can cool electrons down to mildly-relativistic energies, necessitating the development of a more general case where the incoming photons acquire nonzero incident angles relative to the field in the rest frame of the electron, and the intermediate state can be excited to arbitrary Landau levels. In this paper, we develop results pertaining to this general case using ST formalism, and treating the plethora of harmonic resonances associated with various cyclotron transitions between Landau states. Four possible scattering modes (parallel-parallel, perpendicular-perpendicular, parallel-perpendicular, and perpendicular-parallel) encapsulate the polarization dependence of the cross section. We present preliminary analytic and numerical investigations of the magnitude of the extra Landau state contributions to obtain the full cross section, and compare these new analytic developments with the spin-averaged cross sections, which we develop in parallel. Results will find application to various neutron star problems, including computation of Eddington luminosities in the magnetospheres of magnetars. 
We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), and the NASA Astrophysics Theory and Fundamental Program.

  4. REVIEWS OF TOPICAL PROBLEMS: Superfluidity and the magnetic field of pulsars

    NASA Astrophysics Data System (ADS)

    Sedrakyan, D. M.; Shakhabasyan, K. M.

    1991-07-01

    The current state of the theory of superfluidity in pulsars is presented. The superfluidity of hadronic matter in neutron stars is considered. It is shown that strong interaction between the neutron and proton condensates leads to a drag current of superconducting protons and to the generation of a strong time-independent magnetic field (B ≈ 10^12 G) parallel to the axis of rotation. The strength of this field depends on the microscopic parameters of the superfluid hadrons. Models explaining the origin of glitches and postglitch relaxation are discussed. The coupling time between the neutron superfluid and the rigid crust of the neutron star is calculated.

  5. LANL Neutron-Induced Fission Cross Section Measurement Program

    NASA Astrophysics Data System (ADS)

    Laptev, A. B.; Tovesson, F.; Hill, T. S.

    2014-09-01

    A well-established program of neutron-induced fission cross section measurements at the Los Alamos Neutron Science Center (LANSCE) is supporting the Fuel Cycle Research and Development program (FC R&D). Combined measurements at two LANSCE facilities, the Lujan Center and the Weapons Neutron Research facility (WNR), cover neutron energies over 10 orders of magnitude, from sub-thermal up to 200 MeV. A parallel-plate fission ionization chamber was used as the fission fragment detector, with the 235U(n,f) standard as the reference. Fission cross sections have been measured for multiple actinides. The new data presented here complete the suite of long-lived uranium isotopes investigated with this experimental approach. The cross section data are presented in comparison with existing evaluations and previous measurements.

  6. Faster Heavy Ion Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximately 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  7. NEUTRON SHIELDING STRUCTURE

    DOEpatents

    Mattingly, J.T.

    1962-09-25

    A lightweight neutron shielding structure comprises a honeycomb core which is filled with a neutron absorbing powder. The honeycomb core is faced with parallel planar facing sheets to form a lightweight rigid unit. Suitable absorber powders are selected from among the following: B, B4C, B2O3, CaB6, Li2CO3, LiOH, LiBO2, Li2O. The facing sheets are constructed of a neutron moderating material, so that fast neutrons will be moderated while traversing the facing sheets, and ultimately be absorbed by the absorber powder in the honeycomb. Beryllium is a preferred moderator material for use in the facing sheets. The advantage of the structure is that it combines the rigidity and light weight of a honeycomb construction with the neutron absorption properties of boron and lithium. (AEC)

  8. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  9. Parallel computation safety analysis irradiation targets fission product molybdenum in neutronic aspect using the successive over-relaxation algorithm

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos

    2014-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on FPM (Fission Product Molybdenum) target irradiation. FPM targets form a tube made of stainless steel which contains nuclear-grade high-enrichment uranium. The FPM irradiation tube is intended to obtain fission products: fission products such as Mo-99 are widely used in the form of kits in the medical world. Mo-99 has a relatively long half-life of about 66 hours (nearly 3 days), so the delivery of radioisotopes to consumer centers and storage is possible, though still limited, and production of this isotope potentially gives significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system. Several parallel algorithms have been developed for the solution of large, sparse matrices. In this paper, a successive over-relaxation (SOR) algorithm was implemented for the calculation of reactivity coefficients, which can be done in parallel. Previous works performed the reactivity calculations serially with Gauss-Seidel iterations. The parallel method can be used to solve the multigroup diffusion equation system and calculate the criticality and reactivity coefficients. In this research a computer code was developed to exploit parallel processing to perform the reactivity calculations used in safety analysis. Parallel processing on a multicore computer system allows the calculation to be performed more quickly. The code was applied to the safety limits calculation of irradiated FPM targets containing highly enriched uranium.
    The neutronic calculation results show that for uranium contents of 1.7676 g and 6.1866 g (× 10^6 cm^-1) in a tube, the delta reactivities are still within safety limits; however, for 7.9542 g and 8.838 g (× 10^6 cm^-1) the limits were exceeded.
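    A minimal sketch of the successive over-relaxation iteration named above, applied to a small diagonally dominant test system rather than the multigroup diffusion matrices of the study:

```python
import numpy as np

# Minimal SOR sketch for a diagonally dominant system A x = b, the kind of
# iteration the abstract applies to the multigroup diffusion system.
def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep: updated values for j < i, old for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            # over-relax the Gauss-Seidel update by the factor omega
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# 1-D Poisson-like tridiagonal test problem
A = np.diag([4.0] * 5) + np.diag([-1.0] * 4, 1) + np.diag([-1.0] * 4, -1)
b = np.ones(5)
x = sor(A, b)
print(np.allclose(A @ x, b))
```

    With omega = 1 this reduces to the Gauss-Seidel iteration used in the earlier serial work; a well-chosen omega > 1 converges in fewer sweeps.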

  10. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
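    The ODE-versus-CME comparison can be made concrete with the simplest gene expression model, a birth-death process with constant production and first-order degradation (an illustrative toy, not the autoregulatory scheme of the paper). The deterministic fixed point is x* = k/g, and a Gillespie simulation of the master equation fluctuates around that value:

```python
import random

random.seed(0)

# Toy birth-death model: production at rate k, degradation at rate g*n.
# ODE: dx/dt = k - g*x, fixed point x* = k/g. The CME's stationary law is
# Poisson with the same mean, which the Gillespie run below estimates.
k, g = 20.0, 1.0
x_star = k / g

def gillespie_mean(t_end=2000.0, t_burn=50.0):
    t, n = 0.0, 0
    acc, tw = 0.0, 0.0
    while t < t_end:
        total = k + g * n                # total event rate in state n
        dt = random.expovariate(total)   # waiting time to the next event
        if t > t_burn:                   # time-weighted average past transient
            acc += n * dt
            tw += dt
        t += dt
        if random.random() < k / total:  # choose birth vs. death
            n += 1
        else:
            n -= 1
    return acc / tw

mean = gillespie_mean()
print(x_star, round(mean, 1))
```

    For this linear model the stochastic mean matches the ODE fixed point; the discrepancies discussed above arise once nonlinear reactions and large stoichiometric coefficients enter.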

  11. Calculation of the Effective Cross Sections of the Reaction X(n,p)Y with Neutrons in the Energy Range 2-5 MeV; CALCULO DE SECCIONES EFICACES X(n,p)Y CON NEUTRONES DE ENERGIAS ENTRE 2-5 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rapaport, J.; Trier, A.

    1960-05-01

    In parallel with experimental work to measure (n,p) cross sections for neutrons between 2 and 3.6 MeV, it was necessary to estimate the theoretical behavior of these cross sections. The statistical theory of Blatt and Weisskopf was used in the calculation. The theoretical results obtained for the square-well and diffuse-well developments are compared with the experimental results. (J.S.R.)

  12. In situ neutron diffraction study of twin reorientation and pseudoplastic strain in Ni-Mn-Ga single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoica, Alexandru Dan

    2011-01-01

    Twin variant reorientation in single-crystal Ni-Mn-Ga during quasi-static mechanical compression was studied using in situ neutron diffraction. The volume fractions of reoriented twin variants for different stress amplitudes were obtained from the changes in integrated intensities of high-order neutron diffraction peaks. It is shown that, during compressive loading, ~85% of the twins were reoriented parallel to the loading direction, resulting in a maximum pseudoplastic strain of ~5.5%, which is in agreement with the measured macroscopic strain.

  13. Development of deterministic transport methods for low energy neutrons for shielding in space

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry

    1993-01-01

    Transport of low energy neutrons associated with the galactic cosmic ray cascade is analyzed in this dissertation. A benchmark quality analytical algorithm is demonstrated for use with BRYNTRN, a computer program written by the High Energy Physics Division of NASA Langley Research Center, which is used to design and analyze shielding against the radiation created by the cascade. BRYNTRN uses numerical methods to solve the integral transport equations for baryons with the straight-ahead approximation, and numerical and empirical methods to generate the interaction probabilities. The straight-ahead approximation is adequate for charged particles, but not for neutrons. As NASA Langley improves BRYNTRN to include low energy neutrons, a benchmark quality solution is needed for comparison. The neutron transport algorithm demonstrated in this dissertation uses the closed-form Green's function solution to the galactic cosmic ray cascade transport equations to generate a source of neutrons. A basis function expansion for finite heterogeneous and semi-infinite homogeneous slabs with multiple energy groups and isotropic scattering is used to generate neutron fluxes resulting from the cascade. This method, called the FN method, is used to solve the neutral particle linear Boltzmann transport equation. As a demonstration of the algorithm coded in the programs MGSLAB and MGSEMI, neutron and ion fluxes are shown for a beam of fluorine ions at 1000 MeV per nucleon incident on semi-infinite and finite aluminum slabs. Also, to demonstrate that the shielding effectiveness against the radiation from the galactic cosmic ray cascade is not directly proportional to shield thickness, a graph of transmitted total neutron scalar flux versus slab thickness is shown. A simple model based on the nuclear liquid drop assumption is used to generate cross sections for the galactic cosmic ray cascade. 
The ENDF/B V database is used to generate the total and scattering cross sections for neutrons in aluminum. As an external verification, the results from MGSLAB and MGSEMI were compared to ANISN/PC, a routinely used neutron transport code, showing excellent agreement. In an application to an aluminum shield, the FN method seems to generate reasonable results.

  14. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
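The Jacobi update the abstract maps onto a data-flow graph is naturally parallel: every interior point is updated only from the previous sweep's values. A minimal sketch for the 2-D Laplace equation (grid size, boundary values, and tolerance are illustrative choices, not from the paper):

```python
import numpy as np

# Jacobi iteration for the 2-D Laplace equation on an n x n grid with
# one hot Dirichlet edge. Every interior update depends only on the
# *previous* sweep, so all updates are independent -- the property that
# makes the method easy to express as a data-flow graph.
def jacobi_laplace(n=20, top=1.0, tol=1e-6, max_iter=10_000):
    u = np.zeros((n, n))
    u[0, :] = top                     # fixed boundary on the top edge
    for _ in range(max_iter):
        new = u.copy()                # boundaries carried over unchanged
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        if np.abs(new - u).max() < tol:
            return new
        u = new
    return u

u = jacobi_laplace()
```

Chaotic relaxation drops the strict sweep barrier and lets individual point updates proceed asynchronously, which is the behavior the Token No-Labeling scheme is introduced to support.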

  15. First measurement of the neutron beta asymmetry with ultracold neutrons.

    PubMed

    Pattie, R W; Anaya, J; Back, H O; Boissevain, J G; Bowles, T J; Broussard, L J; Carr, R; Clark, D J; Currie, S; Du, S; Filippone, B W; Geltenbort, P; García, A; Hawari, A; Hickerson, K P; Hill, R; Hino, M; Hoedl, S A; Hogan, G E; Holley, A T; Ito, T M; Kawai, T; Kirch, K; Kitagaki, S; Lamoreaux, S K; Liu, C-Y; Liu, J; Makela, M; Mammei, R R; Martin, J W; Melconian, D; Meier, N; Mendenhall, M P; Morris, C L; Mortensen, R; Pichlmaier, A; Pitt, M L; Plaster, B; Ramsey, J C; Rios, R; Sabourov, K; Sallaska, A L; Saunders, A; Schmid, R; Seestrom, S; Servicky, C; Sjue, S K L; Smith, D; Sondheim, W E; Tatar, E; Teasdale, W; Terai, C; Tipton, B; Utsuro, M; Vogelaar, R B; Wehring, B W; Xu, Y P; Young, A R; Yuan, J

    2009-01-09

    We report the first measurement of an angular correlation parameter in neutron beta decay using polarized ultracold neutrons (UCN). We utilize UCN with energies below about 200 neV, which we guide and store for approximately 30 s in a Cu decay volume. The interaction of the neutron magnetic dipole moment with a static 7 T field external to the decay volume provides a 420 neV potential energy barrier to the spin state parallel to the field, polarizing the UCN before they pass through an adiabatic fast passage spin flipper and enter a decay volume, situated within a 1 T field in a 2 × 2π solenoidal spectrometer. We determine a value for the beta-asymmetry parameter A0 = -0.1138 ± 0.0046 ± 0.0021.
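As a toy illustration of how such an asymmetry is formed from spin-dependent count rates, here is the generic "super-ratio" construction used in two-detector beta-asymmetry experiments. This is a standard textbook sketch, not necessarily the exact analysis of this measurement:

```python
import math

# Generic super-ratio asymmetry from spin-up / spin-down rates in two
# opposing detectors. Detector efficiencies cancel in the super-ratio,
# which is why the construction is popular; the rates below are
# hypothetical inputs, not data from the experiment.
def super_ratio_asymmetry(r1_up, r1_dn, r2_up, r2_dn):
    s = (r1_up * r2_dn) / (r1_dn * r2_up)
    return (1.0 - math.sqrt(s)) / (1.0 + math.sqrt(s))
```

With perfectly symmetric rates the asymmetry vanishes; any spin-correlated rate difference shifts it away from zero.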

  16. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security--particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10^4) speedup and, for a detailed gamma spectrum tally, an average O(10^2) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10^4) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
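The weight-window game that such generated parameters drive inside a Monte Carlo code can be sketched as follows. The window bounds and the survival-weight convention here are illustrative assumptions, not ADVANTG output or MCNP's exact rules:

```python
import random

# Illustrative weight-window check: split particles whose weight is
# above the window, play Russian roulette on those below it, and pass
# in-window particles through unchanged. Returns the list of surviving
# particle weights.
def apply_weight_window(weight, w_low, w_high, rng=random.random):
    survival = 0.5 * (w_low + w_high)     # assumed survival weight
    if weight > w_high:
        n = int(weight / w_high) + 1      # split into n lighter copies
        return [weight / n] * n           # total weight is conserved
    if weight < w_low:
        if rng() < weight / survival:     # roulette: survive with
            return [survival]             # probability weight/survival
        return []                         # ...or be killed
    return [weight]
```

Splitting conserves total weight exactly; roulette conserves it only in expectation, which is what keeps the tally unbiased while removing low-importance histories.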

  17. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
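The k-eigenvalue iteration that MC++ performs stochastically can be illustrated deterministically on a tiny two-group infinite-medium model. All cross-section numbers below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Power iteration for the k-eigenvalue balance A*phi = (1/k) F*phi on a
# made-up two-group infinite medium: A holds removal and downscatter,
# F holds nu*Sigma_f with all fission neutrons born in group 1.
A = np.array([[0.15,  0.00],    # group 1: absorption + downscatter out
              [-0.05, 0.12]])   # group 2: fed by downscatter from group 1
F = np.array([[0.02, 0.25],     # nu*Sigma_f per group (illustrative)
              [0.00, 0.00]])

phi = np.ones(2)
k = 1.0
for _ in range(100):
    fission = F @ phi                       # fission source from current flux
    phi = np.linalg.solve(A, fission / k)   # the "transport solve" step
    k = k * (F @ phi).sum() / fission.sum() # rescale k by source growth
```

At convergence k is the dominant eigenvalue of the fission-to-flux operator; a Monte Carlo code estimates the same quantity by tracking fission-site populations between generations.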

  18. Quasielastic neutron scattering in biology: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vural, Derya; Univ. of Tennessee, Knoxville, TN; Hu, Xiaohu

    Neutrons scatter quasielastically from stochastic, diffusive processes, such as overdamped vibrations, localized diffusion and transitions between energy minima. In biological systems, such as proteins and membranes, these relaxation processes are of considerable physical interest. We review here recent methodological advances and applications of quasielastic neutron scattering (QENS) in biology, concentrating on the role of molecular dynamics simulation in generating data with which neutron profiles can be unambiguously interpreted. We examine the use of massively-parallel computers in calculating scattering functions, and the application of Markov state modeling. The decomposition of MD-derived neutron dynamic susceptibilities is described, and the use of this in combination with NMR spectroscopy. We discuss dynamics at very long times, including approximations to the infinite time mean-square displacement and nonequilibrium aspects of single-protein dynamics. Lastly, we examine how neutron scattering and MD can be combined to provide information on lipid nanodomains.
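The MD-to-QENS bridge the review describes rests on computing scattering functions from trajectories. A minimal sketch of the incoherent intermediate scattering function for a one-dimensional trajectory, averaged over time origins (synthetic input, not an MD engine):

```python
import cmath

# Incoherent intermediate scattering function for a 1-D trajectory:
# I(q, t) = < exp(i q [x(t0 + t) - x(t0)]) >, averaged over time
# origins t0. traj is a list of positions at equally spaced times;
# lag is the time offset in frames.
def isf(traj, q, lag):
    terms = [cmath.exp(1j * q * (traj[t0 + lag] - traj[t0]))
             for t0 in range(len(traj) - lag)]
    return sum(terms).real / len(terms)
```

For an immobile particle the function stays at 1; diffusive motion makes it decay with lag time, and its Fourier transform in time is the quasielastic spectrum a spectrometer measures.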

  19. Quasielastic neutron scattering in biology: Theory and applications

    DOE PAGES

    Vural, Derya; Univ. of Tennessee, Knoxville, TN; Hu, Xiaohu; ...

    2016-06-15

    Neutrons scatter quasielastically from stochastic, diffusive processes, such as overdamped vibrations, localized diffusion and transitions between energy minima. In biological systems, such as proteins and membranes, these relaxation processes are of considerable physical interest. We review here recent methodological advances and applications of quasielastic neutron scattering (QENS) in biology, concentrating on the role of molecular dynamics simulation in generating data with which neutron profiles can be unambiguously interpreted. We examine the use of massively-parallel computers in calculating scattering functions, and the application of Markov state modeling. The decomposition of MD-derived neutron dynamic susceptibilities is described, and the use of this in combination with NMR spectroscopy. We discuss dynamics at very long times, including approximations to the infinite time mean-square displacement and nonequilibrium aspects of single-protein dynamics. Lastly, we examine how neutron scattering and MD can be combined to provide information on lipid nanodomains.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.; Yu, G.; Wang, K.

    The physical design of new-concept reactors, which have complex structures, various materials, and wide-ranging neutron energy spectra, has greatly increased the demands on calculation methods and the corresponding computing hardware. Along with widely used parallel algorithms, heterogeneous platform architectures have been introduced into numerical computations in reactor physics. Because of its natural parallel characteristics, the CPU-FPGA architecture is often used to accelerate numerical computation. This paper studies the application and features of this kind of heterogeneous platform in the numerical calculation of reactor physics through practical examples. A neutron diffusion module designed for the CPU-FPGA architecture achieves an 11.2× speedup, demonstrating that it is feasible to apply this kind of heterogeneous platform to reactor physics. (authors)

  1. Ising versus XY anisotropy in frustrated R(2)Ti(2)O(7) compounds as "Seen" by Polarized Neutrons.

    PubMed

    Cao, H; Gukasov, A; Mirebeau, I; Bonville, P; Decorse, C; Dhalenne, G

    2009-07-31

    We studied the field-induced magnetic order in R₂Ti₂O₇ pyrochlore compounds with either uniaxial (R = Ho, Tb) or planar (R = Er, Yb) anisotropy, by polarized neutron diffraction. The determination of the local susceptibility tensor {χ∥, χ⊥} provides a universal description of the field-induced structures in the paramagnetic phase (2-270 K), whatever the field value (1-7 T) and direction. Comparison of the thermal variations of χ∥ and χ⊥ with calculations using the rare-earth crystal field shows that exchange and dipolar interactions must be taken into account. We determine the molecular field tensor in each case and show that it can be strongly anisotropic.

  2. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)
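A drastically reduced version of the count-number master equation already shows the machinery. Assume a constant detection rate with no fission chains or delayed-neutron precursors (both of which the paper's point model adds); the equation dPₙ/dt = r(Pₙ₋₁ − Pₙ) then has the Poisson distribution as its solution, which makes the sketch checkable:

```python
# Forward-Euler integration of the count-number master equation
# dP_n/dt = r * (P_{n-1} - P_n) for a detector seeing a constant rate r.
# This is a deliberately simplified stand-in for the paper's point-model
# equations, which add prompt fission chains and delayed precursors.
def count_distribution(r, t, n_max=50, dt=1e-3):
    p = [0.0] * (n_max + 1)
    p[0] = 1.0                       # zero counts at t = 0
    for _ in range(round(t / dt)):
        prev = p[:]
        for n in range(n_max + 1):
            inflow = prev[n - 1] if n > 0 else 0.0
            p[n] = prev[n] + dt * r * (inflow - prev[n])
    return p

p = count_distribution(r=2.0, t=1.0)   # should approach Poisson(2)
```

Given the count distribution at time t, the wait-time law for a prescribed level N follows as P(wait ≤ t) = 1 − Σₙ₌₀^{N−1} Pₙ(t), which is the quantity the paper extracts from the full model.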

  3. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
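The core-solver pattern, a preconditioned conjugate gradient iteration, can be sketched as follows. A simple Jacobi (diagonal) preconditioner stands in here for DANTE's diffusion-based one, and the test matrix is an arbitrary small SPD system:

```python
import numpy as np

# Preconditioned conjugate gradient for a symmetric positive-definite
# system A x = b, using the diagonal of A as the preconditioner.
def pcg(A, b, tol=1e-10, max_iter=500):
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner M^{-1}
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    z = Minv * r                     # preconditioned residual
    p = z.copy()                     # search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

x = pcg(np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0]))
```

A good preconditioner clusters the eigenvalues of M⁻¹A; for transport this is exactly where a diffusion operator pays off in scattering-dominated problems.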

  4. Development and application of a hybrid transport methodology for active interrogation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, K.; Walters, W.; Haghighat, A.

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of gamma source distribution from (n, γ) interactions; iii) determination of gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values and taking significantly less time than a reference MCNP5 calculation. (authors)
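Step ii), folding the group fluxes with (n, γ) cross sections into a gamma source, is a simple per-group sum. The group structure and all numbers below are illustrative, not Bugle-96 data:

```python
# Gamma source density from a multigroup neutron flux:
#   S_gamma = sum_g  phi_g * Sigma_{(n,gamma),g}
# Three made-up groups (thermal, epithermal, fast) for illustration.
flux = [3.0e8, 1.2e9, 4.5e9]          # group fluxes (n / cm^2 / s)
sigma_ngamma = [0.8, 0.15, 0.02]      # macroscopic (n,gamma) xs (1 / cm)

gamma_source = sum(phi * sig for phi, sig in zip(flux, sigma_ngamma))
# units: gammas emitted per cm^3 per second (one gamma per capture assumed)
```

In the paper's workflow this source is then folded with the pre-calculated adjoint function to yield the detector-window current without a new transport solve.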

  5. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  6. Proceedings of the Expert Systems Workshop Held in Pacific Grove, California on 16-18 April 1986

    DTIC Science & Technology

    1986-04-18

    (Scanned report documentation page; only abstract fragments survive.) Key design characteristics of ABE are distributed and parallel (Table 1-1); features unimplemented at present are scheduled for phase 2. A program for the DF framework consists of a number of independent processing modules, built on data structuring techniques and a semi-deterministic scheduler.

  7. Tests of peak flow scaling in simulated self-similar river networks

    USGS Publications Warehouse

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.

  8. Tables of model atmospheres of bursting neutron stars

    NASA Technical Reports Server (NTRS)

    Madej, Jerzy

    1991-01-01

    This paper presents tables of plane-parallel neutron star model atmospheres in radiative and hydrostatic equilibrium, with effective temperatures of 8 × 10^6, 1.257 × 10^7, 2 × 10^7, and 3 × 10^7 K, and surface gravities of 15.0 and less (cgs units). The equations of model atmospheres on which the tables are based fully account for nonisotropies of the radiation field and effects of noncoherent Compton scattering of thermal X-rays by free electrons. Both the effective temperatures and gravities listed above are measured on the neutron star surface.

  9. Final Stage in the Design of a Boron Neutron Capture Therapy facility at CEADEN, Cuba

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabal, F. Padilla; Martin, G.

    A neutron beam simulation study is carried out to determine the most suitable neutron energy for treatment of shallow and deep-seated brain tumors in the context of Boron Neutron Capture Therapy (BNCT). Two figures of merit, the therapeutic gain and the neutron fluence, are utilized as beam assessment parameters. An irradiation cavity is used instead of a parallel beam port for the therapy. Calculations are performed using the MCNP5 code. After the optimization of our beam shaper, a study of the dose distribution in the head, neck, thyroid, lungs, and upper and middle spine has been made. The therapeutic gain is increased while the current required for a one-hour treatment is decreased in comparison with existing neutron generator (NG) prototypes used for BNCT.

  10. Periodic magnetic field as a polarized and focusing thermal neutron spectrometer and monochromator.

    PubMed

    Cremer, J T; Williams, D L; Fuller, M J; Gary, C K; Piestrup, M A; Pantell, R H; Feinstein, J; Flocchini, R G; Boussoufi, M; Egbert, H P; Kloh, M D; Walker, R B

    2010-01-01

    A novel periodic magnetic field (PMF) optic is shown to act as a prism, lens, and polarizer for neutrons and particles with a magnetic dipole moment. The PMF has a two-dimensional field in the axial direction of neutron propagation. The PMF alternating magnetic field polarity provides strong gradients that cause separation of neutrons by wavelength axially and by spin state transversely. The spin-up neutrons exit the PMF with their magnetic spins aligned parallel to the PMF magnetic field, and are deflected upward and line focus at a fixed vertical height, proportional to the PMF period, at a downstream focal distance that increases with neutron energy. The PMF has no attenuation by absorption or scatter, as with material prisms or crystal monochromators. Embodiments of the PMF include neutron spectrometer or monochromator, and applications include neutron small angle scattering, crystallography, residual stress analysis, cross section measurements, and reflectometry. Presented are theory, experimental results, computer simulation, applications of the PMF, and comparison of its performance to Stern-Gerlach gradient devices and compound material and magnetic refractive prisms.

  11. Periodic magnetic field as a polarized and focusing thermal neutron spectrometer and monochromator

    PubMed Central

    Cremer, J. T.; Williams, D. L.; Fuller, M. J.; Gary, C. K.; Piestrup, M. A.; Pantell, R. H.; Feinstein, J.; Flocchini, R. G.; Boussoufi, M.; Egbert, H. P.; Kloh, M. D.; Walker, R. B.

    2010-01-01

    A novel periodic magnetic field (PMF) optic is shown to act as a prism, lens, and polarizer for neutrons and particles with a magnetic dipole moment. The PMF has a two-dimensional field in the axial direction of neutron propagation. The PMF alternating magnetic field polarity provides strong gradients that cause separation of neutrons by wavelength axially and by spin state transversely. The spin-up neutrons exit the PMF with their magnetic spins aligned parallel to the PMF magnetic field, and are deflected upward and line focus at a fixed vertical height, proportional to the PMF period, at a downstream focal distance that increases with neutron energy. The PMF has no attenuation by absorption or scatter, as with material prisms or crystal monochromators. Embodiments of the PMF include neutron spectrometer or monochromator, and applications include neutron small angle scattering, crystallography, residual stress analysis, cross section measurements, and reflectometry. Presented are theory, experimental results, computer simulation, applications of the PMF, and comparison of its performance to Stern–Gerlach gradient devices and compound material and magnetic refractive prisms. PMID:20113108

  12. Comparison of Transport Codes, HZETRN, HETC and FLUKA, Using 1977 GCR Solar Minimum Spectra

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Slaba, Tony C.; Tripathi, Ram K.; Blattnig, Steve R.; Norbury, John W.; Badavi, Francis F.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; et al.

    2009-01-01

    The HZETRN deterministic radiation transport code is one of several tools developed to analyze the effects of harmful galactic cosmic rays (GCR) and solar particle events (SPE) on mission planning, astronaut shielding and instrumentation. This paper is a comparison study involving the two Monte Carlo transport codes, HETC-HEDS and FLUKA, and the deterministic transport code, HZETRN. Each code is used to transport ions from the 1977 solar minimum GCR spectrum impinging upon a 20 g/cm2 Aluminum slab followed by a 30 g/cm2 water slab. This research is part of a systematic effort of verification and validation to quantify the accuracy of HZETRN and determine areas where it can be improved. Comparisons of dose and dose equivalent values at various depths in the water slab are presented in this report. This is followed by a comparison of the proton fluxes, and the forward, backward and total neutron fluxes at various depths in the water slab. Comparisons of the secondary light ion 2H, 3H, 3He and 4He fluxes are also examined.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S{sub n}) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S{sub n} codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increase the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S{sub n} solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters.
For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A{sup 3}MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S{sub n} adjoint function. A{sup 3}MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S{sub n} calculations using TORT.

  14. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and simultaneity to simplify the complexity of the computation.
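For scale, a conventional brute-force solution of the UAP enumerates the injective job assignments that the DNA strands explore in parallel. The cost matrix below is illustrative data, not from the paper:

```python
from itertools import permutations

# Brute-force unbalanced assignment: each of the m individuals takes a
# distinct one of the n jobs (m < n), minimizing total cost. The
# enumeration over n!/(n-m)! injective assignments is the search space
# the molecular algorithm encodes simultaneously in DNA strands.
def uap_min_cost(cost):
    # cost[i][j]: cost of individual i performing job j
    m, n = len(cost), len(cost[0])
    return min(sum(cost[i][jobs[i]] for i in range(m))
               for jobs in permutations(range(n), m))
```

The exponential growth of this enumeration with m is exactly the motivation for seeking massively parallel (here, molecular) evaluation.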

  15. SU-E-J-115: Graticule for Verification of Treatment Position in Neutron Therapy.

    PubMed

    Halford, R; Snyder, M

    2012-06-01

    Until recently the treatment verification for patients undergoing fast neutron therapy at our facility was accomplished through a combination of neutron beam portal films aligned with a graticule mounted on an orthogonal x-ray tube. To eliminate uncertainty with respect to the relative positions of the x-ray graticule and the therapy beam, we have developed a graticule which is placed in the neutron beam itself. For a graticule to be visible on the portal film, the attenuation of the neutron beam by the graticule landmarks must be significantly greater than that of the material in which the landmarks are mounted. Various materials, thicknesses, and mounting points were tried to gain the largest contrast between the graticule landmarks and the mounting material. The final design involved 2 inch steel pins of 0.125 inch diameter captured between two parallel plates of 0.25 inch thick clear acrylic plastic. The distance between the two acrylic plates was 1.625 inches, held together at the perimeter with acrylic sidewall spacers. This allowed the majority of the length of the steel pins to be surrounded by air. The pins were set 1 cm apart and mounted at angles parallel to the divergence of the beam dependent on their position within the array. The entire steel pin and acrylic plate assembly was mounted on an acrylic accessory tray to allow for graticule alignment. Despite the inherent difficulties in attenuating fast neutrons, our simple graticule design produces the required difference of attenuation between the arrays of landmarks and the mounting material. The graticule successfully provides an in-beam frame of reference for patient portal verification. © 2012 American Association of Physicists in Medicine.

  16. NEUTRONIC REACTORS

    DOEpatents

    Anderson, J.B.

    1960-01-01

    A reactor is described which comprises a tank, a plurality of coaxial steel sleeves in the tank, a mass of water in the tank, and wire grids in abutting relationship within a plurality of elongated parallel channels within the steel sleeves, the wire being provided with a plurality of bends in the same plane forming adjacent parallel sections between bends, and the sections of adjacent grids being normally disposed relative to each other.

  17. Nanoconduits and nanoreplicants

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2007-06-12

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus includes a substrate and a nanoconduit material coupled to a surface of the substrate, where the substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate. An apparatus includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  18. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.

    PubMed

    Zelyak, O; Fallone, B G; St-Aubin, J

    2017-12-14

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths, indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate, showing only a weak dependence on the magnetic field.
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
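
    For context, the spectral radius that this analysis generalizes has a classical closed form without magnetic fields. For one-group source iteration in an infinite homogeneous medium with isotropic scattering (a textbook Fourier-analysis result, not specific to this work), the iteration eigenvalue for an error mode e^{iλx} and the resulting spectral radius are:

```latex
% Classical SI Fourier analysis, one group, infinite homogeneous medium,
% total cross section \sigma_t, scattering cross section \sigma_s:
\omega(\lambda) = \frac{c\,\sigma_t}{\lambda}\,
                  \arctan\!\left(\frac{\lambda}{\sigma_t}\right),
\qquad c = \frac{\sigma_s}{\sigma_t},
\qquad
\rho_{\mathrm{SI}} = \sup_{\lambda}\,\lvert\omega(\lambda)\rvert = c .
```

    The supremum is attained as λ → 0, so SI converges arbitrarily slowly as the scattering ratio c approaches 1; the finding above that the spectral radius approaches unity in low-density media with strong magnetic fields is the analogous degradation when the field term enters the iteration source.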

  19. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-01-01

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths, indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate, showing only a weak dependence on the magnetic field.
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  20. Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".

    PubMed

    Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel

    2018-03-12

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic fields, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths, indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate, showing only a weak dependence on the magnetic field.
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.

  1. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    PubMed

    Komarov, Ivan; D'Souza, Roshan M

    2012-01-01

    The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of the GSSA remain prohibitively expensive to run, particularly for parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
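
    The serial baseline being parallelized here is Gillespie's direct method; a minimal single-trajectory sketch for orientation (the toy reversible isomerization model, rate constants, and seed are illustrative, not from the paper):

```python
import math
import random

def gillespie_direct(x, reactions, t_end, rng=random.Random(42)):
    """Gillespie direct method: one exact stochastic trajectory.

    x         -- dict of species counts (mutated in place)
    reactions -- list of (propensity_fn, state_change) pairs
    t_end     -- stop time
    """
    t = 0.0
    history = [(t, dict(x))]
    while t < t_end:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:                      # no reaction can fire
            break
        # Time to next reaction: exponential with rate a0
        t += -math.log(rng.random()) / a0
        # Pick which reaction fires, with probability proportional to propensity
        r, acc = rng.random() * a0, 0.0
        for (_, change), a in zip(reactions, props):
            acc += a
            if r < acc:
                for species, delta in change.items():
                    x[species] += delta
                break
        history.append((t, dict(x)))
    return history

# Toy model: A -> B at rate 1.0*A, B -> A at rate 0.5*B
reactions = [
    (lambda s: 1.0 * s["A"], {"A": -1, "B": +1}),
    (lambda s: 0.5 * s["B"], {"A": +1, "B": -1}),
]
traj = gillespie_direct({"A": 100, "B": 0}, reactions, t_end=10.0)
```

    Each loop iteration depends on the previous state, which is why a single realization is hard to parallelize and why the paper's fine-grained warp-level scheme is notable.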

  2. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximately linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.

  3. Capillary optics for radiation focusing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peurrung, A.J.; Reeder, P.L.; Bliss, M.

    Capillary lens technology may ultimately bring benefits to neutron- and x-ray-based science comparable to those that conventional lenses brought to visible-light science. Although the technology is not yet 10 years old, these lenses have already had a significant impact in engineering, science, and medicine. Capillary lenses are advantageous when it is desirable to increase the radiation flux at a location without regard to its angular divergence. PNNL has worked to improve the technology in several ways. A single, optimally tapered capillary was manufactured, which allows intensity gains of a factor of 270 for an initially parallel, incident x-ray beam. The feasibility of constructing neutron lenses using 58Ni (particularly effective at reflecting neutrons) has been explored. Three applications for capillary optics have been identified and studied: neutron telescope, Gandolfi x-ray diffractometry, and neutron radiotherapy. A brief guide is given for determining which potential applications are likely to be helped by capillary optics.

  4. 242Pu absolute neutron-capture cross section measurement

    NASA Astrophysics Data System (ADS)

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; Bucher, B.; Chyzh, A.; Bredeweg, T. A.; Baramsai, B.; Couture, A.; Jandel, M.; Mosby, S.; O'Donnell, J. M.; Ullmann, J. L.

    2017-09-01

    The absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. During target fabrication, a small amount of 239Pu was added to the active target so that the absolute scale of the 242Pu(n,γ) cross section could be set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. The relative scale of the 242Pu(n,γ) cross section covers four orders of magnitude for incident neutron energies from thermal to ≈ 40 keV. The cross section reported in ENDF/B-VII.1 for the 242Pu(n,γ) En,R = 2.68 eV resonance was found to be 2.4% lower than the new absolute 242Pu(n,γ) cross section.

  5. Simple Interpretation of Proton-Neutron Interactions in Rare Earth Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oktem, Y.; Cakirli, R. B.; Wright Nuclear Structure Laboratory, Yale University, New Haven, CT 06520

    2007-04-23

    Empirical values of the average interactions of the last two protons and last two neutrons, δVpn, which can be obtained from double differences of binding energies, provide significant information about nuclear structure. Studies of δVpn showed striking behavior across major shell gaps and the relation of proton-neutron (p-n) interaction strengths to the increasing collectivity and onset of deformation in nuclei. Here we focus on the strong regularity of the δVpn values in the A ≈ 150-180 mass region. Experimentally, for each nucleus, the valence p-n interaction strengths increase systematically with neutron number and decrease at the last observed neutron number. These experimental results give nearly perfect parallel trajectories. A microscopic interpretation with a zero-range δ-interaction in a Nilsson basis gives reasonable agreement for Er-W, but more significant discrepancies appear for Gd and Dy.

  6. Neutron-induced fission cross section measurements for uranium isotopes 236U and 234U at LANSCE

    NASA Astrophysics Data System (ADS)

    Laptev, A. B.; Tovesson, F.; Hill, T. S.

    2013-04-01

    A well-established program of neutron-induced fission cross section measurements at the Los Alamos Neutron Science Center (LANSCE) is supporting the Fuel Cycle Research and Development (FC R&D) program. The incident neutron energy range spans from sub-thermal up to 200 MeV by combining two LANSCE facilities, the Lujan Center and the Weapons Neutron Research facility (WNR). The time-of-flight method is used to measure the incident neutron energy. A parallel-plate fission ionization chamber was used as a fission fragment detector. The event rate ratio between the investigated foil and a standard 235U foil is converted into a fission cross section ratio. In addition to previously measured data, new measurements include 236U data, which are being analyzed, and 234U data acquired in the 2011-2012 LANSCE run cycle. The new data complete the full suite of uranium isotopes investigated with this experimental approach. The data obtained are presented in comparison with existing evaluations and previous data.
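
    The time-of-flight energy determination mentioned above is a one-line kinematics relation, E = ½ m (L/t)²; a sketch (non-relativistic, adequate at low energies; the 10 m flight path is an illustrative number, not the actual Lujan/WNR geometry):

```python
# Non-relativistic neutron time-of-flight: E = (1/2) * m * (L/t)**2
M_N_KG = 1.674927e-27      # neutron rest mass, kg
J_PER_EV = 1.602177e-19    # joules per electron-volt

def tof_energy_ev(flight_path_m, time_s):
    """Neutron kinetic energy (eV) from flight path and time of flight."""
    v = flight_path_m / time_s
    return 0.5 * M_N_KG * v * v / J_PER_EV

# Illustrative 10 m flight path: a thermal neutron (v ~ 2200 m/s)
# arrives after ~4.5 ms and reconstructs to ~0.0253 eV.
e_thermal = tof_energy_ev(10.0, 10.0 / 2200.0)
```

    At the 200 MeV end of the quoted range the relativistic expression would be required; the classical formula here is only a low-energy sketch.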

  7. Neutrons on a surface of liquid helium

    NASA Astrophysics Data System (ADS)

    Grigoriev, P. D.; Zimmer, O.; Grigoriev, A. D.; Ziman, T.

    2016-08-01

    We investigate the possibility of ultracold neutron (UCN) storage in quantum states defined by the combined potentials of the Earth's gravity and the neutron optical repulsion by a horizontal surface of liquid helium. We analyze the stability of the lowest quantum state, which is most susceptible to perturbations due to surface excitations, against scattering by helium atoms in the vapor and by excitations of the liquid, comprised of ripplons, phonons, and surfons. This is an unusual scattering problem since the kinetic energy of the neutron parallel to the surface may be much greater than the binding energies perpendicular. The total scattering time of these UCNs at 0.7 K is found to exceed 1 h, and rapidly increases with decreasing temperature. Such low scattering rates should enable high-precision measurements of the sequence of discrete energy levels, thus providing improved tests of short-range gravity. The system might also be useful for neutron β-decay experiments. We also sketch new experimental propositions for level population and trapping of ultracold neutrons above a flat horizontal mirror.

  8. Real-time multiplicity counter

    DOEpatents

    Rowland, Mark S [Alamo, CA; Alvarez, Raymond A [Berkeley, CA

    2010-07-13

    A neutron multi-detector array feeds pulses in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel. The word is read at regular intervals, all bits simultaneously, to minimize latency. The electronics then pass the word to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup.
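
    The word-at-a-time readout can be mimicked in software to show the idea: each detector drives one bit of a word, the word is sampled at regular intervals with all bits read simultaneously, and set bits are fanned out to per-channel counters (a conceptual sketch; the patented device does this in hardware logic, not software):

```python
def accumulate_counts(word_samples, n_channels):
    """Tally per-detector pulses from a stream of sampled digital words.

    Each sample is an integer whose bit i is 1 if detector i pulsed
    during that interval.  Because all bits arrive in one word, no
    channel waits on another -- the front-end pulse pile-up problem
    is removed, as the patent describes.
    """
    counts = [0] * n_channels
    multiplicity = []          # pulses per interval, for coincidence analysis
    for word in word_samples:
        fired = [i for i in range(n_channels) if (word >> i) & 1]
        for i in fired:
            counts[i] += 1
        multiplicity.append(len(fired))
    return counts, multiplicity

# Example: 4 detectors, 3 sampled words
counts, mult = accumulate_counts([0b0101, 0b0011, 0b1000], 4)
```

    The per-interval multiplicity list is what a neutron multiplicity counter ultimately histograms.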

  9. Development of a Nanomaterial Anode for a Low-Voltage Proportional Counter for Neutron Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craps, Matthew Greg

    NanoTechLabs (NTL), in collaboration with the Savannah River National Laboratory (SRNL) and Clemson University, has continued development of a next-generation proportional counter (PC) for neutron detection utilizing robust, inexpensive nanostructured anodes while maximizing neutron capture. Neutron detectors are vital to national security as they can be used to detect illicit trafficking of radioactive materials, which could indicate the presence or planning of a dirty-bomb attack. Typical PCs operate with high bias potentials that create electronic noise. PCs incorporating nanomaterials into the anode can theoretically operate at low voltages (e.g., 10-300 V) due to an increase in the electric field associated with a smaller-diameter, nanoscale anode. In addition to enabling the lower operating voltage, typical high PC voltages (500-1200 V) could be used to generate a larger electric field, resulting in more electrons being collected and thus increasing the sensitivity of the PC. Other advantages of the nano-PC include reduced platform size, weight, and cost, and improved ruggedness. Clemson modeled the electric field around the CNT array tips. NTL grew many ordered CNT arrays as well as control samples and densified the arrays to improve the performance. The primary objective of this work is to provide evidence of a commercially viable technique for reducing the voltage of a parallel-plate proportional counter using nano-sized anodes. The parallel-plate geometry has advantages over the typical cylindrical design based on more feasible placement of solid neutron absorbers and more geometrically practical windows for radiation capture and directional detection.

  10. Simultaneous measurement of (n,γ) and (n,fission) cross sections with the DANCE 4π BaF2 array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bredeweg, T. A.; Fowler, M. M.; Bond, E. M.

    2006-03-13

    Neutron capture cross section measurements on many of the actinides are complicated by low-energy neutron-induced fission, which competes with neutron capture to varying degrees depending on the nuclide of interest. Measurements of neutron capture on 235U using the Detector for Advanced Neutron Capture Experiments (DANCE) have shown that we can partially resolve capture from fission events based on total photon calorimetry (i.e. total γ-ray energy and γ-ray multiplicity per event). The addition of a fission-tagging detector to the DANCE array will greatly improve our ability to separate these two competing processes so that improved neutron capture and (n,γ)/(n,fission) cross section ratio measurements can be obtained. The addition of a fission-tagging detector to the DANCE array will also provide a means to study several important issues associated with neutron-induced fission, including (n,fission) cross sections as a function of incident neutron energy, and total energy and multiplicity of prompt fission photons. We have focused on two detector designs with complementary capabilities, a parallel-plate avalanche counter and an array of solar cells.

  11. Analysis of dpa rates in the HFIR reactor vessel using a hybrid Monte Carlo/deterministic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward

    2016-01-01

    The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 below to approximately 12 above the axial extent of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.

  12. Modeling of central reactivity worth measurements in Lady Godiva

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenz, T.R.; Busch, R.D.

    The central reactivity worth measurements performed in Lady Godiva were duplicated using TWODANT, a deterministic neutron transport code, and the 16-group Hansen-Roach cross-section library. The purpose of this work was to determine how well the Hansen-Roach library predicts the reactivity worths for a fast neutron system. Lady Godiva is a spherical uranium metal (93.7 wt% 235U) critical assembly with a neutron flux distribution dominant in the first five groups of the Hansen-Roach energy structure (0.1 MeV and up). Provided that the cross sections of the replacement material do not undergo large variations (less than an order of magnitude) in any of the aforementioned groups, the calculated reactivities were within 10% of the experimental values. For cases where the reactivities were outside this range, a large variation in the cross section was found to exist in one of the groups, which was not fully accounted for in the Hansen-Roach group structure. However, even in the cases where the agreement between calculation and experiment was not good, the calculated reactivity appeared to be an extremum in that the effect was found to be either more negative or more positive than the experimental value.

  13. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost (or maximum profit) is obtained. It is a vitally important NP-complete (Non-deterministic Polynomial) problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation. PMID:26512650
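
    For comparison with the DNA approach, the conventional statement of the UAP is a search over job-to-individual maps that cover every individual; a brute-force sketch for tiny instances (the cost matrix is illustrative), whose m**n blow-up is exactly what the O(mn) massively parallel DNA algorithm sidesteps:

```python
from itertools import product

def solve_uap_bruteforce(cost):
    """Minimum-cost unbalanced assignment: n jobs to m individuals (m < n).

    cost[j][i] = cost of giving job j to individual i.  Every job is
    assigned and every individual receives at least one job.
    Exhaustive search over m**n candidate maps -- exponential time.
    """
    n, m = len(cost), len(cost[0])
    best_cost, best_assign = float("inf"), None
    for assign in product(range(m), repeat=n):   # individual chosen per job
        if len(set(assign)) < m:                 # some individual got no job
            continue
        total = sum(cost[j][assign[j]] for j in range(n))
        if total < best_cost:
            best_cost, best_assign = total, assign
    return best_cost, best_assign

# 4 jobs, 2 individuals
cost = [[4, 2],
        [1, 5],
        [3, 3],
        [2, 6]]
best_cost, best_assign = solve_uap_bruteforce(cost)
```

    Polynomial-time electronic alternatives exist (e.g. Hungarian-style algorithms on a padded cost matrix); the brute force above is only to make the search space the abstract refers to concrete.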

  14. Parallel Photonic Quantum Computation Assisted by Quantum Dots in One-Side Optical Microcavities

    PubMed Central

    Luo, Ming-Xing; Wang, Xiaojun

    2014-01-01

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm. PMID:25030424

  15. Parallel photonic quantum computation assisted by quantum dots in one-side optical microcavities.

    PubMed

    Luo, Ming-Xing; Wang, Xiaojun

    2014-07-17

    Universal quantum logic gates are important elements for a quantum computer. In contrast to previous constructions on one degree of freedom (DOF) of quantum systems, we investigate the possibility of parallel quantum computations dependent on two DOFs of photon systems. We construct deterministic hyper-controlled-not (hyper-CNOT) gates operating on the spatial-mode and the polarization DOFs of two-photon or one-photon systems by exploring the giant optical circular birefringence induced by quantum-dot spins in one-sided optical microcavities. These hyper-CNOT gates show that the quantum states of two DOFs can be viewed as independent qubits without requiring auxiliary DOFs in theory. This result can reduce the quantum resources by half for quantum applications with large qubit systems, such as the quantum Shor algorithm.

  16. ACHIEVING THE REQUIRED COOLANT FLOW DISTRIBUTION FOR THE ACCELERATOR PRODUCTION OF TRITIUM (APT) TUNGSTEN NEUTRON SOURCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. SIEBE; K. PASAMEHMETOGLU

    The Accelerator Production of Tritium neutron source consists of clad tungsten targets, which are concentric cylinders with a center rod. These targets are arranged in a matrix of tubes, producing a large number of parallel coolant paths. The coolant flow required to meet thermal-hydraulic design criteria varies with location. This paper describes the work performed to ensure an adequate coolant flow for each target for normal operation and residual heat-removal conditions.

  17. Parallel Study of HEND, RAD, and DAN Instrument Response to Martian Radiation and Surface Conditions

    NASA Technical Reports Server (NTRS)

    Martiniez Sierra, Luz Maria; Jun, Insoo; Litvak, Maxim; Sanin, Anton; Mitrofanov, Igor; Zeitlin, Cary

    2015-01-01

    Nuclear detection methods are being used to understand the radiation environment at Mars. JPL (Jet Propulsion Laboratory) assets on Mars include the 2001 Mars Odyssey orbiter [High Energy Neutron Detector (HEND)] and the Mars Science Laboratory rover Curiosity [Radiation Assessment Detector (RAD); Dynamic Albedo of Neutrons (DAN)]. The spacecraft have instruments able to detect ionizing and non-ionizing radiation. Instrument response on orbit and on the surface of Mars to space weather and local conditions [is discussed]. Data are available at NASA-PDS (Planetary Data System).

  18. FAST NEUTRON SPECTROMETER

    DOEpatents

    Davis, F.J.; Hurst, G.S.; Reinhardt, P.W.

    1959-08-18

    An improved proton recoil spectrometer for determining the energy spectrum of a fast neutron beam is described. Instead of discriminating against and thereby "throwing away" the many recoil protons other than those traveling parallel to the neutron beam axis as do conventional spectrometers, this device utilizes protons scattered over a very wide solid angle. An ovoidal gas-filled recoil chamber is coated on the inside with a scintillator. The ovoidal shape of the sensitive portion of the wall defining the chamber conforms to the envelope of the range of the proton recoils from the radiator disposed within the chamber. A photomultiplier monitors the output of the scintillator, and a counter counts the pulses caused by protons of energy just sufficient to reach the scintillator.

  19. Pathways to agility in the production of neutron generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, R.E.; Beavis, L.C.; Cutchen, J.T.

    1994-02-01

    This report is the result of a study team commissioned to explore pathways for increased agility in the manufacture of neutron generators. As a part of Sandia's new responsibility for generator production, the goal of the study was to identify opportunities to reduce costs and increase flexibility in the manufacturing operation. Four parallel approaches (or pathways) were recommended: (1) Know the goal, (2) Use design leverage effectively, (3) Value simplicity, and (4) Configure for flexibility. Agility in neutron generator production can be enhanced if all of these pathways are followed. The key role of the workforce in achieving agility was also noted, with emphasis on ownership, continuous learning, and a supportive environment.

  20. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and consumes ~95 mW on a Virtex5 XC5VLX50T. The overall acceleration gain is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
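
    The kernel operation such an IP accelerates is the MB-LBP feature itself: average each block of a 3×3 block grid, compare the eight neighbor-block means with the center mean, and pack the comparisons into one byte. A software sketch of the standard MB-LBP feature (an illustration of the algorithm, not the paper's hardware design):

```python
def mb_lbp(image, x, y, bw, bh):
    """Multi-block LBP code for a 3x3 grid of bw-by-bh blocks at (x, y).

    image is a 2D list of gray levels.  Each of the 8 neighbor-block
    means is compared with the center-block mean; the 8 boolean
    results form one byte (0..255) used to index classifier tables.
    """
    def block_mean(bx, by):
        total = sum(image[by + r][bx + c] for r in range(bh) for c in range(bw))
        return total / (bw * bh)

    center = block_mean(x + bw, y + bh)
    # Neighbor block origins, clockwise from top-left
    offsets = [(0, 0), (bw, 0), (2 * bw, 0), (2 * bw, bh),
               (2 * bw, 2 * bh), (bw, 2 * bh), (0, 2 * bh), (0, bh)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(x + dx, y + dy) >= center:
            code |= 1 << bit
    return code

# 3x3 image of 1x1 blocks: plain LBP as the degenerate case
img = [[9, 1, 1],
       [1, 5, 1],
       [1, 1, 9]]
code = mb_lbp(img, 0, 0, 1, 1)
```

    In hardware, the block means come from an integral image so each feature costs a fixed number of memory reads, which is what makes the evaluation time-deterministic.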

  1. Faster PET reconstruction with a stochastic primal-dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane

    2017-08-01

    Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the usage of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to those of PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
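
    The deterministic PDHG iteration that SPDHG randomizes alternates a dual proximal step, a primal proximal step, and an extrapolation. A minimal sketch on a toy non-negative least-squares problem (illustrative only; PET reconstruction uses the Poisson log-likelihood, not this quadratic, and SPDHG would update only a random subset of dual variables per iteration):

```python
def pdhg_nnls(K, b, tau, sigma, iters):
    """Primal-dual hybrid gradient (Chambolle-Pock) for
    min_{x >= 0} 0.5 * ||K x - b||^2.

    F(y) = 0.5*||y - b||^2   =>  prox_{sigma F*}(v) = (v - sigma*b)/(1 + sigma)
    G(x) = indicator(x >= 0) =>  prox_{tau G}(v)    = max(v, 0)
    Requires tau * sigma * ||K||^2 <= 1 for convergence.
    """
    n, m = len(K[0]), len(K)
    x, xbar, y = [0.0] * n, [0.0] * n, [0.0] * m

    def matvec(A, v):
        return [sum(a * vi for a, vi in zip(row, v)) for row in A]

    KT = [list(col) for col in zip(*K)]          # transpose of K
    for _ in range(iters):
        # Dual ascent step, then proximal map of F*
        y = [(yi + sigma * kx - sigma * bi) / (1.0 + sigma)
             for yi, kx, bi in zip(y, matvec(K, xbar), b)]
        # Primal descent step, projected onto x >= 0
        x_new = [max(xi - tau * g, 0.0) for xi, g in zip(x, matvec(KT, y))]
        # Extrapolation (over-relaxation) of the primal variable
        xbar = [2.0 * xn - xi for xn, xi in zip(x_new, x)]
        x = x_new
    return x

# Toy problem: minimizer of 0.5*||Kx - b||^2 over x >= 0 is (1, 0)
K = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, -1.0]
x = pdhg_nnls(K, b, tau=0.4, sigma=0.4, iters=500)
```

    The step sizes satisfy tau*sigma*||K||^2 = 0.64 <= 1 here; SPDHG replaces the full dual update with a sampled one so each iteration touches only a fraction of the (in PET, enormous) dual vector.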

  2. Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal.

    PubMed

    Turajlic, Samra; Xu, Hang; Litchfield, Kevin; Rowan, Andrew; Horswell, Stuart; Chambers, Tim; O'Brien, Tim; Lopez, Jose I; Watkins, Thomas B K; Nicol, David; Stares, Mark; Challacombe, Ben; Hazell, Steve; Chandra, Ashish; Mitchell, Thomas J; Au, Lewis; Eichler-Jonsson, Claudia; Jabbar, Faiz; Soultati, Aspasia; Chowdhury, Simon; Rudman, Sarah; Lynch, Joanna; Fernando, Archana; Stamp, Gordon; Nye, Emma; Stewart, Aengus; Xing, Wei; Smith, Jonathan C; Escudero, Mickael; Huffman, Adam; Matthews, Nik; Elgar, Greg; Phillimore, Ben; Costa, Marta; Begum, Sharmin; Ward, Sophia; Salm, Max; Boeing, Stefan; Fisher, Rosalie; Spain, Lavinia; Navas, Carolina; Grönroos, Eva; Hobor, Sebastijan; Sharma, Sarkhara; Aurangzeb, Ismaeel; Lall, Sharanpreet; Polson, Alexander; Varia, Mary; Horsfield, Catherine; Fotiadis, Nicos; Pickering, Lisa; Schwarz, Roland F; Silva, Bruno; Herrero, Javier; Luscombe, Nick M; Jamal-Hanjani, Mariam; Rosenthal, Rachel; Birkbak, Nicolai J; Wilson, Gareth A; Pipek, Orsolya; Ribli, Dezso; Krzystanek, Marcin; Csabai, Istvan; Szallasi, Zoltan; Gore, Martin; McGranahan, Nicholas; Van Loo, Peter; Campbell, Peter; Larkin, James; Swanton, Charles

    2018-04-19

    The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors characterized by early fixation of multiple mutational and copy number drivers and rapid metastases to highly branched tumors with >10 subclonal drivers and extensive parallel evolution associated with attenuated progression. We identify genetic diversity and chromosomal complexity as determinants of patient outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance. Copyright © 2018 Francis Crick Institute. Published by Elsevier Inc. All rights reserved.

  3. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
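    As a toy illustration of the deterministic-versus-Monte-Carlo contrast the course draws, the sketch below compares the analytic infinite-medium multiplication factor k-infinity = nu * Sigma_f / Sigma_a against a simple analog Monte Carlo tally of absorbed neutrons; the one-group cross sections and nu are invented for illustration and are not MCNP nuclear data.

```python
import random

# Illustrative one-group cross sections (cm^-1) for an infinite
# homogeneous medium, and neutrons per fission.
sigma_f, sigma_c, nu = 0.08, 0.06, 2.43
sigma_a = sigma_f + sigma_c

# Deterministic value: k_inf = nu * Sigma_f / Sigma_a.
k_deterministic = nu * sigma_f / sigma_a

# Analog Monte Carlo: each absorbed neutron causes fission with
# probability Sigma_f / Sigma_a, producing nu new neutrons on average.
random.seed(1)
histories = 200_000
fissions = sum(random.random() < sigma_f / sigma_a for _ in range(histories))
k_monte_carlo = nu * fissions / histories
```

The Monte Carlo estimate carries statistical uncertainty that shrinks as 1/sqrt(histories), one of the pitfalls the lectures discuss.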

  4. Dependence of the prompt fission γ-ray spectrum on the entrance channel of compound nucleus: Spontaneous vs. neutron-induced fission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chyzh, A.; Jaffke, P.; Wu, C. Y.

    Prompt γ-ray spectra were measured for the spontaneous fission of 240,242Pu and the neutron-induced fission of 239,241Pu with incident neutron energies ranging from thermal to about 100 keV. Measurements were made using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments using a parallel-plate avalanche counter. The unfolded prompt fission γ-ray energy spectra can be reproduced reasonably well by a Monte Carlo Hauser–Feshbach statistical model for the neutron-induced fission channel but not for the spontaneous fission channel. However, this entrance-channel dependence of the prompt fission γ-ray emission can be described qualitatively by the model due to the very different fission-fragment mass distributions and a lower average fragment spin for spontaneous fission. The description of measurements and the discussion of results under the framework of a Monte Carlo Hauser–Feshbach statistical approach are presented.

  5. Dependence of the prompt fission γ-ray spectrum on the entrance channel of compound nucleus: Spontaneous vs. neutron-induced fission

    DOE PAGES

    Chyzh, A.; Jaffke, P.; Wu, C. Y.; ...

    2018-06-07

    Prompt γ-ray spectra were measured for the spontaneous fission of 240,242Pu and the neutron-induced fission of 239,241Pu with incident neutron energies ranging from thermal to about 100 keV. Measurements were made using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments using a parallel-plate avalanche counter. The unfolded prompt fission γ-ray energy spectra can be reproduced reasonably well by a Monte Carlo Hauser–Feshbach statistical model for the neutron-induced fission channel but not for the spontaneous fission channel. However, this entrance-channel dependence of the prompt fission γ-ray emission can be described qualitatively by the model due to the very different fission-fragment mass distributions and a lower average fragment spin for spontaneous fission. The description of measurements and the discussion of results under the framework of a Monte Carlo Hauser–Feshbach statistical approach are presented.

  6. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines, in parallel, the advantages of modelling the system dynamics with a deterministic model and modelling the deterministic model's forecast error series with a data-driven model. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting, applicable to daily river discharge forecast error data, from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; we then fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again compared the models' performance.
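    The GARCH(1,1) variance recursion underlying such error models, h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}, can be sketched in NumPy together with a one-step-ahead conditional variance forecast; the parameter values and the normal-innovation assumption below are illustrative, and a real study would estimate the parameters from the discharge error series.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, rng):
    """Simulate e_t = sqrt(h_t) * z_t with GARCH(1,1) conditional
    variance h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty(n)
    e = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    e[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    return e, h

def forecast_next_variance(omega, alpha, beta, e_last, h_last):
    """One-step-ahead conditional variance forecast."""
    return omega + alpha * e_last ** 2 + beta * h_last

rng = np.random.default_rng(42)
e, h = simulate_garch11(omega=0.1, alpha=0.1, beta=0.8, n=5000, rng=rng)
h_next = forecast_next_variance(0.1, 0.1, 0.8, e[-1], h[-1])
```

Periods of large forecast errors raise the next conditional variance, which is exactly the heteroscedastic behaviour the paper tests for.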

  7. Monte Carlo simulation of thermal neutron flux of americium-beryllium source used in neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Didi, Abdessamad; Dadouch, Ahmed; Bencheikh, Mohamed; Jai, Otman

    2017-09-01

    Neutron activation analysis is a method of exclusively elemental analysis. It consists of irradiating the sample with a high neutron flux; the method is widely used in developed countries with nuclear reactors or particle accelerators. The purpose of this study is to develop a prototype that increases the neutron flux of an americium-beryllium source and offers the opportunity to produce radioisotopes. Americium-beryllium is a mobile neutron source with an activity of 20 Ci. It gives a thermal neutron flux of (1.8 ± 0.0007) × 10⁶ n/cm² s when using water as the moderator; when using paraffin, the thermal neutron flux increases to (2.2 ± 0.0008) × 10⁶ n/cm² s. When two solid beryllium barriers are added, 24 cm apart, parallel and symmetrical about the source, the thermal flux increases to (2.5 ± 0.0008) × 10⁶ n/cm² s. In the multi-source case (6 sources) without barriers, it increases to (1.17 ± 0.0008) × 10⁷ n/cm² s, a rate of increase equal to 4.3, and with both barriers the flux increases to (1.37 ± 0.0008) × 10⁷ n/cm² s.

  8. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

    This paper describes the application of the effective source in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were prepared with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up neutron transport calculation was performed with the multigroup neutron transport code TORT. For comparison, an alternative method of calculation has been used, based upon adjoint functions of the Boltzmann transport equation. Calculation of the three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440.
    Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
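    The decay-weighted time integration at the heart of the effective source method can be sketched in a few lines: each interval of the source history is weighted by the decay of the activation product up to the end of irradiation (for activities), or left unweighted (for fluence), and a single transport calculation then scales linearly with the result. The power history, half-life and time grid below are invented for illustration.

```python
import numpy as np

# Hypothetical irradiation history: source strength (arbitrary units)
# on a uniform time grid ending at time T.
t = np.linspace(0.0, 10.0, 11)           # years
dt = np.diff(t)
S = np.array([1.0] * 5 + [0.5] * 5)      # strength in each interval
T = t[-1]

half_life = 2.0                          # activation product half-life, years
lam = np.log(2.0) / half_life

# Effective source for the activity calculation: weight each interval
# by the surviving fraction of the activation product at time T.
t_mid = 0.5 * (t[:-1] + t[1:])
S_eff_activity = np.sum(S * np.exp(-lam * (T - t_mid)) * dt)

# Uniform weighting gives the effective source for the fluence.
S_eff_fluence = np.sum(S * dt)
```

By the linearity of transport in non-multiplying, time-invariant media, one transport run driven by each effective source replaces a run per time step.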

  9. Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David

    2015-07-01

    Decision-making is the process of identifying and choosing alternatives, where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment of the relationship between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain, using utility functions to evaluate the current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc.
    The integration of deterministic calculations using multi-physics analyses (i.e., neutronics, thermal, and thermal-hydraulics) and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. This also provides validation of the control options identified from the probabilistic assessment, with the thermal-hydraulics analyses used to validate those options. Future work includes evaluating other possible metrics and computational efficiencies.
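    The multi-attribute ranking step can be sketched as a weighted sum of utility functions over candidate control actions; the metric names, weights, ranges and options below are hypothetical illustrations, not the report's actual utilities.

```python
# Hedged sketch of multi-attribute utility ranking: each candidate control
# action is scored by a weighted sum of per-metric utilities in [0, 1].

def linear_utility(value, worst, best):
    """Map a metric value onto [0, 1], where 1 is most desirable."""
    u = (value - worst) / (best - worst)
    return max(0.0, min(1.0, u))

# Illustrative weights over three metrics (must sum to 1).
WEIGHTS = {"margin": 0.5, "cost": 0.2, "time": 0.3}

def score(option):
    # margin: bigger is better; cost and time: smaller is better.
    return (WEIGHTS["margin"] * linear_utility(option["margin"], 0.0, 100.0)
            + WEIGHTS["cost"] * linear_utility(option["cost"], 10.0, 0.0)
            + WEIGHTS["time"] * linear_utility(option["time"], 60.0, 0.0))

options = [
    {"name": "reduce power 10%", "margin": 80.0, "cost": 4.0, "time": 20.0},
    {"name": "trip pump train B", "margin": 95.0, "cost": 9.0, "time": 5.0},
]
best = max(options, key=score)
```

The supervisory layer would re-evaluate such scores as sensor data update the projected plant state.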

  10. Fast neutron measurements with 7Li and 6Li enriched CLYC scintillators

    NASA Astrophysics Data System (ADS)

    Giaz, A.; Blasi, N.; Boiano, C.; Brambilla, S.; Camera, F.; Cattadori, C.; Ceruti, S.; Gramegna, F.; Marchi, T.; Mattei, I.; Mentana, A.; Million, B.; Pellegri, L.; Rebai, M.; Riboldi, S.; Salamida, F.; Tardocchi, M.

    2016-07-01

    The recently developed Cs2LiYCl6:Ce (CLYC) crystals are interesting scintillation detectors not only for their gamma energy resolution (<5% at 662 keV) but also for their capability to identify and measure the energy of both gamma rays and fast/thermal neutrons. Thermal neutrons are detected via the 6Li(n,α)t reaction, while for fast neutrons the 35Cl(n,p)35S and 35Cl(n,α)32P neutron-capture reactions are exploited. The energy of the outgoing proton or α particle scales linearly with the incident neutron energy. The kinetic energy of fast neutrons can be measured using both the Time Of Flight (TOF) technique and the CLYC energy signal. In this work, the response to monochromatic fast neutrons (1.9-3.8 MeV) of two 1″×1″ CLYC crystals was measured using both the TOF and the energy signal. The observables were combined to identify fast neutrons, to subtract the thermal neutron background, and to distinguish between the fast neutron-capture reactions on 35Cl, in other words to determine whether the detected particle is an α or a proton. We performed a dedicated measurement at the CN accelerator facility of the INFN Legnaro National Laboratories (Italy), where the fast neutrons were produced by impinging a proton beam (4.5, 5.0 and 5.5 MeV) on a 7LiF target. We tested a CLYC detector 6Li-enriched to about 95%, which is ideal for thermal neutron measurements, in parallel with another CLYC detector 7Li-enriched to more than 99%, which is suitable for fast neutron measurements.
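    The TOF technique mentioned above rests on non-relativistic kinematics, E = (1/2) m_n (L/t)^2. A minimal sketch of the conversion (the flight path and timing values are illustrative, not the Legnaro geometry):

```python
M_N_C2 = 939.565          # neutron rest energy, MeV
C = 2.99792458e8          # speed of light, m/s

def neutron_energy_from_tof(flight_path_m, tof_s):
    """Non-relativistic neutron kinetic energy (MeV) from time of flight:
    E = 0.5 * m_n * v^2 with v = L / t, written via beta = v / c."""
    beta = flight_path_m / (tof_s * C)
    return 0.5 * M_N_C2 * beta ** 2

# A 1 MeV neutron needs roughly 72.3 ns per metre of flight path.
e_mev = neutron_energy_from_tof(1.0, 72.3e-9)
```

Comparing this TOF energy against the CLYC pulse-height energy is what lets the two observables be combined event by event.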

  11. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pautz, Shawn D.; Bailey, Teresa S.

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  12. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  13. A real-time n/γ digital pulse shape discriminator based on FPGA.

    PubMed

    Li, Shiping; Xu, Xiufeng; Cao, Hongrui; Yuan, Guoliang; Yang, Qingwei; Yin, Zejie

    2013-02-01

    An FPGA-based real-time digital pulse shape discriminator has been employed to distinguish between neutrons (n) and gammas (γ) in the Neutron Flux Monitor (NFM) for the International Thermonuclear Experimental Reactor (ITER). The discriminator takes advantage of the Field Programmable Gate Array (FPGA) parallel and pipeline processing capabilities to carry out real-time sifting of neutrons in n/γ mixed radiation fields, and uses rise-time and amplitude inspection techniques simultaneously as the discrimination algorithm to achieve good n/γ separation. Experimental results are presented which show that this discriminator fully meets the anticipated goals of the NFM, with excellent discrimination quality and zero dead time. Copyright © 2012 Elsevier Ltd. All rights reserved.
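    The rise-time-plus-amplitude inspection can be sketched offline in a few lines of NumPy: extract both observables from a digitized pulse and accept events that pass both cuts, as the FPGA does simultaneously in its pipeline. The synthetic pulses, sampling period and cut values below are illustrative, not the NFM firmware parameters.

```python
import numpy as np

def rise_time_and_amplitude(pulse, dt, lo=0.1, hi=0.9):
    """Amplitude and 10%-90% rise time of a digitized pulse."""
    amp = float(pulse.max())
    i_lo = int(np.argmax(pulse >= lo * amp))   # first sample above 10%
    i_hi = int(np.argmax(pulse >= hi * amp))   # first sample above 90%
    return (i_hi - i_lo) * dt, amp

def is_neutron(pulse, dt, rise_cut_s, amp_cut):
    """Accept events whose rise time AND amplitude exceed the cuts --
    the two inspections the FPGA evaluates in parallel."""
    rise, amp = rise_time_and_amplitude(pulse, dt)
    return rise > rise_cut_s and amp > amp_cut

dt = 4e-9                                               # 250 MS/s sampling
slow = np.minimum(np.arange(100) / 50.0, 1.0) * 10.0    # neutron-like pulse
fast = np.minimum(np.arange(100) / 5.0, 1.0) * 10.0     # gamma-like pulse
```

In hardware both observables come from the same streaming samples, so the classification adds no dead time.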

  14. Retrieval of phase information in neutron reflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Haan, V.; van Well, A.A.; Adenwalla, S.

    Neutron reflectometry can determine unambiguously the chemical depth profile of a thin film if both phase and amplitude of the reflectance are known. The recovery of the phase information is achieved by adding to the unknown layered structure a known ferromagnetic layer. The ferromagnetic layer is magnetized by an external magnetic field in a direction lying in the plane of the layer and subsequently perpendicular to it. The neutrons are polarized either parallel or opposite to the magnetic field. In this way three measurements can be made, with different (and known) scattering-length densities of the ferromagnetic layer. The reflectivity obtained from each measurement can be represented by a circle in the (complex) reflectance plane. The intersections of these circles provide the reflectance.
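    Under a simplified model in which each measured reflectivity is |w_i + r|^2, with w_i the known complex contribution of the reference layer in state i and r the unknown reflectance, the three-circle intersection reduces to a 2x2 linear system: subtracting pairs of equations cancels |r|^2. The reference values below are hypothetical and the model ignores the exact reflectometric relation between the layers.

```python
import numpy as np

def reflectance_from_three(w, R):
    """Recover the complex reflectance r from three measured
    reflectivities R[i] = |w[i] + r|**2. Each equation is a circle of
    radius sqrt(R[i]) centred at -w[i]; their common intersection is r."""
    A = np.empty((2, 2))
    rhs = np.empty(2)
    for k, (i, j) in enumerate([(0, 1), (0, 2)]):
        d = w[i] - w[j]
        # R_i - R_j = |w_i|^2 - |w_j|^2 + 2 Re(conj(w_i - w_j) r)
        A[k] = [2 * d.real, 2 * d.imag]
        rhs[k] = (R[i] - R[j]) - (abs(w[i]) ** 2 - abs(w[j]) ** 2)
    re, im = np.linalg.solve(A, rhs)
    return complex(re, im)

# Hypothetical reference contributions and a synthetic unknown reflectance.
w = [1.0 + 0.0j, 0.0 + 1.0j, 1.0 + 1.0j]
r_true = 0.3 - 0.4j
R = [abs(wi + r_true) ** 2 for wi in w]
r_rec = reflectance_from_three(w, R)
```

Three measurements are needed because a single circle fixes only the magnitude, not the phase.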

  15. A 2-D/1-D transverse leakage approximation based on azimuthal Fourier moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.
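    The Fourier-moment idea can be sketched with a discrete expansion over equally spaced azimuthal angles: store only a few (a_n, b_n) moments instead of the full angular array, then reconstruct when needed. The quadrature size, expansion order and sample leakage shape below are illustrative, not MPACT data.

```python
import numpy as np

# Equally spaced azimuthal quadrature and a sample leakage shape that is
# exactly representable with two Fourier harmonics.
n_azi = 16
alpha = 2.0 * np.pi * (np.arange(n_azi) + 0.5) / n_azi
psi = 1.0 + 0.4 * np.cos(alpha) - 0.25 * np.sin(2.0 * alpha)

def fourier_moments(f, alpha, order):
    """Discrete Fourier moments (a0, a_n, b_n) of f(alpha) up to 'order'."""
    a0 = f.mean()
    a = [2.0 * np.mean(f * np.cos(n * alpha)) for n in range(1, order + 1)]
    b = [2.0 * np.mean(f * np.sin(n * alpha)) for n in range(1, order + 1)]
    return a0, a, b

def reconstruct(a0, a, b, alpha):
    out = np.full_like(alpha, a0)
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        out += an * np.cos(n * alpha) + bn * np.sin(n * alpha)
    return out

a0, a, b = fourier_moments(psi, alpha, order=2)
psi_hat = reconstruct(a0, a, b, alpha)
```

Storing 2*order + 1 moments in place of n_azi angular values is the memory saving the expansion buys.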

  16. A 2-D/1-D transverse leakage approximation based on azimuthal Fourier moments

    DOE PAGES

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    2017-01-12

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  17. Preliminary estimates of radiation exposures for manned interplanetary missions from anomalously large solar flare events

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.

    1988-01-01

    Preliminary estimates of radiation exposures for manned interplanetary missions resulting from anomalously large solar flare events are presented. The calculations use integral particle fluences for the February 1956, November 1960, and August 1972 events as inputs into the Langley Research Center nucleon transport code BRYNTRN. This deterministic code transports primary and secondary nucleons (protons and neutrons) through any number of layers of target material of arbitrary thickness and composition. Contributions from target nucleus fragmentation and recoil are also included. Estimates of 5 cm depth doses and dose equivalents in tissue are presented behind various thicknesses of aluminum, water, and composite aluminum/water shields for each of the three solar flare events.

  18. Cyclotron line resonant transfer through neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Wang, John C. L.; Wasserman, Ira M.; Salpeter, Edwin E.

    1988-01-01

    Monte Carlo methods are used to study in detail the resonant radiative transfer of cyclotron line photons with recoil through a purely scattering neutron star atmosphere for both the polarized and unpolarized cases. For each case, the number of scatters, the path length traveled, the escape frequency shift, the escape direction cosine, the emergent frequency spectra, and the angular distribution of escaping photons are investigated. In the polarized case, transfer is calculated using both the cold plasma e- and o-modes and the magnetic vacuum perpendicular and parallel modes.

  19. Twin-variant reorientation strain in Ni-Mn-Ga single crystal during quasi-static mechanical compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pramanick, Abhijit; An, Ke; Stoica, Alexandru Dan

    2011-01-01

    Twin variant reorientation in single-crystal Ni-Mn-Ga during quasi-static mechanical compression was studied using in-situ neutron diffraction. The volume fraction of reoriented twin variants at different stress amplitudes was obtained from the changes in integrated intensities of high-order neutron diffraction peaks. It is shown that during compressive loading, ~85% of the twins were reoriented parallel to the loading direction, resulting in a maximum macroscopic strain of ~5.5%, which is in agreement with the measured macroscopic strain.

  20. Modulated magnetic structure of ScFe4Al8 by X-ray, neutron powder diffraction and Mössbauer effect

    NASA Astrophysics Data System (ADS)

    Rećko, Katarzyna; Hauback, Bjørn C.; Dobrzyński, Ludwik; Szymański, Krzysztof; Satula, Dariusz; Kotur, B. Yu.; Suski, Wojciech

    2004-05-01

    The ScFe4Al8 alloy belongs to the extensively investigated ThMn12-type family. The results of Mössbauer experiments are compared with the neutron data. The ScFe4Al8 alloy orders at around 250 K, forming an antiferromagnetic spiral in the iron sublattice within the tetragonal basal plane ab, with magnetic iron moments close to 1 μB at 8 K. The spins rotate in a plane parallel to the wave vector q = (qx, qx, 0).

  1. FUEL ASSEMBLY FOR A NEUTRONIC REACTOR

    DOEpatents

    Wigner, E.P.

    1958-04-29

    A fuel assembly for a nuclear reactor of the type wherein liquid coolant is circulated through the core of the reactor in contact with the external surface of the fuel elements is described. In this design a plurality of parallel plates containing fissionable material are spaced about one-tenth of an inch apart and are supported between a pair of spaced parallel side members generally perpendicular to the plates. The plates all have a small continuous and equal curvature in the same direction between the side members.

  2. Advanced Neutronics Tools for BWR Design Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santamarina, A.; Hfaiedh, N.; Letellier, R.

    2006-07-01

    This paper summarizes the developments implemented in the new APOLLO2.8 neutronics tool to meet the required target accuracy in LWR applications, particularly void effects and pin-by-pin power map in BWRs. The Method Of Characteristics was developed to allow efficient LWR assembly calculations in 2D-exact heterogeneous geometry; resonant reaction calculation was improved by the optimized SHEM-281 group mesh, which avoids resonance self-shielding approximation below 23 eV, and the new space-dependent method for resonant mixtures that accounts for resonance overlapping. Furthermore, a new library CEA2005, processed from JEFF3.1 evaluations involving feedback from Critical Experiments and LWR P.I.E., is used. The specific '2005-2007 BWR Plan' settled to demonstrate the validation/qualification of this neutronics tool is described. Some results from the validation process are presented: the comparison of APOLLO2.8 results to reference Monte Carlo TRIPOLI4 results on specific BWR benchmarks emphasizes the ability of the deterministic tool to calculate BWR assembly multiplication factor within 200 pcm accuracy for void fraction varying from 0 to 100%. The qualification process against the BASALA mock-up experiment stresses APOLLO2.8/CEA2005 performances: pin-by-pin power is always predicted within 2% accuracy, and the reactivity worth of B4C or Hf cruciform control blades, as well as Gd pins, is predicted within 1.2% accuracy. (authors)

  3. The shear-Hall instability in newborn neutron stars

    NASA Astrophysics Data System (ADS)

    Kondić, T.; Rüdiger, G.; Hollerbach, R.

    2011-11-01

    Aims: In the first few minutes of a newborn neutron star's life the Hall effect and differential rotation may both be important. We demonstrate that these two ingredients are sufficient for generating a "shear-Hall instability" and for studying its excitation conditions, growth rates, and characteristic magnetic field patterns. Methods: We numerically solve the induction equation in a spherical shell, with a kinematically prescribed differential rotation profile Ω(s), where s is the cylindrical radius. The Hall term is linearized about an imposed uniform axial field. The linear stability of individual azimuthal modes, both axisymmetric and non-axisymmetric, is then investigated. Results: For the shear-Hall instability to occur, the axial field must be parallel to the rotation axis if Ω(s) decreases outward, whereas if Ω(s) increases outward it must be anti-parallel. The instability draws its energy from the differential rotation, and occurs on the short rotational timescale rather than on the much longer Hall timescale. It operates most efficiently if the Hall time is comparable to the diffusion time. Depending on the precise field strengths B0, either axisymmetric or non-axisymmetric modes may be the most unstable. Conclusions: Even if the differential rotation in newborn neutron stars is quenched within minutes, the shear-Hall instability may nevertheless amplify any seed magnetic fields by many orders of magnitude.

  4. Experimental characterization of HOTNES: A new thermal neutron facility with large homogeneity area

    NASA Astrophysics Data System (ADS)

    Bedogni, R.; Sperduti, A.; Pietropaolo, A.; Pillon, M.; Pola, A.; Gómez-Ros, J. M.

    2017-01-01

    A new thermal neutron irradiation facility, called HOTNES (HOmogeneous Thermal NEutron Source), was established in the framework of a collaboration between INFN-LNF and ENEA-Frascati. HOTNES is a polyethylene assembly, with about 70 cm × 70 cm square section and 100 cm height, including a large cylindrical cavity with diameter 30 cm and height 70 cm. The facility is supplied by a 241Am-B source located at the bottom of this cavity. The facility was designed in such a way that the iso-thermal-fluence surfaces characterizing the irradiation volume coincide with planes parallel to the cavity bottom. The thermal fluence rate across a given iso-fluence plane is uniform to within 1% on a disk with 30 cm diameter. Thermal fluence rate values from about 700 cm⁻² s⁻¹ to 1000 cm⁻² s⁻¹ can be achieved. The facility design, previously optimized by Monte Carlo simulation, was experimentally verified. The following techniques were used: gold activation foils to assess the thermal fluence rate, a semiconductor-based active detector for mapping the irradiation volume, and a Bonner Sphere Spectrometer to determine the complete neutron spectrum. HOTNES is expected to be attractive for the scientific community involved in neutron metrology, neutron dosimetry and neutron detector testing.

  5. Flow-through compression cell for small-angle and ultra-small-angle neutron scattering measurements

    NASA Astrophysics Data System (ADS)

    Hjelm, Rex P.; Taylor, Mark A.; Frash, Luke P.; Hawley, Marilyn E.; Ding, Mei; Xu, Hongwu; Barker, John; Olds, Daniel; Heath, Jason; Dewers, Thomas

    2018-05-01

    In situ measurements of geological materials under compression and with hydrostatic fluid pressure are important in understanding their behavior under field conditions, which in turn provides critical information for application-driven research. In particular, understanding the role of nano- to micro-scale porosity in the subsurface liquid and gas flow is critical for the high-fidelity characterization of the transport and more efficient extraction of the associated energy resources. In other applications, where parts are produced by the consolidation of powders by compression, the resulting porosity and crystallite orientation (texture) may affect its in-use characteristics. Small-angle neutron scattering (SANS) and ultra SANS are ideal probes for characterization of these porous structures over the nano to micro length scales. Here we show the design, realization, and performance of a novel neutron scattering sample environment, a specially designed compression cell, which provides compressive stress and hydrostatic pressures with effective stress up to 60 MPa, using the neutron beam to probe the effects of stress vectors parallel to the neutron beam. We demonstrate that the neutron optics is suitable for the experimental objectives and that the system is highly stable to the stress and pressure conditions of the measurements.

  6. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    NASA Astrophysics Data System (ADS)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministic, high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
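
    The work-farm pattern described above is specific to the platform's structural object model, but its shape can be sketched in ordinary Python with threads and queues. This is an illustrative analogy only; the names work_farm and worker_fn are invented here and are not the platform's API.

```python
import queue
import threading

def work_farm(items, worker_fn, n_workers=4):
    """Sketch of a 'work farm': one input stream fanned out to a
    parallel set of identical workers, results merged into one
    output stream in the original order."""
    items = list(items)
    inq, outq = queue.Queue(), queue.Queue()

    def worker():
        while True:
            i, item = inq.get()
            if item is None:          # poison pill: shut this worker down
                break
            outq.put((i, worker_fn(item)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i, item in enumerate(items):
        inq.put((i, item))
    for _ in threads:
        inq.put((-1, None))           # one pill per worker
    for t in threads:
        t.join()
    results = [outq.get() for _ in range(len(items))]
    return [v for _, v in sorted(results)]   # restore input order

print(work_farm(range(5), lambda x: x * x))  # → [0, 1, 4, 9, 16]
```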

  7. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, and its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  8. Measurement of the 242mAm neutron-induced reaction cross sections

    DOE PAGES

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...

    2017-02-17

    The neutron-induced reaction cross sections of 242mAm were measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. A new neutron-capture cross section was determined, and the absolute scale was set according to a concurrent measurement of the well-known 242mAm(n,f) cross section. The (n,γ) cross section was measured from thermal energy to an incident energy of 1 eV, at which point the data quality was limited by the reaction yield in the laboratory. Our new 242mAm fission cross section was normalized to ENDF/B-VII.1 to set the absolute scale, and it agreed well with the (n,f) cross section from thermal energy to 1 keV. Lastly, the average absolute capture-to-fission ratio was determined from thermal energy to E_n = 0.1 eV, and it was found to be 26(4)% as opposed to the ratio of 19% from the ENDF/B-VII.1 evaluation.

  9. A Deep Penetration Problem Calculation Using AETIUS: An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power gets better and better, computer codes that use a deterministic method seem to be less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capabilities to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, as ATTILA does. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  10. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from both the continuous and the discrete points of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
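
    The Schwarz iteration with Robin interface conditions can be sketched in one dimension. This is an illustrative analogue only (the paper treats mixed 3D neutron diffusion with finite elements); the diffusion coefficient, absorption, source, and Robin parameter alpha below are made-up values. Two non-overlapping subdomains of -D u'' + sig_a u = 1 on (0, 1), u(0) = u(1) = 0, split at x = 0.5, exchange the combination -D du/dn + alpha*u at the interface each sweep. Each subdomain is stored with index 0 at its outer (Dirichlet) end and index n at the interface, which makes the two solves symmetric.

```python
import numpy as np

D, sig_a, alpha = 1.0, 0.5, 1.0   # illustrative coefficients
n = 50                            # interior cells per subdomain
h = 0.5 / n

def solve_subdomain(g_robin):
    """Subdomain solve: Dirichlet 0 at the outer end, Robin condition
    D du/dn + alpha u = g_robin at the interface end (one-sided du/dn)."""
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)                        # unit source f = 1
    A[0, 0], b[0] = 1.0, 0.0                  # outer Dirichlet end
    for i in range(1, n):                     # interior 3-point stencil
        A[i, i - 1] = A[i, i + 1] = -D / h**2
        A[i, i] = 2 * D / h**2 + sig_a
    A[n, n - 1] = -D / h                      # one-sided outward derivative
    A[n, n] = D / h + alpha
    b[n] = g_robin
    return np.linalg.solve(A, b)

u1 = np.zeros(n + 1)
u2 = np.zeros(n + 1)
for _ in range(100):
    # each subdomain receives -D du/dn + alpha*u from its neighbour
    g1 = -(D / h) * (u2[n] - u2[n - 1]) + alpha * u2[n]
    g2 = -(D / h) * (u1[n] - u1[n - 1]) + alpha * u1[n]
    u1, u2 = solve_subdomain(g1), solve_subdomain(g2)

print(u1[n], u2[n])   # interface values agree after convergence
```

For this problem the exact interface value is u(0.5) = 2(1 - 1/cosh(sqrt(0.5)/2)) ≈ 0.119, which the converged iterate matches up to the first-order interface discretization error.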

  11. RAMONA-3B application to Browns Ferry ATWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slovik, G.C.; Neymotin, L.; Cazzoli, E.

    1984-01-01

    This paper discusses two preliminary MSIV closure ATWS calculations done using the RAMONA-3B code and the work being done to create the necessary cross section sets for the Browns Ferry Unit 1 reactor. The RAMONA-3B code employs a three-dimensional neutron kinetics model coupled with one-dimensional, four-equation, nonhomogeneous, nonequilibrium thermal hydraulics. To be compatible with 3-D neutron kinetics, the code uses parallel coolant channels in the core. It also includes a boron transport model and all necessary BWR components such as the jet pump, recirculation pump, steam separator, steamline with safety and relief valves, main steam isolation valve, turbine stop valve, and turbine bypass valve. A summary of RAMONA-3B neutron kinetics and thermal hydraulics models is presented in the Appendix.

  12. Parallel theoretical study of the two components of the prompt fission neutrons: Dynamically released at scission and evaporated from fully accelerated fragments

    NASA Astrophysics Data System (ADS)

    Carjan, Nicolae; Rizea, Margarit; Talou, Patrick

    2017-09-01

    Prompt fission neutron (PFN) angular and energy distributions for the reaction 235U(nth,f) are calculated as a function of the mass asymmetry of the fission fragments using two extreme assumptions: (1) PFN are released during the neck rupture due to the diabatic coupling between the neutron degree of freedom and the rapidly changing neutron-nucleus potential. These unbound neutrons move faster than the nascent fragments separate, and most of them leave the fissioning system in a few 10^-21 s, i.e., at the beginning of the acceleration phase. Surrounding the fissioning nucleus by a sphere, one can calculate the radial component of the neutron current density. Its time integral gives the angular distribution with respect to the fission axis. The average energy of each emitted neutron is also calculated using the unbound part of each neutron wave packet. The distribution of these average energies gives the general trends of the PFN spectrum: the slope, the range and the average value. (2) PFN are evaporated from fully accelerated, fully equilibrated fission fragments. To follow the de-excitation of these fragments via sequential neutron and γ-ray emissions, a Monte Carlo sampling of the initial conditions and a Hauser-Feshbach statistical approach are used. Recording at each step the emission probability, the energy and the angle of each evaporated neutron, one can construct the PFN energy and angular distributions in the laboratory system. The predictions of these two methods are finally compared with recent experimental results obtained for a given fragment mass ratio.

  13. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P N; Chang, B; Hanebutte, U R

    1999-03-02

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  14. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  15. Analysis of pinching in deterministic particle separation

    NASA Astrophysics Data System (ADS)

    Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German

    2011-11-01

    We investigate the problem of spherical particles settling vertically under gravity, parallel to the Y-axis, through a pinching gap created by an obstacle (spherical or cylindrical, center at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.

  16. Blocked inverted indices for exact clustering of large chemical spaces.

    PubMed

    Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver

    2014-09-22

    The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
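
    The core idea of the inverted-index approach can be sketched in a few lines. This is a hedged illustration of the general technique, not the authors' optimized implementation: an index maps each fingerprint bit to the compounds that set it, so intersection counts |A ∩ B| are accumulated only for pairs that actually share a bit, and Tanimoto similarity T = |A ∩ B| / (|A| + |B| - |A ∩ B|) is then computed from the counts. The function name and the toy fingerprints are invented for the example.

```python
from collections import defaultdict

def tanimoto_all_pairs(fps):
    """All-pairs Tanimoto similarity on sparse binary fingerprints via
    an inverted index. fps: list of sets of 'on' bit positions.
    Pairs that share no bit never appear (their similarity is 0)."""
    index = defaultdict(list)               # bit -> compounds having it
    for cid, bits in enumerate(fps):
        for b in bits:
            index[b].append(cid)
    common = defaultdict(int)               # (i, j) -> |A & B|, i < j
    for members in index.values():
        for pos, i in enumerate(members):
            for j in members[pos + 1:]:
                common[(i, j)] += 1
    return {(i, j): c / (len(fps[i]) + len(fps[j]) - c)
            for (i, j), c in common.items()}

fps = [{1, 2, 3}, {2, 3, 4}, {10, 11}]
print(tanimoto_all_pairs(fps))   # → {(0, 1): 0.5}
```

The work scales with the number of co-occurring bits rather than with the quadratic number of compound pairs, which is why sparse fingerprints cluster so quickly this way.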

  17. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method, which allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise-detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of ~61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  18. Calculating the Responses of Self-Powered Radiation Detectors.

    NASA Astrophysics Data System (ADS)

    Thornton, D. A.

    Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport.
The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.

  19. Representing and computing regular languages on massively parallel networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.I.; O'Sullivan, J.A.; Boysam, B.

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  20. Progress in FMIT test assembly development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opperman, E.K.; Vogel, M.A.; Shen, E.J.

    Research and development supporting the completed design of the Fusion Materials Irradiation Test (FMIT) Facility is continuing at the Hanford Engineering Development Laboratory (HEDL) in Richland, Washington. The FMIT, a deuteron accelerator based (d + Li) neutron source, will produce an intense flux of high energy neutrons for use in radiation damage studies of fusion reactor materials. The most intense flux, of magnitude greater than 10^15 n/cm^2-s, is located close to the neutron-producing lithium target and is distributed within a volume about the size of an American football. The conceptual design and development of FMIT experiments, called Test Assemblies, has progressed over the past five years in parallel with the design of the FMIT. The paper will describe the recent accomplishments made in developing test assemblies appropriate for use in the limited volume close to the FMIT target, where high neutron flux and heating rates and the associated spatial gradients significantly impact design considerations.

  1. HIPPO/CRATES: in-situ deformation strain and texture studies using neutron time-of-flight diffraction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, S. C.; Hartig, C.; Brissier, T. D.

    2005-01-01

    In situ deformation studies by diffraction allow the study of deformation mechanisms and provide valuable data to validate and improve deformation models. In particular, deformation studies using time-of-flight neutrons provide averages over large numbers of grains and allow probing the response of lattice planes parallel and perpendicular to the applied load simultaneously. In this paper we describe the load-frame CRATES, designed for the HIPPO neutron time-of-flight diffractometer at LANSCE. The HIPPO/CRATES combination allows probing up to 20 diffraction vectors simultaneously and provides rotation of the sample in the beam while under load. With this, deformation texture, i.e. the change of grain orientation due to plastic deformation, or strain pole figures may be measured. We report initial results of a validation experiment, comparing deformation of a Zircaloy specimen measured using the NPD neutron diffractometer with results obtained for the same material using HIPPO/CRATES.

  2. Maser Emission from Gravitational States on Isolated Neutron Stars

    NASA Astrophysics Data System (ADS)

    Tepliakov, Nikita V.; Vovk, Tatiana A.; Rukhlenko, Ivan D.; Rozhdestvensky, Yuri V.

    2018-04-01

    Despite years of research on neutron stars, the source of their radio emission is still under debate. Here we propose a new coherent mechanism of pulsar radio emission based on transitions between gravitational states of electrons confined above the pulsar atmosphere. Our mechanism assumes that the coherent radiation is generated upon the electric and magnetic dipole transitions of electrons falling onto the polar caps of the pulsar, and predicts that this radiation occurs at radio frequencies—in full agreement with the observed emission spectra. We show that while the linearly polarized electric dipole radiation propagates parallel to the neutron star surface and has a fan-shape angular spectrum, the magnetic dipole emission comes from the magnetic poles of the pulsar in the form of two narrow beams and is elliptically polarized due to the spin–orbit coupling of electrons confined by the magnetic field. By explaining the main observables of the pulsar radio emission, the proposed mechanism indicates that gravitational quantum confinement plays an essential role in the physics of neutron stars.

  3. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole core solution based on the 2D-1D solution method provides the capability for full core depletion calculations.

  4. Radiography and partial tomography of wood with thermal neutrons

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Fratzscher, D.; Schwabe, A.; Schillinger, B.; Zscherpel, U.; Ewert, U.

    2011-09-01

    The high effective neutron attenuation coefficient of hydrogen (48.5 cm^2/g), dominated by scattering, allows neutrons to reveal hydrocarbon structures with more contrast than X-rays, but at the same time limits the sample size and thickness that can be investigated. Many planar shaped objects, particularly wood samples, are sufficiently thin to allow thermal neutrons to transmit through the sample in a direction perpendicular to the planar face, but not in a parallel direction, due to the increased thickness. Often this is an obstacle that prevents some tomographic reconstruction algorithms from obtaining the desired results, because of inadequate information or the presence of distracting artifacts due to missing projections. This can be true for samples such as the distribution of glue in glulam (boards of wooden layers glued together), or the course of partially visible annual rings in trees, where the features of interest are parallel to the planar surface of the sample. However, it should be possible to study these features by rotating the specimen within a limited angular range. In principle, this approach has been shown previously in a study with fast neutrons [2]. A study of this kind was performed at the Antares facility of FRM II in Garching with a 2.6×10^7 /cm^2 s thermal neutron beam. The limit of penetration was determined for a wooden step wedge carved from a 2 cm × 4 cm block of wood, in comparison to other materials such as heavy metals and Lucite as specimens rich in hydrogen. The depth of the steps was 1 cm, the height 0.5 cm. The annual ring structures were clearly detectable up to 2 cm thickness. Wooden specimens, i.e. shivers, from a sunken old ship have been subjected to tomography. Not visible from the outside, clear radial structures have been found that are typical for certain kinds of wood. This insight was impaired in a case where the specimen had been soaked with ethylene glycol.
In another large sample study, a planar board made of glulam has been studied to show the glued layers. This study shows not only the limits of penetration in wood but also demonstrates access to structures perpendicular to the surface in larger planar objects by tomography with thermal neutrons, even with incomplete sets of projection data covering an angular range of only 90° or even 60°.
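
    The penetration limit quoted above follows from the exponential attenuation law T = exp(-mu_m * rho * t). A minimal sketch using the hydrogen mass attenuation coefficient from the abstract; the wood composition assumed here (6 wt% hydrogen, bulk density 0.5 g/cm^3) is an illustrative guess, not data from the study, and elements other than hydrogen are neglected.

```python
import math

MU_H = 48.5            # cm^2/g, effective coefficient of hydrogen (from the abstract)
W_H = 0.06             # assumed hydrogen weight fraction of wood
RHO = 0.5              # assumed bulk density, g/cm^3

def transmission(thickness_cm):
    """Fraction of the thermal beam transmitted through a wood slab,
    counting only the hydrogen contribution (dominant for thermal
    neutrons in hydrogenous materials)."""
    return math.exp(-MU_H * W_H * RHO * thickness_cm)

for t in (1.0, 2.0, 4.0):
    print(f"{t:.0f} cm: T = {transmission(t):.3f}")
```

With these assumed numbers, transmission drops to a few percent at 2 cm and well below 1% at 4 cm, consistent with annual rings being detectable only up to about 2 cm of wood.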

  5. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies, one adaptive and one static, that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
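
    The priority computed by critical path scheduling can be sketched directly: for each task, the longest chain of (assumed deterministic) service times from that task to the end of the DAG; a list scheduler then dispatches ready tasks in decreasing order of this value. The function name and the toy task graph are invented for the example.

```python
from collections import defaultdict

def critical_path_lengths(tasks, deps):
    """Longest remaining path (own time included) for every task.
    tasks: {name: service_time}; deps: list of (before, after) edges
    forming a DAG."""
    succ = defaultdict(list)
    for a, b in deps:
        succ[a].append(b)
    memo = {}
    def longest(t):                       # DAG, so plain memoized recursion
        if t not in memo:
            memo[t] = tasks[t] + max((longest(s) for s in succ[t]), default=0)
        return memo[t]
    return {t: longest(t) for t in tasks}

tasks = {"a": 3, "b": 2, "c": 4, "d": 1}
deps = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(critical_path_lengths(tasks, deps))  # → {'a': 8, 'b': 3, 'c': 5, 'd': 1}
```

Under variable service times these precomputed priorities go stale, which is exactly the weakness the paper's adaptive strategy addresses by recomputing priorities as actual completion times are observed.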

  6. Neutron-Induced Fission Cross Section Measurements for Full Suite of Uranium Isotopes

    NASA Astrophysics Data System (ADS)

    Laptev, Alexander; Tovesson, Fredrik; Hill, Tony

    2010-11-01

    A well-established program of neutron-induced fission cross section measurements at the Los Alamos Neutron Science Center (LANSCE) is supporting the Fuel Cycle Research program (FC R&D). The incident neutron energy range spans from sub-thermal energies up to 200 MeV, covered by measuring at both the Lujan Center and the Weapons Neutron Research (WNR) center. Conventional parallel-plate fission ionization chambers with actinide-deposited foils are used as fission detectors. The time-of-flight method is implemented to measure the neutron energy. The counting-rate ratio between the investigated and standard U-235 foils is translated into a fission cross section ratio. Different methods of normalization for the measured ratio are employed, namely the actinide deposit thicknesses, normalization to evaluated data, etc. Finally, the ratios are converted to cross sections based on the standard U-235 fission cross section data file. Preliminary data for the newly investigated isotopes U-236 and U-234 will be reported. These new data complete a full suite of Uranium isotopes investigated with the presented experimental approach. When analysis of the newly measured data is completed, the data will be delivered to evaluators. Having data for the full suite of Uranium isotopes will increase theoretical modeling capabilities and make new data evaluations much more reliable.

  7. Production and testing of the ENEA-Bologna VITJEFF32.BOLIB (JEFF-3.2) multi-group (199 n + 42 γ) cross section library in AMPX format for nuclear fission applications

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2017-09-01

    The ENEA-Bologna Nuclear Data Group produced the VITJEFF32.BOLIB multi-group coupled neutron/photon (199 n + 42 γ) cross section library in AMPX format, based on the OECD-NEA Data Bank JEFF-3.2 evaluated nuclear data library. VITJEFF32.BOLIB was conceived for nuclear fission applications as a European counterpart of the similar ORNL VITAMIN-B7 library (ENDF/B-VII.0 data). VITJEFF32.BOLIB has the same neutron and photon energy group structure as the former ORNL VITAMIN-B6 reference library (ENDF/B-VI.3 data) and was produced using similar data processing methodologies, based on the LANL NJOY-2012.53 nuclear data processing system for the generation of the nuclide cross section data files in GENDF format. Then the ENEA-Bologna 2007 Revision of the ORNL SCAMPI nuclear data processing system was used for the conversion into the AMPX format. VITJEFF32.BOLIB contains processed cross section data files for 190 nuclides, obtained through the Bondarenko (f-factor) method for the treatment of neutron resonance self-shielding and temperature effects. Collapsed working libraries of self-shielded cross sections in FIDO-ANISN format, used by the deterministic transport codes of the ORNL DOORS system, can be generated from VITJEFF32.BOLIB through the cited SCAMPI version. This paper describes the methodology and specifications of the data processing performed and presents some results of the VITJEFF32.BOLIB validation.

  8. Iso-geometric analysis for neutron diffusion problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, S. K.; Eaton, M. D.; Williams, M. M. R.

    Iso-geometric analysis can be viewed as a generalisation of the finite element method. It permits the exact representation of a wider range of geometries, including conic sections. This is possible due to the use of concepts employed in computer-aided design, whose underlying mathematical representations are used both to capture the geometry and to approximate the solution. In this paper the neutron diffusion equation is solved using iso-geometric analysis. The practical advantages are highlighted by looking at the problem of a circular fuel pin in a square moderator. For this problem the finite element method requires the geometry to be approximated, which leads to errors in the shape and size of the interface between the fuel and the moderator. In contrast, iso-geometric analysis allows the interface to be represented exactly. It is found that, due to a cancellation of errors, the finite element method converges more quickly than iso-geometric analysis for this problem. A fuel pin in a vacuum was then considered, as this problem is highly sensitive to the leakage across the interface. In this case iso-geometric analysis greatly outperforms the finite element method. Due to the improvement in the representation of the geometry, iso-geometric analysis can outperform traditional finite element methods. It is proposed that the use of iso-geometric analysis on neutron transport problems will allow deterministic solutions to be obtained for exact geometries, something that is currently only possible with Monte Carlo techniques. (authors)
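The claim that iso-geometric analysis represents the circular interface exactly rests on the ability of rational (NURBS) bases to reproduce conic sections. A minimal sketch: a quarter of the unit circle as a single rational quadratic Bézier segment, with every evaluated point on the circle to machine precision:

```python
import math

# A quarter of the unit circle as a rational quadratic Bezier curve (one
# NURBS segment). Unlike a polynomial finite element boundary, every point
# of this curve satisfies x^2 + y^2 = 1 exactly (up to roundoff).

P = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]       # control points
w = [1.0, math.sqrt(2.0) / 2.0, 1.0]           # weights for an exact arc

def arc_point(t):
    b = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]  # Bernstein basis
    denom = sum(wi * bi for wi, bi in zip(w, b))
    x = sum(wi * bi * p[0] for wi, bi, p in zip(w, b, P)) / denom
    y = sum(wi * bi * p[1] for wi, bi, p in zip(w, b, P)) / denom
    return x, y

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = arc_point(t)
    print(round(x * x + y * y, 12))  # 1.0 at every parameter value
```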

  9. Structure of aqueous proline via parallel tempering molecular dynamics and neutron diffraction.

    PubMed

    Troitzsch, R Z; Martyna, G J; McLain, S E; Soper, A K; Crain, J

    2007-07-19

    The structure of aqueous L-proline amino acid has been the subject of much debate centering on the validity of various proposed models, differing widely in the extent to which local and long-range correlations are present. Here, aqueous proline is investigated by atomistic, replica exchange molecular dynamics simulations, and the results are compared to neutron diffraction and small angle neutron scattering (SANS) data, which have been reported recently (McLain, S.; Soper, A.; Terry, A.; Watts, A. J. Phys. Chem. B 2007, 111, 4568). Comparisons between neutron experiments and simulation are made via the static structure factor S(Q) which is measured and computed from several systems with different H/D isotopic compositions at a concentration of 1:20 molar ratio. Several different empirical water models (TIP3P, TIP4P, and SPC/E) in conjunction with the CHARMM22 force field are investigated. Agreement between experiment and simulation is reasonably good across the entire Q range although there are significant model-dependent variations in some cases. In general, agreement is improved slightly upon application of approximate quantum corrections obtained from gas-phase path integral simulations. Dimers and short oligomeric chains formed by hydrogen bonds (frequently bifurcated) coexist with apolar (hydrophobic) contacts. These emerge as the dominant local motifs in the mixture. Evidence for long-range association is more equivocal: No long-range structures form spontaneously in the MD simulations, and no obvious low-Q signature is seen in the SANS data. Moreover, associations introduced artificially to replicate a long-standing proposed mesoscale structure for proline correlations as an initial condition are annealed out by parallel tempering MD simulations. However, some small residual aggregates do remain, implying a greater degree of long-range order than is apparent in the SANS data.
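The replica-exchange (parallel tempering) moves that anneal out the artificially introduced aggregates follow a standard Metropolis swap criterion between neighbouring temperatures. A hedged sketch of the generic textbook rule, not the authors' specific implementation; the energies are invented:

```python
import math, random

# Replica-exchange swap: replicas at inverse temperatures beta_i, beta_j
# exchange configurations with probability
#   min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
# A hot replica holding a lower-energy configuration is always swapped down,
# which is what lets high-temperature replicas escape metastable aggregates.

def swap_accepted(beta_i, beta_j, E_i, E_j, rng=random.random):
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)

# cold replica (beta=1.0) trapped at E=-100; hot replica (beta=0.5) at E=-120
print(swap_accepted(1.0, 0.5, -100.0, -120.0))  # True: always accepted
```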

  10. New result for the neutron β -asymmetry parameter A0 from UCNA

    NASA Astrophysics Data System (ADS)

    Brown, M. A.-P.; Dees, E. B.; Adamek, E.; Allgeier, B.; Blatnik, M.; Bowles, T. J.; Broussard, L. J.; Carr, R.; Clayton, S.; Cude-Woods, C.; Currie, S.; Ding, X.; Filippone, B. W.; García, A.; Geltenbort, P.; Hasan, S.; Hickerson, K. P.; Hoagland, J.; Hong, R.; Hogan, G. E.; Holley, A. T.; Ito, T. M.; Knecht, A.; Liu, C.-Y.; Liu, J.; Makela, M.; Martin, J. W.; Melconian, D.; Mendenhall, M. P.; Moore, S. D.; Morris, C. L.; Nepal, S.; Nouri, N.; Pattie, R. W.; Pérez Galván, A.; Phillips, D. G.; Picker, R.; Pitt, M. L.; Plaster, B.; Ramsey, J. C.; Rios, R.; Salvat, D. J.; Saunders, A.; Sondheim, W.; Seestrom, S. J.; Sjue, S.; Slutsky, S.; Sun, X.; Swank, C.; Swift, G.; Tatar, E.; Vogelaar, R. B.; VornDick, B.; Wang, Z.; Wexler, J.; Womack, T.; Wrede, C.; Young, A. R.; Zeck, B. A.; UCNA Collaboration

    2018-03-01

    Background: The neutron β-decay asymmetry parameter A0 defines the angular correlation between the spin of the neutron and the momentum of the emitted electron. Values for A0 permit an extraction of the ratio of the weak axial-vector to vector coupling constants, λ ≡ gA/gV, which under the assumption of the conserved vector current hypothesis (gV = 1) determines gA. Precise values for gA are important as a benchmark for lattice QCD calculations and as a test of the standard model. Purpose: The UCNA experiment, carried out at the Ultracold Neutron (UCN) source at the Los Alamos Neutron Science Center, was the first measurement of any neutron β-decay angular correlation performed with UCN. This article reports the most precise result for A0 obtained to date from the UCNA experiment, the product of higher statistics and reduced key systematic uncertainties, including those from the neutron polarization and the characterization of the electron detector response. Methods: UCN produced via the downscattering of moderated spallation neutrons in a solid deuterium crystal were polarized via transport through a 7 T polarizing magnet and a spin flipper, which permitted selection of either spin state. The UCN were then contained within a 3-m-long cylindrical decay volume, situated along the central axis of a superconducting 1 T solenoidal spectrometer. With the neutron spins oriented parallel or anti-parallel to the solenoidal field, an asymmetry in the numbers of emitted decay electrons detected in two electron detector packages located on either end of the spectrometer permitted an extraction of A0. Results: The UCNA experiment reports a new 0.67%-precision result, A0 = -0.12054(44)stat(68)syst, which yields λ = gA/gV = -1.2783(22). Combination with the previous UCNA result, accounting for correlated systematic uncertainties, produces A0 = -0.12015(34)stat(63)syst and λ = gA/gV = -1.2772(20).
Conclusions: This new result for A0 and gA/gV from the UCNA experiment confirms the shift in values for gA/gV that has emerged in the published results from more recent experiments, which are in striking disagreement with the results from older experiments. Individual systematic corrections to the asymmetries in older experiments (published prior to 2002) were >10%, whereas those in the more recent ones (published after 2002) have been on the scale of <2%. The impact of these older results on the global average will be minimized should future measurements of A0 reach the 0.1% level of precision with central values near the most recent results.
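At leading order the extraction of λ from A0 uses A0 = -2λ(λ + 1)/(1 + 3λ²). The sketch below inverts this relation numerically; it is a hedged illustration of the quoted connection, not the collaboration's full analysis, since the recoil-order and radiative corrections applied in the actual extraction are omitted:

```python
# Leading-order relation between the beta-asymmetry A0 and lambda = gA/gV,
#   A0 = -2*lambda*(lambda + 1) / (1 + 3*lambda**2),
# inverted by bisection on the branch containing the neutron value.

def A0_of_lambda(lam):
    return -2.0 * lam * (lam + 1.0) / (1.0 + 3.0 * lam ** 2)

def lambda_of_A0(a0, lo=-1.5, hi=-1.0, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if A0_of_lambda(mid) > a0:
            hi = mid   # A0 increases with lambda on [-1.5, -1.0]
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(lambda_of_A0(-0.12054), 4))  # -1.2783, matching the quoted value
```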

  11. Development of a simultaneous SANS / FTIR measuring system and its application to polymer cocrystals

    NASA Astrophysics Data System (ADS)

    Kaneko, F.; Seto, N.; Sato, S.; Radulescu, A.; Schiavone, M. M.; Allgaier, J.; Ute, K.

    2016-09-01

    In order to provide a wealth of structural information to assist in the analysis and interpretation of small angle neutron scattering (SANS) profiles, a novel method for the simultaneous time-resolved measurement of SANS and Fourier transform infrared (FTIR) spectroscopy has been developed. The method was realized by building a device consisting of a portable FTIR spectrometer and an optical system equipped with two aluminum-coated quartz plates that are fully transparent to neutron beams but act as mirrors for infrared radiation. The optical system allows both a neutron beam and an infrared beam to pass through the same position of a test specimen coaxially. The device was installed on a small angle neutron diffractometer, KWS2 of the Jülich Centre for Neutron Science (JCNS) outstation at the Heinz Maier-Leibnitz Center (MLZ) in Garching, Germany. To check the performance of this simultaneous measuring system, the structural changes in the cocrystals of syndiotactic polystyrene during heating were followed. It has been confirmed that the FTIR spectra measured in parallel provide information about the behavior of each component and are also useful for grasping in real time what is actually happening in the sample system.

  12. Measurement of neutron-induced reactions on 242mAm

    NASA Astrophysics Data System (ADS)

    Buckner, M. Q.; Wu, C.-Y.; Henderson, R. A.; Bucher, B.; Chyzh, A.; Bredeweg, T. A.; Baramsai, B.; Couture, A.; Jandel, M.; Mosby, S.; Ullmann, J. L.; Dance Collaboration

    2016-09-01

    Neutron-induced reaction cross sections of 242mAm were measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. A new neutron-capture cross section was determined relative to a simultaneous measurement of the well-known 242mAm(n,f) cross section. The (n,γ) cross section was measured from thermal to an incident energy of 1 eV. Our new 242mAm fission cross section was normalized to ENDF/B-VII.1 and agrees well with the (n,f) cross section reported in the literature from thermal energy to 1 keV. The capture-to-fission ratio was determined from thermal energy to En = 0.1 eV and found to be (n,γ)/(n,f) = 26(4)%, compared to 19% from ENDF/B-VII.1. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Security, LLC, under Contract DE-AC52-07NA27344, and by Los Alamos National Security, LLC, under Contract DE-AC52-06NA25396, with support from the U.S. DOE/NNSA Office of Defense Nuclear Nonproliferation Research and Development.

  13. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    NASA Astrophysics Data System (ADS)

    Doisneau, François; Arienti, Marco; Oefelein, Joseph C.

    2017-01-01

    For sprays, as described by a kinetic disperse-phase model strongly coupled to the Navier-Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle-particle coupling barely influences the transport, i.e., when particle pressure is negligible; the particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. Being a deterministic resolution method, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are done among data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly coupled liquid jet with fine spatial resolution, and we apply it to a high-fidelity large eddy simulation of a dense spray flow: a fuel spray is simulated after atomization at Diesel-engine combustion-chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.
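The flux-free, backward-tracing character of a semi-Lagrangian step can be illustrated in one dimension. A hedged sketch with constant advection speed and linear interpolation (the actual method transports parcels of a kinetic disperse phase, not a scalar field):

```python
import numpy as np

# Semi-Lagrangian step: instead of computing fluxes between cells, each grid
# value at t+dt is found by tracing the characteristic backwards (x - u*dt)
# and interpolating the old field at that departure point. Periodic 1D domain.

def semi_lagrangian_step(f, x, u, dt):
    L = x[-1] + (x[1] - x[0])             # periodic domain length
    departure = (x - u * dt) % L          # foot of the characteristic
    return np.interp(departure, x, f, period=L)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
f = np.exp(-200.0 * (x - 0.3) ** 2)       # a narrow pulse
g = f.copy()
for _ in range(100):                      # advect by exactly one period
    g = semi_lagrangian_step(g, x, u=1.0, dt=0.01)
print(float(np.abs(g - f).max()) < 1e-6)  # pulse returns to its start
```

Because the update interpolates at the characteristic's foot rather than limiting fluxes, the step remains stable even for CFL numbers above one, which is part of the robustness argued for above.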

  14. A semi-Lagrangian transport method for kinetic problems with application to dense-to-dilute polydisperse reacting spray flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doisneau, François, E-mail: fdoisne@sandia.gov; Arienti, Marco, E-mail: marient@sandia.gov; Oefelein, Joseph C., E-mail: oefelei@sandia.gov

    For sprays, as described by a kinetic disperse phase model strongly coupled to the Navier–Stokes equations, the resolution strategy is constrained by accuracy objectives, robustness needs, and the computing architecture. In order to leverage the good properties of the Eulerian formalism, we introduce a deterministic particle-based numerical method to solve transport in physical space, which is simple to adapt to the many types of closures and moment systems. The method is inspired by the semi-Lagrangian schemes developed for gas dynamics. We show how semi-Lagrangian formulations are relevant for a disperse phase far from equilibrium, where particle–particle coupling barely influences the transport, i.e., when particle pressure is negligible; the particle behavior is then close to free streaming. The new method uses the assumption of parcel transport and avoids computing fluxes and their limiters, which makes it robust. Being a deterministic resolution method, it requires no effort on statistical convergence, noise control, or post-processing. All couplings are done among data in the form of Eulerian fields, which allows one to use efficient algorithms and to anticipate the computational load. This makes the method both accurate and efficient in the context of parallel computing. After a complete verification of the new transport method on various academic test cases, we demonstrate the overall strategy's ability to solve a strongly coupled liquid jet with fine spatial resolution, and we apply it to a high-fidelity large eddy simulation of a dense spray flow: a fuel spray is simulated after atomization at Diesel-engine combustion-chamber conditions. The large, parallel, strongly coupled computation proves the efficiency of the method for dense, polydisperse, reacting spray flows.

  15. Deterministic seismogenic scenarios based on asperities spatial distribution to assess tsunami hazard on northern Chile (18°S to 24°S)

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.

    2016-12-01

    The southern Peru and northern Chile coastal areas, extending between 12°S and 24°S, have been recognized as a mature seismic gap with a high seismogenic potential associated with the seismic moment deficit accumulated since 1877. An important scientific question, relevant from a hazard assessment perspective, is what the rupture pattern of a future megathrust earthquake will be. During the last decade, the occurrence of three major subduction earthquakes has made it possible to acquire outstanding geophysical and geological information on the behavior of these phenomena. An interesting result is the relationship between the maximum-slip areas and the spatial distribution of asperities in subduction zones. In this contribution, we propose a methodology to identify a regional pattern of main asperities in order to construct reliable seismogenic scenarios in a seismic gap. We follow a deterministic approach to explore the distribution of asperity segmentation using geophysical and geodetic data such as trench-parallel gravity anomaly (TPGA), interseismic coupling (ISC), b-value, historical moment release, and residual bathymetric and gravity anomalies. The combined information provides physical constraints on short- and long-term candidate regions for future mega-earthquakes. To illuminate the asperity distribution, we construct profiles in fault coordinates, along strike and down dip, of all proxies to define the boundaries of major asperities (>100 km). The geometry of a major asperity is then used to define a finite set of deterministic seismogenic scenarios to evaluate tsunamigenic hazard in the main cities of the northern zone of Chile (18°S to 24°S).

  16. TEST-HOLE CONSTRUCTION FOR A NEUTRONIC REACTOR

    DOEpatents

    Ohlinger, L.A.; Seitz, F.; Young, G.J.

    1959-02-17

    Test-hole construction is described for a reactor which provides safe and ready access to the neutron flux region for specimen materials which are to be irradiated therein. An elongated tubular thimble adapted to be inserted in the access hole through the wall of the reactor is constructed of aluminum and is provided with a plurality of holes parallel to the axis of the thimble for conveying the test specimens into position for irradiation, and a conduit for the circulation of coolant. A laminated shield formed of alternate layers of steel and pressed wood fiber is disposed lengthwise of the thimble near the outer end thereof.

  17. Estimation of Listeria monocytogenes and Escherichia coli O157:H7 prevalence and levels in naturally contaminated rocket and cucumber samples by deterministic and stochastic approaches.

    PubMed

    Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H

    2015-02-01

    The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.17) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed between rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.
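The core idea behind inferring a level from presence/absence data alone can be sketched with a simple Poisson point estimate: if cells are randomly distributed across 25 g test portions with mean λ, then P(negative) = exp(-λ), so λ = -ln(negatives/total). This is a hedged simplification; the paper's Bayesian model additionally accounts for media sensitivity and yields credible intervals, which this sketch does not reproduce:

```python
import math

# Poisson point estimate of contamination level from presence/absence data:
# with mean lambda cells per test portion, P(all-negative portion) = e^-lambda,
# so lambda = -ln(n_negative / n_total). Units: CFU per 25 g portion.

def level_per_portion(n_positive, n_total):
    n_negative = n_total - n_positive
    return -math.log(n_negative / n_total)

# e.g. 7 positives out of 100 portions, as for L. monocytogenes in rocket
print(round(level_per_portion(7, 100), 3))  # CFU/25 g
```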

  18. Parallel Implementation of Numerical Solution of Few-Body Problem Using Feynman's Continual Integrals

    NASA Astrophysics Data System (ADS)

    Naumenko, Mikhail; Samarin, Viacheslav

    2018-02-01

    A modern parallel computing algorithm has been applied to the solution of the few-body problem. The approach is based on Feynman's continual integrals method, implemented in the C++ programming language using NVIDIA CUDA technology. A wide range of 3-body and 4-body bound systems has been considered, including nuclei described as consisting of protons and neutrons (e.g., 3,4He) and nuclei described as consisting of clusters and nucleons (e.g., 6He). The correctness of the results was checked by comparison with the exactly solvable 4-body oscillatory system and with experimental data.

  19. Neutron Energy Spectra and Yields from the 7Li(p,n) Reaction for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Tessler, M.; Friedman, M.; Schmidt, S.; Shor, A.; Berkovits, D.; Cohen, D.; Feinberg, G.; Fiebiger, S.; Krása, A.; Paul, M.; Plag, R.; Plompen, A.; Reifarth, R.

    2016-01-01

    Neutrons produced by the 7Li(p,n)7Be reaction close to threshold are widely used to measure the cross sections of s-process nucleosynthesis reactions. While experiments have so far been performed with Van de Graaff accelerators, the use of RF accelerators with higher intensities is planned to enable investigations of radioactive isotopes. In parallel, high-power Li targets for the production of high-intensity neutrons at stellar energies are being developed at Goethe University (Frankfurt, Germany) and SARAF (Soreq NRC, Israel). However, such setups pose severe challenges for the measurement of the proton beam intensity or the neutron fluence. In order to develop appropriate methods, we studied in detail the neutron energy distribution and intensity produced by the thick-target 7Li(p,n)7Be reaction and compared them to state-of-the-art simulation codes. Measurements were performed with the bunched and chopped proton beam at the Van de Graaff facility of the Institute for Reference Materials and Measurements (IRMM) using the time-of-flight (TOF) technique with thin (1/8") and thick (1") detectors. The importance of detailed simulations of the detector structure and geometry for the conversion of TOF to neutron energy is stressed. The measured neutron spectra are consistent with those previously reported and agree well with Monte Carlo simulations that include experimentally determined 7Li(p,n) cross sections, two-body kinematics and proton energy loss in the Li target.
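The conversion of TOF to neutron energy mentioned above follows from relativistic kinematics: a neutron covering flight path L in time t has β = L/(ct) and kinetic energy E = m_n c²(γ - 1). A minimal sketch, with invented flight-path and timing values:

```python
import math

# Relativistic TOF-to-energy conversion for neutrons. The flight path and
# time below are illustrative, not the IRMM beamline geometry.

C = 299_792_458.0      # speed of light, m/s
MN_C2 = 939.565        # neutron rest energy, MeV

def tof_to_energy(L_m, t_s):
    beta = L_m / (C * t_s)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return MN_C2 * (gamma - 1.0)   # kinetic energy, MeV

# 8 m flight path, 1 ms flight time -> a slow (~0.33 eV) neutron
print(tof_to_energy(8.0, 1e-3))
```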

  20. Organ and effective dose coefficients for cranial and caudal irradiation geometries: Neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, K. G.; Eckerman, K. F.; Hertel, N. E.

    Dose coefficients based on the recommendations of International Commission on Radiological Protection (ICRP) Publication 103 were reported in ICRP Publication 116, the revision of ICRP Publication 74 and ICRU Publication 57, for the six reference irradiation geometries: anterior–posterior, posterior–anterior, right and left lateral, rotational and isotropic. In this work, dose coefficients for neutron irradiation of the body with parallel beams directed upward from below the feet (caudal) and downward from above the head (cranial) were computed following the ICRP 103 methodology using the MCNP 6.1 radiation transport code. The dose coefficients were determined for neutrons ranging in energy from 10^-9 MeV to 10 GeV. At energies below about 500 MeV, the cranial and caudal dose coefficients are less than those for the six reference geometries reported in ICRP Publication 116.

  1. Organ and effective dose coefficients for cranial and caudal irradiation geometries: Neutrons

    DOE PAGES

    Veinot, K. G.; Eckerman, K. F.; Hertel, N. E.; ...

    2016-08-29

    Dose coefficients based on the recommendations of International Commission on Radiological Protection (ICRP) Publication 103 were reported in ICRP Publication 116, the revision of ICRP Publication 74 and ICRU Publication 57, for the six reference irradiation geometries: anterior–posterior, posterior–anterior, right and left lateral, rotational and isotropic. In this work, dose coefficients for neutron irradiation of the body with parallel beams directed upward from below the feet (caudal) and downward from above the head (cranial) were computed following the ICRP 103 methodology using the MCNP 6.1 radiation transport code. The dose coefficients were determined for neutrons ranging in energy from 10^-9 MeV to 10 GeV. At energies below about 500 MeV, the cranial and caudal dose coefficients are less than those for the six reference geometries reported in ICRP Publication 116.

  2. In situ investigation of deformation mechanisms in magnesium-based metal matrix composites

    NASA Astrophysics Data System (ADS)

    Farkas, Gergely; Choe, Heeman; Máthis, Kristián; Száraz, Zoltán; Noh, Yoonsook; Trojanová, Zuzanka; Minárik, Peter

    2015-07-01

    We studied the effect of short fibers on the mechanical properties of a magnesium alloy. In particular, deformation mechanisms in a Mg-Al-Sr alloy reinforced with short alumina fibers were studied in situ using neutron diffraction and acoustic emission methods. The fibers' plane orientation with respect to the loading axis was found to be a key parameter, which influences the acting deformation processes, such as twinning or dislocation slip. Furthermore, the twinning activity was much more significant in samples with parallel fiber plane orientation, which was confirmed by both acoustic emission and electron backscattering diffraction results. Neutron diffraction was also used to assist in analyzing the acoustic emission and electron backscattering diffraction results. The simultaneous application of the two in situ methods, neutron diffraction and acoustic emission, was found to be beneficial for obtaining complementary datasets about the twinning and dislocation slip in the magnesium alloys and composites used in this study.

  3. Total prompt γ-ray emission in fission

    NASA Astrophysics Data System (ADS)

    Wu, C. Y.; Chyzh, A.; Kwan, E.; Henderson, R. A.; Bredeweg, T. A.; Haight, R. C.; Hayes-Sterbenz, A. C.; Lee, H. Y.; O'Donnell, J. M.; Ullmann, J. L.

    2017-09-01

    The total prompt γ-ray energy distributions were measured for the neutron-induced fission of 235U and 239,241Pu at incident neutron energies from 0.025 eV to 100 keV, and for the spontaneous fission of 252Cf, using the Detector for Advanced Neutron Capture Experiments (DANCE) array in coincidence with the detection of fission fragments by a parallel-plate avalanche counter. Corrections were made to the measured distribution by unfolding the two-dimensional spectrum of total prompt γ-ray energy vs. multiplicity using a simulated DANCE response matrix. A summary of this work is presented with emphasis on the comparison of the total prompt fission γ-ray energy between our results and previous ones. The mean values of the total prompt γ-ray energy ⟨Eγ,tot⟩, determined from the unfolded distributions, are ˜20% higher than those derived from measurements using a single γ-ray detector, for all the fissile nuclei studied.
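Unfolding a measured distribution with a simulated response matrix can be illustrated with a toy iterative scheme. This hedged sketch uses a Richardson-Lucy-type update on an invented 3-bin response, not the actual DANCE response matrix or the authors' unfolding procedure:

```python
import numpy as np

# Toy response-matrix unfolding: the measured spectrum is m = R @ s, where R
# mixes neighbouring bins of the true spectrum s. A Richardson-Lucy-type
# multiplicative update recovers s from m. All numbers are invented.

R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])        # columns sum to 1 (counts conserved)
true = np.array([10.0, 5.0, 2.0])
measured = R @ true                     # noiseless "data"

s = np.ones(3)                          # flat starting guess
for _ in range(500):
    s *= R.T @ (measured / (R @ s)) / R.sum(axis=0)

print(np.round(s, 3))                   # converges toward [10., 5., 2.]
```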

  4. Reflector and Protections in a Sodium-cooled Fast Reactor: Modelling and Optimization

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Fontaine, Bruno

    2017-09-01

    The ASTRID project (Advanced Sodium Technological Reactor for Industrial Demonstration) is a Generation IV nuclear reactor concept under development in France [1]. Within this framework, studies are underway to optimize the radial reflectors and protections. Considering radial protections made of natural boron carbide, this study assesses the neutronic performance of MgO as the reference choice of reflector material, in comparison with other possible materials including a more conventional stainless steel. The analysis is based upon simplified 1-D and 2-D deterministic models of the reactor, providing simplified interfaces between core, reflector and protections. Such models allow detailed reaction rate distributions to be examined; they also provide physical insight into the local spectral effects occurring at the core-reflector and reflector-protection interfaces.

  5. Use of SRIM and Garfield with Geant4 for the characterization of a hybrid 10B/3He neutron detector

    NASA Astrophysics Data System (ADS)

    van der Ende, B. M.; Rand, E. T.; Erlandson, A.; Li, L.

    2018-06-01

    This paper describes a method for more complete neutron detector characterization, using Geant4's Monte Carlo methods to characterize the overall detector response rate and Garfield interfaced with SRIM to simulate the detector's raw pulses, as applied to a hybrid 10B/3He detector. The Geant4 models characterizing the detector's interaction with a 252Cf point source and with parallel beams of mono-energetic neutrons (assuming ISO 8529 reference energy values) agree with calibrated 252Cf measurements to within 6.4%. Validated Geant4 model outputs serve as input to Garfield+SRIM calculations to provide meaningful pulse height spectra. Modifications to Garfield were necessary to account for the simultaneous tracking of electrons produced by the proton and triton reaction products of a single 3He neutron-capture event, and it was further necessary to interface Garfield with the energy loss, range, and straggling calculations provided by SRIM. Individual raw pulses generated by Garfield+SRIM also agree well with experimentally measured raw pulses from the detector.

  6. High-Dose Neutron Detector Development Using 10B Coated Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menlove, Howard Olsen; Henzlova, Daniela

    2016-11-08

    During FY16 the boron-lined parallel-plate technology was optimized to fully benefit from its fast timing characteristics in order to enhance its high-count-rate capability. To facilitate high count rates, a novel fast amplifier with timing and operating properties matched to the detector characteristics was developed and implemented in the 8" boron plate detector that was purchased from PDT. Each of the 6 sealed cells was connected to a fast amplifier with corresponding List-mode readout from each amplifier. The FY16 work focused on improvements in the boron-10 coating materials and procedures at PDT to significantly improve the neutron detection efficiency. An improvement in efficiency by a factor of 1.5 was achieved without increasing the metal backing area for the boron coating. This improvement has allowed us to operate the detector in gamma-ray backgrounds four orders of magnitude higher than was previously possible while maintaining a relatively high counting efficiency for neutrons. This improvement in gamma-ray rejection is a key factor in the development of the high-dose neutron detector.

  7. Absolute measurement of the 242Pu neutron-capture cross section

    DOE PAGES

    Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...

    2016-04-21

    Here, the absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. The first direct measurement of the 242Pu(n,γ) cross section was made over the incident neutron energy range from thermal to ≈6 keV, and the absolute scale of the (n,γ) cross section was set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. This was accomplished by adding a small quantity of 239Pu to the 242Pu sample. The relative scale of the cross section, spanning four orders of magnitude, was determined for incident neutron energies from thermal to ≈40 keV. Our data are generally in agreement with previous measurements and with those reported in ENDF/B-VII.1; the 242Pu(n,γ) cross section at the En,R = 2.68 eV resonance is within 2.4% of the evaluated value. However, discrepancies exist at higher energies: our data are ≈30% lower than the evaluated data at En ≈ 1 keV and approximately 2σ away from the previous measurement at En ≈ 20 keV.

  8. Multi-Strain Deterministic Chaos in Dengue Epidemiology, A Challenge for Computational Mathematics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Kooi, Bob W.; Stollenwerk, Nico

    2009-09-01

    Recently, we have analysed epidemiological models of competing strains of pathogens and hence differences in transmission for first versus secondary infection due to interaction of the strains with previously acquired immunities, as has been described for dengue fever, known as antibody dependent enhancement (ADE). These models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied to gain further insight into such dynamical structures. In particular, Lyapunov spectra, which quantify the predictability horizon in the epidemiological system, are computationally very demanding. We show ways to speed up computations of such Lyapunov spectra by a factor of more than ten by parallelizing previously used sequential C programs. Such fast computations of Lyapunov spectra will be especially useful in future investigations of seasonally forced versions of the present models, as they are needed for data analysis.
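
    The parallelization strategy described above is straightforward to sketch, since the Lyapunov exponent for each parameter value is an independent computation. The following Python sketch (not the authors' C code) estimates the largest Lyapunov exponent of the logistic map, a simple stand-in for the multi-strain dengue model, and farms a parameter sweep out to a worker pool:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def lyapunov_logistic(r, n_transient=500, n_iter=20000, x0=0.3):
    """Largest Lyapunov exponent of x -> r*x*(1-x), averaging log|f'(x)|."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_iter

def lyapunov_sweep(r_values, workers=4):
    # Each parameter value is independent, so the sweep also maps
    # directly onto processes or MPI ranks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lyapunov_logistic, r_values))

exponents = lyapunov_sweep([2.9, 3.5, 4.0])
# negative exponents in the periodic regimes, close to ln 2 at r = 4
```

    The sweep pattern, not the toy map, is the point: each entry of a Lyapunov spectrum over a parameter grid can be computed on its own worker with no communication.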

  9. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
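
    The key property claimed above, a deterministic "non-intrusive" surrogate that queries the model only at chosen points, as Monte-Carlo sampling does, yet converges faster, can be illustrated in one dimension. The sketch below uses plain Chebyshev interpolation with NumPy, not the authors' adaptive Smolyak scheme, and an invented scalar "model":

```python
import numpy as np

def model(theta):
    # Hypothetical scalar "simulator" response to one kinetic parameter.
    return np.exp(-theta) * np.sin(3.0 * theta)

def cheb_surrogate(f, degree, a=0.0, b=2.0):
    # Non-intrusive: the model is only evaluated pointwise at chosen nodes,
    # exactly as a Monte-Carlo sampler would evaluate it.
    k = np.arange(degree + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # on [-1, 1]
    x = 0.5 * (b - a) * (nodes + 1.0) + a
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(x), degree)
    return lambda t: np.polynomial.chebyshev.chebval(
        2.0 * (t - a) / (b - a) - 1.0, coeffs)

grid = np.linspace(0.0, 2.0, 401)
errors = [float(np.max(np.abs(model(grid) - cheb_surrogate(model, d)(grid))))
          for d in (4, 8, 16)]
# errors shrink rapidly with degree (spectral convergence for smooth maps)
```

    For smooth parameter-to-output maps the error decays spectrally in the number of model evaluations, which is the advantage over the O(N^-1/2) rate of Monte-Carlo sampling; the Smolyak construction extends this idea to many dimensions.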

  10. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  11. Scalable Replay with Partial-Order Dependencies for Message-Logging Fault Tolerance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Meneses, Esteban; Menon, Harshita

    2014-09-22

    Deterministic replay of a parallel application is commonly used for discovering bugs or to recover from a hard fault with message-logging fault tolerance. For message passing programs, a major source of overhead during forward execution is recording the order in which messages are sent and received. During replay, this ordering must be used to deterministically reproduce the execution. Previous work in replay algorithms often makes minimal assumptions about the programming model and application in order to maintain generality. However, in many cases, only a partial order must be recorded due to determinism intrinsic in the code, ordering constraints imposed by the execution model, and events that are commutative (their relative execution order during replay does not need to be reproduced exactly). In this paper, we present a novel algebraic framework for reasoning about the minimum dependencies required to represent the partial order for different concurrent orderings and interleavings. By exploiting this theory, we improve on an existing scalable message-logging fault tolerance scheme. The improved scheme scales to 131,072 cores on an IBM BlueGene/P with up to 2x lower overhead than one that records a total order.
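
    The central observation, that commutative events need no recorded total order, can be made concrete with a toy example. The sketch below (illustrative only, not the paper's algebraic framework) tags each message with its sender and per-sender sequence number, enumerates every interleaving that respects the per-sender FIFO partial order, and checks that a commutative handler reaches the same state in all of them:

```python
from itertools import permutations

# Per-sender FIFO streams; only this partial order is recorded.
streams = {"A": [3, 5], "B": [7, 11]}
tagged = [(s, i, v) for s, msgs in streams.items() for i, v in enumerate(msgs)]

def respects_partial_order(schedule):
    # A legal replay keeps each sender's messages in their original order.
    for sender in streams:
        seqs = [i for (s, i, _) in schedule if s == sender]
        if seqs != sorted(seqs):
            return False
    return True

def replay(schedule):
    total = 0
    for _, _, value in schedule:
        total += value          # commutative handler: addition
    return total

legal = [p for p in permutations(tagged) if respects_partial_order(p)]
states = {replay(p) for p in legal}
# six legal interleavings, but a single final state: for this handler
# no total order of receives ever needed to be logged
```

    A non-commutative handler (say, appending to a list) would collapse the set of admissible interleavings and force more ordering information into the log, which is exactly the trade-off the paper's framework quantifies.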

  12. Branson: A Mini-App for Studying Parallel IMC, Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Alex

    This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations, it is not designed to model radiation transport: Branson contains simple physics and does not have a multigroup treatment, nor can it use physical material data. The opacities are simple polynomials in temperature, and there is only a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities, and other DOE partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.

  13. The Dripping Handrail Model: Transient Chaos in Accretion Systems

    NASA Technical Reports Server (NTRS)

    Young, Karl; Scargle, Jeffrey D.; Cuzzi, Jeffrey (Technical Monitor)

    1995-01-01

    We define and study a simple dynamical model for accretion systems, the "dripping handrail" (DHR). The time evolution of this spatially extended system is a mixture of periodic and apparently random (but actually deterministic) behavior. The nature of this mixture depends on the values of its physical parameters - the accretion rate, diffusion coefficient, and density threshold. The aperiodic component is a special kind of deterministic chaos called transient chaos. The model can simultaneously exhibit both the quasiperiodic oscillations (QPO) and very low frequency noise (VLFN) that characterize the power spectra of fluctuations of several classes of accretion systems in astronomy. For this reason, our model may be relevant to many such astrophysical systems, including binary stars with accretion onto a compact object - white dwarf, neutron star, or black hole - as well as active galactic nuclei. We describe the systematics of the DHR's temporal behavior by exploring its physical parameter space using several diagnostics: power spectra, wavelet "scalegrams," and Lyapunov exponents. In addition, we note that for large accretion rates the DHR has periodic modes; the effective pulse shapes for these modes - evaluated by folding the time series at the known period - bear a resemblance to the similarly-determined shapes for some x-ray pulsars. The pulsing observed in some of these systems may be such periodic-mode accretion, and not due to pure rotation as in the standard pulsar model.
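
    A minimal coupled-map-lattice sketch in the spirit of the DHR (the parameter values and drip rule here are illustrative assumptions, not the authors' exact model) makes the three physical parameters explicit: each cell on a ring accretes mass, diffuses with its neighbors, and "drips" when it exceeds the density threshold:

```python
import numpy as np

def dhr_step(rho, accretion=0.01, diffusion=0.4, threshold=1.0):
    # diffuse toward ring neighbors, then accrete, then drip
    left, right = np.roll(rho, 1), np.roll(rho, -1)
    rho = (1.0 - diffusion) * rho + 0.5 * diffusion * (left + right)
    rho = rho + accretion
    drips = rho >= threshold
    rho[drips] -= threshold        # a dripping cell sheds one threshold unit
    return rho, drips

rng = np.random.default_rng(0)
rho = rng.uniform(0.0, 1.0, size=64)
light_curve = []                   # drips per step: a crude "luminosity" proxy
for _ in range(2000):
    rho, drips = dhr_step(rho)
    light_curve.append(int(drips.sum()))
```

    Scanning the accretion rate, diffusion coefficient, and threshold in a lattice of this kind is how the periodic, QPO-like, and transiently chaotic regimes described above would be mapped out.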

  14. Anisotropic dynamics of water ultra-confined in macroscopically oriented channels of single-crystal beryl: A multi-frequency analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anovitz, Lawrence; Mamontov, Eugene; Ishai, Paul ben

    2013-01-01

    The properties of fluids can be significantly altered by the geometry of their confining environments. While there has been significant work on the properties of such confined fluids, the properties of fluids under ultraconfinement, environments where, at least in one plane, the dimensions of the confining environment are similar to that of the confined molecule, have not been investigated. This paper investigates the dynamic properties of water in beryl (Be3Al2Si6O18), the structure of which contains approximately 5-Å-diameter channels parallel to the c axis. Three techniques, inelastic neutron scattering, quasielastic neutron scattering, and dielectric spectroscopy, have been used to quantify these properties over a dynamic range covering approximately 16 orders of magnitude. Because beryl can be obtained in large single crystals we were able to quantify directional variations, perpendicular and parallel to the channel directions, in the dynamics of the confined fluid. These are significantly anisotropic and, somewhat counterintuitively, show that vibrations parallel to the c-axis channels are significantly more hindered than those perpendicular to the channels. The effective potential for vibrations in the c direction is harder than the potential in directions perpendicular to it. There is evidence of single-file diffusion of water molecules along the channels at higher temperatures, but below 150 K this diffusion is strongly suppressed. No such suppression, however, has been observed in the channel-perpendicular direction. Inelastic neutron scattering spectra include an intramolecular stretching O-H peak at 465 meV. As this is nearly coincident with that known for free water molecules and approximately 30 meV higher than that in liquid water or ice, this suggests that there is no hydrogen bonding constraining vibrations between the channel water and the beryl structure.
    However, dielectric spectroscopic measurements at higher temperatures and lower frequencies yield an activation energy for the dipole reorientation of 16.4 ± 0.14 kJ/mol, close to the energy required to break a hydrogen bond in bulk water. This may suggest the presence of some other form of bonding between the water molecules and the structure, but the resolution of the apparent contradiction between the inelastic neutron and dielectric spectroscopic results remains uncertain.

  15. Method and system for optical figuring by imagewise heating of a solvent

    DOEpatents

    Rushford, Michael C.

    2005-08-30

    A method and system of imagewise etching the surface of a substrate, such as thin glass, in a parallel process. The substrate surface is placed in contact with an etchant solution which increases in etch rate with temperature. A local thermal gradient is then generated in each of a plurality of selected local regions of a boundary layer of the etchant solution to imagewise etch the substrate surface in a parallel process. In one embodiment, the local thermal gradient is a local heating gradient produced at selected addresses chosen from an indexed array of addresses. The activation of each of the selected addresses is independently controlled by a computer processor so as to imagewise etch the substrate surface at region-specific etch rates. Moreover, etching progress is preferably concurrently monitored in real time over the entire surface area by an interferometer so as to deterministically control the computer processor to image-wise figure the substrate surface where needed.
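
    The premise, an etchant whose rate increases with temperature so that a computed temperature image maps directly to an etch-depth image, can be sketched with an Arrhenius rate law. The rate prefactor and activation energy below are illustrative assumptions, not values from the patent:

```python
import numpy as np

KB = 8.617e-5   # Boltzmann constant in eV/K

def etch_depth(T, t, r0=5.0, ea=0.3):
    """Depth after t seconds for an Arrhenius-activated etch rate.
    r0 (nm/s prefactor) and ea (eV) are illustrative, not measured, values."""
    return r0 * np.exp(-ea / (KB * T)) * t

T = np.full((8, 8), 300.0)   # substrate near room temperature, K
T[3:5, 3:5] = 320.0          # locally heated addresses of the indexed array
depth = etch_depth(T, t=60.0)
# heated addresses etch roughly twice as fast as the background here
```

    With these numbers a 20 K local temperature rise roughly doubles the etch rate, which is the lever the patent's computer-controlled heating array exploits to figure the surface address by address.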

  16. Structure and phase transitions of monolayers of intermediate-length n-alkanes on graphite studied by neutron diffraction and molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Diama, A.; Matthies, B.; Herwig, K. W.; Hansen, F. Y.; Criswell, L.; Mo, H.; Bai, M.; Taub, H.

    2009-08-01

    We present evidence from neutron diffraction measurements and molecular dynamics (MD) simulations of three different monolayer phases of the intermediate-length alkanes tetracosane (n-C24H50 denoted as C24) and dotriacontane (n-C32H66 denoted as C32) adsorbed on a graphite basal-plane surface. Our measurements indicate that the two monolayer films differ principally in the transition temperatures between phases. At the lowest temperatures, both C24 and C32 form a crystalline monolayer phase with a rectangular-centered (RC) structure. The two sublattices of the RC structure each consists of parallel rows of molecules in their all-trans conformation aligned with their long axis parallel to the surface and forming so-called lamellas of width approximately equal to the all-trans length of the molecule. The RC structure is uniaxially commensurate with the graphite surface in its [110] direction such that the distance between molecular rows in a lamella is 4.26 Å=√3 ag, where ag=2.46 Å is the lattice constant of the graphite basal plane. Molecules in adjacent rows of a lamella alternate in orientation between the carbon skeletal plane being parallel and perpendicular to the graphite surface. Upon heating, the crystalline monolayers transform to a "smectic" phase in which the inter-row spacing within a lamella expands by ˜10% and the molecules are predominantly oriented with the carbon skeletal plane parallel to the graphite surface. In the smectic phase, the MD simulations show evidence of broadening of the lamella boundaries as a result of molecules diffusing parallel to their long axis. At still higher temperatures, they indicate that the introduction of gauche defects into the alkane chains drives a melting transition to a monolayer fluid phase as reported previously.

  17. Structure and phase transitions of monolayers of intermediate-length n-alkanes on graphite studied by neutron diffraction and molecular dynamics simulation.

    PubMed

    Diama, A; Matthies, B; Herwig, K W; Hansen, F Y; Criswell, L; Mo, H; Bai, M; Taub, H

    2009-08-28

    We present evidence from neutron diffraction measurements and molecular dynamics (MD) simulations of three different monolayer phases of the intermediate-length alkanes tetracosane (n-C24H50, denoted as C24) and dotriacontane (n-C32H66, denoted as C32) adsorbed on a graphite basal-plane surface. Our measurements indicate that the two monolayer films differ principally in the transition temperatures between phases. At the lowest temperatures, both C24 and C32 form a crystalline monolayer phase with a rectangular-centered (RC) structure. The two sublattices of the RC structure each consist of parallel rows of molecules in their all-trans conformation aligned with their long axis parallel to the surface and forming so-called lamellas of width approximately equal to the all-trans length of the molecule. The RC structure is uniaxially commensurate with the graphite surface in its [110] direction such that the distance between molecular rows in a lamella is 4.26 Å = √3 a_g, where a_g = 2.46 Å is the lattice constant of the graphite basal plane. Molecules in adjacent rows of a lamella alternate in orientation between the carbon skeletal plane being parallel and perpendicular to the graphite surface. Upon heating, the crystalline monolayers transform to a "smectic" phase in which the inter-row spacing within a lamella expands by approximately 10% and the molecules are predominantly oriented with the carbon skeletal plane parallel to the graphite surface. In the smectic phase, the MD simulations show evidence of broadening of the lamella boundaries as a result of molecules diffusing parallel to their long axis. At still higher temperatures, they indicate that the introduction of gauche defects into the alkane chains drives a melting transition to a monolayer fluid phase as reported previously.

  18. Curved Waveguide Based Nuclear Fission for Small, Lightweight Reactors

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Putnam, Gabriel

    2012-01-01

    The focus of the presented work is on the creation of a system of grazing incidence, supermirror waveguides for the capture and reuse of fission sourced neutrons. Within research reactors, neutron guides are a well known tool for directing neutrons from the confined and hazardous central core to a more accessible testing or measurement location. Typical neutron guides have rectangular, hollow cross sections, which are crafted as thin, mirrored waveguides plated with metal (commonly nickel). Under glancing angles with incoming neutrons, these waveguides can achieve nearly lossless transport of neutrons to distant instruments. Furthermore, recent developments have created supermirror surfaces which can accommodate neutron grazing angles up to four times as steep as nickel. A completed system will form an enclosing ring or spherical resonator system coupled to a neutron source for the purpose of capturing and reusing free neutrons to sustain and/or accelerate fission. While grazing incidence mirrors are a known method of directing and safely using neutrons, no method has been disclosed for capture and reuse of neutrons or sustainment of fission using a circular waveguide structure. The presented work is in the process of fabricating a functional, highly curved, neutron supermirror using known methods of Ni-Ti layering capable of achieving incident reflection angles up to four times steeper than nickel alone. Parallel work is analytically investigating future geometries, mirror compositions, and sources for enabling sustained fission with applicability to the propulsion and energy goals of NASA and other agencies. Should research into this concept prove feasible, it would lead to development of a high energy density, low mass power source potentially capable of sustaining fission with a fraction of the standard critical mass for a given material and a broadening of feasible materials due to reduced rates of release, absorption, and non-fission for neutrons.
This advance could be applied to direct propulsion through guided fission products or as a secondary energy source for high impulse electric propulsion. It would help meet national needs for highly efficient energy sources with limited dependence on fossil fuels or conflict materials, and it would improve the use of low grade fissile materials which would help reduce national stockpiles and waste.
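
    The quoted factor of four has a simple quantitative reading. Total reflection from natural nickel holds below a critical grazing angle of roughly 0.1° per ångström of neutron wavelength, and an m-value supermirror multiplies that cutoff by m. A small sketch of this rule of thumb (the 0.099°/Å constant is the standard approximation, not a value from this work):

```python
THETA_C_NI = 0.099  # critical angle of natural Ni, degrees per angstrom

def grazing_cutoff_deg(wavelength_angstrom, m=1.0):
    """Largest grazing angle still reflected by an m-value supermirror."""
    return m * THETA_C_NI * wavelength_angstrom

# A 4 A (cold) neutron: a plain Ni guide loses it above ~0.4 deg,
# while an m = 4 Ni/Ti supermirror still reflects it near 1.6 deg.
ni_cutoff = grazing_cutoff_deg(4.0)
sm_cutoff = grazing_cutoff_deg(4.0, m=4.0)
```

    The steeper cutoff is what makes a tightly curved, closed guide geometry conceivable at all, since the curvature a guide can tolerate grows with the square of the acceptable grazing angle.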

  19. Performance study of deterministic solvers on a sodium-cooled fast core [Etude des performances de solveurs deterministes sur un coeur rapide a caloporteur sodium]

    NASA Astrophysics Data System (ADS)

    Bay, Charlotte

    The reactors of the next generation, in particular the SFR design, represent a true challenge for current codes and solvers, which are used mainly for thermal-spectrum cores. There is no guarantee that their capabilities can be directly adapted to a fast neutron spectrum, or to major design differences. It is therefore necessary to assess the validity of the solvers and their potential shortfalls in the case of fast neutron reactors. As part of an internship with CEA (France), and at the instigation of the EPM Nuclear Institute, this study concerns the following codes: DRAGON/DONJON, ERANOS, PARIS, and APOLLO3. The precision assessment was performed using the Monte Carlo code TRIPOLI4. Only the core calculation was of interest, namely the precision and speed of the numerical methods. The lattice calculation was not part of the study, that is to say nuclear data, self-shielding, or isotopic compositions; nor were burnup or time-evolution effects tackled. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters and obtaining its optimal calculation settings; then comparing their performance in terms of precision and speed, by collecting the usual quantities (effective multiplication factor, reaction rate maps), but also more specific quantities which are crucial to SFR design, namely control rod worth and the sodium void effect. The calculation time is also a key factor. Whatever conclusions or recommendations are drawn from this study, they must first of all be applied within similar frameworks, that is to say small fast neutron cores with hexagonal geometry. Possible adjustments for large cores would have to be demonstrated in extensions of this study.

  20. 180 MW/180 KW pulse modulator for S-band klystron of LUE-200 linac of IREN installation of JINR

    NASA Astrophysics Data System (ADS)

    Su, Kim Dong; Sumbaev, A. P.; Shvetsov, V. N.

    2014-09-01

    A proposal is formulated for the development of a pulse modulator with 180 MW pulse power and 180 kW average power for the pulsed S-band klystrons of the LUE-200 linac of the IREN installation at the Laboratory of Neutron Physics (FLNP), JINR. The main requirements, key parameters, and component base of the modulator are presented. A variant of the basic circuit is considered, based on two parallel 14- (or 11-) stage pulse-forming networks (PFNs) with a TGI2-10K/50 thyratron switch and six parallel high-voltage capacitor-charging power supplies (CCPS).
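
    Two of the quoted numbers fix a third: 180 kW average over 180 MW peak implies a duty factor of 10⁻³, and for an ideal pulse-forming network the flat-top width is τ = 2N√(LC). The per-stage inductance and capacitance below are hypothetical round numbers chosen only to land in the microsecond range typical of klystron modulators:

```python
def pfn_pulse_width(n_stages, L, C):
    """Ideal PFN flat-top width in seconds: tau = 2 * N * sqrt(L * C)."""
    return 2.0 * n_stages * (L * C) ** 0.5

duty_factor = 180e3 / 180e6                  # average / peak power = 1e-3

# Hypothetical per-stage values: L = 3.2 uH, C = 0.02 uF, 14 stages
tau = pfn_pulse_width(14, 3.2e-6, 0.02e-6)   # on the order of 7 microseconds
```

    The duty factor then fixes the repetition rate for a given pulse width; the real design trades these against the thyratron's switching limits.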

  1. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods can deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not benchmark quality.

  2. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  3. Covariance generation and uncertainty propagation for thermal and fast neutron induced fission yields

    NASA Astrophysics Data System (ADS)

    Terranova, Nicholas; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Sumini, Marco

    2017-09-01

    Fission product yields (FY) are fundamental nuclear data for several applications, including decay heat, shielding, dosimetry, burn-up calculations. To be safe and sustainable, modern and future nuclear systems require accurate knowledge on reactor parameters, with reduced margins of uncertainty. Present nuclear data libraries for FY do not provide consistent and complete uncertainty information which are limited, in many cases, to only variances. In the present work we propose a methodology to evaluate covariance matrices for thermal and fast neutron induced fission yields. The semi-empirical models adopted to evaluate the JEFF-3.1.1 FY library have been used in the Generalized Least Square Method available in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation) to generate covariance matrices for several fissioning systems such as the thermal fission of U235, Pu239 and Pu241 and the fast fission of U238, Pu239 and Pu240. The impact of such covariances on nuclear applications has been estimated using deterministic and Monte Carlo uncertainty propagation techniques. We studied the effects on decay heat and reactivity loss uncertainty estimation for simplified test case geometries, such as PWR and SFR pin-cells. The impact on existing nuclear reactors, such as the Jules Horowitz Reactor under construction at CEA-Cadarache, has also been considered.
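
    For a linearized response, the deterministic propagation referred to above is the "sandwich rule" Var[R] = S C Sᵀ, while Monte Carlo propagation samples the parameters from the covariance directly. A toy cross-check with invented yield values, correlations, and sensitivities (none of them from JEFF-3.1.1 or CONRAD):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented three-yield covariance: 5% relative uncertainties, mild correlations
y = np.array([0.062, 0.058, 0.044])          # mean fission yields
corr = np.array([[1.0, -0.3,  0.1],
                 [-0.3, 1.0, -0.2],
                 [0.1, -0.2,  1.0]])
sd = 0.05 * y
cov = np.outer(sd, sd) * corr

# Hypothetical linear decay-heat sensitivities (W per unit yield)
S = np.array([120.0, 95.0, 310.0])

var_det = S @ cov @ S                        # deterministic sandwich rule
heat = rng.multivariate_normal(y, cov, size=100_000) @ S
var_mc = heat.var()
# the two variance estimates agree to Monte-Carlo accuracy
```

    The off-diagonal terms matter: with variances only, as in present FY libraries, the anticorrelations here would be dropped and the propagated decay-heat uncertainty would be overestimated.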

  4. Twisting Neutron Waves

    NASA Astrophysics Data System (ADS)

    Pushin, Dmitry

    Most waves encountered in nature can be given a "twist", so that their phase winds around an axis parallel to the direction of wave propagation. Such waves are said to possess orbital angular momentum (OAM). For quantum particles such as photons, atoms, and electrons, this corresponds to the particle wavefunction having angular momentum of Lℏ along its propagation axis. Controlled generation and detection of OAM states of photons began in the 1990s, sparking considerable interest in applications of OAM in light and matter waves. OAM states of photons have found diverse applications such as broadband data multiplexing, massive quantum entanglement, optical trapping, microscopy, quantum state determination and teleportation, and interferometry. OAM states of electron beams have been used to rotate nanoparticles, determine the chirality of crystals, and for magnetic microscopy. Here I discuss the first demonstration of OAM control of neutrons. Using neutron interferometry with a spatially incoherent input beam, we show the addition and conservation of quantum angular momenta and entanglement between the quantum path and OAM degrees of freedom. Neutron-based quantum information science, heretofore limited to spin, path, and energy degrees of freedom, now has access to another quantized variable, and the OAM modalities of light, x-ray, and electron beams are extended to a massive, penetrating neutral particle. The methods of neutron phase imprinting demonstrated here expand the toolbox available for development of phase-sensitive techniques of neutron imaging. Financial support provided by the NSERC CREATE and Discovery programs, CERC, and the NIST Quantum Information Program is acknowledged.

  5. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.
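
    For context, the conventional constant-cross-section free-gas sampler that such algorithms refine can be sketched as follows: draw a Maxwellian target speed and an isotropic flight-direction cosine, then accept with probability v_rel/(v_n + v_t), which reweights the Maxwellian by the relative speed. This is the textbook baseline, not the paper's improved algorithm, and the reduced units are an arbitrary illustration:

```python
import math, random

random.seed(1)

def sample_target(v_n, kT_over_m=1.0):
    """Free-gas target sampler, constant cross section.
    Returns (target speed, relative speed) in units of sqrt(kT/m)."""
    sigma = math.sqrt(kT_over_m)
    while True:
        # Maxwellian speed: magnitude of three Gaussian velocity components
        vt = math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))
        mu = 2.0 * random.random() - 1.0       # isotropic cosine to the neutron
        v_rel = math.sqrt(v_n * v_n + vt * vt - 2.0 * v_n * vt * mu)
        # v_rel <= v_n + vt, so this acceptance probability is <= 1
        if random.random() * (v_n + vt) <= v_rel:
            return vt, v_rel

rels = [sample_target(0.5)[1] for _ in range(20000)]
mean_rel = sum(rels) / len(rels)
```

    The rejection step biases the accepted sample toward large relative speeds, as the collision rate requires; accounting additionally for an energy-dependent cross section is what normally costs a second rejection step, the step the paper's direct relative-velocity sampling removes.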

  6. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  7. The magnetic structure of Co(NCNH)₂ as determined by (spin-polarized) neutron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Philipp; Houben, Andreas; Senyshyn, Anatoliy

    The magnetic structure of Co(NCNH)₂ has been studied by neutron diffraction data below 10 K using the SPODI and DNS instruments at FRM II, Munich. There is an intensity change in the (1 1 0) and (0 2 0) reflections around 4 K, to be attributed to the onset of a magnetic ordering of the Co²⁺ spins. Four different spin orientations have been evaluated on the basis of Rietveld refinements, comprising antiferromagnetic as well as ferromagnetic ordering along all three crystallographic axes. Both residual values and supplementary susceptibility measurements evidence that only a ferromagnetic ordering with all Co²⁺ spins parallel to the c axis is a suitable description of the low-temperature magnetic ground state of Co(NCNH)₂. The deviation of the magnetic moment derived by the Rietveld refinement from the expected value may be explained either by an incomplete saturation of the moment at temperatures slightly below the Curie temperature or by a small Jahn–Teller distortion. - Graphical abstract: The magnetic ground state of Co(NCNH)₂ has been clarified by (spin-polarized) neutron diffraction data at low temperatures. Intensity changes below 4 K arise due to the onset of ferromagnetic ordering of the Co²⁺ spins parallel to the c axis, corroborated by various (magnetic) Rietveld refinements. Highlights: • Powderous Co(NCNH)₂ has been subjected to (spin-polarized) neutron diffraction. • Magnetic susceptibility data of Co(NCNH)₂ have been collected. • Below 4 K, the magnetic moments align ferromagnetically with all Co²⁺ spins parallel to the c axis. • The magnetic susceptibility data yield an effective magnetic moment of 4.68 and a Weiss constant of -13(2) K. • The ferromagnetic Rietveld refinement leads to a magnetic moment of 2.6, which is close to the expected value of 3.

  8. A parallel approach of COFFEE objective function to multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Zafalon, G. F. D.; Visotaky, J. M. V.; Amorim, A. R.; Valêncio, C. R.; Neves, L. A.; de Souza, R. C. G.; Machado, J. M.

    2015-09-01

Computational tools to assist genomic analyses are increasingly necessary due to the fast growth in the amount of available data. Given the high computational cost of deterministic algorithms for sequence alignment, many works concentrate their efforts on the development of heuristic approaches to multiple sequence alignment. However, selecting an approach that offers solutions with good biological significance in feasible execution time is a great challenge. Thus, this work presents the parallelization of the processing steps of the MSA-GA tool, using the multithread paradigm in the execution of the COFFEE objective function. The standard objective function implemented in the tool is the Weighted Sum of Pairs (WSP), which produces some distortions in the final alignments when sets of sequences with low similarity are aligned. In previous studies we implemented the COFFEE objective function in the tool to smooth these distortions. Although the nature of the COFFEE objective function increases execution time, the approach contains steps that can be executed in parallel. With the improvements implemented in this work, the new approach executes 24% faster than the sequential approach with COFFEE. Moreover, the COFFEE multithreaded approach is more efficient than WSP: besides being slightly faster, its biological results are better.
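The parallelization described above rests on the fact that COFFEE-style objective functions decompose into independent pairwise terms. A minimal sketch (the `pair_score` here is a hypothetical stand-in for COFFEE's library-based consistency score, not the MSA-GA implementation; real speedups need worker processes or native threads, since pure-Python threads are GIL-bound):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

# Hypothetical pairwise score: fraction of columns on which two aligned
# sequences agree (a stand-in for COFFEE's consistency-with-library score).
def pair_score(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y and x != '-')
    return matches / len(a)

def coffee_like_score(alignment, workers=4):
    pairs = list(combinations(alignment, 2))
    # Each pairwise term is independent -> evaluate them in parallel,
    # mirroring the multithreaded COFFEE evaluation described above.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        scores = list(ex.map(lambda p: pair_score(*p), pairs))
    return sum(scores) / len(scores)

msa = ["AC-GT", "ACAGT", "AC-GA"]
score = coffee_like_score(msa)  # mean over the 3 sequence pairs
```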

  9. High-order Spatio-temporal Schemes for Coupled, Multi-physics Reactor Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mr. Vijay S. Mahadevan; Dr. Jean C. Ragusa

    2008-09-01

This report summarizes the work done in the summer of 2008 by the Ph.D. student Vijay Mahadevan. The main focus of the work was to couple 3-D neutron diffusion to 3-D heat conduction in parallel, with accuracy greater than or equal to 2nd order in space and time. Results show that the goal was attained.

  10. Final report. Superconducting materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Ruvalds

    1999-09-11

    Our group has discovered a many body effect that explains the surprising divergence of the spin susceptibility which has been measured by neutron scattering experiments on high temperature superconductors and vanadium oxide metals. Electron interactions on nested - i.e., nearly parallel paths - have been analyzed extensively by our group, and such processes provide a physical explanation for many anomalous features that distinguish cuprate superconductors from ordinary metals.

  11. Simulation and optimization of a new focusing polarizing bender for the diffuse neutrons scattering spectrometer DNS at MLZ

    NASA Astrophysics Data System (ADS)

    Nemkovski, K.; Ioffe, A.; Su, Y.; Babcock, E.; Schweika, W.; Brückel, Th

    2017-06-01

We present the concept and simulation results for a new polarizer for the diffuse neutron scattering spectrometer DNS at MLZ. The polarizer is based on the idea of a bender made from a stack of silicon wafers with double-sided supermirror polarizing coating and absorbing spacers in between. Owing to its compact design, such a system provides more free space for the arrangement of other instrument components. To reduce activation of the polarizer in the high-intensity neutron beam of the DNS spectrometer, we plan to use Fe/Si supermirrors instead of the currently used FeCoV/Ti:N ones. Using the VITESS simulation package we have performed simulations for horizontally focusing polarizing benders with different geometries in combination with the double-focusing crystal monochromator of DNS. Neutron transmission and polarization efficiency, as well as the effects of focusing, have been analyzed for convergent conventional C-benders and S-benders, both for wedge-like and plane-parallel convergent channel geometries. The results of these simulations and the advantages and disadvantages of the various configurations are discussed.

  12. Measuring Light-ion Production and Fission Cross Sections Normalised to H(n,p) Scattering at the Upcoming NFS Facility

    NASA Astrophysics Data System (ADS)

    Jansson, K.; Gustavsson, C.; Pomp, S.; Prokofiev, A. V.; Scian, G.; Tarrío, D.

    2014-05-01

The Medley detector setup will be moved to the new neutron facility NFS, where measurements of light-ion production and fission cross sections at 1-40 MeV are planned. Medley has eight detector telescopes providing ΔE-ΔE-E data, each consisting of two silicon detectors and a CsI(Tl) detector at the back. The telescope setup can be rotated and arranged to cover any angle. Medley has previously been used in many measurements at The Svedberg Laboratory (TSL) in Uppsala, mainly with quasi-mono-energetic neutron beams at 96 and 175 MeV. To be able to perform measurements at NFS, which will have a white neutron beam, Medley needs to detect the reaction products with a timing resolution high enough to provide the ToF of the primary neutron. In this paper we discuss the design of the Medley upgrade along with simulations of the setup. We explore the use of Parallel Plate Avalanche Counters (PPACs), which work very well for detecting fission fragments but require more consideration for detecting deeply penetrating particles.

  13. CAD-Based Shielding Analysis for ITER Port Diagnostics

    NASA Astrophysics Data System (ADS)

    Serikov, Arkady; Fischer, Ulrich; Anthoine, David; Bertalot, Luciano; De Bock, Maartin; O'Connor, Richard; Juarez, Rafael; Krasilnikov, Vitaly

    2017-09-01

Radiation shielding analysis conducted in support of the design development of contemporary diagnostic systems integrated inside the ITER ports relies on the use of CAD models. This paper presents CAD-based MCNP Monte Carlo radiation transport and activation analyses for the Diagnostic Upper and Equatorial Port Plugs (UPP #3 and EPP #8, #17). The creation of the complicated 3D MCNP models of the diagnostic systems was substantially accelerated by application of the CAD-to-MCNP converter programs MCAM and McCad. High-performance computing resources of the Helios supercomputer allowed the MCNP parallel transport calculations to be sped up using the MPI/OpenMP interface. The shielding solutions found could be universal, reducing port R&D costs. A shield block behind the Tritium and Deposit Monitor (TDM) optical box was added to study its influence on the Shut-Down Dose Rate (SDDR) in the Port Interspace (PI) of EPP#17. The influence of neutron streaming along the Lost Alpha Monitor (LAM) on the neutron energy spectra calculated in the Tangential Neutron Spectrometer (TNS) of EPP#8 was investigated. For UPP#3 with the Charge eXchange Recombination Spectroscopy (CXRS-core) system, the analysis revealed excessive neutron streaming along the CXRS shutter, which should be prevented in a further design iteration.

  14. A Computational Approach for Modeling Neutron Scattering Data from Lipid Bilayers

    DOE PAGES

    Carrillo, Jan-Michael Y.; Katsaras, John; Sumpter, Bobby G.; ...

    2017-01-12

Biological cell membranes are responsible for a range of structural and dynamical phenomena crucial to a cell's well-being and its associated functions. Due to the complexity of cell membranes, lipid bilayer systems are often used as biomimetic models. These systems have led to significant insights into vital membrane phenomena such as domain formation, passive permeation and protein insertion. Experimental observations of membrane structure and dynamics are, however, limited in resolution, both spatially and temporally. Importantly, computer simulations are starting to play a more prominent role in interpreting experimental results, enabling a molecular understanding of lipid membranes. In particular, the synergy between scattering experiments and simulations offers opportunities for new discoveries in membrane physics, as the length and time scales probed by molecular dynamics (MD) simulations parallel those of experiments. We describe a coarse-grained MD simulation approach that mimics neutron scattering data from large unilamellar lipid vesicles over a range of bilayer rigidities. Specifically, we simulate vesicle form factors and membrane thickness fluctuations determined from small-angle neutron scattering (SANS) and neutron spin echo (NSE) experiments, respectively. Our simulations accurately reproduce trends from experiments and lay the groundwork for investigations of more complex membrane systems.
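In its simplest form, the vesicle form factor mentioned above is that of a spherical shell, i.e. the difference of two homogeneous-sphere amplitudes. A textbook sketch with illustrative radii (this is the standard analytical model, not the paper's coarse-grained simulation pipeline):

```python
import numpy as np

# Amplitude of a homogeneous sphere of radius R:
#   A(q, R) = 3 V(R) (sin(qR) - qR cos(qR)) / (qR)^3
# A thin shell (vesicle) is the difference of two spheres, and the
# form factor is P(q) = |A_shell(q)|^2.
def sphere_amp(q, R):
    V = 4.0 / 3.0 * np.pi * R**3
    qR = q * R
    return 3.0 * V * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def vesicle_form_factor(q, R_in, R_out):
    A = sphere_amp(q, R_out) - sphere_amp(q, R_in)
    return A**2

q = np.linspace(1e-4, 0.3, 500)  # inverse Angstroms
# ~60 nm vesicle with a 50 A bilayer (illustrative numbers)
P = vesicle_form_factor(q, R_in=275.0, R_out=325.0)
# As q -> 0 the amplitude tends to the shell volume V(R_out) - V(R_in)
```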

  15. Colloquium: Laser probing of neutron-rich nuclei in light atoms

    NASA Astrophysics Data System (ADS)

    Lu, Z.-T.; Mueller, P.; Drake, G. W. F.; Nörtershäuser, W.; Pieper, Steven C.; Yan, Z.-C.

    2013-10-01

    The neutron-rich He6 and He8 isotopes exhibit an exotic nuclear structure that consists of a tightly bound He4-like core with additional neutrons orbiting at a relatively large distance, forming a halo. Recent experimental efforts have succeeded in laser trapping and cooling these short-lived, rare helium atoms and have measured the atomic isotope shifts along the He4-He6-He8 chain by performing laser spectroscopy on individual trapped atoms. Meanwhile, the few-electron atomic structure theory, including relativistic and QED corrections, has reached a comparable degree of accuracy in the calculation of the isotope shifts. In parallel efforts, also by measuring atomic isotope shifts, the nuclear charge radii of lithium and beryllium isotopes have been studied. The techniques employed were resonance ionization spectroscopy on neutral, thermal lithium atoms and collinear laser spectroscopy on beryllium ions. Combining advances in both atomic theory and laser spectroscopy, the charge radii of these light halo nuclei have now been determined for the first time independent of nuclear structure models. The results are compared with the values predicted by a number of nuclear structure calculations and are used to guide our understanding of the nuclear forces in the extremely neutron-rich environment.

  16. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
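As a reference point, the stationary iteration that the hybrid schemes accelerate can be sketched in a few lines of NumPy. This is a generic illustration with a Jacobi splitting, not the solvers studied in the paper; the Monte Carlo acceleration would replace part of the update with a stochastic estimate:

```python
import numpy as np

# Preconditioned Richardson iteration from a convergent splitting A = P - Q:
#   x_{k+1} = x_k + P^{-1} (b - A x_k)
# Here P = diag(A), i.e. the Jacobi splitting.
def richardson(A, b, P_inv, iters=200):
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + P_inv @ (b - A @ x)
    return x

# Diagonally dominant test system, so the Jacobi splitting converges
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
P_inv = np.diag(1.0 / np.diag(A))
x = richardson(A, b, P_inv)
# x converges to the exact solution of A x = b
```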

  17. Self-Organized Criticality and Scaling in Lifetime of Traffic Jams

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-01-01

The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime m of traffic jams scales as m ≅ L^ν with ν = 0.65±0.04. It is shown that the cumulative distribution N_m(L) of lifetimes satisfies the finite-size scaling form N_m(L) ≅ L⁻¹ f(m/L^ν).
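The bulk dynamics of rule 184 is simple to state: a car advances one site iff the site ahead is empty, with all sites updated synchronously. A minimal sketch on a periodic ring (the paper's open boundaries with injection/extraction are omitted here):

```python
# Rule 184 (deterministic traffic CA): a particle (1) moves one cell to the
# right iff the cell ahead is empty; all cells update in parallel.
def rule184_step(cells):
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left = cells[(i - 1) % n]
        here = cells[i]
        right = cells[(i + 1) % n]
        # A cell becomes occupied if a car moves in from the left,
        # or the car already here is blocked by the car ahead.
        new[i] = 1 if (left == 1 and here == 0) or (here == 1 and right == 1) else 0
    return new

# Free-flow example on a ring: every car has space and advances each step
state = [1, 0, 1, 0, 0, 0]
state = rule184_step(state)
# -> [0, 1, 0, 1, 0, 0]
```

On a closed ring the car number is conserved; the model in the abstract adds injection/extraction at the boundaries, which is what drives the system to the self-organized critical steady state.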

  18. A Simplified Model of Local Structure in Aqueous Proline Amino Acid Revealed by First-Principles Molecular Dynamics Simulations

    PubMed Central

    Troitzsch, Raphael Z.; Tulip, Paul R.; Crain, Jason; Martyna, Glenn J.

    2008-01-01

    Aqueous proline solutions are deceptively simple as they can take on complex roles such as protein chaperones, cryoprotectants, and hydrotropic agents in biological processes. Here, a molecular level picture of proline/water mixtures is developed. Car-Parrinello ab initio molecular dynamics (CPAIMD) simulations of aqueous proline amino acid at the B-LYP level of theory, performed using IBM's Blue Gene/L supercomputer and massively parallel software, reveal hydrogen-bonding propensities that are at odds with the predictions of the CHARMM22 empirical force field but are in better agreement with results of recent neutron diffraction experiments. In general, the CPAIMD (B-LYP) simulations predict a simplified structural model of proline/water mixtures consisting of fewer distinct local motifs. Comparisons of simulation results to experiment are made by direct evaluation of the neutron static structure factor S(Q) from CPAIMD (B-LYP) trajectories as well as to the results of the empirical potential structure refinement reverse Monte Carlo procedure applied to the neutron data. PMID:18790850
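Direct evaluation of S(Q) from simulation coordinates reduces, for an isotropic system, to a Debye-formula average over pair distances. A schematic single-frame version (not the authors' code, and omitting the trajectory averaging and normalization a real comparison with neutron data requires):

```python
import numpy as np

def debye_intensity(positions, b, q_values):
    """Orientationally averaged intensity I(Q) = sum_ij b_i b_j sin(Q r_ij)/(Q r_ij)."""
    # All pair distances for one configuration
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    bb = np.outer(b, b)  # products of scattering lengths
    intensities = []
    for q in q_values:
        qr = q * r
        sinc = np.ones_like(qr)          # sin(x)/x -> 1 as x -> 0
        mask = qr > 1e-12
        sinc[mask] = np.sin(qr[mask]) / qr[mask]
        intensities.append(np.sum(bb * sinc))
    return np.array(intensities)

# Two unit scatterers 1 Angstrom apart
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([1.0, 1.0])
I = debye_intensity(pos, b, [0.0, np.pi])
# I(0) = (b1 + b2)^2 = 4; at Q = pi the cross term sin(pi)/pi vanishes, so I = 2
```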

  19. Developing Discontinuous Galerkin Methods for Solving Multiphysics Problems in General Relativity

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence; Field, Scott; Teukolsky, Saul; Foucart, Francois; SXS Collaboration

    2016-03-01

    Multi-messenger observations of the merger of black hole-neutron star and neutron star-neutron star binaries, and of supernova explosions will probe fundamental physics inaccessible to terrestrial experiments. Modeling these systems requires a relativistic treatment of hydrodynamics, including magnetic fields, as well as neutrino transport and nuclear reactions. The accuracy, efficiency, and robustness of current codes that treat all of these problems is not sufficient to keep up with the observational needs. We are building a new numerical code that uses the Discontinuous Galerkin method with a task-based parallelization strategy, a promising combination that will allow multiphysics applications to be treated both accurately and efficiently on petascale and exascale machines. The code will scale to more than 100,000 cores for efficient exploration of the parameter space of potential sources and allowed physics, and the high-fidelity predictions needed to realize the promise of multi-messenger astronomy. I will discuss the current status of the development of this new code.

  20. A simplified model of local structure in aqueous proline amino acid revealed by first-principles molecular dynamics simulations.

    PubMed

    Troitzsch, Raphael Z; Tulip, Paul R; Crain, Jason; Martyna, Glenn J

    2008-12-01

    Aqueous proline solutions are deceptively simple as they can take on complex roles such as protein chaperones, cryoprotectants, and hydrotropic agents in biological processes. Here, a molecular level picture of proline/water mixtures is developed. Car-Parrinello ab initio molecular dynamics (CPAIMD) simulations of aqueous proline amino acid at the B-LYP level of theory, performed using IBM's Blue Gene/L supercomputer and massively parallel software, reveal hydrogen-bonding propensities that are at odds with the predictions of the CHARMM22 empirical force field but are in better agreement with results of recent neutron diffraction experiments. In general, the CPAIMD (B-LYP) simulations predict a simplified structural model of proline/water mixtures consisting of fewer distinct local motifs. Comparisons of simulation results to experiment are made by direct evaluation of the neutron static structure factor S(Q) from CPAIMD (B-LYP) trajectories as well as to the results of the empirical potential structure refinement reverse Monte Carlo procedure applied to the neutron data.

  1. Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweezy, Jeremy Ed

    2016-01-21

The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE-formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters.

  2. Measurement and modeling of polarized specular neutron reflectivity in large magnetic fields.

    PubMed

    Maranville, Brian B; Kirby, Brian J; Grutter, Alexander J; Kienzle, Paul A; Majkrzak, Charles F; Liu, Yaohua; Dennis, Cindi L

    2016-08-01

    The presence of a large applied magnetic field removes the degeneracy of the vacuum energy states for spin-up and spin-down neutrons. For polarized neutron reflectometry, this must be included in the reference potential energy of the Schrödinger equation that is used to calculate the expected scattering from a magnetic layered structure. For samples with magnetization that is purely parallel or antiparallel to the applied field which defines the quantization axis, there is no mixing of the spin states (no spin-flip scattering) and so this additional potential is constant throughout the scattering region. When there is non-collinear magnetization in the sample, however, there will be significant scattering from one spin state into the other, and the reference potentials will differ between the incoming and outgoing wavefunctions, changing the angle and intensities of the scattering. The theory of the scattering and recommended experimental practices for this type of measurement are presented, as well as an example measurement.
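The spin-dependent reference potential described above amounts to adding the neutron Zeeman energy, ±|μ_n|B, to the vacuum energy of the two spin states. A numeric sketch using rounded CODATA constants (the sign assignment depends on the chosen "up" convention; only the splitting matters here):

```python
# Zeeman term that splits the vacuum reference potential for the two
# neutron spin states: E = +/- |mu_n| * B; the splitting is 2 |mu_n| B.
MU_N = 9.6623651e-27   # magnitude of the neutron magnetic moment, J/T (CODATA)
EV = 1.602176634e-19   # joules per electronvolt

def zeeman_splitting_neV(B_tesla):
    """Energy splitting between spin-up and spin-down vacuum states, in neV."""
    return 2.0 * MU_N * B_tesla / EV * 1e9

# At a "large" applied field of 7 T the splitting is roughly 0.8 ueV,
# comparable to typical nuclear optical potentials (tens to hundreds of neV),
# so it cannot be neglected in the reflectivity calculation.
split = zeeman_splitting_neV(7.0)
```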

  3. Measurement and modeling of polarized specular neutron reflectivity in large magnetic fields

    PubMed Central

    Maranville, Brian B.; Kirby, Brian J.; Grutter, Alexander J.; Kienzle, Paul A.; Majkrzak, Charles F.; Liu, Yaohua; Dennis, Cindi L.

    2016-01-01

    The presence of a large applied magnetic field removes the degeneracy of the vacuum energy states for spin-up and spin-down neutrons. For polarized neutron reflectometry, this must be included in the reference potential energy of the Schrödinger equation that is used to calculate the expected scattering from a magnetic layered structure. For samples with magnetization that is purely parallel or antiparallel to the applied field which defines the quantization axis, there is no mixing of the spin states (no spin-flip scattering) and so this additional potential is constant throughout the scattering region. When there is non-collinear magnetization in the sample, however, there will be significant scattering from one spin state into the other, and the reference potentials will differ between the incoming and outgoing wavefunctions, changing the angle and intensities of the scattering. The theory of the scattering and recommended experimental practices for this type of measurement are presented, as well as an example measurement. PMID:27504074

  4. Measurement and modeling of polarized specular neutron reflectivity in large magnetic fields

    DOE PAGES

    Maranville, Brian B.; Kirby, Brian J.; Grutter, Alexander J.; ...

    2016-06-09

The presence of a large applied magnetic field removes the degeneracy of the vacuum energy states for spin-up and spin-down neutrons. For polarized neutron reflectometry, this must be included in the reference potential energy of the Schrödinger equation that is used to calculate the expected scattering from a magnetic layered structure. For samples with magnetization that is purely parallel or antiparallel to the applied field which defines the quantization axis, there is no mixing of the spin states (no spin-flip scattering) and so this additional potential is constant throughout the scattering region. When there is non-collinear magnetization in the sample, however, there will be significant scattering from one spin state into the other, and the reference potentials will differ between the incoming and outgoing wavefunctions, changing the angle and intensities of the scattering. In conclusion, the theory of the scattering and recommended experimental practices for this type of measurement are presented, as well as an example measurement.

  5. A crack model of the Hiroshima atomic bomb: explanation of the contradiction of "Dosimetry system 1986".

    PubMed

    Hoshi, M; Endo, S; Takada, J; Ishikawa, M; Nitta, Y; Iwatani, K; Oka, T; Fujita, S; Shizuma, K; Hasai, H

    1999-12-01

There has been a large discrepancy between the Dosimetry System 1986 (DS86) and measured data; some of the Hiroshima data at about 1.5 km ground distance from the hypocenter are about 10 times larger than the calculation. The causes have long been discussed, since they affect the radiation risk estimates based on the Hiroshima and Nagasaki data. In this study the contradiction is explained by a bare-fission-neutron leakage model, in which neutrons escape through a crack formed at the time of neutron emission. According to the present calculation, the crack has a 3 cm parallel spacing and is symmetric with respect to the polar axis from the hypocenter to the epicenter of the atomic bomb. We also constructed an asymmetric opening, closing 3/4 of this symmetric geometry, because some data show asymmetry. In addition, the height of the neutron emission point was raised by 90 m. Using the asymmetric calculation, especially for distant data located more than 1 km away, it was verified that all of the activity data induced by thermal and fast neutrons were explained simultaneously within the data scatter. The neutron kerma at a typical 1.5 km ground distance increases to 3 and 8 times the DS86 value based on the symmetric and asymmetric models, respectively.

  6. Hydration of Caffeine at High Temperature by Neutron Scattering and Simulation Studies.

    PubMed

    Tavagnacco, L; Brady, J W; Bruni, F; Callear, S; Ricci, M A; Saboungi, M L; Cesàro, A

    2015-10-22

    The solvation of caffeine in water is examined with neutron diffraction experiments at 353 K. The experimental data, obtained by taking advantage of isotopic H/D substitution in water, were analyzed by empirical potential structure refinement (EPSR) in order to extract partial structure factors and site-site radial distribution functions. In parallel, molecular dynamics (MD) simulations were carried out to interpret the data and gain insight into the intermolecular interactions in the solutions and the solvation process. The results obtained with the two approaches evidence differences in the individual radial distribution functions, although both confirm the presence of caffeine stacks at this temperature. The two approaches point to different accessibility of water to the caffeine sites due to different stacking configurations.

  7. Neutron scattering investigation of a macroscopic single crystal of a lyotropic Lα phase

    NASA Astrophysics Data System (ADS)

    Goecking, K. D.; Monkenbusch, M.

    1998-07-01

Water-rich lamellar samples of the quaternary microemulsion SDS-pentanol-water-dodecane have been prepared in the form of 1 mm×10 mm×20 mm macroscopic mono-domains. The shape is given by the quartz cuvette containing the sample; the layer planes are parallel to the cuvette walls. Diffraction patterns and "rocking curves" have been obtained by neutron diffraction using a triple-axis spectrometer. Three "pseudo-Bragg peaks" have been observed; their (relative) intensities provide new experimental access to the product of the elastic constants Bκ (via η⁻² ∝ Bκ), resulting in a lower value than obtained from a synchrotron investigation using peak-shape fitting (Roux D. et al., Micelles, Membranes, Microemulsions and Monolayers (Springer, New York, Berlin) 1994).

  8. The magnetic order of GdMn₂Ge₂ studied by neutron diffraction and x-ray resonant magnetic scattering.

    PubMed

    Granovsky, S A; Kreyssig, A; Doerr, M; Ritter, C; Dudzik, E; Feyerherm, R; Canfield, P C; Loewenhaupt, M

    2010-06-09

    The magnetic structure of GdMn₂Ge₂ (tetragonal I4/mmm) has been studied by hot neutron powder diffraction and x-ray resonant magnetic scattering techniques. These measurements, along with the results of bulk experiments, confirm the collinear ferrimagnetic structure with moment direction parallel to the c-axis below T(C) = 96 K and the collinear antiferromagnetic phase in the temperature region T(C) < T < T(N) = 365 K. In the antiferromagnetic phase, x-ray resonant magnetic scattering has been detected at Mn K and Gd L₂ absorption edges. The Gd contribution is a result of an induced Gd 5d electron polarization caused by the antiferromagnetic order of Mn-moments.

  9. Parallelization of KENO-Va Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Ramón, Javier; Peña, Jorge

    1995-07-01

KENO-Va is a code integrated within the SCALE system developed by Oak Ridge that solves the transport equation by the Monte Carlo method. It is being used at the Consejo de Seguridad Nuclear (CSN) to perform criticality calculations for fuel storage pools and shipping casks. Two parallel versions of the code have been generated: one for shared-memory machines and another for distributed-memory systems using the message-passing interface PVM. In both versions the neutrons of each generation are tracked in parallel. In order to preserve the reproducibility of the results in both versions, advanced seeds for random numbers were used. The CONVEX C3440 with four processors and shared memory at CSN was used to implement the shared-memory version. An FDDI network of 6 HP9000/735 workstations was employed to implement the message-passing version using proprietary PVM. The speedup obtained was 3.6 in both cases.
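The reproducibility device mentioned above, advancing the random-number seed per history so that tallies do not depend on how histories are distributed over processors, can be illustrated with a toy tracker (a hypothetical transport loop, using a thread-based pool for brevity; a process pool behaves the same):

```python
import random
from multiprocessing.dummy import Pool  # thread-based Pool with the Pool API

def track_history(args):
    """Toy 'transport' of one neutron history with its own deterministic seed."""
    history_id, base_seed = args
    # Per-history ("advanced") seed: the random stream of each history is fixed
    # regardless of which worker happens to run it.
    rng = random.Random(base_seed * 1_000_003 + history_id)
    steps = 0
    while rng.random() > 0.3:  # survive a collision with probability 0.7
        steps += 1
    return steps

def run_generation(n_histories, base_seed, workers):
    with Pool(workers) as pool:
        return pool.map(track_history, [(i, base_seed) for i in range(n_histories)])

# Tallies are identical no matter how the histories are spread over workers
assert run_generation(200, 42, 1) == run_generation(200, 42, 4)
```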

  10. Magnetic behaviour of synthetic Co(2)SiO(4).

    PubMed

    Sazonov, Andrew; Meven, Martin; Hutanu, Vladimir; Heger, Gernot; Hansen, Thomas; Gukasov, Arsen

    2009-12-01

    Synthetic Co(2)SiO(4) crystallizes in the olivine structure (space group Pnma) with two crystallographically non-equivalent Co positions and shows antiferromagnetic ordering below 50 K. We have investigated the temperature variation of the Co(2)SiO(4) magnetic structure by means of non-polarized and polarized neutron diffraction for single crystals. Measurements with non-polarized neutrons were made at 2.5 K (below T(N)), whereas polarized neutron diffraction experiments were carried out at 70 and 150 K (above T(N)) in an external magnetic field of 7 T parallel to the b axis. Additional accurate non-polarized powder diffraction studies were performed in a broad temperature range from 5 to 500 K with small temperature increments. Detailed symmetry analysis of the Co(2)SiO(4) magnetic structure shows that it corresponds to the magnetic (Shubnikov) group Pnma, which allows the antiferromagnetic configuration (G(x), C(y), A(z)) for the 4a site with inversion symmetry 1 (Co1 position) and (0,C(y),0) for the 4c site with mirror symmetry m (Co2 position). The temperature dependence of the Co1 and Co2 magnetic moments obtained from neutron diffraction experiments was fitted in a modified molecular-field model. The polarized neutron study of the magnetization induced by an applied field shows a non-negligible amount of magnetic moment on the oxygen positions, indicating a delocalization of the magnetic moment from Co towards neighbouring O owing to superexchange coupling. The relative strength of the exchange interactions is discussed based on the non-polarized and polarized neutron data.

  11. Material Implementation of Hyperincursive Field on Slime Mold Computer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Gunji, Yukio-Pegio

    2004-08-01

"Elementary Conflictable Cellular Automaton (ECCA)" was introduced by Aono and Gunji as a problematic computational syntax embracing non-deterministic/non-algorithmic properties due to its hyperincursivity and nonlocality. Although ECCA's hyperincursive evolution equation implies the occurrence of deadlocks/infinite loops, we do not consider this problem to establish the fundamental impossibility of implementing ECCA materially. Dubois proposed to call a computing system where uncertainty/contradiction occurs "the hyperincursive field". In this paper we introduce a material implementation of the hyperincursive field using plasmodia of the true slime mold Physarum polycephalum. The amoeboid organism is adopted as the computing medium of the ECCA slime mold computer (ECCA-SMC) mainly because it is a parallel, non-distributed system whose locally branched tips (components) can act in parallel with asynchronism and nonlocal correlation. A notable characteristic of the ECCA-SMC is that a cell representing a spatio-temporal segment of computation is occupied (overlapped) redundantly by multiple spatially adjacent computing operations and by temporally successive computing events. This overlapped time representation may contribute to the progression of discussions on unconventional notions of time.

  12. Final Technical Report: Application of in situ Neutron Diffraction to Understand the Mechanism of Phase Transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandran, Ravi

In this research, phase transitions in bulk electrodes for Li-ion batteries were investigated using neutron diffraction (ND) as well as neutron imaging techniques. The objective of this research was to design a novel in situ electrochemical cell for Rietveld-refinable neutron diffraction experiments using small-volume, laboratory/research-scale electrodes intended for Li-ion batteries. This cell was also to be used to investigate the complexity of phase transitions in Li(Mg) alloy electrodes, either by diffraction or by neutron imaging, which occur under electrochemical lithiation and delithiation, and to determine aspects of the phase transitions that enable or limit energy storage capacity. An additional objective was to investigate the phase transitions in electrodes made of etched micro-columns of silicon and the effect of particle/column size on phase transitions and non-equilibrium structures. An in situ electrochemical cell was designed successfully and was used to study the phase transitions by in situ neutron diffraction in both electrodes (anode/cathode) simultaneously, in graphite/LiCoO₂ and graphite/LiMn₂O₄ cells with two cells of each type. The diffraction patterns fully validated the working of the in situ cell. Additional experiments were performed using the Si micro-columnar electrodes. The results revealed new lithiation phenomena, as evidenced by mosaicity formation in the silicon electrode. These experiments were performed on the VULCAN diffractometer at SNS, Oak Ridge National Laboratory. In parallel, the spatial distribution of Li during lithiation and delithiation processes in Li-battery electrodes was investigated. For this purpose, the neutron tomographic imaging technique was used for 3D mapping of the Li distribution in bulk Li(Mg) alloy electrodes. It was possible to observe the phase boundary of the Li(Mg) alloy, indicating a phase transition from the Li-rich BCC β-phase to the Li-lean α-phase. These experiments were performed at the CG-1D Neutron Imaging Prototype Station at SNS.

  13. SINQ layout, operation, applications and R&D to high power

    NASA Astrophysics Data System (ADS)

    Bauer, G. S.; Dai, Y.; Wagner, W.

    2002-09-01

    Since 1997, the Paul Scherrer Institut (PSI) has been operating a 1 MW-class research spallation neutron source named SINQ. SINQ is driven by a cascade of three accelerators, the final stage being a 590 MeV isochronous ring cyclotron which delivers a beam current of 1.8 mA at an rf frequency of 51 MHz. Since, for neutron production, this is essentially a dc device, SINQ is a continuous neutron source, optimized in its design for high time-average neutron flux. This makes the facility similar to a research reactor in terms of utilization, but, in terms of beam power, it is by a large margin the most powerful spallation neutron source currently in operation worldwide. As a consequence, target load levels prevail in SINQ that are beyond the realm of existing experience, demanding a careful approach to the design and operation of a high-power target. While the best neutronic performance of the source is expected for a liquid lead-bismuth eutectic target, no experience with such systems exists. For this reason a staged approach has been adopted, starting with a heavy-water-cooled rod target of Zircaloy-2 and proceeding via steel-clad lead rods towards the final goal of a target optimized in both neutronic performance and service lifetime. Experience currently accruing with a test target containing sample rods with different materials specimens will help to select the proper structural material and make dependable lifetime estimates accounting for the real operating conditions that prevail in the facility. In parallel, both theoretical and experimental work is going on within the MEGAPIE (MEGAwatt Pilot Experiment) project, a joint initiative by six European research institutions together with JAERI (Japan), DOE (USA) and KAERI (Korea), to design, build, operate and explore a liquid lead-bismuth spallation target for 1 MW of beam power, taking advantage of the existing spallation neutron facility SINQ.

  14. Single crystal polarized neutron diffraction study of the magnetic structure of HoFeO3.

    PubMed

    Chatterji, T; Stunault, A; Brown, P J

    2017-09-27

    Polarised neutron diffraction measurements have been made on HoFeO3 single crystals magnetised in both the [0 0 1] and [1 0 0] directions (Pbnm setting). The polarisation dependences of Bragg reflection intensities were measured both with a high field of [Formula: see text] T parallel to [0 0 1] at [Formula: see text] K and with the lower field [Formula: see text] T parallel to [1 0 0] at [Formula: see text] K. A Fourier projection of the magnetisation induced parallel to [0 0 1], made using the hk0 reflections measured in 9 T, indicates that almost all of it is due to alignment of Ho moments. Further analysis of the asymmetries of general reflections in these data showed that although, at 70 K, 9 T applied parallel to [0 0 1] hardly perturbs the antiferromagnetic order of the Fe sublattices, it induces significant antiferromagnetic order of the Ho sublattices in the [Formula: see text] plane, with the antiferromagnetic components of moment having the same order of magnitude as the induced ferromagnetic ones. Strong intensity asymmetries measured in the low-temperature [Formula: see text] structure with a lower field, 0.5 T [Formula: see text] [1 0 0], allowed the variation of the ordered components of the Ho and Fe moments to be followed. Their absolute orientations in the [Formula: see text] domain stabilised by the field were determined relative to the distorted perovskite structure. This relationship fixes the sign of the Dzyaloshinskii-Moriya (D-M) interaction which leads to the weak ferromagnetism. Our results indicate that the combination of strong y-axis anisotropy of the Ho moments and Ho-Fe exchange interactions breaks the centrosymmetry of the structure and could lead to ferroelectric polarization.

  15. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to this question. The results indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge data from two selected gauging stations on a mountain river in southern Poland, the Raba River.
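The abstract does not name the specific methods used, but one standard indicator for distinguishing deterministic chaos from randomness is a positive largest Lyapunov exponent. A minimal illustration of that indicator (not the study's actual method) on the logistic map, whose exponent at r = 4 is known analytically to be ln 2:

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.4, n: int = 100_000, burn: int = 1_000) -> float:
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    as the average log-derivative along the orbit."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic(4.0))   # positive, close to ln 2 ≈ 0.693: chaos
print(lyapunov_logistic(3.2))   # negative: stable periodic regime
```

A positive exponent for measured river discharge, obtained after delay embedding, is the kind of evidence the abstract refers to.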

  16. Bubble-detector measurements of neutron radiation in the international space station: ISS-34 to ISS-37

    PubMed Central

    Smith, M. B.; Khulapko, S.; Andrews, H. R.; Arkhangelsky, V.; Ing, H.; Koslowksy, M. R.; Lewis, B. J.; Machrafi, R.; Nikolaev, I.; Shurshakov, V.

    2016-01-01

    Bubble detectors have been used to characterise the neutron dose and energy spectrum in several modules of the International Space Station (ISS) as part of an ongoing radiation survey. A series of experiments was performed during the ISS-34, ISS-35, ISS-36 and ISS-37 missions between December 2012 and October 2013. The Radi-N2 experiment, a repeat of the 2009 Radi-N investigation, included measurements in four modules of the US orbital segment: Columbus, the Japanese experiment module, the US laboratory and Node 2. The Radi-N2 dose and spectral measurements are not significantly different from the Radi-N results collected in the same ISS locations, despite the large difference in solar activity between 2009 and 2013. Parallel experiments using a second set of detectors in the Russian segment of the ISS included the first characterisation of the neutron spectrum inside the tissue-equivalent Matroshka-R phantom. These data suggest that the dose inside the phantom is ∼70 % of the dose at its surface, while the spectrum inside the phantom contains a larger fraction of high-energy neutrons than the spectrum outside the phantom. The phantom results are supported by Monte Carlo simulations that provide good agreement with the empirical data. PMID:25899609

  17. APPARATUS FOR MEASURING TOTAL NEUTRON CROSS SECTIONS

    DOEpatents

    Cranberg, L.

    1959-10-13

    An apparatus is described for measuring high-resolution total neutron cross sections at high counting rates in the range above 50-keV neutron energy. The pulsed-beam time-of-flight technique is used to identify the neutrons of interest, which are produced in the target of an electrostatic accelerator. Energy modulation of the accelerator makes it possible to make observations at 100 energy points simultaneously.

  18. Asymmetric neutrino reaction and pulsar kick in magnetized proto-neutron stars in fully relativistic framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maruyama, Tomoyuki; Kajino, Toshitaka; Yasutake, Nobutoshi

    2012-11-12

    We calculate neutrino scattering and absorption in hot and dense neutron-star matter with hyperons under a strong magnetic field using a perturbative approach. We find that the absorption cross-sections show a remarkable angular dependence: the strength is reduced in the direction parallel to the magnetic field and enhanced in the opposite direction. This asymmetric variation reaches a maximum of 2.2% of the entire neutrino momentum when the magnetic field is about 2 × 10¹⁷ G. Since the pulsar kick after the supernova explosion may be related to this asymmetry, detailed discussions of the pulsar kick and the asymmetry are presented, with comparison to the observed kick velocities, in a fully relativistic approach.

  19. A novel digital neutron flux monitor for international thermonuclear experimental reactor

    NASA Astrophysics Data System (ADS)

    Xiang, ZHOU; Zihao, LIU; Chao, CHEN; Renjie, ZHU; Li, ZHAO; Lingfeng, WEI; Zejie, YIN

    2018-04-01

    A novel fully digital real-time neutron flux monitor (NFM) has been developed for the International Thermonuclear Experimental Reactor. A measurement range of 10⁹ counts per second is achieved with three fission chambers of different sensitivities. Counting mode and Campbelling mode are combined to achieve this wide measurement range. The system is based on the high-speed, parallel, pipelined processing of a field-programmable gate array, and can upload raw analog-to-digital converter data in real time through the PXIe platform. With these advantages in measurement range, real-time performance and raw-data uploading, the digital NFM has been tested in HL-2A experiments and showed good performance.
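Campbelling (variance) mode extends a fission-chamber measurement beyond the pile-up limit of pulse counting by exploiting Campbell's theorem: for a Poisson pulse train, the variance of the sampled signal is proportional to the event rate. A toy numerical sketch of that principle (the pulse charge and bin width below are illustrative values, not instrument parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

rate = 1.0e7          # true event rate (events/s), hypothetical
dt = 1.0e-7           # sampling bin width (s), hypothetical
q = 1.0e-13           # charge per pulse (C), hypothetical
n_bins = 200_000

# Each bin collects a Poisson number of pulses; the sampled "current"
# is the collected charge divided by the bin width.
counts = rng.poisson(rate * dt, size=n_bins)
current = counts * q / dt

# Campbell's theorem (discrete form): Var(I) = rate * q**2 / dt,
# so the rate is recoverable from the variance alone, even when
# individual pulses overlap too much to be counted.
rate_est = current.var() * dt / q**2
print(rate_est)   # close to the true 1e7 events/s
```

This is why the combination covers such a wide range: counting handles low rates pulse by pulse, while the variance estimate keeps working in the pile-up regime.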

  20. Protein hydration in solution: Experimental observation by x-ray and neutron scattering

    PubMed Central

    Svergun, D. I.; Richard, S.; Koch, M. H. J.; Sayers, Z.; Kuprin, S.; Zaccai, G.

    1998-01-01

    The structure of the protein–solvent interface is the subject of controversy in theoretical studies and requires direct experimental characterization. Three proteins with known atomic resolution crystal structure (lysozyme, Escherichia coli thioredoxin reductase, and protein R1 of E. coli ribonucleotide reductase) were investigated in parallel by x-ray and neutron scattering in H2O and D2O solutions. The analysis of the protein–solvent interface is based on the significantly different contrasts for the protein and for the hydration shell. The results point to the existence of a first hydration shell with an average density ≈10% larger than that of the bulk solvent in the conditions studied. Comparisons with the results of other studies suggest that this may be a general property of aqueous interfaces. PMID:9482874

  1. Early Results from the Advanced Radiation Protection Thick GCR Shielding Project

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Clowdsley, Martha; Slaba, Tony; Heilbronn, Lawrence; Zeitlin, Cary; Kenny, Sean; Crespo, Luis; Giesy, Daniel; Warner, James; McGirl, Natalie; hide

    2017-01-01

    The Advanced Radiation Protection Thick Galactic Cosmic Ray (GCR) Shielding Project leverages experimental and modeling approaches to validate a predicted minimum in the radiation exposure versus shielding depth curve. Preliminary results of space radiation models indicate that a minimum in the dose equivalent versus aluminum shielding thickness may exist in the 20-30 g/cm2 region. For greater shield thicknesses, dose equivalent increases due to secondary neutron and light-particle production. This result goes against the long-held belief in the space radiation shielding community that increasing shielding thickness will decrease risk to crew health. A comprehensive modeling effort was undertaken to verify the preliminary modeling results using multiple Monte Carlo and deterministic space radiation transport codes. These results verified the preliminary findings of a minimum and helped drive the design of the experimental component of the project. In first-of-their-kind experiments performed at the NASA Space Radiation Laboratory, neutrons and light ions were measured between large thicknesses of aluminum shielding. Both an upstream and a downstream shield were incorporated into the experiment to represent the radiation environment inside a spacecraft. These measurements are used to validate the Monte Carlo codes and derive uncertainty distributions for exposure estimates behind thick shielding similar to that provided by spacecraft on a Mars mission. Preliminary results for all aspects of the project will be presented.
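The predicted minimum arises from two competing effects: exponential attenuation of the primary GCR flux versus build-up of secondary neutrons and light particles with depth. A deliberately simplified toy model reproduces that qualitative shape (the coefficients are invented for illustration and are not from the project's transport calculations):

```python
import numpy as np

def dose_equiv(x):
    """Toy dose equivalent vs. aluminum depth x (g/cm^2):
    an attenuated primary term plus a growing secondary term.
    Coefficients are illustrative, not fitted to any transport code."""
    primary = np.exp(-x / 12.0)   # primary-particle attenuation
    secondary = 0.01 * x          # secondary-particle build-up
    return primary + secondary

depth = np.linspace(0.0, 60.0, 601)
x_min = depth[np.argmin(dose_equiv(depth))]
print(x_min)   # an interior minimum, qualitatively like the reported region
```

Past the minimum, the secondary term dominates, which is why adding more shielding can increase rather than decrease the dose equivalent.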

  2. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various depths in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the impact of SPE spectra transported through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented, along with the total particle fluence as a function of depth. In addition to particle flux, dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes agreeing more closely with each other than with the OLTARIS results. The neutron fluence from OLTARIS is lower than that from the MC codes at lower energies (E < 100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better with each other for fluence, dose and dose equivalent than with the OLTARIS results.
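The code-to-code comparison described above can be quantified with a simple mean-square-difference metric between binned fluence spectra. A sketch of that bookkeeping (the spectra here are synthetic stand-ins, not the actual MCNPX/PHITS/OLTARIS results):

```python
import numpy as np

def mean_square_diff(ref, test):
    """Mean square relative difference between two binned spectra."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    mask = ref > 0                      # skip empty reference bins
    return np.mean(((test[mask] - ref[mask]) / ref[mask]) ** 2)

# Synthetic stand-ins: two codes that agree closely, and a third
# with a low-energy deficit (as OLTARIS showed vs. the MC codes).
energy = np.logspace(0, 3, 50)                     # MeV
mcnpx = energy ** -1.5                             # power-law spectrum
phits = mcnpx * 1.02                               # 2% systematic offset
oltaris = np.where(energy < 100, 0.7 * mcnpx, mcnpx)

print(mean_square_diff(mcnpx, phits))      # small: codes nearly agree
print(mean_square_diff(mcnpx, oltaris))    # larger: low-E deficit dominates
```

The relative (rather than absolute) difference keeps the metric from being dominated by the highest-fluence bins alone.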

  3. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    NASA Astrophysics Data System (ADS)

    Tudora, A.; Hambsch, F.-J.

    2017-08-01

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of the evaluated nuclear data required for new applications. In the last decade remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n, f) recently measured at JRC-Geel (as well as various other prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A)) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A or TKE and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA.
The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning nuclei and incident energies for which the experimental information is completely missing. The PbP treatment can also provide input parameters of the improved Los Alamos model with non-equal residual temperature distributions recently reported by Madland and Kahler, especially for fissioning nuclei without any experimental information concerning the prompt emission.

  4. Quantized orbits in weakly coupled Belousov-Zhabotinsky reactors

    NASA Astrophysics Data System (ADS)

    Weiss, S.; Deegan, R. D.

    2015-06-01

    Using numerical and experimental tools, we study the motion of two coupled spiral cores in a light-sensitive variant of the Belousov-Zhabotinsky reaction. Each core resides on a separate two-dimensional domain, and is coupled to the other by light. When both spirals have the same sense of rotation, the cores are attracted to a circular trajectory with a diameter quantized in integer units of the spiral wavelength λ. When the spirals have opposite senses of rotation, the cores are attracted towards different but parallel straight trajectories, separated by an integer multiple of λ/2. We present a model that explains this behavior as the result of a spiral wavefront-core interaction that produces a deterministic displacement of the core and a retardation of its phase.

  5. IMPROVED TYPE OF FUEL ELEMENT

    DOEpatents

    Monson, H.O.

    1961-01-24

    A radiator-type fuel block assembly is described. It has a hexagonal body of neutron-fissionable material with a plurality of equally spaced longitudinal coolant channels therein, aligned in rows parallel to each face of the hexagonal body. Each of these coolant channels is hexagonally shaped with the corners rounded and enlarged, and the assembly has a maximum-temperature isothermal line around each channel which is approximately straight and equidistant between adjacent channels.

  6. Investigation of a 129Xe magnetometer for the Neutron Electric Dipole Moment Experiment at TRIUMF

    NASA Astrophysics Data System (ADS)

    Lang, Michael; Nedm At Triumf Collaboration

    2016-03-01

    A non-zero neutron electric dipole moment (nEDM) would signify a previously unknown source of CP (or T) violation. New sources of CP violation are believed to be required to explain the baryon asymmetry of the universe. Employing a newly developed high-density UCN source, an experiment at TRIUMF aims to measure the nEDM to the level of 10⁻²⁷ e·cm in its initial phase. Precession frequency differences for UCN stored in a bottle subject to parallel and anti-parallel E and B fields signify a permanent nEDM. Magnetic field instability and inhomogeneity, as well as field changes resulting from leakage currents (correlated with E fields), are the dominant systematic effects in nEDM measurements. To address this, passive and active magnetic shielding are in development, along with a dual-species (129Xe and 199Hg) atomic comagnetometer. By simultaneously introducing both atomic species into the UCN cell, the comagnetometer can mitigate false EDMs. 199Hg precession will be detected by Faraday rotation spectroscopy, and 129Xe precession will be measured via two-photon excitation and emission. The present comagnetometer progress will be discussed, with focus on polarized 129Xe production and delivery. Work supported by the Natural Sciences and Engineering Research Council of Canada.

  7. Application of an impedance matching transformer to a plasma focus.

    PubMed

    Bures, B L; James, C; Krishnan, M; Adler, R

    2011-10-01

    A plasma focus was constructed using an impedance matching transformer to improve power transfer between the pulse power and the dynamic plasma load. The system relied on two switches and twelve transformer cores to produce a 100 kA pulse in short circuit on the secondary at 27 kV on the primary with 110 J stored. With the two transformer systems in parallel, the Thevenin equivalent circuit parameters on the secondary side of the driver are: C = 10.9 μF, V(0) = 4.5 kV, L = 17 nH, and R = 5 mΩ. An equivalent direct drive circuit would require a large number of switches in parallel, to achieve the same Thevenin equivalent. The benefits of this approach are replacement of consumable switches with non-consumable transformer cores, reduction of the driver inductance and resistance as viewed by the dynamic load, and reduction of the stored energy to produce a given peak current. The system is designed to operate at 100 Hz, so minimizing the stored energy results in less load on the thermal management system. When operated at 1 Hz, the neutron yield from the transformer matched plasma focus was similar to the neutron yield from a conventional (directly driven) plasma focus at the same peak current.
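A quick consistency check on the quoted circuit: for an underdamped RLC discharge, the peak short-circuit current is roughly V0·sqrt(C/L), reduced slightly by resistive damping. Plugging in the Thevenin values above recovers the reported ~100 kA (a back-of-envelope sketch, not the authors' calculation):

```python
import math

# Thevenin equivalent values quoted in the abstract
V0, C, L, R = 4.5e3, 10.9e-6, 17e-9, 5e-3

i_ideal = V0 * math.sqrt(C / L)        # lossless LC estimate
alpha = R / (2 * L)                    # damping rate of the RLC circuit
omega = 1 / math.sqrt(L * C)           # natural angular frequency
t_peak = (math.pi / 2) / omega         # approx. time of first current peak
i_peak = i_ideal * math.exp(-alpha * t_peak)

print(round(i_ideal / 1e3, 1))   # ≈ 114 kA lossless
print(round(i_peak / 1e3, 1))    # ≈ 103 kA with damping: consistent with 100 kA
```

The small gap between the lossless and damped estimates also shows why minimizing driver resistance matters at these current levels.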

  8. Neutron Powder Diffraction Study on the Magnetic Structure of NdPd 5 Al 2

    DOE PAGES

    Metoki, Naoto; Yamauchi, Hiroki; Kitazawa, Hideaki; ...

    2017-02-24

    The magnetic structure of NdPd5Al2 has been studied by neutron powder diffraction. Here, we observed the magnetic reflections with the modulation vector q = (1/2, 0, 0) below the ordering temperature TN. We also found a collinear magnetic structure with a Nd moment of 2.7(3) μB at 0.5 K parallel to the c-axis, where the ferromagnetically ordered a-planes stack with a four-Nd-layer period having a ++-- sequence along the a-direction, with the distance between adjacent Nd layers equal to a/2 (magnetic space group Panma). This "stripe"-like modulation is very similar to that in CePd5Al2 with q = (0.235, 0.235, 0) and the Ce moment parallel to the c-axis. These structures with in-plane modulation are a consequence of the two-dimensional nature of the Fermi-surface topology in this family, originating from the unique crystal structure with a very long tetragonal unit cell and a large distance of >7 Å between the rare-earth layers, which are separated by two Pd and one Al layers.

  9. Effect of Extrusion Temperature on the Plastic Deformation of an Mg-Y-Zn Alloy Containing LPSO Phase Using In Situ Neutron Diffraction

    NASA Astrophysics Data System (ADS)

    Garces, G.; Perez, P.; Cabeza, S.; Kabra, S.; Gan, W.; Adeva, P.

    2017-11-01

    The evolution of the internal strains during in situ tension and compression tests has been measured by neutron diffraction in an MgY2Zn1 alloy containing a long-period stacking ordered (LPSO) phase. The alloy was extruded at two different temperatures to study the influence of the microstructure and texture of the magnesium and LPSO phases on the deformation mechanisms. The alloy extruded at 623 K (350 °C) exhibits a strong fiber texture with the basal plane parallel to the extrusion direction, due to the presence of areas of coarse non-recrystallised grains. At 723 K (450 °C), however, the magnesium phase is fully recrystallised, with randomly oriented grains. At both extrusion temperatures, the LPSO phase orients its basal plane parallel to the extrusion direction. The yield stress is always slightly higher in compression than in tension. Independently of the stress sign and the extrusion temperature, the onset of plasticity is controlled by the activation of the basal slip system in the dynamically recrystallized grains. The elongated fiber-shaped LPSO phase, which behaves as the reinforcement in a metal matrix composite, is therefore responsible for this tension-compression asymmetry.

  10. Neutron Powder Diffraction Study on the Magnetic Structure of NdPd 5 Al 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metoki, Naoto; Yamauchi, Hiroki; Kitazawa, Hideaki

    The magnetic structure of NdPd5Al2 has been studied by neutron powder diffraction. Here, we observed the magnetic reflections with the modulation vector q = (1/2, 0, 0) below the ordering temperature TN. We also found a collinear magnetic structure with a Nd moment of 2.7(3) μB at 0.5 K parallel to the c-axis, where the ferromagnetically ordered a-planes stack with a four-Nd-layer period having a ++-- sequence along the a-direction, with the distance between adjacent Nd layers equal to a/2 (magnetic space group Panma). This "stripe"-like modulation is very similar to that in CePd5Al2 with q = (0.235, 0.235, 0) and the Ce moment parallel to the c-axis. These structures with in-plane modulation are a consequence of the two-dimensional nature of the Fermi-surface topology in this family, originating from the unique crystal structure with a very long tetragonal unit cell and a large distance of >7 Å between the rare-earth layers, which are separated by two Pd and one Al layers.

  11. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
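Recovering the streamfunction ψ from the forecasted vorticity ζ means solving the Poisson equation ∇²ψ = ζ, which is exactly where elliptic solvers like block cyclic reduction earn their speedup. A minimal serial sketch using Jacobi iteration on a manufactured solution (illustrative only; the paper used direct BCR, not iteration):

```python
import numpy as np

n = 33                                   # grid points per side
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

psi_true = np.sin(np.pi * X) * np.sin(np.pi * Y)   # manufactured streamfunction
zeta = -2.0 * np.pi**2 * psi_true                  # corresponding vorticity

psi = np.zeros_like(zeta)                # zero Dirichlet boundary values
for _ in range(4000):
    # Jacobi sweep: each interior point from its four neighbours
    psi[1:-1, 1:-1] = 0.25 * (
        psi[2:, 1:-1] + psi[:-2, 1:-1] + psi[1:-1, 2:] + psi[1:-1, :-2]
        - h**2 * zeta[1:-1, 1:-1]
    )

err = np.abs(psi - psi_true).max()
print(err)   # small: discretization plus iteration error
```

Direct methods such as BCR solve the same discrete system without this slow iteration, which is why they parallelize the forecast loop so effectively.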

  12. Parallelizing quantum circuit synthesis

    NASA Astrophysics Data System (ADS)

    Di Matteo, Olivia; Mosca, Michele

    2016-03-01

    Quantum circuit synthesis is the process in which an arbitrary unitary operation is decomposed into a sequence of gates from a universal set, typically one which a quantum computer can implement both efficiently and fault-tolerantly. As physical implementations of quantum computers improve, the need is growing for tools that can effectively synthesize components of the circuits and algorithms they will run. Existing algorithms for exact, multi-qubit circuit synthesis scale exponentially in the number of qubits and circuit depth, leaving synthesis intractable for circuits on more than a handful of qubits. Even modest improvements in circuit synthesis procedures may lead to significant advances, pushing forward the boundaries of not only the size of solvable circuit synthesis problems, but also in what can be realized physically as a result of having more efficient circuits. We present a method for quantum circuit synthesis using deterministic walks. Also termed pseudorandom walks, these are walks in which once a starting point is chosen, its path is completely determined. We apply our method to construct a parallel framework for circuit synthesis, and implement one such version performing optimal T-count synthesis over the Clifford+T gate set. We use our software to present examples where parallelization offers a significant speedup on the runtime, as well as directly confirm that the 4-qubit 1-bit full adder has optimal T-count 7 and T-depth 3.
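The property exploited for parallelism is that a deterministic (seeded pseudorandom) walk is fully reproducible: workers can partition the search space simply by taking different starting seeds, with no coordination needed until one reports a hit. A toy sketch of that pattern on an abstract search problem (unrelated to the actual Clifford+T synthesis machinery):

```python
import random

def deterministic_walk(seed: int, target: int, max_steps: int = 10_000):
    """Walk over states 0..2**16-1; the path is fixed once the seed is chosen."""
    rng = random.Random(seed)             # the seed fully determines the path
    state = rng.randrange(2**16)
    for step in range(max_steps):
        if state == target:
            return seed, step             # hit: report which walk and when
        state = rng.randrange(2**16)      # next state, deterministically drawn
    return None                           # this walk never reached the target

# "Parallel" search: each worker owns one seed; shown serially here.
target = 12345
hits = [r for s in range(64) if (r := deterministic_walk(s, target)) is not None]
print(hits)
```

Because each walk is independent and replayable, a successful walk found on any worker can be verified (or resumed) anywhere just by re-running its seed.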

  13. Particular features of ternary fission induced by polarized neutrons in the major actinides 233,235U and 239,241Pu

    NASA Astrophysics Data System (ADS)

    Gagarski, A.; Gönnenwein, F.; Guseva, I.; Jesinger, P.; Kopatch, Yu.; Kuzmina, T.; Lelièvre-Berna, E.; Mutterer, M.; Nesvizhevsky, V.; Petrov, G.; Soldner, T.; Tiourine, G.; Trzaska, W. H.; Zavarukhina, T.

    2016-05-01

    Ternary fission in (n, f) reactions was studied with polarized neutrons for the isotopes 233,235U and 239,241Pu. A cold longitudinally polarized neutron beam was available at the High Flux Reactor of the Institut Laue-Langevin in Grenoble, France. The beam was hitting the fissile targets mounted at the center of a reaction chamber. Detectors for fission fragments and ternary particles were installed in a plane perpendicular to the beam. In earlier work it was discovered that the angular correlations between neutron spin and the momenta of fragments and ternary particles were very different for 233U and 235U. These correlations could now be shown to be simultaneously present in all of the above major actinides, though with different weights. For one of the correlations it was observed that up to scission the compound nucleus is rotating with the axis of rotation parallel to the neutron beam polarization. Entrained by the fragments, the trajectories of the ternary particles are also turned away, albeit by a smaller angle. The difference in turning angles becomes observable upon reversing the sense of rotation by flipping the neutron spin. All turning angles are smaller than 1°. The phenomenon was called the ROT effect. As a distinct second phenomenon it was found that, for fission induced by polarized neutrons, an asymmetry appears in the emission probability of ternary particles relative to the plane formed by the fragment momentum and the neutron spin. The asymmetry is attributed to the Coriolis force present in the nucleus while it is rotating up to scission. The size of the asymmetry is typically 10⁻³. This asymmetry was termed the TRI effect. The interpretation of both effects is based on the transition state model. Both effects are shown to be steered by the properties of the collective (J, K) transition states, which are specific to each of the reactions studied.
The study of asymmetries of ternary particle emission in fission induced by slow polarized neutrons provides a new method for the spectroscopy of transition states (J, K) near the fission barrier. Implications of collective rotation for fragment angular momenta are discussed.

  14. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of next-generation Gen-IV nuclear reactor core designs brings challenges to neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled a 2D transport MOC solver and a 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, a radial 2D MOC solver and an axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis for a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research focused on designing specific hardware, based on reconfigurable computing, to accelerate AGENT computations. This is the first application of this technique to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA acceleration design achieves high performance at a much lower working frequency than CPUs. Design simulations show that the acceleration system would be able to speed up large-scale AGENT computations about 20 times.
The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and thus extends the possible application range of neutron transport analysis in both industrial engineering and academic research.

  15. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  16. Electric field control of the skyrmion lattice in Cu2OSeO3

    NASA Astrophysics Data System (ADS)

    White, J. S.; Levatić, I.; Omrani, A. A.; Egetenmeyer, N.; Prša, K.; Živković, I.; Gavilano, J. L.; Kohlbrecher, J.; Bartkowiak, M.; Berger, H.; Rønnow, H. M.

    2012-10-01

    Small-angle neutron scattering has been employed to study the influence of applied electric (E-)fields on the skyrmion lattice in the chiral lattice magnetoelectric Cu2OSeO3. Using an experimental geometry with the E-field parallel to the [111] axis and the magnetic field parallel to the $[1\bar{1}0]$ axis, we demonstrate that the effect of applying an E-field is to controllably rotate the skyrmion lattice around the magnetic field axis. Our results are an important first demonstration of a microscopic coupling between applied E-fields and skyrmions in an insulator, and show that the general emergent properties of skyrmions may be tailored according to the properties of the host system.

  17. Structural Studies of Three-Arm Star Block Copolymers Exposed to Extreme Stretch Suggests a Persistent Polymer Tube

    NASA Astrophysics Data System (ADS)

    Mortensen, Kell; Borger, Anine L.; Kirkensgaard, Jacob J. K.; Garvey, Christopher J.; Almdal, Kristoffer; Dorokhin, Andriy; Huang, Qian; Hassager, Ole

    2018-05-01

    We present structural small-angle neutron scattering studies of a three-armed polystyrene star polymer with short deuterated segments at the end of each arm. We show that the form factor of the three-armed star molecules in the relaxed state agrees with that of the random phase approximation of Gaussian chains. Upon exposure to large extensional flow, the star polymers change conformation, resulting in a highly stretched structure that mimics a fully extended three-armed tube model. All three arms are parallel to the flow, one arm pointing in either the positive or the negative stretching direction, while the two other arms are oriented parallel, right next to each other, in the direction opposite to the first arm.
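    The relaxed-state form factor mentioned above is, for a Gaussian chain, the classic Debye function. As a hedged illustration, a minimal sketch follows; the radius of gyration and q-range are invented for demonstration and are not the paper's values:

```python
import math

def debye_form_factor(q, rg):
    """Debye form factor P(q) of a Gaussian chain with radius of gyration rg.

    P(q) = 2 (exp(-x) + x - 1) / x^2, with x = (q * rg)^2, normalized so P(0) = 1.
    """
    x = (q * rg) ** 2
    if x < 1e-8:                       # series expansion avoids 0/0 at q = 0
        return 1.0 - x / 3.0
    return 2.0 * (math.exp(-x) + x - 1.0) / (x * x)

# Scattering from a relaxed chain with an assumed rg = 5 nm over a typical
# SANS q-range (q in 1/nm)
for q in (0.0, 0.05, 0.1, 0.2, 0.5):
    print(f"q = {q:4.2f} 1/nm  P(q) = {debye_form_factor(q, 5.0):.4f}")
```

    P(q) decays monotonically from 1, which is the Gaussian-chain behavior the relaxed-state data are compared against.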

  18. Radiation Characterization Summary: ACRR Polyethylene-Lead-Graphite (PLG) Bucket Located in the Central Cavity on the 32-Inch Pedestal at the Core Centerline (ACRR-PLG-CC-32-cl).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parma, Edward J.,; Vehar, David W.; Lippert, Lance L.

    2015-06-01

    This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the polyethylene-lead-graphite (PLG) bucket in the central cavity on the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-PLG-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented, as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the bucket. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples. Acknowledgements: The authors wish to thank the Annular Core Research Reactor staff and the Radiation Metrology Laboratory staff for their support of this work. Also thanks to David Ames for his assistance in running MCNP on the Sandia parallel machines.

  19. Radiation Characterization Summary: ACRR Central Cavity Free-Field Environment with the 32-Inch Pedestal at the Core Centerline (ACRR-FF-CC-32-cl).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, Richard Manuel; Parma, Edward J.; Naranjo, Gerald E.

    2015-08-01

    This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the central cavity free-field environment with the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-FF-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented, as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the cavity. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples. Acknowledgements: The authors wish to thank the Annular Core Research Reactor staff and the Radiation Metrology Laboratory staff for their support of this work. Also thanks to David Ames for his assistance in running MCNP on the Sandia parallel machines.

  20. The cosmic matrix in the 50th anniversary of relativistic astrophysics

    NASA Astrophysics Data System (ADS)

    Ruffini, R.; Aimuratov, Y.; Becerra, L.; Bianco, C. L.; Karlica, M.; Kovacevic, M.; Melon Fuksman, J. D.; Moradi, R.; Muccino, M.; Penacchioni, A. V.; Pisani, G. B.; Primorac, D.; Rueda, J. A.; Shakeri, S.; Vereshchagin, G. V.; Wang, Y.; Xue, S.-S.

    Our concept of induced gravitational collapse (the IGC paradigm), starting from a supernova occurring with a companion neutron star, has unlocked the understanding of seven different families of gamma-ray bursts (GRBs), indicating a path for the formation of black holes in the universe. An authentic laboratory of relativistic astrophysics has been unveiled, in which new paradigms have been introduced in order to advance knowledge of the most energetic, distant and complex systems in our universe. A novel cosmic matrix paradigm has been introduced at a relativistic cosmic level, which parallels the concept of an S-matrix introduced by Feynman, Wheeler and Heisenberg in the quantum world of microphysics. Here the “in” states are represented by a neutron star and a supernova, while the “out” states, generated within less than a second, are a new neutron star and a black hole. This novel field of research needs very powerful technological observations at all wavelengths, ranging from radio through optical, X-ray and gamma-ray radiation all the way up to ultra-high-energy cosmic rays.

  1. Comparison of quartz crystallographic preferred orientations identified with optical fabric analysis, electron backscatter and neutron diffraction techniques.

    PubMed

    Hunter, N J R; Wilson, C J L; Luzin, V

    2017-02-01

    Three techniques are used to measure crystallographic preferred orientations (CPO) in a naturally deformed quartz mylonite: transmitted light cross-polarized microscopy using an automated fabric analyser (FA), electron backscatter diffraction (EBSD) and neutron diffraction. Pole figure densities attributable to crystal-plastic deformation are variably recognizable across the techniques, particularly between fabric analyser and diffraction instruments. Although fabric analyser techniques offer rapid acquisition with minimal sample preparation, difficulties may exist when gathering orientation data parallel with the incident beam. Overall, we have found that EBSD and fabric analyser techniques are best suited for studying CPO distributions at the grain scale, where individual orientations can be linked to their source grain or nearest neighbours. Neutron diffraction serves as the best qualitative and quantitative means of estimating the bulk CPO, due to its three-dimensional data acquisition, greater sample area coverage, and larger sample size. However, a number of sampling methods can be applied to FA and EBSD data to make similar approximations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  2. Self-assembled iron oxide nanoparticle multilayer: x-ray and polarized neutron reflectivity.

    PubMed

    Mishra, D; Benitez, M J; Petracic, O; Badini Confalonieri, G A; Szary, P; Brüssing, F; Theis-Bröhl, K; Devishvili, A; Vorobiev, A; Konovalov, O; Paulus, M; Sternemann, C; Toperverg, B P; Zabel, H

    2012-02-10

    We have investigated the structure and magnetism of self-assembled, 20 nm diameter iron oxide nanoparticles covered by an oleic acid shell for scrutinizing their structural and magnetic correlations. The nanoparticles were spin-coated on an Si substrate as a single monolayer and as a stack of 5 ML forming a multilayer. X-ray scattering (reflectivity and grazing incidence small-angle scattering) confirms high in-plane hexagonal correlation and a good layering property of the nanoparticles. Using polarized neutron reflectivity we have also determined the long range magnetic correlations parallel and perpendicular to the layers in addition to the structural ones. In a field of 5 kOe we determine a magnetization value of about 80% of the saturation value. At remanence the global magnetization is close to zero. However, polarized neutron reflectivity reveals the existence of regions in which magnetic moments of nanoparticles are well aligned, while losing order over longer distances. These findings confirm that in the nanoparticle assembly the magnetic dipole-dipole interaction is rather strong, dominating the collective magnetic properties at room temperature.

  3. End-compensated magnetostatic cavity for polarized 3He neutron spin filters.

    PubMed

    McIver, J W; Erwin, R; Chen, W C; Gentile, T R

    2009-06-01

    We have expanded upon the "Magic Box" concept, a coil-driven magnetic parallel-plate capacitor constructed out of mu-metal, by introducing compensation sections at the ends of the box that are tuned to limit end-effects similar to those of short solenoids. This has reduced the length of the magic box design without any loss in field homogeneity, making the device far more applicable to the often space-limited neutron beam line. The appeal of the design, beyond affording longer polarized 3He lifetimes, is that it provides a vertical guide field, which facilitates neutron spin transport for typical polarized-beam experiments. We have constructed two end-compensated magic boxes of dimensions 28.4 x 40 x 15 cm³ (length x width x height) with measured, normalized volume-averaged transverse field gradients ranging from 3.3 x 10⁻⁴ to 6.3 x 10⁻⁴ cm⁻¹ for cell sizes ranging from 8.1 x 6.0 to 12.0 x 7.9 cm² (diameter x length), respectively.

  4. Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.

    PubMed

    Krishnamurthy, V; Krishnamurthy, E V

    1999-03-01

    A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, computations are interpreted as the outcome of interactions among elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among subsets of elements, so that the elements evolve toward an equilibrium, unstable, or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. It can also handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
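    The object-space interactions described above can be sketched as a toy multiset-rewriting loop. Everything here (the rule format, element names, and probability handling) is an illustrative assumption, not the paper's formalism:

```python
import random

# The "object space" is a multiset of elements; each rule consumes reactants
# and produces products, optionally with an application probability.

def apply_rules(space, rules, steps, seed=0):
    rng = random.Random(seed)
    space = list(space)
    for _ in range(steps):
        rng.shuffle(space)                     # interactions are order-free
        for reactants, products, prob in rules:
            if all(space.count(r) >= reactants.count(r) for r in set(reactants)) \
                    and rng.random() < prob:
                for r in reactants:            # annihilate old elements
                    space.remove(r)
                space.extend(products)         # create new elements
    return sorted(space)

# Deterministic limit: with prob = 1 the rule "A + B -> C" behaves like an
# exact computation; with prob < 1 it becomes a probabilistic ensemble step.
rules = [(["A", "B"], ["C"], 1.0)]
print(apply_rules(["A", "A", "B", "B"], rules, steps=2))   # -> ['C', 'C']
```

    Replacing `prob=1.0` by a smaller value and averaging over many seeds gives the ensemble (bulk-property) mode of computation the abstract describes.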

  5. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2014-10-01

    A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
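    As a rough illustration of the particle-based update the authors adapt, here is a minimal single-step stochastic EnKF for one unknown parameter. The model, noise levels, ensemble size, and all numbers are invented for demonstration and are not from the paper:

```python
import random, statistics

# We estimate the decay rate k of x' = -k x from one noisy observation of x(1).
rng = random.Random(42)
N = 200                                  # ensemble size
obs, obs_var = 0.368, 1e-4               # observed x(1); true k is near 1.0
ensemble = [rng.gauss(1.5, 0.5) for _ in range(N)]   # prior ensemble of k

def forward(k):
    # Forward-Euler integration of x' = -k x on [0, 1]; the integration
    # error plays the role of the stochastic drift mentioned above.
    x, dt = 1.0, 0.01
    for _ in range(100):
        x += -k * x * dt
    return x

preds = [forward(k) for k in ensemble]
k_mean, y_mean = statistics.mean(ensemble), statistics.mean(preds)
cov_ky = sum((k - k_mean) * (y - y_mean) for k, y in zip(ensemble, preds)) / (N - 1)
var_y = statistics.variance(preds)
K = cov_ky / (var_y + obs_var)           # Kalman gain for the scalar case

# Each member assimilates a perturbed observation (stochastic EnKF).
posterior = [k + K * (obs + rng.gauss(0, obs_var ** 0.5) - y)
             for k, y in zip(ensemble, preds)]
print("posterior mean k =", round(statistics.mean(posterior), 3))
```

    The update pulls the prior ensemble (mean 1.5) toward the parameter value consistent with the observation, and the forward sweeps over ensemble members are independent, which is the parallelism the abstract exploits.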

  6. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
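    The dividing-rectangles idea combined with a local refinement step can be caricatured in one dimension. This is a greatly simplified, stdlib-only sketch; the interval-selection rule, the golden-section polish (standing in for Powell's method), and the test function are illustrative assumptions, not ITK's or the paper's implementation:

```python
import math

def direct_1d(f, lo, hi, iters=40):
    # Intervals stored as (a, b, f(center)); "potentially optimal" is
    # approximated here by the best-valued and the largest interval.
    intervals = [(lo, hi, f((lo + hi) / 2))]
    for _ in range(iters):
        chosen = {min(range(len(intervals)), key=lambda i: intervals[i][2]),
                  max(range(len(intervals)), key=lambda i: intervals[i][1] - intervals[i][0])}
        for i in sorted(chosen, reverse=True):
            a, b, _ = intervals.pop(i)
            third = (b - a) / 3
            for k in range(3):                 # trisect and evaluate new centers
                aa = a + k * third
                intervals.append((aa, aa + third, f(aa + third / 2)))
    a, b, _ = min(intervals, key=lambda t: t[2])
    return golden_section(f, a, b)             # local polish of the best cell

def golden_section(f, a, b, tol=1e-8):
    g = (math.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Standard multimodal test function; reported global minimum near x ≈ 5.146
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_star = direct_1d(f, 2.7, 7.5)
print("x* ≈", round(x_star, 4), " f(x*) ≈", round(f(x_star), 4))
```

    The global stage keeps the search from being trapped in a local basin (the capture-range benefit), while the local stage supplies the final accuracy, mirroring the DIRECT-plus-Powell combination described above.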

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I 2c and the use of the cross section data in Exercise II 1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best estimate results obtained for Exercise I 2a (fresh single-fuel block), Exercise I 2b (depleted single-fuel block), and Exercise I 2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO VI.
    The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II 1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries leads to only minor changes in the Phase II core simulation results for fresh fuel, but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  8. Invited Parallel Talk: Lattice results on nucleon/roper properties

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen

    2009-12-01

    In these proceedings, I review attempts to calculate nucleon resonances (including the Roper, as the first radially excited state of the nucleon, and other excited states) using lattice quantum chromodynamics (QCD). The latest preliminary results from the Hadron Spectrum Collaboration (HSC) with mπ ≈ 380 MeV are reported. The Sachs electric form factors of the proton and neutron and their transitions to the Roper at large Q² are also updated in this work.

  9. The Early Stage of Neutron Tomography for Cultural Heritage Study in Thailand

    NASA Astrophysics Data System (ADS)

    Khaweerat, S.; Ratanatongchai, W.; Wonglee, S.; Schillinger, B.

    In parallel to the upgrade of the neutron imaging facility at TRR-1/M1 since 2015, practice with image processing software has led to the implementation of neutron tomography (NT). The current setup provides a thermal neutron flux of 1.08×10⁶ cm⁻²s⁻¹ at the exposure position. In general, the sample was fixed on a plate at the top of a rotary stage controlled by LabVIEW 2009 Version 9.0.1. The incremental step can be adjusted from 0.45 to 7.2 degrees. A 16-bit CCD camera assembled with a Nikkor 50 mm f/1.2 lens was used to record light from a 6LiF/ZnS (green) neutron converter screen. The exposure time for each shot was 60 seconds, resulting in an acquisition time of approximately three hours for turning the sample completely around. Afterwards, the batch of two-dimensional neutron images of the sample was read into the reconstruction and visualization software, Octopus reconstruction 8.8 and Octopus visualization 2.0, respectively. The results revealed that system alignment is important: the stability of a heavy sample must be maintained at every angle of rotation, and a previous alignment showed instability of the supporting plane while tilting the sample, indicating that the sample stage should be replaced. Even though NT is a lengthy process and involves large data processing, it offers an opportunity to understand the features of an object in more detail than neutron radiography. Digital NT also allows us to separate inner features that appear superposed in radiography by cross-sectioning the 3D data set of an object without destruction. As a result, NT is a significant tool for revealing hidden information in the inner structure of cultural heritage objects, providing great benefits in archaeological study, conservation, and authenticity investigation.

  10. Measurement of the np total cross section difference Δ σ L(np) at 1.39, 1.69, 1.89 and 1.99 GeV

    NASA Astrophysics Data System (ADS)

    Sharov, V. I.; Anischenko, N. G.; Antonenko, V. G.; Averichev, S. A.; Azhgirey, L. S.; Bartenev, V. D.; Bazhanov, N. A.; Belyaev, A. A.; Blinov, N. A.; Borisov, N. S.; Borzakov, S. B.; Borzunov, Yu T.; Bushuev, Yu P.; Chernenko, L. P.; Chernykh, E. V.; Chumakov, V. F.; Dolgii, S. A.; Fedorov, A. N.; Fimushkin, V. V.; Finger, M.; Finger, M.; Golovanov, L. B.; Gurevich, G. M.; Janata, A.; Kirillov, A. D.; Kolomiets, V. G.; Komogorov, E. V.; Kovalenko, A. D.; Kovalev, A. I.; Krasnov, V. A.; Krstonoshich, P.; Kuzmin, E. S.; Ladygin, V. P.; Lazarev, A. B.; Lehar, F.; de Lesquen, A.; Liburg, M. Yu; Livanov, A. N.; Lukhanin, A. A.; Maniakov, P. K.; Matafonov, V. N.; Matyushevsky, E. A.; Moroz, V. D.; Morozov, A. A.; Neganov, A. B.; Nikolaevsky, G. P.; Nomofilov, A. A.; Panteleev, Tz; Pilipenko, Yu K.; Pisarev, I. L.; Plis, Yu A.; Polunin, Yu P.; Prokofiev, A. N.; Prytkov, V. Yu; Rukoyatkin, P. A.; Schedrov, V. A.; Schevelev, O. N.; Shilov, S. N.; Shindin, R. A.; Slunečka, M.; Slunečková, V.; Starikov, A. Yu; Stoletov, G. D.; Strunov, L. N.; Svetov, A. L.; Usov, Yu A.; Vasiliev, T.; Volkov, V. I.; Vorobiev, E. I.; Yudin, I. P.; Zaitsev, I. V.; Zhdanov, A. A.; Zhmyrov, V. N.

    2004-09-01

    New accurate results of the neutron-proton spin-dependent total cross section difference Δσ_L(np) at the neutron beam kinetic energies 1.39, 1.69, 1.89 and 1.99 GeV are presented. Measurements were carried out in 2001 at the Synchrophasotron of the Veksler and Baldin Laboratory of High Energies of the Joint Institute for Nuclear Research. A quasi-monochromatic neutron beam was produced by break-up of extracted polarized deuterons. The deuteron (and hence neutron) polarization direction was flipped every accelerator burst. The vertical neutron polarization direction was rotated onto the neutron beam direction and longitudinally (L) polarized neutrons were transmitted through a large proton L-polarized target. The target polarization vector was inverted after 1-2 days of measurements. The data were recorded for four different combinations of the beam and target parallel and antiparallel polarization directions at each energy. A fast decrease of Δσ_L(np) with increasing energy above 1.1 GeV was confirmed. The structure in the Δσ_L(np) energy dependence around 1.8 GeV, first observed from our previous data, seems to be well pronounced. The new results are also compared with model predictions and with phase shift analysis fits. The Δσ_L quantities for isosinglet state I = 0, deduced from the measured Δσ_L(np) values and the known Δσ_L(pp) data, are also given. The results were completed by the measurements of unpolarized total cross sections σ_{0tot}(np) at 1.3, 1.4 and 1.5 GeV and σ_{0tot}(nC) at 1.4 and 1.5 GeV. These data were obtained using the same apparatus and high intensity unpolarized deuteron beams were extracted either from the Synchrophasotron, or from the Nuclotron.
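    Transmission-type measurements of this kind extract Δσ_L from the ratio of transmissions for parallel and antiparallel beam/target polarizations. The first-order relation and all numbers below are illustrative assumptions for demonstration (sign and normalization conventions differ between experiments):

```python
import math

def delta_sigma_L(T_par, T_anti, n_thickness, P_beam, P_target):
    """First-order estimate: Δσ_L = ln(T_par / T_anti) / (n * P_b * P_t).

    T_par, T_anti -- transmissions for parallel / antiparallel spin settings
    n_thickness   -- areal density of polarized target protons (in 1/barn)
    P_beam, P_target -- beam and target polarizations (0..1)
    """
    return math.log(T_par / T_anti) / (n_thickness * P_beam * P_target)

# Hypothetical numbers: a 1/barn-thick polarized target, 60% beam and 80%
# target polarization, and a 1% relative transmission difference.
ds = delta_sigma_L(0.505, 0.500, n_thickness=1.0, P_beam=0.6, P_target=0.8)
print(f"Δσ_L ≈ {ds * 1000:.1f} mb")
```

    The 1/(P_beam · P_target) factor is why flipping the beam polarization every burst and periodically inverting the target polarization, as described above, is essential: it cancels slow drifts that would otherwise mimic a spin-dependent transmission difference.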

  11. Synthesis of blind source separation algorithms on reconfigurable FPGA platforms

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Szu, Harold H.

    2005-03-01

    Recent advances in intelligence technology have boosted the development of micro Unmanned Air Vehicles (UAVs), including Silver Fox, Shadow, and Scan Eagle, for various surveillance and reconnaissance applications. These affordable and reusable devices have to fit a series of size, weight, and power constraints. Cameras used on such micro-UAVs are therefore mounted directly at a fixed angle, without any motion-compensated gimbals. This mounting scheme results in the so-called jitter effect, in which jitter is defined as sub-pixel or small-amplitude vibrations. The jitter blur caused by the jitter effect needs to be corrected before any other processing algorithms can be practically applied. Jitter restoration has been solved by various optimization techniques, including Wiener approximation, maximum a posteriori probability (MAP), etc. However, these algorithms normally assume a spatially invariant blur model, which is not the case with jitter blur. Szu et al. developed a smart real-time algorithm based on auto-regression (AR), with its natural generalization of unsupervised artificial neural network (ANN) learning, to achieve restoration accuracy at the sub-pixel level. This algorithm resembles the capability of the human visual system, in which an agreement between the pair of eyes indicates "signal"; otherwise, the jitter noise. Using this non-statistical method, for each single pixel a deterministic blind source separation (BSS) process can then be carried out independently, based on a deterministic minimum of the Helmholtz free energy with a generalization of Shannon's information theory applied to open dynamic systems. From a hardware implementation point of view, the process of jitter restoration of an image using Szu's algorithm can be optimized by pixel-based parallelization.
    In our previous work, a parallel-structured independent component analysis (ICA) algorithm was implemented on both a Field Programmable Gate Array (FPGA) and an Application-Specific Integrated Circuit (ASIC) using standard-height cells. ICA is an algorithm that can solve BSS problems by carrying out all-order statistical, decorrelation-based transforms, under the assumption that neighborhood pixels share the same but unknown mixing matrix A. In this paper, we continue our investigation of the design challenges of firmware approaches to smart algorithms. Two levels of parallelization can be explored: pixel-based parallelization and parallelization of the restoration algorithm performed at each pixel. This paper focuses on the latter, and we use ICA as an example to explain the design and implementation methods. It is well known that the capacity constraints of a single FPGA have limited the implementation of many complex algorithms, including ICA. Using the reconfigurability of FPGAs, we show in this paper how to manipulate the FPGA-based system to provide extra computing power for the parallelized ICA algorithm with limited FPGA resources. The synthesis targeting the Pilchard reconfigurable FPGA platform is reported. The Pilchard board is embedded with a single Xilinx VIRTEX 1000E FPGA and transfers data directly to the CPU on the 64-bit memory bus at a maximum frequency of 133 MHz. Both the feasibility and performance evaluations and the experimental results validate the effectiveness and practicality of this synthesis, which can be extended to spatially variant jitter restoration for micro-UAV deployment.

  12. Residual stress measurements via neutron diffraction of additive manufactured stainless steel 17-4 PH.

    PubMed

    Masoomi, Mohammad; Shamsaei, Nima; Winholtz, Robert A; Milner, Justin L; Gnäupel-Herold, Thomas; Elwany, Alaa; Mahmoudi, Mohamad; Thompson, Scott M

    2017-08-01

    Neutron diffraction was employed to measure internal residual stresses at various locations along stainless steel (SS) 17-4 PH specimens additively manufactured via laser-powder bed fusion (L-PBF). Of these specimens, two were rods (diameter = 8 mm, length = 80 mm) built vertically upward, and one was a parallelepiped (8×80×9 mm³) built with its longest edge parallel to the ground. One rod and the parallelepiped were left in their as-built condition, while the other rod was heat treated. Data presented provide insight into the microstructural characteristics of typical L-PBF SS 17-4 PH specimens and their dependence on build orientation and post-processing procedures such as heat treatment. Data have been deposited in the Data in Brief Dataverse repository (doi:10.7910/DVN/T41S3V).

  13. An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control.

    PubMed

    Massengill, L W; Mundie, D B

    1992-01-01

    A neural network IC based on dynamic charge injection is described. The hardware design is space- and power-efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.

  14. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index was derived from the combined safety factors; the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
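    The three factors described above can be combined in a short worked example. All stress values below are invented for illustration, and the reliability calculation assumes normally distributed applied stress and resistance — a modeling choice for the sketch, not a claim from the paper.

```python
# Hedged numerical sketch (illustrative values only): combine the three
# deterministic factors into a design margin, then compute the reliability
# index for assumed normal stress and resistance distributions.
from math import erf, sqrt

mu_S, sd_S = 300.0, 20.0      # applied stress distribution (MPa), assumed
k_S = 3.0                     # standard-deviation multiplier on applied stress
F_ty, F_tu = 480.0, 560.0     # assumed yield and ultimate allowables (MPa)
sf_conv = F_tu / F_ty         # conventional safety factor = ultimate/yield

design_stress = mu_S + k_S * sd_S          # factored applied stress
margin = F_ty / design_stress              # must exceed 1 for a valid design

# Reliability index for normally distributed resistance and stress
mu_R, sd_R = F_ty, 25.0                    # assumed resistance scatter
beta = (mu_R - mu_S) / sqrt(sd_R**2 + sd_S**2)
reliability = 0.5 * (1.0 + erf(beta / sqrt(2.0)))
```

With these numbers the margin is about 1.33 and the reliability index exceeds 5, illustrating how a deterministic design can be mapped onto a probabilistic index.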

  15. Designing an upgrade of the Medley setup for light-ion production and fission cross-section measurements

    NASA Astrophysics Data System (ADS)

    Jansson, K.; Gustavsson, C.; Al-Adili, A.; Hjalmarsson, A.; Andersson-Sundén, E.; Prokofiev, A. V.; Tarrío, D.; Pomp, S.

    2015-09-01

    Measurements of neutron-induced fission cross-sections and light-ion production are planned in the energy range 1-40 MeV at the upcoming Neutrons For Science (NFS) facility. In order to prepare our detector setup for the neutron beam with a continuous energy spectrum, simulation software was written using the Geant4 toolkit for both measurement situations. The neutron energy range around 20 MeV is troublesome when it comes to the cross-sections used by Geant4, since data-driven cross-sections are available only below 20 MeV; above that energy, Geant4 relies on semi-empirical models. Several customisations were made to the standard classes in Geant4 in order to produce consistent results over the whole simulated energy range. Expected uncertainties are reported for both types of measurements. The simulations have shown that a simultaneous precision measurement of the three standard cross-sections H(n,n), 235U(n,f) and 238U(n,f) relative to each other is feasible using a triple-layered target. As high-resolution timing detectors for fission fragments we plan to use Parallel Plate Avalanche Counters (PPACs). The simulation results have put some restrictions on the design of these detectors as well as on the target design. This study suggests a fissile target no thicker than 2 μm (1.7 mg/cm2) and a PPAC foil thickness preferably less than 1 μm. We also comment on the usability of Geant4 for simulation studies of neutron reactions in this energy range.
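    The 20 MeV seam problem described above can be sketched numerically. The cross-section functions below are invented toy curves, not Geant4 data; the point is the pattern of rescaling the model-based branch so that the two sources agree at the boundary.

```python
# Toy sketch (not Geant4 code): one way to make two cross-section sources
# consistent at a boundary energy is to rescale the model-based branch so it
# matches the data-driven value at the seam. Both functions are invented.
def sigma_data(E):      # data-driven branch, valid below 20 MeV (toy, barns)
    return 2.0 / (1.0 + E / 10.0)

def sigma_model(E):     # semi-empirical branch, valid above 20 MeV (toy)
    return 1.5 / (1.0 + E / 12.0)

E_SEAM = 20.0
scale = sigma_data(E_SEAM) / sigma_model(E_SEAM)   # continuity factor

def sigma(E):
    """Cross section that is continuous across the 20 MeV seam."""
    return sigma_data(E) if E <= E_SEAM else scale * sigma_model(E)
```

Without the rescaling, the two branches here would disagree by roughly 15% at the seam, the kind of discontinuity the customised Geant4 classes are meant to avoid.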

  16. Multi-Grid detector for neutron spectroscopy: results obtained on time-of-flight spectrometer CNCS

    NASA Astrophysics Data System (ADS)

    Anastasopoulos, M.; Bebb, R.; Berry, K.; Birch, J.; Bryś, T.; Buffet, J.-C.; Clergeau, J.-F.; Deen, P. P.; Ehlers, G.; van Esch, P.; Everett, S. M.; Guerard, B.; Hall-Wilton, R.; Herwig, K.; Hultman, L.; Höglund, C.; Iruretagoiena, I.; Issa, F.; Jensen, J.; Khaplanov, A.; Kirstein, O.; Lopez Higuera, I.; Piscitelli, F.; Robinson, L.; Schmidt, S.; Stefanescu, I.

    2017-04-01

    The Multi-Grid detector technology has evolved from the proof-of-principle and characterisation stages. Here we report on the performance of the Multi-Grid detector, the MG.CNCS prototype, which has been installed and tested at the Cold Neutron Chopper Spectrometer, CNCS at SNS. This has allowed a side-by-side comparison to the performance of 3He detectors on an operational instrument. The demonstrator has an active area of 0.2 m2. It is specifically tailored to the specifications of CNCS. The detector was installed in June 2016 and has operated since then, collecting neutron scattering data in parallel to the 3He detectors of CNCS. In this paper, we present a comprehensive analysis of these data, in particular on instrument energy resolution, rate capability, background and relative efficiency. Stability, gamma-ray and fast neutron sensitivity have also been investigated. The effect of scattering in the detector components has been measured and provides input for comparison with Monte Carlo simulations. All data are presented in comparison to those measured simultaneously by the 3He detectors, showing that all features recorded by one detector are also recorded by the other. The energy resolution matches closely. We find that the Multi-Grid is able to match the data collected by 3He, and see an indication of a considerable advantage in the count rate capability. Based on these results, we are confident that the Multi-Grid detector will be capable of producing high quality scientific data on chopper spectrometers utilising the unprecedented neutron flux of the ESS.

  17. Neutron organ dose and the influence of adipose tissue

    NASA Astrophysics Data System (ADS)

    Simpkins, Robert Wayne

    Neutron fluence to dose conversion coefficients have been assessed considering the influences of human adipose tissue. Monte Carlo code MCNP4C was used to simulate broad parallel beams of monoenergetic neutrons ranging in energy from thermal to 10 MeV. Simulated irradiations were conducted for standard irradiation geometries. The targets were gender-specific mathematical anthropomorphic phantoms modified to approximate human adipose tissue distributions. Dosimetric analysis compared adipose tissue influence against reference anthropomorphic phantom characteristics. Adipose Male and Post-Menopausal Female Phantoms were derived by introducing interstitial adipose tissue to account for 22 and 27 kg of additional body mass, respectively, each demonstrating a Body Mass Index (BMI) of 30. An Adipose Female Phantom was derived by introducing specific subcutaneous adipose tissue accounting for 15 kg of additional body mass, demonstrating a BMI of 26. Neutron dose was attenuated in the superficial tissues, giving rise to secondary photons which dominated the effective dose for incident energies less than 100 keV. The adipose tissue impact on the effective dose was a 25% reduction at the anterior-posterior incidence, ranging to a 10% increase at the lateral incidences. Organ dose impacts were more distinctive; symmetrically situated organs demonstrated a 15% reduction at the anterior-posterior incidence, ranging to a 2% increase at the lateral incidences. Abdominal or asymmetrically situated organs demonstrated a 50% reduction at the anterior-posterior incidence, ranging to a 25% increase at the lateral incidences.

  18. Bubble-detector measurements of neutron radiation in the international space station: ISS-34 to ISS-37.

    PubMed

    Smith, M B; Khulapko, S; Andrews, H R; Arkhangelsky, V; Ing, H; Koslowksy, M R; Lewis, B J; Machrafi, R; Nikolaev, I; Shurshakov, V

    2016-02-01

    Bubble detectors have been used to characterise the neutron dose and energy spectrum in several modules of the International Space Station (ISS) as part of an ongoing radiation survey. A series of experiments was performed during the ISS-34, ISS-35, ISS-36 and ISS-37 missions between December 2012 and October 2013. The Radi-N2 experiment, a repeat of the 2009 Radi-N investigation, included measurements in four modules of the US orbital segment: Columbus, the Japanese experiment module, the US laboratory and Node 2. The Radi-N2 dose and spectral measurements are not significantly different from the Radi-N results collected in the same ISS locations, despite the large difference in solar activity between 2009 and 2013. Parallel experiments using a second set of detectors in the Russian segment of the ISS included the first characterisation of the neutron spectrum inside the tissue-equivalent Matroshka-R phantom. These data suggest that the dose inside the phantom is ∼70% of the dose at its surface, while the spectrum inside the phantom contains a larger fraction of high-energy neutrons than the spectrum outside the phantom. The phantom results are supported by Monte Carlo simulations that provide good agreement with the empirical data. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data

    DOE PAGES

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...

    2017-02-23

    Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
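    The hybrid deterministic/stochastic strategy can be sketched in miniature. The code below is not ParFit: it fits a single invented torsion-barrier parameter to synthetic "ab initio" energies with a tiny genetic algorithm, then refines the result deterministically by golden-section search.

```python
# Minimal sketch of a hybrid stochastic + deterministic parameter fit
# (not ParFit itself; target function and all settings are assumptions).
import math, random

random.seed(1)
angles = [i * math.pi / 8 for i in range(16)]
V_true = 2.5                                     # "ab initio" torsion barrier
target = [0.5 * V_true * (1 + math.cos(3 * a)) for a in angles]

def rmse(V):
    return math.sqrt(sum((0.5 * V * (1 + math.cos(3 * a)) - t) ** 2
                         for a, t in zip(angles, target)) / len(angles))

# Stochastic stage: tiny genetic algorithm on the single parameter V.
pop = [random.uniform(0.0, 10.0) for _ in range(20)]
for _ in range(30):
    pop.sort(key=rmse)
    parents = pop[:10]                           # elitist selection
    children = [0.5 * (random.choice(parents) + random.choice(parents))
                + random.gauss(0.0, 0.1) for _ in range(10)]
    pop = parents + children
V_ga = min(pop, key=rmse)

# Deterministic stage: golden-section refinement around the GA result.
lo, hi = max(0.0, V_ga - 1.0), V_ga + 1.0
for _ in range(60):
    m1, m2 = lo + 0.382 * (hi - lo), lo + 0.618 * (hi - lo)
    if rmse(m1) < rmse(m2):
        hi = m2
    else:
        lo = m1
V_fit = 0.5 * (lo + hi)
```

The stochastic stage explores the parameter space globally; the deterministic stage then polishes the answer, which is the division of labour the abstract's "hybrid" wording suggests.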

  1. Parallel mapping of optical near-field interactions by molecular motor-driven quantum dots.

    PubMed

    Groß, Heiko; Heil, Hannah S; Ehrig, Jens; Schwarz, Friedrich W; Hecht, Bert; Diez, Stefan

    2018-04-30

    In the vicinity of metallic nanostructures, absorption and emission rates of optical emitters can be modulated by several orders of magnitude [1,2]. Control of such near-field light-matter interaction is essential for applications in biosensing [3], light harvesting [4] and quantum communication [5,6] and requires precise mapping of optical near-field interactions, for which single-emitter probes are promising candidates [7-11]. However, currently available techniques are limited in terms of throughput, resolution and/or non-invasiveness. Here, we present an approach for the parallel mapping of optical near-field interactions with a resolution of <5 nm using surface-bound motor proteins to transport microtubules carrying single emitters (quantum dots). The deterministic motion of the quantum dots allows for the interpolation of their tracked positions, resulting in an increased spatial resolution and a suppression of localization artefacts. We apply this method to map the near-field distribution of nanoslits engraved into gold layers and find an excellent agreement with finite-difference time-domain simulations. Our technique can be readily applied to a variety of surfaces for scalable, nanometre-resolved and artefact-free near-field mapping using conventional wide-field microscopes.

  2. Morphological Diversity and the Roles of Contingency, Chance and Determinism in African Cichlid Radiations

    PubMed Central

    Young, Kyle A.; Snoeks, Jos; Seehausen, Ole

    2009-01-01

    Background Deterministic evolution, phylogenetic contingency and evolutionary chance each can influence patterns of morphological diversification during adaptive radiation. In comparative studies of replicate radiations, convergence in a common morphospace implicates determinism, whereas non-convergence suggests the importance of contingency or chance. Methodology/Principal Findings The endemic cichlid fish assemblages of the three African great lakes have evolved similar sets of ecomorphs but show evidence of non-convergence when compared in a common morphospace, suggesting the importance of contingency and/or chance. We then analyzed the morphological diversity of each assemblage independently and compared their axes of diversification in the unconstrained global morphospace. We find that despite differences in phylogenetic composition, invasion history, and ecological setting, the three assemblages are diversifying along parallel axes through morphospace and have nearly identical variance-covariance structures among morphological elements. Conclusions/Significance By demonstrating that replicate adaptive radiations are diverging along parallel axes, we have shown that non-convergence in the common morphospace is associated with convergence in the global morphospace. Applying these complementary analyses to future comparative studies will improve our understanding of the relationship between morphological convergence and non-convergence, and the roles of contingency, chance and determinism in driving morphological diversification. PMID:19270732
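    The axis comparison described above can be sketched with synthetic data: compute each assemblage's morphological covariance matrix and compare the leading eigenvectors. The two simulated "lakes" below are assumptions for illustration, not the cichlid measurements.

```python
# Sketch of the parallel-axes test: are the leading eigenvectors of two
# groups' covariance matrices (their main axes of diversification) aligned?
import numpy as np

rng = np.random.default_rng(42)
axis = np.array([0.8, 0.6])                  # shared diversification axis (toy)
lake_a = rng.normal(size=(200, 1)) * axis + rng.normal(0, 0.1, size=(200, 2))
lake_b = rng.normal(size=(200, 1)) * axis + rng.normal(0, 0.1, size=(200, 2))

def leading_axis(X):
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]          # eigenvector of largest eigenvalue

v_a, v_b = leading_axis(lake_a), leading_axis(lake_b)
# Angle between the two leading axes (the sign of an eigenvector is arbitrary).
cos_angle = abs(float(v_a @ v_b))
```

A cosine near 1 indicates that the two groups diversify along parallel axes even if their occupied regions of morphospace do not overlap.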

  3. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interests (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
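    Deterministic tracking of the kind compared above can be sketched in a few lines: starting from a seed, step repeatedly along the local principal diffusion direction. The direction field below is a synthetic stand-in for real tensor data, and the step size and stopping rule are assumptions.

```python
# Minimal sketch of deterministic streamline tracking (not the clinical
# pipeline): Euler-step along the principal diffusion direction from a seed.
import numpy as np

def principal_direction(p):
    """Toy direction field: fibers curve gently in the x-y plane."""
    d = np.array([1.0, 0.2 * np.sin(0.1 * p[0]), 0.0])
    return d / np.linalg.norm(d)

def track(seed, step=0.5, n_steps=200):
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        pts.append(pts[-1] + step * principal_direction(pts[-1]))
    return np.array(pts)

streamline = track([0.0, 0.0, 0.0])
```

Probabilistic tracking replaces the single direction per voxel with a sample from an orientation distribution, which is why it detects more tracts at the cost of much longer processing, as the study reports.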

  4. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes.

    PubMed

    Aghara, S K; Sriprisan, S I; Singleterry, R C; Sato, T

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the impact of SPE spectra transporting through a 10 or 20 g/cm(2) Al shield followed by a 30 g/cm(2) water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes showing closer agreement with each other than with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E<100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to OLTARIS results. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.
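    The mean-square-difference comparison mentioned above amounts to a simple metric over depth-fluence curves. The numbers below are invented for illustration, arranged so the two Monte Carlo curves sit closer to each other than to the deterministic one, as the abstract reports.

```python
# Sketch of the agreement metric: mean-square difference between
# depth-fluence curves from different codes (all values are invented).
depths       = [0, 5, 10, 15, 20, 25, 30]    # g/cm2 of water, toy grid
flux_mcnpx   = [1.00, 0.80, 0.62, 0.47, 0.36, 0.27, 0.20]
flux_phits   = [1.00, 0.79, 0.61, 0.48, 0.35, 0.28, 0.21]
flux_oltaris = [1.00, 0.74, 0.55, 0.41, 0.30, 0.22, 0.16]

def msd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# The two Monte Carlo codes agree more closely with each other than either
# does with the deterministic OLTARIS result in this toy data.
msd_mc  = msd(flux_mcnpx, flux_phits)
msd_det = msd(flux_mcnpx, flux_oltaris)
```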

  5. Processing and validation of JEFF-3.1.1 and ENDF/B-VII.0 group-wise cross section libraries for shielding calculations

    NASA Astrophysics Data System (ADS)

    Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.

    2013-03-01

    This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, respectively, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI, through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results on the criticality safety benchmark experiments for the fine-group libraries and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments are reported in synthesis.
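    The problem-dependent collapsing step described above is, at its core, a flux-weighted average of fine-group cross sections into broad groups. A minimal sketch with invented fine-group data:

```python
# Flux-weighted group collapse: broad-group cross sections are averages of
# fine-group values weighted by a problem-dependent flux spectrum.
# The fine-group data here are invented for illustration.
fine_sigma = [4.2, 3.8, 3.1, 2.6, 2.2, 1.9]   # fine-group cross sections (b)
fine_flux  = [1.0, 2.0, 4.0, 3.0, 2.0, 1.0]   # weighting flux spectrum
broad_map  = [[0, 1, 2], [3, 4, 5]]           # fine groups in each broad group

broad_sigma = []
for groups in broad_map:
    num = sum(fine_sigma[g] * fine_flux[g] for g in groups)
    den = sum(fine_flux[g] for g in groups)
    broad_sigma.append(num / den)             # sigma_G = sum(phi*sigma)/sum(phi)
```

Because the weighting flux is problem-dependent, a broad-group library collapsed with an LWR-like spectrum is tailored to LWR shielding work, which is why the BUGLE-structure libraries are derived from the fine-group ones rather than from scratch.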

  6. Teaching: the Wave Mechanics of McLeods' Stringy Electron, Explicit Nucleons, and Through-the-Earth Projections of Constellations' Stick Figures

    NASA Astrophysics Data System (ADS)

    McLeod, Roger David; McLeod, David Matthew

    2012-02-01

    This shows how Hooke's law, for electron, proton and neutron, 2D and 3D, strings, builds electromagnetic string-waves, extending, and pleasing, Schrödinger. These are composed of spirally linked, parallel, north-pole oriented, neutrino and antineutrino strings, stable by magnetic repulsions. Their Dumbo Proton is antineutrino-scissor cut, and compressed in the vicinity of a neutron star, where electrostatic marriage occurs with a neutrino-scissor cut, and compressed, electron, so a Mickey Neutron emerges. Strings predict: electron charge is - 1/3 e, Dumbo P is 25 % longer than Mickey N, and Hooke says relaxing springs fuel three, separate, non-eternal, inflations, after Big Bangs. Gravity is strings, longitudinally linked. Einstein says the Hermann grid's black diagonals prove human vision reads its information from algebraically-signed electromagnetic field distributions, (diffraction) patterns, easily known by ray-tracing, not requiring difficult Spatial Fourier Transformation. High-schoolers understand its application to Wave Mechanics, agreeing that positive-numbered probabilities do not enter, to possibly displease God. Detected stick-figure forms of constellations: like Phoenix, Leo, Canis Major, and especially Orion, fool some observers into false beliefs in things like UFHumanoids, or Kokopelli, Pele and Pamola!

  7. Quantifying Listeria monocytogenes prevalence and concentration in minced pork meat and estimating performance of three culture media from presence/absence microbiological testing using a deterministic and stochastic approach.

    PubMed

    Andritsos, Nikolaos D; Mataragas, Marios; Paramithiotis, Spiros; Drosinos, Eleftherios H

    2013-12-01

    Listeria monocytogenes poses a serious threat to public health, and the majority of cases of human listeriosis are associated with contaminated food. Reliable microbiological testing is needed for effective pathogen control by food industry and competent authorities. The aims of this work were to estimate the prevalence and concentration of L. monocytogenes in minced pork meat by the application of a Bayesian modeling approach, and also to determine the performance of three culture media commonly used for detecting L. monocytogenes in foods from a deterministic and stochastic perspective. Samples (n = 100) collected from local markets were tested for L. monocytogenes using in parallel the PALCAM, ALOA and RAPID'L.mono selective media according to ISO 11290-1:1996 and 11290-2:1998 methods. Presence of the pathogen was confirmed by conducting biochemical and molecular tests. Independent experiments (n = 10) for model validation purposes were performed. Performance attributes were calculated from the presence-absence microbiological test results by combining the results obtained from the culture media and confirmative tests. Dirichlet distribution, the multivariate expression of a Beta distribution, was used to analyze the performance data from a stochastic perspective. No L. monocytogenes was enumerated by direct-plating (<10 CFU/g), though the pathogen was detected in 22% of the samples. L. monocytogenes concentration was estimated at 14-17 CFU/kg. Validation showed good agreement between observed and predicted prevalence (error = -2.17%). The results showed that all media were best at ruling in L. monocytogenes presence than ruling it out. Sensitivity and specificity varied depending on the culture-dependent method. None of the culture media was perfect in detecting L. monocytogenes in minced pork meat alone. The use of at least two culture media in parallel enhanced the efficiency of L. monocytogenes detection. Bayesian modeling may reduce the time needed to draw conclusions regarding L. monocytogenes presence and the uncertainty of the results obtained. Furthermore, the problem of observing zero counts may be overcome by applying Bayesian analysis, making the determination of a test performance feasible. Copyright © 2013 Elsevier Ltd. All rights reserved.
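    The conjugate Bayesian update underlying such an analysis can be shown in a few lines. Assuming a uniform Beta(1, 1) prior on prevalence (an assumption for this sketch) and the 22 positives in 100 samples reported above, the posterior is the conjugate Beta(23, 79):

```python
# Conjugate Beta-Binomial sketch of the prevalence estimate: with a uniform
# Beta(1, 1) prior and k positives in n samples, the posterior is
# Beta(1 + k, 1 + n - k). Prior choice is an assumption for illustration.
k, n = 22, 100                     # positives, total samples (from abstract)
a, b = 1 + k, 1 + (n - k)          # Beta posterior parameters: Beta(23, 79)

post_mean = a / (a + b)            # posterior mean prevalence
post_var = a * b / ((a + b) ** 2 * (a + b + 1))
```

The posterior mean here is about 0.226, and the nonzero posterior mass below any detection limit is what lets a Bayesian treatment handle the zero-count enumeration problem noted in the abstract.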

  8. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  9. Target assembly

    DOEpatents

    Lewis, Richard A.

    1980-01-01

    A target for a proton beam which is capable of generating neutrons for absorption in a breeding blanket includes a plurality of solid pins formed of a neutron emissive target material disposed parallel to the path of the beam and which are arranged axially in a plurality of layers so that pins in each layer are offset with respect to pins in all other layers, enough layers being used so that each proton in the beam will strike at least one pin with means being provided to cool the pins. For a 300 mA, 1 GeV beam (300 MW), stainless steel pins, 12 inches long and 0.23 inches in diameter are arranged in triangular array in six layers with one sixth of the pins in each layer, the number of pins being such that the entire cross sectional area of the beam is covered by the pins with minimum overlap of pins.
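    The geometric claim that every proton strikes at least one pin can be checked in one transverse dimension. The pitch below is an assumed limiting value (six pin diameters); the patent text itself does not state a pitch.

```python
# Sketch of the coverage claim: with six pin layers mutually offset by a
# sixth of the in-layer pitch, every straight-through path is blocked when
# the pin diameter covers at least one sixth of the pitch (assumed here).
d = 0.23                                  # pin diameter (inches, from patent)
p = 6 * d                                 # assumed in-layer pitch: limiting case
offsets = [k * p / 6 for k in range(6)]   # transverse offset of each layer

def blocked(x):
    """True if a path at transverse position x hits a pin in some layer."""
    for off in offsets:
        # distance from x to the nearest pin centre in this layer
        r = abs((x - off + p / 2) % p - p / 2)
        if r <= d / 2:
            return True
    return False

# Sample transverse positions across one full pitch period.
coverage = all(blocked(i * p / 1000) for i in range(1000))
```

With any tighter pitch the layers overlap, which matches the patent's "minimum overlap" phrasing for full beam coverage.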

  10. Beam ion acceleration by ICRH in JET discharges

    NASA Astrophysics Data System (ADS)

    Budny, R. V.; Gorelenkova, M.; Bertelli, N.; JET Collaboration

    2015-11-01

    The ion Monte-Carlo orbit integrator NUBEAM, used in TRANSP, has been enhanced to include an "RF-kick" operator to simulate the interaction of RF fields and fast ions. The RF quasi-linear operator (localized in space) uses a second R-Z orbit integrator. We apply this to the analysis of recent JET discharges using ICRH with the ITER-like first wall. As an example, for a high-performance hybrid discharge in which standard TRANSP analysis simulated the DD neutron emission rate below measurements, re-analysis using the RF-kick operator results in increased beam parallel and perpendicular energy densities (≈40% and 15%, respectively) and increased beam-thermal neutron emission (≈35%), bringing the total rate closer to the measurement. Checks of the numerics, comparisons with measurements, and ITER implications will be presented. Supported in part by the US DoE contract DE-AC02-09CH11466 and by EUROfusion No 633053.

  11. Chemical detection system and related methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caffrey, Augustine J.; Chichester, David L.; Egger, Ann E.

    2017-06-27

    A chemical detection system includes a frame, an emitter coupled to the frame, and a detector coupled to the frame proximate the emitter. The system also includes a shielding system coupled to the frame and positioned at least partially between the emitter and the detector, wherein the frame positions a sensing surface of the detector in a direction substantially parallel to a plane extending along a front portion of the frame. A method of analyzing composition of a suspect object includes directing neutrons at the object, detecting gamma rays emitted from the object, and communicating spectrometer information regarding the gamma rays. The method also includes presenting a GUI to a user with a dynamic status of an ongoing neutron spectroscopy process. The dynamic status includes a present confidence for a plurality of compounds being present in the suspect object responsive to changes in the spectrometer information during the ongoing process.

  12. NEUTRONIC REACTOR FUEL ELEMENT AND CORE SYSTEM

    DOEpatents

    Moore, W.T.

    1958-09-01

    This patent relates to neutronic reactors and in particular to an improved fuel element and a novel reactor core system for facilitating removal of contaminating fission products, as they are formed, from association with the fissionable fuel, so as to mitigate the interfering effects of such fission products during reactor operation. The fuel elements are comprised of tubular members impervious to fluid and containing on their interior surfaces a thin layer of fissionable material providing a central void. The core structure is comprised of a plurality of the tubular fuel elements arranged in parallel and a closed manifold connected to their ends. In the reactor the core structure is dispersed in a water moderator and coolant within a pressure vessel, and a means connected to said manifold is provided for withdrawing and disposing of mobile fission product contamination from the interior of the fuel tubes and manifold.

  13. Polarized Neutron Diffraction to Probe Local Magnetic Anisotropy of a Low-Spin Fe(III) Complex.

    PubMed

    Ridier, Karl; Mondal, Abhishake; Boilleau, Corentin; Cador, Olivier; Gillon, Béatrice; Chaboussant, Grégory; Le Guennic, Boris; Costuas, Karine; Lescouëzec, Rodrigue

    2016-03-14

    We have determined by polarized neutron diffraction (PND) the low-temperature molecular magnetic susceptibility tensor of the anisotropic low-spin complex PPh4 [Fe(III) (Tp)(CN)3]⋅H2O. We found the existence of a pronounced molecular easy magnetization axis, almost parallel to the C3 pseudo-axis of the molecule, which also corresponds to a trigonal elongation direction of the octahedral coordination sphere of the Fe(III) ion. The PND results are coherent with electron paramagnetic resonance (EPR) spectroscopy, magnetometry, and ab initio investigations. Through this particular example, we demonstrate the capabilities of PND to provide a unique, direct, and straightforward picture of the magnetic anisotropy and susceptibility tensors, offering a clear-cut way to establish magneto-structural correlations in paramagnetic molecular complexes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Benchmark solution for the Spencer-Lewis equation of electron transport theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.

    As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations, primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, Sn methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark to the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.

  15. Air shower simulation for WASAVIES: warning system for aviation exposure to solar energetic particles.

    PubMed

    Sato, T; Kataoka, R; Yasuda, H; Yashiro, S; Kuwabara, T; Shiota, D; Kubo, Y

    2014-10-01

WASAVIES, a warning system for aviation exposure to solar energetic particles (SEPs), is under development by collaboration between several institutes in Japan and the USA. It is designed to deterministically forecast the SEP fluxes incident on the atmosphere within 6 h after flare onset using the latest space weather research. To immediately estimate the aircrew doses from the obtained SEP fluxes, the response functions of the particle fluxes generated by the incidence of monoenergetic protons into the atmosphere were developed by performing air shower simulations using the Particle and Heavy Ion Transport code system. The accuracy of the simulation was verified by calculating the increase in count rates of a neutron monitor during a ground-level enhancement, combining the response function with the SEP fluxes measured by the PAMELA spectrometer. The response function will be implemented in WASAVIES and used to protect aircrews from additional SEP exposure. © The Author 2013. Published by Oxford University Press. All rights reserved.
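The dose-estimation step described above is, at its core, a fold of the measured SEP spectrum with precomputed monoenergetic response functions. A minimal sketch of that folding, with entirely made-up numbers standing in for the PHITS-derived responses and the measured spectrum:

```python
import numpy as np

# Illustrative folding of a SEP proton spectrum with precomputed dose
# response functions (the WASAVIES scheme in miniature). All numbers here
# are invented placeholders, not actual WASAVIES or PHITS data.
energies = np.array([100.0, 300.0, 1000.0, 3000.0])   # proton energy grid, MeV
sep_flux = np.array([50.0, 8.0, 0.6, 0.02])           # hypothetical SEP spectrum,
                                                      # protons / (cm^2 s MeV)
# hypothetical response: dose rate per unit monoenergetic proton flux
response = np.array([1e-4, 8e-4, 4e-3, 1.2e-2])       # (uSv/h) / (proton / cm^2 s)

# fold spectrum with response over the energy grid (trapezoidal quadrature)
integrand = sep_flux * response
dose_rate = float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(energies)))
print(f"estimated aircrew dose rate: {dose_rate:.2f} uSv/h")
```

In the real system the responses also depend on altitude, cutoff rigidity and particle type; this sketch only shows the spectrum-times-response quadrature.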

  16. 1393 Ring Bus at JPL: Description and Status

    NASA Technical Reports Server (NTRS)

    Wysocky, Terry R.

    2007-01-01

Completed the Ring Bus IC V&V phase. The Ring Bus test plan was completed for the SIM Project and is applicable to other projects. Implemented an avionics bus based upon the IEEE 1393 standard, an excellent starting point for a general-purpose high-speed spacecraft bus, designed to meet SIM requirements for real-time deterministic distributed systems, control system requirements, and fault detection and recovery. Other JPL projects are considering implementation. Work on the flight software Ring Bus driver module began in 2006 and continues, with participation in the standard revision. SIM will search for Earth-like planets orbiting nearby stars and measure the masses and orbits of the planets it finds; survey 2000 nearby stars for planetary systems to learn whether our Solar System is unusual or typical; make a new catalog of star positions 100 times more accurate than current measurements; learn how our galaxy formed and will evolve by studying the dynamics of its stars; and critically test models of exactly how stars shine, including exotic objects like black holes, neutron stars and white dwarfs.

  17. Optimization of a Boiling Water Reactor Loading Pattern Using an Improved Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2003-08-15

A search method based on genetic algorithms (GAs) using deterministic operators has been developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). The search method uses improved GA operators, that is, crossover, mutation, and selection. The handling of the encoding technique and constraint conditions is designed so that the GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are used to improve the search speed. LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and three-dimensional-dependent constraints have always necessitated the use of three-dimensional core simulators for BWRs, so an optimization method is required for computational efficiency. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant applying the Haling technique. In test calculations, candidates that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time.
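The GA loop in the abstract (crossover, mutation, selection, plus elitism) can be sketched on a toy stand-in problem. The "core model" below is a one-line peaking-factor surrogate, not a 3D neutronic/thermal-hydraulic simulator, and every parameter is invented for illustration:

```python
import random

random.seed(1)

# Toy surrogate (not a core simulator): pick a fuel-enrichment index (0-3)
# for each of 10 core positions so that a crude "power profile" (enrichment
# multiplier times a center-peaked importance factor) is as flat as
# possible, i.e. the peaking factor is minimized.
N_POS, N_TYPES, POP, GENS = 10, 4, 40, 60
IMPORTANCE = [1.0, 1.2, 1.5, 1.8, 2.0, 2.0, 1.8, 1.5, 1.2, 1.0]

def fitness(pattern):
    power = [(t + 1) * w for t, w in zip(pattern, IMPORTANCE)]
    peaking = max(power) / (sum(power) / N_POS)
    return -peaking                       # flatter profile = higher fitness

def crossover(a, b):                      # one-point crossover
    cut = random.randrange(1, N_POS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(N_TYPES) if random.random() < rate else g for g in p]

pop = [[random.randrange(N_TYPES) for _ in range(N_POS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]                       # elitism: carry the best over intact
    parents = pop[:20]                    # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best pattern:", best, " peaking factor:", round(-fitness(best), 3))
```

In the actual method each fitness evaluation is a full 3D core simulation, which is exactly why the paper's encoding and operator design aim to keep the number of evaluations small.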

  18. 2009.1 Revision of the Evaluated Nuclear Data Library (ENDL2009.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Thompson, I. J.; Beck, B.; Descalle, M. A.

LLNL’s Computational Nuclear Data and Theory Group have created a 2009.1 revised release of the Evaluated Nuclear Data Library (ENDL2009.1). This library is designed to support LLNL’s current and future nuclear data needs and will be employed in nuclear reactor, nuclear security and stockpile stewardship simulations with ASC codes. The ENDL2009 database was the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. It was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE/Office of Science’s US Nuclear Data Program. This document lists the revisions and fixes made in a new release called ENDL2009.1, by comparing with the existing data in the original release, which is now called ENDL2009.0. These changes are made in conjunction with the revisions for ENDL2011.1, so that both the .1 releases are as free as possible of known defects.

  19. 2009.3 Revision of the Evaluated Nuclear Data Library (ENDL2009.3)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, I. J.; Beck, B.; Descalle, M. A.

LLNL's Computational Nuclear Data and Theory Group have created a 2009.3 revised release of the Evaluated Nuclear Data Library (ENDL2009.3). This library is designed to support LLNL's current and future nuclear data needs and will be employed in nuclear reactor, nuclear security and stockpile stewardship simulations with ASC codes. The ENDL2009 database was the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. It was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE/Office of Science's US Nuclear Data Program. This document lists the revisions and fixes made in a new release called ENDL2009.3, by comparing with the existing data in the previous release, ENDL2009.2. These changes are made in conjunction with the revisions for ENDL2011.3, so that both the .3 releases are as free as possible of known defects.

  20. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W

    2013-01-01

The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
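The core idea of biasing Monte Carlo transport toward the detector can be illustrated with the simplest textbook variance-reduction device, the exponential transform, on a toy deep-penetration problem. This is a stand-in for the adjoint-informed hybrid methods evaluated in the paper, not those methods themselves:

```python
import math
import random

random.seed(2)

# Toy deep-penetration problem: mono-directional particles in a purely
# absorbing slab 10 mean free paths thick. Exact transmission = exp(-10).
# Analog MC almost never scores; an importance-sampled ("exponential
# transform") flight length scores on ~37% of histories with a small,
# constant weight.
SIGMA, L, N = 1.0, 10.0, 20000
exact = math.exp(-SIGMA * L)

def analog_estimate():
    # score 1 whenever the sampled flight length exceeds the slab thickness
    hits = sum(1 for _ in range(N) if random.expovariate(SIGMA) > L)
    return hits / N

def biased_estimate(sigma_star=0.1):
    # sample from the stretched density sigma* exp(-sigma* s); a particle
    # surviving past L carries weight exp(-SIGMA*L) / exp(-sigma_star*L)
    weight = math.exp(-(SIGMA - sigma_star) * L)
    hits = sum(1 for _ in range(N) if random.expovariate(sigma_star) > L)
    return hits / N * weight

est_analog = analog_estimate()
est_biased = biased_estimate()
print(f"exact: {exact:.3e}  analog: {est_analog:.3e}  biased: {est_biased:.3e}")
```

With 20 000 histories the analog tally expects fewer than one count, while the biased estimate lands within a few percent of exp(-10); the hybrid codes in the paper obtain an analogous effect by deriving the biasing parameters from a deterministic adjoint calculation.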

  1. DENSITY CONTROL IN A REACTOR

    DOEpatents

    Marshall, J. Jr.

    1961-10-24

    A reactor is described in which natural-uranium bodies are located in parallel channels which extend through the graphite mass in a regular lattice. The graphite mass has additional channels that are out of the lattice and contain no uranium. These additional channels decrease in number per unit volume of graphite from the center of the reactor to the exterior and have the effect of reducing the density of the graphite more at the center than at the exterior, thereby spreading neutron activity throughout the reactor. (AEC)

  2. VASP-4096: a very high performance programmable device for digital media processing applications

    NASA Astrophysics Data System (ADS)

    Krikelis, Argy

    2001-03-01

Over the past few years, technology drivers for microprocessors have changed significantly. Media data delivery and processing--such as telecommunications, networking, video processing, speech recognition and 3D graphics--is increasing in importance and will soon dominate the processing cycles consumed in computer-based systems. This paper presents the architecture of the VASP-4096 processor. VASP-4096 provides high media performance with low energy consumption by integrating associative SIMD parallel processing with embedded microprocessor technology. The major innovation in the VASP-4096 is the integration of thousands of processing units on a single chip, capable of supporting software-programmable high-performance mathematical functions as well as abstract data processing. In addition to 4096 processing units, VASP-4096 integrates on a single chip a RISC controller that is an implementation of the SPARC architecture, 128 Kbytes of data memory, and I/O interfaces. The SIMD processing in VASP-4096 implements the ASProCore architecture, a proprietary implementation of SIMD processing, and operates at 266 MHz with program instructions issued by the RISC controller. The device also integrates a 64-bit synchronous main memory interface operating at 133 MHz (double data rate) and a 64-bit 66 MHz PCI interface. Compared with other processor architectures that support media processing, VASP-4096 offers true performance scalability, support for deterministic and non-deterministic data processing on a single device, and software programmability that can be reused in future chip generations.

  3. A real time sorting algorithm to time sort any deterministic time disordered data stream

    NASA Astrophysics Data System (ADS)

    Saini, J.; Mandal, S.; Chakrabarti, A.; Chattopadhyay, S.

    2017-12-01

In new-generation high-intensity high-energy physics experiments, millions of free-streaming high-rate data sources are to be read out. Free-streaming data with associated time-stamps can only be controlled by thresholds, as there is no trigger information available for the readout. Therefore, these readouts are prone to collecting large amounts of noise and unwanted data. For this reason, these experiments can have an output data rate several orders of magnitude higher than the useful signal data rate. It is therefore necessary to perform online processing of the data to extract useful information from the full data set. Without trigger information, pre-processing of the free-streaming data can only be done through time-based correlation among the data set. Multiple data sources have different path delays and bandwidth utilizations, and therefore the unsorted merged data require significant computational effort for real-time sorting before analysis. The present work reports a new high-speed scalable data stream sorting algorithm with its architectural design, verified through Field Programmable Gate Array (FPGA) based hardware simulation. Realistic time-based simulated data, such as would be collected in a high-energy physics experiment, have been used to study the performance of the algorithm. The proposed algorithm uses parallel read-write blocks with added memory management and zero-suppression features to make it efficient for high-rate data streams. This algorithm is best suited for online data streams with deterministic time disorder on FPGA-like hardware.
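The essential property being exploited is that the disorder is deterministically bounded: a stream whose records never arrive more than a known interval late can be re-sorted online with a small priority queue and a watermark. A software sketch of that idea (the paper's design is parallel FPGA hardware; the data generator below is invented for illustration):

```python
import heapq
import random

def time_sort(stream, max_disorder):
    """Yield (timestamp, payload) records in time order, given that no record
    arrives after a record whose timestamp exceeds its own by more than
    max_disorder (the deterministic disorder bound of the merged stream)."""
    heap = []
    for ts, payload in stream:
        heapq.heappush(heap, (ts, payload))
        # anything at least max_disorder older than the current arrival can
        # no longer be preempted, so it is safe to emit
        while heap and heap[0][0] <= ts - max_disorder:
            yield heapq.heappop(heap)
    while heap:                      # drain once the stream ends
        yield heapq.heappop(heap)

# Emulate sources with different path delays merged onto one link: each
# record is displaced by a random delay of at most 8 time units.
random.seed(3)
records = [(t, f"hit{t}") for t in range(100)]
arrival = sorted(records, key=lambda r: r[0] + random.uniform(0, 8))
out = list(time_sort(arrival, max_disorder=8))
assert [ts for ts, _ in out] == list(range(100))
print("first records out:", out[:3])
```

Because the required buffer depth is fixed by the disorder bound rather than the stream length, the same structure maps naturally onto the bounded read-write memory blocks of an FPGA implementation.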

  4. Direct comparison of elastic incoherent neutron scattering experiments with molecular dynamics simulations of DMPC phase transitions.

    PubMed

    Aoun, Bachir; Pellegrini, Eric; Trapp, Marcus; Natali, Francesca; Cantù, Laura; Brocca, Paola; Gerelli, Yuri; Demé, Bruno; Marek Koza, Michael; Johnson, Mark; Peters, Judith

    2016-04-01

Neutron scattering techniques have been employed to investigate 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) membranes in the form of multilamellar vesicles (MLVs) and deposited, stacked multilamellar bilayers (MLBs), covering transitions from the gel to the liquid phase. Neutron diffraction was used to characterise the samples in terms of transition temperatures, whereas elastic incoherent neutron scattering (EINS) demonstrates that the dynamics on the sub-macromolecular length-scale and pico- to nano-second time-scale are correlated with the structural transitions through a discontinuity in the observed elastic intensities and the derived mean square displacements. Molecular dynamics simulations have been performed in parallel, focussing on the length-, time- and temperature-scales of the neutron experiments. They correctly reproduce the structural features of the main gel-liquid phase transition. Particular emphasis is placed on the dynamical amplitudes derived from experiment and simulations. Two methods are used to analyse the experimental data and mean square displacements. They agree within a factor of 2 irrespective of the probed time-scale, i.e. the instrument utilized. Mean square displacements computed from simulations show a comparable level of agreement with the experimental values, albeit the best match with the two methods varies for the two instruments. Consequently, experiments and simulations together give a consistent picture of the structural and dynamical aspects of the main lipid transition and provide a basis for future theoretical modelling of dynamics and phase behaviour in membranes. The need for more detailed analytical models is pointed out by the remaining variation of the dynamical amplitudes derived in two different ways from experiments on the one hand and simulations on the other.
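The mean square displacement (MSD) extraction underlying such EINS analyses is commonly done with a Gaussian approximation, I_el(Q) ≈ I0·exp(−⟨u²⟩Q²/3); prefactor conventions differ between analyses, which is one source of the factor-of-2 spread mentioned above. A sketch on synthetic data (the convention and all numbers are illustrative, not from these experiments):

```python
import numpy as np

rng = np.random.default_rng(7)

# Gaussian approximation for the elastic incoherent intensity:
#   I_el(Q) ~ I0 * exp(-<u^2> * Q^2 / 3)
# (one common convention; the prefactor differs between analyses).
q = np.linspace(0.5, 1.8, 8)             # momentum transfer, A^-1
u2_true = 0.9                             # illustrative MSD, A^2
intensity = np.exp(-u2_true * q**2 / 3) * rng.normal(1.0, 0.01, q.size)

# the MSD is read off the slope of ln(I) versus Q^2
slope, _ = np.polyfit(q**2, np.log(intensity), 1)
u2_fit = -3.0 * slope
print(f"fitted MSD: {u2_fit:.2f} A^2 (true value {u2_true})")
```

The same fit applied to simulation trajectories (MSD computed directly from atomic positions) is what allows the experiment-simulation comparison of dynamical amplitudes.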

  5. Dose distribution of secondary radiation in a water phantom for a proton pencil beam-EURADOS WG9 intercomparison exercise.

    PubMed

    Stolarczyk, L; Trinkl, S; Romero-Expósito, M; Mojżeszek, N; Ambrozova, I; Domingo, C; Davídková, M; Farah, J; Kłodowska, M; Knežević, Ž; Liszka, M; Majer, M; Miljanić, S; Ploc, O; Schwarz, M; Harrison, R M; Olko, P

    2018-04-19

Systematic 3D mapping of out-of-field doses induced by a therapeutic proton pencil scanning beam in a 300 × 300 × 600 mm3 water phantom was performed using a set of thermoluminescence detectors (TLDs): MTS-7 (7LiF:Mg,Ti), MTS-6 (6LiF:Mg,Ti), MTS-N (natLiF:Mg,Ti) and TLD-700 (7LiF:Mg,Ti), radiophotoluminescent (RPL) detectors GD-352M and GD-302M, and polyallyldiglycol carbonate (PADC)-based (C12H18O7) track-etched detectors. Neutron and gamma-ray doses, as well as linear energy transfer distributions, were experimentally determined at 200 points within the phantom. In parallel, the Geant4 Monte Carlo code was applied to calculate neutron and gamma radiation spectra at the position of each detector. For the cubic proton target volume of 100 × 100 × 100 mm3 (spread-out Bragg peak with a modulation of 100 mm) the scattered photon doses along the main axis of the phantom perpendicular to the primary beam were approximately 0.5 mGy Gy-1 at a distance of 100 mm and 0.02 mGy Gy-1 at 300 mm from the center of the target. For the neutrons, the corresponding values of dose equivalent were found to be ~0.7 and ~0.06 mSv Gy-1, respectively. The measured neutron doses were comparable with the out-of-field neutron doses from a similar experiment with 20 MV x-rays, whereas photon doses for the scanning proton beam were up to three orders of magnitude lower.

  6. Dose distribution of secondary radiation in a water phantom for a proton pencil beam—EURADOS WG9 intercomparison exercise

    NASA Astrophysics Data System (ADS)

    Stolarczyk, L.; Trinkl, S.; Romero-Expósito, M.; Mojżeszek, N.; Ambrozova, I.; Domingo, C.; Davídková, M.; Farah, J.; Kłodowska, M.; Knežević, Ž.; Liszka, M.; Majer, M.; Miljanić, S.; Ploc, O.; Schwarz, M.; Harrison, R. M.; Olko, P.

    2018-04-01

    Systematic 3D mapping of out-of-field doses induced by a therapeutic proton pencil scanning beam in a 300  ×  300  ×  600 mm3 water phantom was performed using a set of thermoluminescence detectors (TLDs): MTS-7 (7LiF:Mg,Ti), MTS-6 (6LiF:Mg,Ti), MTS-N (natLiF:Mg,Ti) and TLD-700 (7LiF:Mg,Ti), radiophotoluminescent (RPL) detectors GD-352M and GD-302M, and polyallyldiglycol carbonate (PADC)-based (C12H18O7) track-etched detectors. Neutron and gamma-ray doses, as well as linear energy transfer distributions, were experimentally determined at 200 points within the phantom. In parallel, the Geant4 Monte Carlo code was applied to calculate neutron and gamma radiation spectra at the position of each detector. For the cubic proton target volume of 100  ×  100  ×  100 mm3 (spread out Bragg peak with a modulation of 100 mm) the scattered photon doses along the main axis of the phantom perpendicular to the primary beam were approximately 0.5 mGy Gy‑1 at a distance of 100 mm and 0.02 mGy Gy‑1 at 300 mm from the center of the target. For the neutrons, the corresponding values of dose equivalent were found to be ~0.7 and ~0.06 mSv Gy‑1, respectively. The measured neutron doses were comparable with the out-of-field neutron doses from a similar experiment with 20 MV x-rays, whereas photon doses for the scanning proton beam were up to three orders of magnitude lower.

  7. The TFTR E Parallel B Spectrometer for Mass and Energy Resolved Multi-Ion Charge Exchange Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A.L. Roquemore; S.S. Medley

    1998-01-01

The Charge Exchange Neutral Analyzer diagnostic for the Tokamak Fusion Test Reactor was designed to measure the energy distributions of both the thermal ions and the suprathermal populations arising from neutral-beam injection and ion cyclotron radio-frequency heating. These measurements yield the plasma ion temperature, as well as several other plasma parameters necessary to provide an understanding of the plasma condition and the performance of the auxiliary heating methods. For this application, a novel charge-exchange spectrometer using a dee-shaped region of parallel electric and magnetic fields was developed at the Princeton Plasma Physics Laboratory. The design and performance of this spectrometer are described in detail, including the effects of exposure of the microchannel plate detector to magnetic fields, neutrons, and tritium.

  8. Real-time multi-mode neutron multiplicity counter

    DOEpatents

    Rowland, Mark S; Alvarez, Raymond A

    2013-02-26

    Embodiments are directed to a digital data acquisition method that collects data regarding nuclear fission at high rates and performs real-time preprocessing of large volumes of data into directly useable forms for use in a system that performs non-destructive assaying of nuclear material and assemblies for mass and multiplication of special nuclear material (SNM). Pulses from a multi-detector array are fed in parallel to individual inputs that are tied to individual bits in a digital word. Data is collected by loading a word at the individual bit level in parallel, to reduce the latency associated with current shift-register systems. The word is read at regular intervals, all bits simultaneously, with no manipulation. The word is passed to a number of storage locations for subsequent processing, thereby removing the front-end problem of pulse pileup. The word is used simultaneously in several internal processing schemes that assemble the data in a number of more directly useable forms. The detector includes a multi-mode counter that executes a number of different count algorithms in parallel to determine different attributes of the count data.
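The word-sampling scheme in the patent (one detector per bit, all bits read simultaneously at regular intervals) can be mimicked in a few lines: the multiplicity of each interval is simply the popcount of the sampled word. Everything below, including the per-interval hit rate, is an invented illustration:

```python
import random
from collections import Counter

random.seed(4)
N_DET, N_SAMPLES, P_HIT = 32, 10000, 0.02   # hypothetical per-interval hit rate

def sample_word():
    """Emulate one readout interval: each detector drives one bit of a
    32-bit word in parallel (bit set = that detector fired this interval)."""
    word = 0
    for det in range(N_DET):
        if random.random() < P_HIT:
            word |= 1 << det
    return word

# Read the whole word at regular intervals, all bits simultaneously, and
# histogram the multiplicity: the popcount of each sampled word.
multiplicity = Counter(bin(sample_word()).count("1") for _ in range(N_SAMPLES))
for m in sorted(multiplicity):
    print(f"multiplicity {m}: {multiplicity[m]} intervals")
```

Loading all bits in parallel is what removes the pulse-pileup bottleneck of shift-register counters: no per-pulse serialization is needed before the multiplicity statistics are accumulated.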

  9. Phonon coupling to dynamic short-range polar order in a relaxor ferroelectric near the morphotropic phase boundary

    DOE PAGES

    John A. Schneeloch; Xu, Zhijun; Winn, B.; ...

    2015-12-28

We report neutron inelastic scattering experiments on single-crystal PbMg1/3Nb2/3O3 doped with 32% PbTiO3, a relaxor ferroelectric that lies close to the morphotropic phase boundary. When cooled under an electric field E∥[001] into tetragonal and monoclinic phases, the scattering cross section from transverse acoustic (TA) phonons polarized parallel to E weakens and shifts to higher energy relative to that under zero-field-cooled conditions. Likewise, the scattering cross section from transverse optic (TO) phonons polarized parallel to E weakens for energy transfers 4 ≤ ℏω ≤ 9 meV. However, TA and TO phonons polarized perpendicular to E show no change. This anisotropic field response is similar to that of the diffuse scattering cross section, which, as previously reported, is suppressed when polarized parallel to E but not when polarized perpendicular to E. Lastly, our findings suggest that the lattice dynamics and dynamic short-range polar correlations that give rise to the diffuse scattering are coupled.

  10. Investigation of HZETRN 2010 as a Tool for Single Event Effect Qualification of Avionics Systems

    NASA Technical Reports Server (NTRS)

    Rojdev, Kristina; Atwell, William; Boeder, Paul; Koontz, Steve

    2014-01-01

NASA's future missions for human exploration are focused on deep space and do not provide a simple emergency return to Earth. In addition, the deep space environment contains a constant background of Galactic Cosmic Ray (GCR) radiation exposure, as well as periodic Solar Particle Events (SPEs) that can produce intense amounts of radiation in a short amount of time. Given these conditions, it is important that the avionics systems for deep space human missions are not susceptible to Single Event Effects (SEEs) that can occur from radiation interactions with electronic components. The typical approach to minimizing SEEs is to use heritage hardware and extensive testing programs, which are very costly. Previous work by Koontz et al. [1] utilized an analysis-based method for investigating electronic component susceptibility. In their paper, FLUKA, a Monte Carlo transport code, was used to calculate SEE and single event upset (SEU) rates. This code was then validated against in-flight data. In addition, CREME-96, a deterministic code, was also compared with FLUKA and in-flight data. However, FLUKA has a long run time (on the order of days), and CREME-96 has not been updated in several years. This paper will investigate the use of HZETRN 2010, a deterministic transport code developed at NASA Langley Research Center, as another tool that can be used to analyze SEE and SEU rates. The benefits of using HZETRN over FLUKA and CREME-96 are that it has a very fast run time (on the order of minutes) and has been shown to be of similar accuracy to other deterministic and Monte Carlo codes when considering dose [2, 3, 4]. The 2010 version of HZETRN has updated its treatment of secondary neutrons and thus has improved accuracy over previous versions. In this paper, the Linear Energy Transfer (LET) spectra are of interest rather than the total ionizing dose.
Therefore, the LET spectra output from HZETRN 2010 will be compared with the FLUKA and in-flight data to validate HZETRN 2010 as a computational tool for SEE qualification by analysis. Furthermore, extrapolation of these data to interplanetary environments at 1 AU will be investigated to determine whether HZETRN 2010 can be used successfully and confidently for deep space mission analyses.

  11. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions covering the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using deterministic counterparts, providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
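The kind of comparison discussed above can be sketched concretely: simulate a reduced model stochastically with a non-elementary (Hill) propensity via the Gillespie algorithm, and compare against the deterministic fixed point of the same reduced rate functions. The model and parameters below are illustrative, not the paper's two-state promoter:

```python
import random

random.seed(5)

# Reduced model after a QSSA: production with a non-elementary Hill
# propensity (negative autoregulation) and linear decay. All parameters
# are invented for illustration.
V, K, n, d = 20.0, 10.0, 2, 1.0
prod = lambda x: V * K**n / (K**n + x**n)
deg = lambda x: d * x

def gillespie_mean(t_end=200.0, burn=50.0):
    """Time-averaged copy number from a stochastic simulation that uses the
    reduced (heuristic) propensities directly."""
    x, t, acc, tot = 0, 0.0, 0.0, 0.0
    while t < t_end:
        a1, a2 = prod(x), deg(x)
        dt = random.expovariate(a1 + a2)
        if t > burn:
            acc += x * dt
            tot += dt
        t += dt
        x += 1 if random.random() < a1 / (a1 + a2) else -1
    return acc / tot

# Deterministic QSS: the fixed point prod(x) = deg(x), found by bisection
# (prod - deg is strictly decreasing on [0, V/d]).
lo, hi = 0.0, V / d
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if prod(mid) > deg(mid) else (lo, mid)
x_det = (lo + hi) / 2

x_stoch = gillespie_mean()
print(f"deterministic QSS: {x_det:.2f}, stochastic mean: {x_stoch:.2f}")
```

For these parameters the deterministic fixed point is exactly x = 10, and the stochastic mean stays close to it; the paper's point is that such agreement should be checked over the range of states the fluctuations actually visit, not only at the fixed point.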

  12. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGES

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in the HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  13. VVER-440 and VVER-1000 reactor dosimetry benchmark - BUGLE-96 versus ALPAN VII.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duo, J. I.

    2011-07-01

Document available in abstract form only; full text of document follows: Analytical results of the vodo-vodyanoi energetichesky reactor (VVER)-440 and VVER-1000 reactor dosimetry benchmarks developed from engineering mockups at the Nuclear Research Inst. Rez LR-0 reactor are discussed. These benchmarks provide accurate determination of radiation field parameters in the vicinity and over the thickness of the reactor pressure vessel. Measurements are compared to results calculated with two sets of tools: the TORT discrete ordinates code with the BUGLE-96 cross-section library versus the newly Westinghouse-developed RAPTOR-M3G with ALPAN VII.0. The parallel code RAPTOR-M3G enables calculation of detailed neutron distributions in energy and space in reduced computational time. The ALPAN VII.0 cross-section library is based on ENDF/B-VII.0 and is designed for reactor dosimetry applications. It uses a unique broad-group structure to enhance resolution in the thermal-neutron-energy range compared to other analogous libraries. The comparison of fast neutron (E > 0.5 MeV) results shows good agreement (within 10%) between the BUGLE-96 and ALPAN VII.0 libraries. Furthermore, the results compare well with analogous results of participants of the REDOS program (2005). Finally, the analytical results for fast neutrons agree within 15% with the measurements for most locations in all three mockups. In general, however, the analytical results underestimate the attenuation through the reactor pressure vessel thickness compared to the measurements. (authors)

  14. A feasibility study on the use of phantoms with statistical lung masses for determining the uncertainty in the dose absorbed by the lung from broad beams of incident photons and neutrons

    PubMed Central

    Khankook, Atiyeh Ebrahimi; Hakimabad, Hashem Miri

    2017-01-01

Computational models of the human body have gradually become crucial in the evaluation of doses absorbed by organs. However, individuals may differ considerably in terms of organ size and shape. In this study, the authors sought to determine the energy-dependent standard deviations, due to lung size, of the dose absorbed by the lung during external photon and neutron beam exposures. One hundred lungs with different masses were prepared and located in an adult male International Commission on Radiological Protection (ICRP) reference phantom. Calculations were performed using the Monte Carlo N-Particle code version 5 (MCNP5). Variation in the lung mass caused great uncertainty: ~90% for low-energy broad parallel photon beams. However, for high-energy photons, the lung-absorbed dose dependency on the anatomical variation was reduced to <1%. In addition, the results obtained indicated that the discrepancy in the lung-absorbed dose varied from 0.6% to 8% for neutron beam exposure. Consequently, the relationship between absorbed dose and organ volume was found to be significant for low-energy photon sources, whereas for higher energy photon sources the organ-absorbed dose was independent of the organ volume. In the case of neutron beam exposure, the maximum discrepancy (of 8%) occurred in the energy range between 0.1 and 5 MeV. PMID:28077627

  15. Associative memory in an analog iterated-map neural network

    NASA Astrophysics Data System (ADS)

    Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.

    1990-03-01

The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large "recall" phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
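The parallel (synchronous) analog dynamics with a Hebb-rule weight matrix amounts to iterating x ← tanh(βWx). A minimal recall experiment with a loading ratio α well inside the recall phase (network size, pattern count and gain are arbitrary choices for illustration, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synchronous ("parallel dynamics") analog iterated map with a Hebb-rule
# weight matrix: x <- tanh(beta * W @ x).
N, P, beta = 200, 5, 4.0                 # neurons, stored patterns, analog gain
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N            # Hebb rule
np.fill_diagonal(W, 0.0)

# start near stored pattern 0 with 10% of the components flipped
x = patterns[0].copy()
flip = rng.choice(N, N // 10, replace=False)
x[flip] *= -1.0

for _ in range(50):                      # iterate the map in parallel
    x = np.tanh(beta * W @ x)

overlap = float(patterns[0] @ np.sign(x)) / N
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

With α = P/N = 0.025 the corrupted input is pulled back to the stored pattern; sweeping β and α in such a simulation is how the recall-phase boundaries of the paper's phase diagrams can be traced numerically.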

  16. Directional photofluidization lithography: micro/nanostructural evolution by photofluidic motions of azobenzene materials.

    PubMed

    Lee, Seungwoo; Kang, Hong Suk; Park, Jung-Ki

    2012-04-24

    This review demonstrates directional photofluidization lithography (DPL), which makes it possible to fabricate a generic and sophisticated micro/nanoarchitecture that would be difficult or impossible to attain with other methods. In particular, DPL differs from many of the existing micro/nanofabrication methods in that the post-treatment (i.e., photofluidization), after the preliminary fabrication process of the original micro/nanostructures, plays a pivotal role in the various micro/nanostructural evolutions including the deterministic reshaping of architectures, the reduction of structural roughness, and the dramatic enhancement of pattern resolution. Also, DPL techniques are directly compatible with a parallel and scalable micro/nanofabrication. Thus, DPL with such extraordinary advantages in micro/nanofabrication could provide compelling opportunities for basic micro/nanoscale science as well as for general technology applications. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Field emission from isolated individual vertically aligned carbon nanocones

    NASA Astrophysics Data System (ADS)

    Baylor, L. R.; Merkulov, V. I.; Ellis, E. D.; Guillorn, M. A.; Lowndes, D. H.; Melechko, A. V.; Simpson, M. L.; Whealton, J. H.

    2002-04-01

    Field emission from isolated individual vertically aligned carbon nanocones (VACNCs) has been measured using a small-diameter moveable probe. The probe was scanned parallel to the sample plane to locate the VACNCs, and perpendicular to the sample plane to measure the emission turn-on electric field of each VACNC. Individual VACNCs can be good field emitters. The emission threshold field depends on the geometric aspect ratio (height/tip radius) of the VACNC and is lowest when a sharp tip is present. VACNCs exposed to a reactive ion etch process demonstrate a lowered emission threshold field while maintaining a similar aspect ratio. Individual VACNCs can have low emission thresholds, carry high current densities, and have long emission lifetime. This makes them very promising for various field emission applications for which deterministic placement of the emitter with submicron accuracy is needed.

  18. Abstract quantum computing machines and quantum computational logics

    NASA Astrophysics Data System (ADS)

    Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto

    2016-06-01

    Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.

  19. The mathematical statement for the solving of the problem of N-version software system design

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    N-version programming, as a methodology for designing fault-tolerant software systems, allows these tasks to be solved successfully. The N-version programming approach is effective because the system is constructed from several parallel-executed versions of a software module. These versions are written to meet the same specification but by different programmers. Developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality.
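
    The MVP algorithm itself is not specified in this record; as a hedged illustration of the same probability-vector idea for pseudo-Boolean optimization, here is a PBIL-style sketch (all names are ours):

```python
import random

def probability_vector_search(f, n, pop=20, iters=60, lr=0.2, seed=0):
    """Maximize a pseudo-Boolean function f: {0,1}^n -> R by sampling
    candidates from a per-bit probability vector and shifting that vector
    toward the best sample in each generation."""
    rng = random.Random(seed)
    p = [0.5] * n                         # per-bit sampling probabilities
    best_x, best_val = None, float('-inf')
    for _ in range(iters):
        xs = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop)]
        elite = max(xs, key=f)            # best candidate this generation
        if f(elite) > best_val:
            best_x, best_val = elite, f(elite)
        p = [(1 - lr) * pi + lr * ei for pi, ei in zip(p, elite)]
    return best_x, best_val
```

    On an easy test function such as OneMax (count of 1-bits), the probability vector concentrates on the optimum within a few dozen generations.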

  20. Using NCAR Yellowstone for PhotoVoltaic Power Forecasts with Artificial Neural Networks and an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle Monache, L.

    2016-12-01

    A methodology based on Artificial Neural Networks (ANN) and an Analog Ensemble (AnEn) is presented to generate 72-hour deterministic and probabilistic forecasts of power generated by photovoltaic (PV) power plants using input from a numerical weather prediction model and computed astronomical variables. ANN and AnEn are used individually and in combination to generate forecasts for three solar power plants located in Italy. The computational scalability of the proposed solution is tested using synthetic data simulating 4,450 PV power stations. The NCAR Yellowstone supercomputer is employed to test the parallel implementation of the proposed solution, ranging from 1 node (32 cores) to 4,450 nodes (141,140 cores). Results show that a combined AnEn + ANN solution yields the best results, and that the proposed solution is well suited for massive-scale computation.
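
    The analog-ensemble step can be sketched in one dimension (a schematic illustration, not the operational code): for a new forecast, find the k most similar past forecasts and return their verifying observations as the ensemble members.

```python
import numpy as np

def analog_ensemble(f_new, f_hist, o_hist, k=3):
    """Return the k past observations whose forecasts were closest to f_new."""
    order = np.argsort(np.abs(f_hist - f_new))  # rank history by similarity
    return o_hist[order[:k]]
```

    The spread of the returned members gives a probabilistic forecast; their mean gives a deterministic one.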

  1. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
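
    The record gives no pseudocode; as a hedged sketch of this family of deterministic walks, here is the standard rotor-router rule on an n × n torus (each node cycles through its outgoing directions in a fixed order):

```python
def rotor_walk(n, steps, start=(0, 0)):
    """Deterministic rotor-router walk on an n x n toroidal grid: each node
    remembers a rotor and sends the walker out in the next direction of a
    fixed cyclic order each time it is visited."""
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    rotor = {}                       # node -> index of next direction to use
    x = start
    visited = {x}
    for _ in range(steps):
        r = rotor.get(x, 0)
        dx, dy = dirs[r]
        rotor[x] = (r + 1) % 4       # advance this node's rotor
        x = ((x[0] + dx) % n, (x[1] + dy) % n)
        visited.add(x)
    return x, visited
```

    Unlike a random walk, repeating the run reproduces the trajectory exactly; the per-node memory is the "bounded memory" ingredient mentioned above.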

  2. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed-source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
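
    The RQI step described above (shift by the Rayleigh quotient, solve the shifted system, renormalize) can be sketched for a small dense symmetric matrix; this toy version is unrelated to Denovo's actual solver stack:

```python
import numpy as np

def rqi(a, x0, iters=10):
    """Rayleigh quotient iteration for a symmetric matrix: at each step, solve
    the shifted system (A - mu*I) y = x with mu the Rayleigh quotient.
    Converges cubically near an eigenpair."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        mu = x @ a @ x                             # Rayleigh quotient shift
        try:
            y = np.linalg.solve(a - mu * np.eye(len(a)), x)
        except np.linalg.LinAlgError:
            break                                  # shift hit an exact eigenvalue
        x = y / np.linalg.norm(y)
    return mu, x
```

    Note that the shifted system becomes nearly singular as the iterate converges, which is precisely the poor conditioning the abstract says the preconditioner must address at scale.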

  3. Development and construction of a neutron beam line for accelerator-based boron neutron capture synovectomy.

    PubMed

    Gierga, D P; Yanch, J C; Shefer, R E

    2000-01-01

    A potential application of the 10B(n, alpha)7Li nuclear reaction for the treatment of rheumatoid arthritis, termed Boron Neutron Capture Synovectomy (BNCS), is under investigation. In an arthritic joint, the synovial lining becomes inflamed and is a source of great pain and discomfort for the afflicted patient. The goal of BNCS is to ablate the synovium, thereby eliminating the symptoms of the arthritis. A BNCS treatment would consist of an intra-articular injection of boron followed by neutron irradiation of the joint. Monte Carlo radiation transport calculations have been used to develop an accelerator-based epithermal neutron beam line for BNCS treatments. The model includes a moderator/reflector assembly, neutron producing target, target cooling system, and arthritic joint phantom. Single and parallel opposed beam irradiations have been modeled for the human knee, human finger, and rabbit knee joints. Additional reflectors, placed to the side and back of the joint, have been added to the model and have been shown to improve treatment times and skin doses by about a factor of 2. Several neutron-producing charged particle reactions have been examined for BNCS, including the 9Be(p,n) reaction at proton energies of 4 and 3.7 MeV, the 9Be(d,n) reaction at deuteron energies of 1.5 and 2.6 MeV, and the 7Li(p,n) reaction at a proton energy of 2.5 MeV. For an accelerator beam current of 1 mA and synovial boron uptake of 1000 ppm, the time to deliver a therapy dose of 10,000 RBEcGy ranges from 3 to 48 min, depending on the treated joint and the neutron producing charged particle reaction. The whole-body effective dose that a human would incur during a knee treatment has been estimated to be 3.6 rem or 0.75 rem, for 1000 ppm or 19,000 ppm synovial boron uptake, respectively, although the shielding configuration has not yet been optimized. The Monte Carlo design process culminated in the construction, installation, and testing of a dedicated BNCS beam line on the high-current tandem electrostatic accelerator at the Laboratory for Accelerator Beam Applications at the Massachusetts Institute of Technology.

  4. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  5. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.

  6. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
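
    The time stepping described (an implicit scheme whose nonlinear update is solved by Picard iteration) can be illustrated on a scalar ODE; a generic sketch, not the paper's particle system:

```python
def implicit_euler_picard(f, y0, dt, steps, picard_iters=50, tol=1e-12):
    """Backward Euler y_{n+1} = y_n + dt*f(y_{n+1}), with the implicit
    equation solved by Picard (fixed-point) iteration at each step."""
    y = y0
    for _ in range(steps):
        z = y                          # initial Picard guess
        for _ in range(picard_iters):
            z_new = y + dt * f(z)      # fixed-point map
            if abs(z_new - z) < tol:
                z = z_new
                break
            z = z_new
        y = z
    return y
```

    Picard iteration converges when dt times the Lipschitz constant of f is below one; a Newton iteration, as the abstract also considers, relaxes that restriction.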

  7. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)

  8. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
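
    A single SSA realization of the simplest possible model (first-order decay, A → ∅) fits in a few lines; the paper's GPU version runs many such independent realizations in parallel. A minimal CPU sketch with illustrative names:

```python
import random

def ssa_decay(n0, k, t_end, seed=0):
    """One Gillespie SSA realization of the single reaction A -> 0 with
    propensity k*n: draw exponential waiting times, fire one event each."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a = k * n                        # total propensity
        t += rng.expovariate(a)          # time to next reaction event
        if t > t_end:
            break
        n -= 1                           # fire the decay reaction
    return n
```

    Averaging many realizations (exactly the workload the GPU parallelizes) recovers the deterministic mean n0*exp(-k*t).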

  9. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  10. Modern spandrels: the roles of genetic drift, gene flow and natural selection in the evolution of parallel clines.

    PubMed

    Santangelo, James S; Johnson, Marc T J; Ness, Rob W

    2018-05-16

    Urban environments offer the opportunity to study the role of adaptive and non-adaptive evolutionary processes on an unprecedented scale. While the presence of parallel clines in heritable phenotypic traits is often considered strong evidence for the role of natural selection, non-adaptive evolutionary processes can also generate clines, and this may be more likely when traits have a non-additive genetic basis due to epistasis. In this paper, we use spatially explicit simulations modelled according to the cyanogenesis (hydrogen cyanide, HCN) polymorphism in white clover (Trifolium repens) to examine the formation of phenotypic clines along urbanization gradients under varying levels of drift, gene flow and selection. HCN results from an epistatic interaction between two Mendelian-inherited loci. Our results demonstrate that the genetic architecture of this trait makes natural populations susceptible to decreases in HCN frequencies via drift. Gradients in the strength of drift across a landscape resulted in phenotypic clines with lower frequencies of HCN in strongly drifting populations, giving the misleading appearance of deterministic adaptive changes in the phenotype. Studies of heritable phenotypic change in urban populations should generate null models of phenotypic evolution based on the genetic architecture underlying focal traits prior to invoking selection's role in generating adaptive differentiation. © 2018 The Author(s).
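
    The interplay of drift and epistasis can be sketched with a toy Wright-Fisher model under the Hardy-Weinberg/dominance assumptions described (not the authors' spatially explicit simulator):

```python
import random

def wf_step(freq, n, rng):
    """One Wright-Fisher generation: resample 2n allele copies from the
    current frequency (pure drift, no selection)."""
    return sum(rng.random() < freq for _ in range(2 * n)) / (2 * n)

def hcn_freq(p, q):
    """Epistasis: HCN requires at least one dominant allele at BOTH loci,
    so the phenotype frequency under HWE is the product of the two
    per-locus dominant-carrier frequencies."""
    return (1 - (1 - p) ** 2) * (1 - (1 - q) ** 2)
```

    Because HCN vanishes whenever either locus drifts to loss, small populations lose the phenotype faster than a single-locus trait would, which is the cline-generating mechanism the paper simulates.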

  11. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map in which different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
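
    The cubic-bucket decomposition that makes matching time deterministic can be sketched as follows (an illustrative CPU version; the paper's implementation uses CUDA). Each query inspects only the 27 buckets surrounding the query point, so per-query work is bounded when point density is bounded:

```python
import math

def build_buckets(points, cell):
    """Hash 3-D points into cubic buckets of side `cell`."""
    buckets = {}
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        buckets.setdefault(key, []).append(p)
    return buckets

def nearest(q, buckets, cell):
    """Approximate nearest neighbour: search only the 3x3x3 block of buckets
    around q. Matches farther than one cell away are missed by design, so
    `cell` should exceed the largest expected match distance."""
    kx, ky, kz = (int(math.floor(c / cell)) for c in q)
    best, best_d = None, float('inf')
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for p in buckets.get((kx + dx, ky + dy, kz + dz), ()):
                    d = sum((a - b) ** 2 for a, b in zip(p, q))
                    if d < best_d:
                        best, best_d = p, d
    return best
```

    Inside an ICP loop, each model point is matched to its bucketed neighbour and a rigid transform is fitted to the resulting pairs.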

  12. We introduce an algorithm for the simultaneous reconstruction of faults and slip fields

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.
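
    The Bayesian sampling ingredient can be illustrated with a random-walk Metropolis sketch on a 1-D toy posterior (the computation in the paper is far larger and runs on a parallel multi-core platform):

```python
import math
import random

def metropolis(logpost, x0, n, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior: propose a
    Gaussian step, accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

    The sample spread directly quantifies uncertainty in the recovered parameters, which is the advantage of the Bayesian approach stated in the abstract.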

  13. Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
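
    The operator-splitting idea (decoupling chemical reactions from spatial evolution) can be sketched in 1-D with an exact decay substep followed by an explicit diffusion substep; a schematic, not the GPU implementation:

```python
import math

def split_step(u, dt, dx, diff, k):
    """One Lie (first-order) splitting step: exact linear decay, then explicit
    finite-difference diffusion with zero-flux (reflecting) boundaries.
    Stable for diff*dt/dx**2 <= 0.5."""
    u = [ui * math.exp(-k * dt) for ui in u]          # reaction substep
    r = diff * dt / dx ** 2
    v = u[:]
    for i in range(1, len(u) - 1):                    # diffusion substep
        v[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
    v[0] = u[0] + r * (u[1] - u[0])
    v[-1] = u[-1] + r * (u[-2] - u[-1])
    return v
```

    Splitting makes each substep embarrassingly parallel over grid cells, which is what suits the method to GPUs.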

  14. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  15. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching.
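
    The record does not list which formulas ShinyGPAS implements; one widely used deterministic formula of this kind (a Daetwyler-type expression, shown here as an assumed example, not necessarily the app's) is r = sqrt(N h^2 / (N h^2 + Me)):

```python
import math

def expected_accuracy(n, h2, me):
    """Daetwyler-type deterministic formula for expected genomic prediction
    accuracy: n = training size, h2 = heritability, me = number of
    independent chromosome segments. Assumed example, not ShinyGPAS's code."""
    return math.sqrt(n * h2 / (n * h2 + me))
```

    Plotting this over a grid of n, h2, and me reproduces the kind of interactive accuracy-versus-factor curves the app delivers.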

  16. Gravitational waves: search results, data analysis and parameter estimation: Amaldi 10 Parallel session C2.

    PubMed

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  17. Gravitational Waves: Search Results, Data Analysis and Parameter Estimation. Amaldi 10 Parallel Session C2

    NASA Technical Reports Server (NTRS)

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michal; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi

    2015-01-01

    The Amaldi 10 Parallel Session C2 on gravitational wave(GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  18. Fast Monte Carlo simulation of a dispersive sample on the SEQUOIA spectrometer at the SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granroth, Garrett E; Chen, Meili; Kohl, James Arthur

    2007-01-01

    Simulation of an inelastic scattering experiment, with a sample and a large pixilated detector, usually requires days of time because of finite processor speeds. We report simulations on an SNS (Spallation Neutron Source) instrument, SEQUOIA, that reduce the time to less than 2 hours by using parallelization and the resources of the TeraGrid. SEQUOIA is a fine resolution (∆E/Ei ~ 1%) chopper spectrometer under construction at the SNS. It utilizes incident energies from Ei = 20 meV to 2 eV and will have ~ 144,000 detector pixels covering 1.6 Sr of solid angle. The full spectrometer, including a 1-D dispersive sample, has been simulated using the Monte Carlo package McStas. This paper summarizes the method of parallelization for and results from these simulations. In addition, limitations of and proposed improvements to current analysis software will be discussed.

  19. For AAPT: Teaching the Wave Mechanics of McLeods' Stringy Electron, Explicit Nucleons, and Through-the-Earth Projections of Constellations' Stick Figures

    NASA Astrophysics Data System (ADS)

    McLeod, David Matthew

    2011-11-01

    McLeods' NEF11#22 submission is from their same-title INVITED presentation at Frontiers in Optics 2011, San Jose, CA. It shows how Hooke's law for electron, proton and neutron strings build electromagnetic waves from strings. These are composed of spirally linked, parallel, north-pole oriented, neutrino and antineutrino strings, stable because of magnetic repulsions. Their Dumbo Proton is antineutrino-scissor cut, and compressed in the vicinity of a neutron star, where electrostatic marriage occurs with a neutrino-scissor cut, and compressed, electron, so a Mickey Neutron emerges. Strings then predict electron charge is -- 1/3 e, Dumbo P is 25 % longer than Mickey N, and Hooke says relaxing springs fuel three separate inflations after each Big Bang oscillation. Gravity can be strings longitudinally linked. Einstein says Herman Grid's black diagonals prove human vision reads its information from algebraically-signed electromagnetic field diffraction patterns known by ray-tracing, not difficult Spatial Fourier Transformation. High-schoolers understand its application to Wave Mechanics, and agree that positive-numbered probabilities do not enter to possibly displease God. Stick figure constellations detected, like Phoenix, Leo, Canis Major, and especially Orion, fool some observers into false beliefs in things like UFHumanoids, or Kokopelli, Pele and Pamola!

  20. A Signature Distinguishing Fissile From Non-Fissile Materials Using Linearly Polarized Gamma Rays

    NASA Astrophysics Data System (ADS)

    Mueller, J. M.; Ahmed, M. W.; Karwowski, H. J.; Myers, L. S.; Sikora, M. H.; Stave, S.; Tompkins, J. R.; Zimmerman, W. R.; Weller, H. R.

    2013-04-01

    Photofission of ^233,235,238U, ^239,240Pu, and ^232Th was induced by nearly 100% linearly polarized, high intensity (~10^7 γs per second), and nearly-monoenergetic γ-ray beams of energies between 5.6 and 7.3 MeV at the High Intensity γ-ray Source (HIγS). An array of 18 liquid scintillating detectors was used to measure prompt fission neutron angular distributions. The ratio of prompt fission neutron yields parallel to the plane of beam polarization to the yields perpendicular to this plane was measured as a function of beam and neutron energy, as described in a recent publication showing results from ^235,238U, ^239Pu, and ^232Th [1]. A ratio near unity was found for ^233,235U and ^239Pu while a significant ratio (~1.5-3) was found for ^238U, ^240Pu, and ^232Th. This large difference could be used to distinguish fissile isotopes (such as ^233,235U and ^239Pu) from non-fissile isotopes (such as ^238U, ^240Pu, and ^232Th). Polarization ratios as a function of the relative abundance of fissile to non-fissile isotopes will be presented. [1] J. M. Mueller et al., Phys. Rev. C 85, 014605 (2012).

  1. Magnetic small-angle neutron scattering of bulk ferromagnets.

    PubMed

    Michels, Andreas

    2014-09-24

    We summarize recent theoretical and experimental work in the field of magnetic small-angle neutron scattering (SANS) of bulk ferromagnets. The response of the magnetization to spatially inhomogeneous magnetic anisotropy and magnetostatic stray fields is computed using linearized micromagnetic theory, and the ensuing spin-misalignment SANS is deduced. Analysis of experimental magnetic-field-dependent SANS data of various nanocrystalline ferromagnets corroborates the usefulness of the approach, which provides important quantitative information on the magnetic-interaction parameters such as the exchange-stiffness constant, the mean magnetic anisotropy field, and the mean magnetostatic field due to jumps ΔM of the magnetization at internal interfaces. Besides the value of the applied magnetic field, it is the ratio of the magnetic anisotropy field Hp to ΔM that determines the properties of the magnetic SANS cross-section of bulk ferromagnets; specifically, the angular anisotropy on a two-dimensional detector, the asymptotic power-law exponent, and the characteristic decay length of spin-misalignment fluctuations. For the two most often employed scattering geometries where the externally applied magnetic field H0 is either perpendicular or parallel to the wave vector k0 of the incoming neutron beam, we provide a compilation of the various unpolarized, half-polarized (SANSPOL), and uniaxial fully-polarized (POLARIS) SANS cross-sections of magnetic materials.

  2. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Page, R.; Jones, J.R.

    1997-07-01

    Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' Loss of offsite power fault transient.

  3. Mobile Pit verification system design based on passive special nuclear material verification in weapons storage facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, J. N.; Chin, M. R.; Sjoden, G. E.

    2013-07-01

    A mobile 'drive by' passive radiation detection system to be applied in special nuclear materials (SNM) storage facilities for validation and compliance purposes has been designed through the use of computational modeling and new radiation detection methods. This project was the result of work over a 1 year period to create optimal design specifications to include creation of 3D models using both Monte Carlo and deterministic codes to characterize the gamma and neutron leakage out of each surface of SNM-bearing canisters. Results were compared and agreement was demonstrated between both models. Container leakages were then used to determine the expected reaction rates using transport theory in the detectors when placed at varying distances from the can. A 'typical' background signature was incorporated to determine the minimum signatures versus the probability of detection to evaluate moving source protocols with collimation. This established the criteria for verification of source presence and time gating at a given vehicle speed. New methods for the passive detection of SNM were employed and shown to give reliable identification of age and material for highly enriched uranium (HEU) and weapons grade plutonium (WGPu). The finalized 'Mobile Pit Verification System' (MPVS) design demonstrated that a 'drive-by' detection system, collimated and operating at nominally 2 mph, is capable of rapidly verifying each and every weapon pit stored in regularly spaced, shelved storage containers, using completely passive gamma and neutron signatures for HEU and WGPu. This system is ready for real evaluation to demonstrate passive total material accountability in storage facilities. (authors)

  4. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    NASA Astrophysics Data System (ADS)

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. 
The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the reaction types as contiguous as possible and removes completed histories from the transport cycle. The sort reduces the amount of divergence in GPU "thread blocks," keeps the SIMD units as full as possible, and eliminates using memory bandwidth to check whether a neutron in the batch has been terminated. Using a remapping vector means the data access pattern is irregular, but this is mitigated by using large batch sizes where the GPU can effectively eliminate the high cost of irregular global memory access. WARP modifies the standard unionized energy grid implementation to reduce memory traffic. Instead of storing a matrix of pointers indexed by reaction type and energy, WARP stores three matrices. The first contains cross section values, the second contains pointers to angular distributions, and a third contains pointers to energy distributions. This linked list type of layout increases memory usage, but lowers the number of data loads that are needed to determine a reaction by eliminating a pointer load to find a cross section value. Optimized, high-performance GPU code libraries are also used by WARP wherever possible. The CUDA performance primitives (CUDPP) library is used to perform the parallel reductions, sorts and sums, the CURAND library is used to seed the linear congruential random number generators, and the OptiX ray tracing framework is used for geometry representation. OptiX is a highly-optimized library developed by NVIDIA that automatically builds hierarchical acceleration structures around user-input geometry so only surfaces along a ray line need to be queried in ray tracing. WARP also performs material and cell number queries with OptiX by using a point-in-polygon like algorithm. 
WARP has shown that GPUs are an effective platform for performing Monte Carlo neutron transport with continuous energy cross sections. Currently, WARP is the most detailed and feature-rich program in existence for performing continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs, but compared to production codes like Serpent and MCNP, WARP has limited capabilities. Despite WARP's lack of features, its novel algorithm implementations show that high performance can be achieved on a GPU despite the inherently divergent program flow and sparse data access patterns. WARP is not ready for everyday nuclear reactor calculations, but is a good platform for further development of GPU-accelerated Monte Carlo neutron transport. In its current state, it may be a useful tool for multiplication factor searches, i.e. determining reactivity coefficients by perturbing material densities or temperatures, since these types of calculations typically do not require many flux tallies. (Abstract shortened by UMI.)
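
    The remapping-vector bookkeeping described in this record can be illustrated with a minimal, CPU-side Python sketch. The reaction codes, sampling weights, and batch size below are hypothetical, and a serial sort stands in for WARP's parallel radix sort; only the control flow (sample reactions, re-sort the index vector by reaction type, drop completed histories) mirrors the described algorithm.

```python
import random

# Hypothetical reaction codes; DONE marks a finished (absorbed/leaked) history.
DONE, SCATTER, FISSION = 0, 1, 2

def transport_iteration(rxn, remap, rng):
    """One event-based iteration over the active histories listed in remap.

    On a GPU, per-reaction kernels would then run over contiguous slices
    of the sorted index vector, keeping SIMD lanes full."""
    for idx in remap:
        # Sample the next reaction for each live history (weights are made up).
        rxn[idx] = rng.choices([DONE, SCATTER, FISSION], weights=[2, 5, 3])[0]
    # Re-sort the remapping vector by reaction type (a parallel radix sort in
    # WARP) and drop finished histories so they stop consuming bandwidth.
    remap = sorted(remap, key=lambda i: rxn[i])
    return [i for i in remap if rxn[i] != DONE]

rng = random.Random(7)
n = 1000
rxn = [SCATTER] * n          # all histories start alive
remap = list(range(n))       # index vector of active histories
while remap:                 # iterate until every history terminates
    remap = transport_iteration(rxn, remap, rng)
```

    After the loop, every history has terminated and the active-index vector is empty; the sort key keeps same-type work contiguous within each iteration.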

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. 
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
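
    The deterministic/MC hand-off described in this record can be caricatured in one dimension: sweep a beam deterministically through homogeneous slabs, but estimate transmission through the heterogeneous slab by Monte Carlo sampling of free paths, then continue deterministically. All geometry and attenuation coefficients below are invented for illustration; the actual platform transports sampled energy-fluence histories with Geant4, not simple exponential attenuation.

```python
import math, random

# Toy 1-D beam: water / bone / water. Attenuation coefficients (cm^-1)
# and thicknesses (cm) are illustrative only, not clinical data.
MU_WATER, MU_BONE = 0.2, 0.5
slabs = [("water", 5.0, MU_WATER), ("bone", 2.0, MU_BONE), ("water", 5.0, MU_WATER)]

def hybrid_transmission(n_mc=200000, seed=3):
    rng = random.Random(seed)
    weight = 1.0
    for name, thickness, mu in slabs:
        if name == "water":
            weight *= math.exp(-mu * thickness)   # deterministic sweep
        else:
            # Monte Carlo inside the heterogeneity: sample exponential
            # free paths and count the histories that cross the slab.
            crossed = sum(1 for _ in range(n_mc)
                          if -math.log(1.0 - rng.random()) / mu > thickness)
            weight *= crossed / n_mc
    return weight

exact = math.exp(-(MU_WATER * 10.0 + MU_BONE * 2.0))  # fully analytic answer
approx = hybrid_transmission()
```

    In this toy the hybrid estimate agrees with the analytic transmission to well under a percent, and the MC cost is confined to the heterogeneous slab, which is the point of the localized approach.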

  6. A feasibility study on the use of phantoms with statistical lung masses for determining the uncertainty in the dose absorbed by the lung from broad beams of incident photons and neutrons.

    PubMed

    Khankook, Atiyeh Ebrahimi; Hakimabad, Hashem Miri; Motavalli, Laleh Rafat

    2017-05-01

    Computational models of the human body have gradually become crucial in the evaluation of doses absorbed by organs. However, individuals may differ considerably in terms of organ size and shape. In this study, the authors sought to determine the energy-dependent standard deviations due to lung size of the dose absorbed by the lung during external photon and neutron beam exposures. One hundred lungs with different masses were prepared and located in an adult male International Commission on Radiological Protection (ICRP) reference phantom. Calculations were performed using the Monte Carlo N-particle code version 5 (MCNP5). Variation in the lung mass caused great uncertainty: ~90% for low-energy broad parallel photon beams. However, for high-energy photons, the lung-absorbed dose dependency on the anatomical variation was reduced to <1%. In addition, the results obtained indicated that the discrepancy in the lung-absorbed dose varied from 0.6% to 8% for neutron beam exposure. Consequently, the relationship between absorbed dose and organ volume was found to be significant for low-energy photon sources, whereas for higher energy photon sources the organ-absorbed dose was independent of the organ volume. In the case of neutron beam exposure, the maximum discrepancy (of 8%) occurred in the energy range between 0.1 and 5 MeV. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  7. Stem cell transplantation as a dynamical system: are clinical outcomes deterministic?

    PubMed

    Toor, Amir A; Kobulnicky, Jared D; Salman, Salman; Roberts, Catherine H; Jameson-Lee, Max; Meier, Jeremy; Scalora, Allison; Sheth, Nihar; Koparde, Vishal; Serrano, Myrna; Buck, Gregory A; Clark, William B; McCarty, John M; Chung, Harold M; Manjili, Masoud H; Sabo, Roy T; Neale, Michael C

    2014-01-01

    Outcomes in stem cell transplantation (SCT) are modeled using probability theory. However, the clinical course following SCT appears to demonstrate many characteristics of dynamical systems, especially when outcomes are considered in the context of immune reconstitution. Dynamical systems tend to evolve over time according to mathematically determined rules. Characteristically, the future states of the system are predicated on the states preceding them, and there is sensitivity to initial conditions. In SCT, the interaction between donor T cells and the recipient may be considered as such a system in which graft source, conditioning, and early immunosuppression profoundly influence immune reconstitution over time. This eventually determines clinical outcomes, either the emergence of tolerance or the development of graft versus host disease. In this paper, parallels between SCT and dynamical systems are explored and a conceptual framework for developing mathematical models to understand disparate transplant outcomes is proposed.

  8. Stem Cell Transplantation as a Dynamical System: Are Clinical Outcomes Deterministic?

    PubMed Central

    Toor, Amir A.; Kobulnicky, Jared D.; Salman, Salman; Roberts, Catherine H.; Jameson-Lee, Max; Meier, Jeremy; Scalora, Allison; Sheth, Nihar; Koparde, Vishal; Serrano, Myrna; Buck, Gregory A.; Clark, William B.; McCarty, John M.; Chung, Harold M.; Manjili, Masoud H.; Sabo, Roy T.; Neale, Michael C.

    2014-01-01

    Outcomes in stem cell transplantation (SCT) are modeled using probability theory. However, the clinical course following SCT appears to demonstrate many characteristics of dynamical systems, especially when outcomes are considered in the context of immune reconstitution. Dynamical systems tend to evolve over time according to mathematically determined rules. Characteristically, the future states of the system are predicated on the states preceding them, and there is sensitivity to initial conditions. In SCT, the interaction between donor T cells and the recipient may be considered as such a system in which graft source, conditioning, and early immunosuppression profoundly influence immune reconstitution over time. This eventually determines clinical outcomes, either the emergence of tolerance or the development of graft versus host disease. In this paper, parallels between SCT and dynamical systems are explored and a conceptual framework for developing mathematical models to understand disparate transplant outcomes is proposed. PMID:25520720

  9. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  10. ParFit: A Python-Based Object-Oriented Program for Fitting Molecular Mechanics Parameters to ab Initio Data.

    PubMed

    Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu

    2017-03-27

    A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub (https://github.com/fzahari/ParFit).
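
    To give a flavor of the stochastic half of such a fit, the sketch below is a generic elitist genetic algorithm, not ParFit's actual hybrid. It evolves a harmonic bond term E(r) = k/2 (r - r0)^2 against a synthetic "ab initio" bond scan; the target values k = 300, r0 = 1.5 and all GA settings are hypothetical.

```python
import random

# Synthetic reference scan from a harmonic bond with hypothetical parameters.
K_TRUE, R0_TRUE = 300.0, 1.5
scan = [(r, 0.5 * K_TRUE * (r - R0_TRUE) ** 2)
        for r in [1.3, 1.4, 1.5, 1.6, 1.7]]

def fitness(params):
    """Negative sum of squared errors against the reference scan."""
    k, r0 = params
    return -sum((0.5 * k * (r - r0) ** 2 - e) ** 2 for r, e in scan)

def evolve(pop, rng, mut=0.05):
    """One elitist generation: keep the best half, refill by blend
    crossover of two elite parents plus Gaussian mutation."""
    pop.sort(key=fitness, reverse=True)
    elite = pop[: len(pop) // 2]
    children = []
    while len(elite) + len(children) < len(pop):
        a, b = rng.sample(elite, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]
        child = [x + rng.gauss(0.0, mut * abs(x) + 1e-3) for x in child]
        children.append(child)
    return elite + children

rng = random.Random(2)
pop = [[rng.uniform(100, 500), rng.uniform(1.0, 2.0)] for _ in range(40)]
for _ in range(200):
    pop = evolve(pop, rng)
best = max(pop, key=fitness)   # should land near (300.0, 1.5)
```

    Because the best individual always survives, the fit improves monotonically; constraints like ParFit's symmetric/antisymmetric parameter ties would be enforced inside the crossover and mutation steps.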

  11. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
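
    The plain, fully absorbing walk-on-spheres step at the heart of such estimators is easy to sketch for the Laplace equation on the unit disk; the partially reflecting variant and the interface replacement techniques of the record above are not reproduced here. The boundary data g(x, y) = x is chosen because it is itself harmonic, so the exact interior solution is known and the estimate can be checked.

```python
import math, random

def walk_on_spheres(x, y, g, eps, rng):
    """Estimate u(x, y) for Laplace's equation on the unit disk with
    Dirichlet data g, via the classic walk-on-spheres algorithm."""
    while True:
        r = 1.0 - math.hypot(x, y)     # radius of largest inscribed sphere
        if r < eps:                    # absorbed: close enough to the boundary
            s = math.hypot(x, y)
            return g(x / s, y / s)     # evaluate g at nearest boundary point
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)       # jump uniformly on the sphere surface
        y += r * math.sin(theta)

def estimate(x, y, g, n=20000, eps=1e-4, seed=1):
    rng = random.Random(seed)
    return sum(walk_on_spheres(x, y, g, eps, rng) for _ in range(n)) / n

# g(x, y) = x is harmonic, so u(0.3, 0.2) should come out near 0.3.
u = estimate(0.3, 0.2, lambda bx, by: bx)
```

    Each walk is independent, which is exactly why the method is embarrassingly parallel: the n walks could be farmed out to separate workers with per-worker seeds.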

  12. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.

  13. Computing the apparent centroid of radar targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C.E.

    1996-12-31

    A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the ARC of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray-casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.

  14. Towards the reliable calculation of residence time for off-lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Alexander, Kathleen C.; Schuh, Christopher A.

    2016-08-01

    Kinetic Monte Carlo (KMC) methods have the potential to extend the accessible timescales of off-lattice atomistic simulations beyond the limits of molecular dynamics by making use of transition state theory and parallelization. However, it is a challenge to identify a complete catalog of events accessible to an off-lattice system in order to accurately calculate the residence time for KMC. Here we describe possible approaches to some of the key steps needed to address this problem. These include methods to compare and distinguish individual kinetic events, to deterministically search an energy landscape, and to define local atomic environments. When applied to the ground-state Σ5(2 1 0) grain boundary in copper, these methods achieve a converged residence time, accounting for the full set of kinetically relevant events for this off-lattice system, with calculable uncertainty.
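
    The residence time referred to above is the standard rejection-free (BKL) KMC quantity: for a state with event rates r_i, the expected dwell time is 1 / Σ r_i, and each event fires with probability r_i / Σ r_i. A minimal sketch with a hypothetical two-event catalog:

```python
import math, random

def kmc_step(rates, rng):
    """One rejection-free (BKL) KMC step for a state with the given event
    rates. Returns (chosen_event_index, residence_time)."""
    total = sum(rates)
    # Select an event with probability proportional to its rate.
    target = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if acc >= target:
            chosen = i
            break
    # The residence time is exponentially distributed with mean 1 / total.
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

rng = random.Random(0)
rates = [1.0, 3.0]                 # hypothetical event catalog for one state
steps = [kmc_step(rates, rng) for _ in range(50000)]
mean_dt = sum(dt for _, dt in steps) / len(steps)            # expect ~ 0.25
frac_fast = sum(1 for i, _ in steps if i == 1) / len(steps)  # expect ~ 0.75
```

    The difficulty the record addresses is upstream of this step: for off-lattice systems the rate catalog itself is incomplete unless the event search converges, which is what biases the residence time if it is done poorly.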

  15. A comparative study of noisy signal evolution in 2R all-optical regenerators with normal and anomalous average dispersions using an accelerated Multicanonical Monte Carlo method.

    PubMed

    Lakoba, Taras I; Vasilyev, Michael

    2008-10-27

    In [Opt. Express 15, 10061 (2007)] we proposed a new regime of multichannel all-optical regeneration that required anomalous average dispersion. This regime is superior to the previously studied normal-dispersion regime when signal distortions are deterministic in their temporal shape. However, there was a concern that the regenerator with anomalous average dispersion may be prone to noise amplification via modulational instability. Here, we show that this, in general, is not the case. Moreover, in the range of input powers that is of interest for multichannel regeneration, the device with anomalous average dispersion may even provide less noise amplification than the one with normal dispersion. These results are obtained with an improved version of the parallelized modification of the Multicanonical Monte Carlo method proposed in [IEEE J. Sel. Topics Quantum Electron. 14, 599 (2008)].

  16. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE PAGES

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen; ...

    2017-09-11

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  17. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  18. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  19. Diffusion of benzene confined in the oriented nanochannels of chrysotile asbestos fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamontov, E.; Department of Materials Science and Engineering, University of Maryland, College Park, Maryland 20742-2115; Kumzerov, Yu.A.

We used quasielastic neutron scattering to study the dynamics of benzene that completely fills the nanochannels of chrysotile asbestos fibers with a characteristic diameter of about 5 nm. The macroscopic alignment of the nanochannels in the fibers provided an interesting opportunity to study the anisotropy of the dynamics of confined benzene by collecting data with the scattering vector either parallel or perpendicular to the fiber axes. The translational diffusive motion of benzene molecules was found to be isotropic. While bulk benzene freezes at 278.5 K, we observed the translational dynamics of the supercooled confined benzene on the time scale of hundreds of picoseconds even below 200 K, until at about 160 K its dynamics becomes too slow for the µeV resolution of the neutron backscattering spectrometer. The residence time between jumps for the benzene molecules, measured in the temperature range of 260 K to 320 K, demonstrated a low activation energy of 2.8 kJ/mol.

  20. Neutron diffraction study of antiferromagnetic ErNi3Ga9 in magnetic fields

    NASA Astrophysics Data System (ADS)

    Ninomiya, Hiroki; Sato, Takaaki; Matsumoto, Yuji; Moyoshi, Taketo; Nakao, Akiko; Ohishi, Kazuki; Kousaka, Yusuke; Akimitsu, Jun; Inoue, Katsuya; Ohara, Shigeo

    2018-05-01

We report specific heat, magnetization, magnetoresistance, and neutron diffraction measurements of single crystals of ErNi3Ga9. This compound crystallizes in a chiral structure with space group R32. The erbium ions form a two-dimensional honeycomb structure. ErNi3Ga9 displays antiferromagnetic order below 6.4 K. We determined that the magnetic structure is slightly amplitude-modulated as well as antiferromagnetic with q = (0, 0, 0.5). The magnetic properties are described by an Ising-like model in which the magnetic moment is always along the c-axis, owing to the large uniaxial anisotropy caused by the crystalline electric field effect in the low-temperature region. When the magnetic field is applied along the c-axis, a metamagnetic transition is observed around 12 kOe at 2 K. ErNi3Ga9 possesses crystal chirality, but the antisymmetric magnetic interaction, the so-called Dzyaloshinskii-Moriya (DM) interaction, does not contribute to the magnetic structure because the magnetic moments are parallel to the DM vector.

  1. Hodoscope Cineradiography Of Nuclear Fuel Destruction Experiments

    NASA Astrophysics Data System (ADS)

    De Volpi, A.

    1983-08-01

Nuclear reactor safety studies have applied cineradiographic techniques to obtain key information regarding the durability of fuel elements that are subjected to destructive transients in test reactors. Beginning with its development in 1963, the fast-neutron hodoscope has recorded data at the TREAT reactor in the United States of America. Consisting of a collimator instrumented with several hundred parallel channels of detectors and associated instrumentation, the hodoscope measures fuel motion that takes place within thick-walled steel test containers. Fuel movement is determined by detecting the emission of fast neutrons induced in the test capsule by bursts of the test reactor that last from 0.3 to 30 s. The system has been designed to achieve, under typical conditions, a horizontal spatial resolution of less than 1 mm, a time resolution close to 1 ms, and a mass resolution below 0.1 g, with adequate dynamic range and recording duration. A variety of imaging forms have been developed to display the results of processing and analyzing the recorded data.

  2. Shell Model Far From Stability: Island of Inversion Mergers

    NASA Astrophysics Data System (ADS)

    Nowacki, F.; Poves, A.

    2018-02-01

In this study we propose a common mechanism for the disappearance of shell closures far from stability. With the use of Large Scale Shell Model calculations (SM-CI), we predict that the region of deformation which comprises the heaviest Chromium and Iron isotopes at and beyond N=40 will merge with a new one at N=50, in an astonishing parallel to the N=20 and N=28 case in the Neon and Magnesium isotopes. We propose a valence space including the full pf-shell for the protons and the full sdg shell for the neutrons, which represents a comeback of the harmonic oscillator shells in the very neutron-rich regime. Our calculations preserve the doubly magic nature of the ground state of 78Ni, which, however, exhibits a well deformed prolate band at low excitation energy, providing a striking example of shape coexistence far from stability. This new Island of Inversion (IoI) adds to the four well documented ones at N=8, 20, 28 and 40.

  3. Lattice dynamics and thermal transport in multiferroic CuCrO2

    NASA Astrophysics Data System (ADS)

    Bansal, Dipanshu; Niedziela, Jennifer L.; May, Andrew F.; Said, Ayman; Ehlers, Georg; Abernathy, Douglas L.; Huq, Ashfia; Kirkham, Melanie; Zhou, Haidong; Delaire, Olivier

    2017-02-01

    Inelastic neutron and x-ray scattering measurements of phonons and spin waves were performed in the delafossite compound CuCrO2 over a wide range of temperature, and complemented with first-principles lattice dynamics simulations. The phonon dispersions and density of states are well reproduced by our density functional calculations, and reveal a strong anisotropy of Cu vibrations, which exhibit low-frequency modes of large amplitude parallel to the basal plane of the layered delafossite structure. The low frequency in-plane modes also show a systematic temperature dependence of neutron and x-ray scattering intensities. In addition, we find that spin fluctuations persist above 300 K, far above the Néel temperature for long-range antiferromagnetic order, TN≃24 K . Our modeling of the thermal conductivity, based on our phonon measurements and simulations, reveals a significant anisotropy and indicates that spin fluctuations above TN constitute an important source of phonon scattering, considerably suppressing the thermal conductivity compared to that of the isostructural but nonmagnetic compound CuAlO2.

  4. Computational attributes of the integral form of the equation of transfer

    NASA Technical Reports Server (NTRS)

    Frankel, J. I.

    1991-01-01

Difficulties can arise in radiative and neutron transport calculations when a highly anisotropic scattering phase function is present. In the presence of anisotropy, currently used numerical solutions are based on the integro-differential form of the linearized Boltzmann transport equation. This paper departs from classical thought and presents an alternative numerical approach based on application of the integral form of the transport equation. Use of the integral formalism facilitates the following steps: a reduction in dimensionality of the system prior to discretization, the use of symbolic manipulation to augment the computational procedure, and the direct determination of key physical quantities which are derivable through the various Legendre moments of the intensity. The approach is developed in the context of radiative heat transfer in a plane-parallel geometry, and results are presented and compared with existing benchmark solutions. Encouraging results illustrate the potential of the integral formalism for computation, which appears to possess several computational attributes well suited to radiative and neutron transport calculations.

  5. From Interfaces to Bulk: Experimental-Computational Studies Across Time and Length Scales of Multi-Functional Ionic Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perahia, Dvora; Grest, Gary S.

Neutron experiments coupled with computational components have resulted in unprecedented understanding of the factors that affect the behavior of ionic structured polymers. Additionally, new computational tools to study macromolecules were developed. In parallel, this DOE funding has enabled the education of the next generation of materials researchers, who are able to take advantage of what neutron tools offer for the understanding and design of advanced materials. Our research has provided unprecedented insight into one of the major factors that limits the use of ionizable polymers, combining the macroscopic view obtained from experimental techniques with molecular insight extracted from computational studies, leading to transformative knowledge that will impact the design of nano-structured materials. With the focus on model systems of broad interest to the scientific community and to industry, the research addressed challenges that cut across a large number of polymers, independent of the specific chemical structure or the transported species.

  6. Research on water discharge characteristics of PEM fuel cells by using neutron imaging technology at the NRF, HANARO.

    PubMed

    Kim, TaeJoo; Sim, CheulMuu; Kim, MooHwan

    2008-05-01

An investigation into the water discharge characteristics of proton exchange membrane (PEM) fuel cells is carried out by using a feasibility test apparatus and the Neutron Radiography Facility (NRF) at HANARO. The feasibility test apparatus was composed of a distilled water supply line, a compressed air supply line, heating systems, and single PEM fuel cells of a 1-parallel serpentine type with a 100 cm(2) active area. Three methods were used: a compressed air supply only; heating only; and a combination of compressed air supply and heating. The resulting water discharge characteristics differ according to the applied method. The compressed air supply alone is suitable for removing water from the flow field, while heating alone is suitable for water at the MEA. Therefore, in order to remove all the water from PEM fuel cells, the combined method is needed.

  7. Methyl group conformation and hydrogen bonds in proteins determined by neutron protein crystallography

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Atsushi; Shibata, Kouji; Tanaka, Ichiro; Niimura, Nobuo

    2009-02-01

Using the 'Hydrogen and Hydration in Proteins Data Base' (HHDB), which catalogs all H-atom positions in biological macromolecules and in hydration water molecules determined thus far by neutron macromolecular crystallography, methyl group conformations and hydrogen bonds (H.B.) in proteins are explored. It is found that most of the methyl groups adopt the stable staggered conformation, but 11% of them appear to be close to the eclipsed conformation. A geometrical analysis was also carried out for H.B. involved in α-helices. 125 H.B. were identified as donors for the acceptor C=O in the main-chain α-helix. For these H.B., it is found that co-linear H.B. are rare, that hydrogen atoms seen from the acceptor C=O can localize in certain arrangements, that H.B. are not parallel to the helix axis but rather inclined toward the C-terminal direction, and that hydrogen atoms, except those of water, are located inside, not outside, the cylinders formed by the backbones of α-helices.

  8. Stability and instability of Ellis and phantom wormholes: Are there ghosts?

    NASA Astrophysics Data System (ADS)

    Nandi, K. K.; Potapov, A. A.; Izmailov, R. N.; Tamang, A.; Evans, J. C.

    2016-05-01

It is concluded in the literature that the Ellis wormhole is unstable under small perturbations and would either decay to the Schwarzschild black hole or expand away to infinity. While this deterministic conclusion of instability is correct, we show that the Ellis wormhole reduces to the Schwarzschild black hole only when the Ellis solution parameter γ assumes a complex value -i. We shall then reexamine the stability of Ellis and phantom wormholes from the viewpoint of local and asymptotic observers by using a completely different approach, viz., we adapt Tangherlini's nondeterministic, prequantal statistical simulation of photon motion in a real optical medium to an effective medium reformulation of motions obtained via Hamilton's optical-mechanical analogy in a gravity field. A crucial component of Tangherlini's idea is the observed increase of momentum of the photons entering a real medium. We show that this fact has a heuristic parallel in the effective medium version of the Pound-Rebka experiment in gravity. Our conclusion is that there is a nonzero probability that Ellis and phantom wormholes could appear stable or unstable depending on the location of observers and on the values of γ, leading to the possibility of ghost wormholes (like ghost stars). The Schwarzschild horizon, however, would always certainly appear to be stable (R = 1, T = 0) to observers regardless of their location. Phantom wormholes of bounded mass in the extreme limit a → -1 are also shown to be stable, just as the Schwarzschild black hole is. We shall propose a thought experiment showing that our nondeterministic results could be numerically translated into observable deterministic signatures of ghost wormholes.

  9. Monte Carlo MP2 on Many Graphical Processing Units.

    PubMed

    Doran, Alexander E; Hirata, So

    2016-10-11

In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
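The cost scaling discussed above rests on the statistical error of Monte Carlo integration shrinking as 1/sqrt(N). A minimal, generic sketch of this behavior (plain one-dimensional Monte Carlo integration, not the MC-MP2 redundant-walker algorithm itself):

```python
import math
import random

def mc_integrate(f, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0, 1];
    returns (estimate, standard error), the latter shrinking ~ 1/sqrt(N)."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n_samples):
        fx = f(rng.random())
        total += fx
        total_sq += fx * fx
    mean = total / n_samples
    var = max(total_sq / n_samples - mean * mean, 0.0)
    return mean, math.sqrt(var / n_samples)

# The integral of x^2 over [0, 1] is exactly 1/3.
est, err = mc_integrate(lambda x: x * x, 100_000)
```

Quadrupling `n_samples` roughly halves `err`, which is the origin of the cost-versus-uncertainty trade-off quantified in the abstract.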

  10. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys certain conditions, then the disease prevails: the infective population persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective population disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model. Regarding the value of ℛ0, when the stochastic system obeys certain conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
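The ℛ0 threshold behavior described above can be illustrated numerically. The following toy is a single-group SIR model with demography and made-up rates, a much-simplified stand-in for the authors' multi-group MSIR system:

```python
# Toy single-group SIR model with demography (birth/death rate mu), where
# R0 = beta / (gamma + mu) acts as the persistence/extinction threshold.
# All rate constants below are illustrative, not fitted to any disease.
def final_infective_fraction(beta, gamma, mu, t_end=2000.0, dt=0.1):
    """Forward-Euler integration; returns the infective fraction at t_end."""
    s, i = 1.0 - 1e-3, 1e-3
    for _ in range(int(t_end / dt)):
        ds = mu - beta * s * i - mu * s          # births replenish susceptibles
        di = beta * s * i - (gamma + mu) * i
        s, i = s + ds * dt, i + di * dt
    return i

i_super = final_infective_fraction(3.0, 1.0, 0.02)   # R0 ≈ 2.94 > 1: endemic
i_sub = final_infective_fraction(0.5, 1.0, 0.02)     # R0 ≈ 0.49 < 1: dies out
```

With ℛ0 above 1 the infective fraction settles near the endemic equilibrium; below 1 it decays to zero, mirroring the dichotomy proved with Lyapunov functions in the abstract.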

  11. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target is a stainless-steel tube containing high-enriched uranium, and it is irradiated to obtain fission products; this fission material is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb core performance, and one such disturbance comes from changes in flux, i.e. reactivity. It is therefore necessary to develop a method for evaluating safety as the configuration changes over the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin for the research reactor can be re-evaluated without modifying the reactivity calculation, which is an advantage of the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model involves complex computation, and several parallel algorithms with iterative methods have been developed for solving the resulting large, sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation was developed as part of a safety analysis with parallel processing; the calculation can be done more quickly and efficiently by utilizing parallel processing on a multicore computer. The code was applied to the safety-limit calculation of irradiated FPM targets with increasing uranium content.
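The power iteration mentioned above solves the multigroup eigenvalue problem M·phi = (1/k)·F·phi, where k-effective is the dominant eigenvalue of inv(M)·F. A zero-dimensional, two-group sketch with made-up cross-section numbers (purely illustrative, not the code described in the abstract):

```python
# Zero-dimensional two-group power iteration for k-eff. All cross-section
# values are invented for illustration only.
M = [[0.10, 0.00],     # group-1 removal, no upscattering
     [-0.05, 0.12]]    # downscatter 1 -> 2 feeds group 2
F = [[0.008, 0.15],    # fission neutrons are born in group 1
     [0.000, 0.00]]

def fission_source(phi):
    return [F[0][0] * phi[0] + F[0][1] * phi[1],
            F[1][0] * phi[0] + F[1][1] * phi[1]]

def solve2(a, b):
    """Solve the 2x2 system a*x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (b[1] * a[0][0] - b[0] * a[1][0]) / det]

def power_iteration(n_iter=50):
    phi, k = [1.0, 1.0], 1.0
    for _ in range(n_iter):
        src = fission_source(phi)
        phi = solve2(M, [s / k for s in src])     # diffusion solve stand-in
        k *= sum(fission_source(phi)) / sum(src)  # eigenvalue update
    return k

k_eff = power_iteration()   # converges to the dominant eigenvalue, 0.705 here
```

In a real spatial problem, the 2x2 solve is replaced by a large sparse diffusion solve, which is where red-black Gauss-Seidel and domain parallelism pay off.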

  12. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism, capable of evaluation in general geometry, is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes, such as spheres within spherical shells and boxes, will help quantify uncertainty. Connection of 3DHZETRN to general geometry will be discussed.

  13. Mixed Legendre moments and discrete scattering cross sections for anisotropy representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calloo, A.; Vidal, J. F.; Le Tellier, R.

    2012-07-01

This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the expansion order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better model the multigroup transfer cross section and prevent the occurrence of any negative values. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method against which to compare the conventional Legendre expansion, and to determine its pertinence when applied to reactor physics calculations. (authors)

  14. Influence of fusion dynamics on fission observables: A multidimensional analysis

    NASA Astrophysics Data System (ADS)

    Schmitt, C.; Mazurek, K.; Nadtochy, P. N.

    2018-01-01

An attempt to unfold the respective influence of the fusion and fission stages on typical fission observables, notably the neutron prescission multiplicity, is proposed. A four-dimensional dynamical stochastic Langevin model is used to calculate the decay by fission of excited compound nuclei produced in a wide set of heavy-ion collisions. The comparison of the results from such a calculation and experimental data is discussed, guided by predictions of the dynamical deterministic HICOL code for the compound-nucleus formation time. While the dependence of the latter on the entrance-channel properties can straightforwardly explain some observations, a complex interplay between the various parameters of the reaction is found to occur in other cases. A multidimensional analysis of the respective role of these parameters, including entrance-channel asymmetry, bombarding energy, compound-nucleus fissility, angular momentum, and excitation energy, is proposed. It is shown that, depending on the size of the system, apparent inconsistencies may be deduced when projecting onto specific ordering parameters. The work suggests the possibility of delicate compensation effects in governing the measured fission observables, thereby highlighting the necessity of a multidimensional discussion.

  15. Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations

    NASA Astrophysics Data System (ADS)

    Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray

    2017-09-01

    The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
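Latin hypercube sampling, as used above to generate the perturbed libraries, draws one value from each of N equal-probability strata per input dimension and shuffles the strata independently. A minimal sketch of sampling two perturbed cross sections and propagating them through a first-order k-effective sensitivity model (the sensitivity coefficients and 1% standard deviations are illustrative assumptions, not JEFF covariance data):

```python
import random
from statistics import NormalDist, mean, stdev

def latin_hypercube(n_samples, n_dims, rng):
    """One uniform draw per equal-probability stratum in each dimension,
    with the stratum order shuffled independently per dimension."""
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return list(zip(*cols))

rng = random.Random(1)
nd = NormalDist()
keffs = []
for u1, u2 in latin_hypercube(200, 2, rng):
    dsf = 0.01 * nd.inv_cdf(u1)   # fission xs perturbation, 1% rel. sd
    dsc = 0.01 * nd.inv_cdf(u2)   # capture xs perturbation, 1% rel. sd
    # toy first-order sensitivity model (coefficients are made up):
    keffs.append(1.0 * (1 + 0.9 * dsf - 0.3 * dsc))

mu_k, sigma_k = mean(keffs), stdev(keffs)   # nuclear-data k-eff uncertainty
```

The spread `sigma_k` across the sampled libraries is the nuclear-data contribution to the k-effective uncertainty; in practice each sample is a full MONK10 or WIMS10 run rather than a linear model.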

  16. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and against carefully measured values for a precise estimation of the chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons by using the optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINACs in hospital, and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed a lower response than the other codes for photon energies below 0.1 MeV and a similar response above 0.2 MeV (agreeing within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams; for the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% below 120 kVp X-ray beams. In this study we were especially interested in BNCT doses, where the low-energy photon contribution is small enough to ignore; the MCNP model is recognized as the most suitable for simulating the broad photon-electron and neutron energy-distributed responses of the paired ICs. MCNP also provides the best prediction for adjusting the BNCT source via the detector's neutron and photon responses.
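The dose separation underlying the paired-chamber technique amounts to a 2x2 linear system: each chamber reading mixes the neutron and photon dose with different sensitivities. A minimal sketch with purely illustrative sensitivity numbers (not calibration data for the TE(TE) or Mg(Ar) chambers):

```python
# Paired-chamber dose separation sketch. Each chamber reading q is modeled
# as q = s_n * D_n + s_g * D_g; two chambers with different (s_n, s_g)
# give a solvable 2x2 system. The sensitivity values are made up.
def separate_doses(q_a, q_b, s_a=(1.00, 1.00), s_b=(0.05, 1.00)):
    (a, b), (c, d) = s_a, s_b      # (neutron sens., photon sens.) per chamber
    det = a * d - b * c
    d_n = (q_a * d - q_b * b) / det
    d_g = (q_b * a - q_a * c) / det
    return d_n, d_g

# If D_n = 2.0 and D_g = 1.0, the two readings would be 3.0 and 1.1,
# and solving the system recovers the doses:
d_n, d_g = separate_doses(3.0, 1.1)
```

The near-singularity of this system when the two chambers have similar sensitivities is why the neutron dose depends so strongly on the accuracy of the photon response functions.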

  17. Neutron Scattering Studies of Vortex Matter in Type-II Superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xinsheng Ling

    2012-02-02

The proposed program is an experimental study of the fundamental properties of Abrikosov vortex matter in type-II superconductors. Most superconducting materials used in applications such as MRI are type II, and their transport properties are determined by the interplay between random pinning, interaction and thermal fluctuation effects in the vortex state. Given the technological importance of these materials, a fundamental understanding of the vortex matter is necessary. The vortex lines in type-II superconductors also form a useful model system for fundamental studies of a number of important issues in condensed matter physics, such as the presence of a symmetry-breaking phase transition in the presence of random pinning. Recent advances in neutron scattering facilities, such as the major upgrade of the NIST cold source and the Spallation Neutron Source, are providing unprecedented opportunities for addressing some of the longstanding issues in vortex physics. The core component of the proposed program is to use small angle neutron scattering (SANS) and Bitter decoration experiments to provide the most stringent test of the Bragg glass theory by measuring the structure factor in both real and reciprocal space. The proposed experiments include a neutron reflectometry experiment to measure the precise Q-dependence of the structure factor of the vortex lattice in the Bragg glass state. A second set of SANS experiments will be on a shear-strained Nb single crystal for testing a recently proposed theory of the stability of the Bragg glass. The objective is to artificially create a set of parallel grain boundaries in a Nb single crystal and use SANS to measure the vortex matter diffraction pattern as a function of the changing angle between the applied magnetic field and the grain boundaries.
The intrinsic merits of the proposed work are a new fundamental understanding of type-II superconductors, on which superconducting technology is based, and a firm understanding of phases and phase transitions in condensed matter systems with random pinning. The broader impact of the program includes the training of a future generation of neutron scientists, and further development of neutron scattering and complementary techniques for studies of superconducting materials. The graduate and undergraduate students participating in this project will learn state-of-the-art neutron scattering techniques, acquire a wide range of materials research experience, and participate in frontier research on superconductivity. This should best prepare the students for future careers in academia, industry, or government.

  18. Neutron resonance spin echo with longitudinal DC fields

    NASA Astrophysics Data System (ADS)

    Krautloher, Maximilian; Kindervater, Jonas; Keller, Thomas; Häußler, Wolfgang

    2016-12-01

We report on the design, construction, and performance of a neutron resonance spin echo (NRSE) instrument employing radio frequency (RF) spin flippers that combine RF fields with DC fields, the latter oriented parallel (longitudinal) to the neutron propagation direction (longitudinal NRSE, LNRSE). The advantage of the longitudinal configuration is the inherent homogeneity of the effective magnetic path integrals. In the center of the RF coils, the sign of the spin precession phase is inverted by a π flip of the neutron spins, such that non-uniform spin precession at the boundaries of the RF flippers is canceled. The residual inhomogeneity can be reduced by Fresnel or Pythagoras coils, as in conventional neutron spin echo (NSE) instruments. Due to the good intrinsic homogeneity of the B0 coils, the current densities required for the correction coils are at least a factor of three lower than in conventional NSE. As the precision and the current density of the correction coils are the limiting factors for the resolution of both NSE and LNRSE, the latter has the intrinsic potential to surpass the energy resolution of present NSE instruments. Our prototype LNRSE spectrometer described here was implemented at the resonance spin echo for diverse applications (RESEDA) beamline at the MLZ in Garching, Germany. The DC fields are generated by B0 coils based on resistive split-pair solenoids with active shielding for low stray fields along the beam path. One pair of RF flippers at a distance of 2 m generates a field integral of ~0.5 Tm. The LNRSE technique is a future alternative for high-resolution spectroscopy of quasi-elastic excitations. In addition, it also incorporates the MIEZE technique, which allows spin echo resolution to be achieved for spin-depolarizing samples and sample environments. Here we present the results of numerical optimization of the coil geometry and first data from the prototype instrument.

  19. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women infected with the Zika virus are at risk of having a fetus or infant with a congenital defect, including microcephaly. Here, we formulate a Zika disease transmission model using two approaches: a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is derived from the deterministic model, while the CTMC stochastic model yields estimates of the probabilities of extinction and of outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for both the deterministic and stochastic models.
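    The contrast the record draws between a deterministic formulation (yielding the basic reproduction ratio R0) and a CTMC formulation (yielding an extinction probability) can be illustrated with a deliberately simplified SIR caricature; this is a generic sketch, not the authors' vector-host Zika model, and all parameter values are illustrative assumptions.

```python
import random

def sir_deterministic(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, dt=0.01, t_end=200.0):
    """Euler-integrated deterministic SIR; returns (R0, final infected fraction)."""
    s, i = s0, i0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
    return beta / gamma, i

def extinction_prob(beta=0.3, gamma=0.1, n=200, i0=1, runs=2000, seed=1):
    """CTMC (Gillespie-type) estimate of the probability that an outbreak
    started by i0 infectives dies out before taking off."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        s, i = n - i0, i0
        while i > 0:
            rate_inf = beta * s * i / n   # new infection
            rate_rec = gamma * i          # recovery
            if rng.random() * (rate_inf + rate_rec) < rate_inf:
                s -= 1
                i += 1
            else:
                i -= 1
            if i > 50:                    # treat as a major outbreak and stop
                break
        if i == 0:
            extinct += 1
    return extinct / runs
```

    With these illustrative rates R0 = 3, and branching-process theory predicts an extinction probability near 1/R0 for a single introduction, which the CTMC estimate reproduces; the deterministic model, by contrast, never predicts extinction when R0 > 1.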

  20. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations in cell cycle duration within lineages of mammalian cells revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implications for the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  1. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  2. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
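    The two-point probability idea can be sketched with a Rosenblueth-style point-estimate method: evaluate the deterministic model at every +σ/−σ combination of the uncertain inputs and form output statistics from those 2^n runs. The surrogate head function below is a hypothetical stand-in, not the authors' two-layer groundwater model, and its parameter values are invented for illustration.

```python
from itertools import product

def two_point_estimate(f, means, stds):
    """Rosenblueth-type two-point estimate: run the deterministic model f at
    every +/- one-sigma combination of its uncertain inputs (2**n runs) and
    return the mean and standard deviation of the output."""
    vals = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + sgn * sd for m, sgn, sd in zip(means, signs, stds)]
        vals.append(f(x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var ** 0.5

# Hypothetical head-response surrogate in the three lumped uncertain
# variables named in the abstract: conductivity K, storage S, source-sink Q.
def head(x):
    k, s, q = x
    return q / (k * s)

mean, std = two_point_estimate(head, [1.0, 0.1, 2.0], [0.2, 0.02, 0.4])
```

    With three lumped variables the deterministic model is run only 2^3 = 8 times, which is the computational saving the abstract refers to.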

  3. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  4. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for a single TCM consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
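    The Monte Carlo approach described above is easy to reproduce in miniature: add a fixed deterministic Delta v vector to per-axis Gaussian execution errors and tabulate magnitude statistics. All numbers below are illustrative assumptions, not values from the paper.

```python
import math
import random

def dv_magnitude_stats(bias=(1.0, 0.0, 0.0), sigma=0.5, n=20000, seed=2):
    """Monte Carlo mean and standard deviation of |Delta v| for a maneuver
    with a deterministic (bias) component plus zero-mean Gaussian errors of
    standard deviation sigma on each axis."""
    rng = random.Random(seed)
    mags = []
    for _ in range(n):
        v = [b + rng.gauss(0.0, sigma) for b in bias]
        mags.append(math.sqrt(sum(c * c for c in v)))
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return mean, math.sqrt(var)

biased = dv_magnitude_stats()                           # deterministic + random
pure_random = dv_magnitude_stats(bias=(0.0, 0.0, 0.0))  # zero-bias (Lee-Boain) case
```

    With zero bias, |Delta v| is Maxwell-distributed (mean σ√(8/π)); a nonzero deterministic component shifts the distribution toward higher magnitudes, which is the regime the paper's parametric plots cover.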

  5. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.
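    The wavelet-domain self-similarity such methods exploit can be seen with a stdlib-only sketch: for fractional Brownian motion with Hurst exponent H, the variance of Haar detail coefficients grows by roughly 2^(2H+1) per coarser level. Ordinary Brownian motion (H = 1/2) is used here because it is trivial to simulate; the growth factor then approaches 4 at coarse scales. This is a generic illustration, not the paper's Bayesian shrinkage procedure.

```python
import random

def haar_detail_variances(x, levels=4):
    """Variance of Haar wavelet detail coefficients at successive scales."""
    out = []
    a = list(x)
    for _ in range(levels):
        d = [(a[2*i] - a[2*i + 1]) / 2 ** 0.5 for i in range(len(a) // 2)]
        a = [(a[2*i] + a[2*i + 1]) / 2 ** 0.5 for i in range(len(a) // 2)]
        m = sum(d) / len(d)
        out.append(sum((v - m) ** 2 for v in d) / len(d))
    return out

# Ordinary Brownian motion (fBm with H = 1/2): cumulative sum of white noise.
rng = random.Random(3)
walk, pos = [], 0.0
for _ in range(2 ** 14):
    pos += rng.gauss(0.0, 1.0)
    walk.append(pos)

v = haar_detail_variances(walk)
# Variance ratios between adjacent scales tend toward 2^(2H+1) = 4.
ratios = [v[i + 1] / v[i] for i in range(len(v) - 1)]
```

    A deterministic band-limited component superposed on the walk would break this scaling in the affected bands, which is what lets the wavelet-domain method separate the two components.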

  6. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
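    The gap between deterministic and stochastic descriptions of selection can be illustrated, not with the authors' delay-deterministic model itself but with the baseline it corrects, by comparing a deterministic haploid selection recursion with Wright-Fisher sampling. Parameters below are illustrative assumptions.

```python
import random

def deterministic_traj(p0, s, gens):
    """Deterministic haploid selection: p' = p(1 + s) / (1 + p s)."""
    p, out = p0, [p0]
    for _ in range(gens):
        p = p * (1 + s) / (1 + p * s)
        out.append(p)
    return out

def wright_fisher_traj(p0, s, gens, n, rng):
    """Same selection model, but each generation the next frequency is a
    binomial sample of size n, so genetic drift perturbs the trajectory."""
    p, out = p0, [p0]
    for _ in range(gens):
        pe = p * (1 + s) / (1 + p * s)
        p = sum(1 for _ in range(n) if rng.random() < pe) / n
        out.append(p)
    return out

det = deterministic_traj(0.1, 0.05, 100)
sto = wright_fisher_traj(0.1, 0.05, 100, 1000, random.Random(7))
```

    In a large population the two trajectories track each other, but when a variant starts from a single copy, as in the mutation scenario discussed above, the stochastic trajectory can diverge qualitatively (including loss of the variant), which is exactly where a naive deterministic fit misleads.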

  7. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
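    The essence of deterministic deconvolution, dividing the recorded trace's spectrum by that of the independently measured source wavelet, with a small "water level" to stabilise weak spectral components, can be sketched on a synthetic 1-D trace. This is a generic illustration under assumed numbers, not the authors' processing chain.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def deterministic_deconv(trace, wavelet, water=0.01):
    """Divide the trace spectrum by the source-wavelet spectrum; spectral
    values below water * max|W| are floored to keep the division stable."""
    n = len(trace)
    W = dft(list(wavelet) + [0.0] * (n - len(wavelet)))
    T = dft(trace)
    floor = water * max(abs(w) for w in W)
    out = []
    for t, w in zip(T, W):
        denom = w if abs(w) >= floor else floor * (w / abs(w) if abs(w) > 0 else 1.0)
        out.append(t / denom)
    return idft(out)

# Synthetic trace: two reflectivity spikes convolved with a "ringy" wavelet.
wavelet = [1.0, -0.7, 0.3, -0.1]
refl = [0.0] * 32
refl[5], refl[20] = 1.0, -0.5
trace = [sum(wavelet[j] * refl[t - j] for j in range(len(wavelet)) if 0 <= t - j < 32)
         for t in range(32)]
est = deterministic_deconv(trace, wavelet)   # recovers the spikes at 5 and 20
```

    Because the wavelet is known exactly here (as it is, approximately, when it is measured in air), the deconvolution collapses each ringy arrival back to a spike, which is the resolution gain the abstract quantifies.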

  8. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. 
For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
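    The weak-dispersal coexistence and strong-dispersal synchronization described above can be reproduced with a minimal two-patch sketch: logistic growth with an Allee threshold plus symmetric linear dispersal. The functional form and all parameter values are illustrative assumptions, not the authors' exact system.

```python
def two_patch(u0, v0, d, r=1.0, a=0.3, k=1.0, dt=0.01, t_end=400.0):
    """Euler integration of du/dt = r u (1 - u/k)(u/a - 1) + d (v - u) and the
    symmetric equation for v: Allee threshold a, carrying capacity k,
    dispersal rate d."""
    u, v = u0, v0
    for _ in range(int(t_end / dt)):
        fu = r * u * (1 - u / k) * (u / a - 1) + d * (v - u)
        fv = r * v * (1 - v / k) * (v / a - 1) + d * (u - v)
        u += dt * fu
        v += dt * fv
    return u, v

weak = two_patch(1.0, 0.0, d=0.01)    # occupied patch next to an empty one
strong = two_patch(1.0, 0.0, d=0.5)   # same start, strong coupling
```

    With weak dispersal the patches settle at distinct high- and low-density states; with strong dispersal they synchronize and, because the pooled density starts above the Allee threshold, both patches expand.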

  9. Distinguishing Fissile From Non-Fissile Materials Using Linearly Polarized Gamma Rays

    NASA Astrophysics Data System (ADS)

    Mueller, J. M.; Ahmed, M. W.; Karwowski, H. J.; Myers, L. S.; Sikora, M. H.; Weller, H. R.; Zimmerman, W. R.; Randrup, J.; Vogt, R.

    2014-03-01

Photofission of 232Th, 233,235,238U, 237Np, and 239,240Pu was induced by nearly 100% linearly polarized, high-intensity (~10^7 γ/s), and nearly monoenergetic γ-ray beams with energies between 5.3 and 7.6 MeV at the High Intensity γ-ray Source (HIγS). An array of 12-18 liquid scintillator detectors was used to measure prompt fission neutron yields. The ratio of prompt fission neutron yields parallel to the plane of beam polarization to the yields perpendicular to this plane was measured as a function of beam and neutron energy. A ratio near unity was found for 233,235U, 237Np, and 239Pu, while a significantly larger ratio (~1.5-3) was found for 232Th, 238U, and 240Pu. This large difference could be used to distinguish fissile isotopes (such as 233,235U and 239Pu) from non-fissile isotopes (such as 232Th, 238U, and 240Pu). The measured ratios agree with the results of a fission calculation (FREYA) which used previously measured photofission fragment angular distributions as input. Partially supported by DHS (2010-DN-077-ARI046-02), DOE (DE-AC52-07NA27344 and DE-AC02-05CH11231), and the DOE Office of Science Graduate Fellowship Program (DOE SCGF).

  10. Helicity-dependent cross sections and double-polarization observable E in η photoproduction from quasifree protons and neutrons

    NASA Astrophysics Data System (ADS)

    Witthauer, L.; Dieterle, M.; Abt, S.; Achenbach, P.; Afzal, F.; Ahmed, Z.; Akondi, C. S.; Annand, J. R. M.; Arends, H. J.; Bashkanov, M.; Beck, R.; Biroth, M.; Borisov, N. S.; Braghieri, A.; Briscoe, W. J.; Cividini, F.; Costanza, S.; Collicott, C.; Denig, A.; Downie, E. J.; Drexler, P.; Ferretti-Bondy, M. I.; Gardner, S.; Garni, S.; Glazier, D. I.; Glowa, D.; Gradl, W.; Günther, M.; Gurevich, G. M.; Hamilton, D.; Hornidge, D.; Huber, G. M.; Käser, A.; Kashevarov, V. L.; Kay, S.; Keshelashvili, I.; Kondratiev, R.; Korolija, M.; Krusche, B.; Lazarev, A. B.; Linturi, J. M.; Lisin, V.; Livingston, K.; Lutterer, S.; MacGregor, I. J. D.; Mancell, J.; Manley, D. M.; Martel, P. P.; Metag, V.; Meyer, W.; Miskimen, R.; Mornacchi, E.; Mushkarenkov, A.; Neganov, A. B.; Neiser, A.; Oberle, M.; Ostrick, M.; Otte, P. B.; Paudyal, D.; Pedroni, P.; Polonski, A.; Prakhov, S. N.; Rajabi, A.; Reicherz, G.; Ron, G.; Rostomyan, T.; Sarty, A.; Sfienti, C.; Sikora, M. H.; Sokhoyan, V.; Spieker, K.; Steffen, O.; Strakovsky, I. I.; Strub, Th.; Supek, I.; Thiel, A.; Thiel, M.; Thomas, A.; Unverzagt, M.; Usov, Yu. A.; Wagner, S.; Walford, N. K.; Watts, D. P.; Werthmüller, D.; Wettig, J.; Wolfes, M.; Zana, L.; A2 Collaboration at MAMI

    2017-05-01

Precise helicity-dependent cross sections and the double-polarization observable E were measured for η photoproduction from quasifree protons and neutrons bound in the deuteron. The η →2 γ and η →3 π0→6 γ decay modes were used to optimize the statistical quality of the data and to estimate systematic uncertainties. The measurement used the A2 detector setup at the tagged photon beam of the electron accelerator MAMI in Mainz. A longitudinally polarized deuterated butanol target was used in combination with a circularly polarized photon beam from bremsstrahlung of a longitudinally polarized electron beam. The reaction products were detected with the electromagnetic calorimeters Crystal Ball and TAPS, which covered 98% of the full solid angle. The results show that the narrow structure observed earlier in the unpolarized excitation function of η photoproduction off the neutron appears only in reactions with antiparallel photon and nucleon spin (σ1 /2). It is absent for reactions with parallel spin orientation (σ3 /2) and thus very probably related to partial waves with total spin 1/2. The behavior of the angular distributions of the helicity-dependent cross sections was analyzed by fitting them with Legendre polynomials. The results are in good agreement with a model from the Bonn-Gatchina group, which uses an interference of P11 and S11 partial waves to explain the narrow structure.

  11. Major safety and operational concerns for fuel debris criticality control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonoike, K.; Sono, H.; Umeda, M.

    2013-07-01

From the criticality control viewpoint, the requirement divides the decommissioning work into two parts. One is the present condition, where it is necessary to prevent criticality and to monitor the subcritical condition while the debris is untouched. The other is future work, where the subcritical condition shall be ensured even if the debris condition is changed intentionally by raising the water level, debris retrieval, etc. Repair of damage to the containment vessel (CV) walls is one of the most important objectives at present in the site. On completion of this task, it will become possible to raise water levels in the CVs and to shield the extremely high radiation emitted from the debris, but there is a dilemma: raising the water level in the CVs brings the debris closer to criticality because of the role of water in slowing down neutrons. This may be solved if the coolant water circulates in closed loops and a sufficient concentration of soluble neutron poison (borated water, for instance) is introduced in the loop. It should be noted that this solution carries a risk of worsening corrosion of the CV walls. A design for the debris retrieval operation should be proposed as early as possible, and it must include the neutron poison concentration required to ensure that the debris chunk is subcritical. In parallel, the development of a measurement system to monitor the subcritical condition of the debris chunk should be conducted in case the borated water cannot be used continuously. The system would be based on a neutron counter with high sensitivity, an appropriate gamma-ray shield, and adequate statistical signal processing.

  12. Does the finite size of the proto-neutron star preclude supernova neutrino flavor scintillation due to turbulence?

    DOE PAGES

    Kneller, James P.; Mauney, Alex W.

    2013-08-23

Here, the transition probabilities describing the evolution of a neutrino with a given energy along some ray through a turbulent supernova profile are random variates unique to each ray. If the proto-neutron-star source of the neutrinos were a point, then one might expect the evolution of the turbulence to cause the flavor composition of the neutrinos to vary in time, i.e., the flavor would scintillate. But in reality the proto-neutron star is not a point source: it has a size of order ~10 km, so the neutrinos emitted from different points at the source will each have seen different turbulence. The finite source size will reduce the correlation of the flavor transition probabilities along different trajectories and reduce the magnitude of the flavor scintillation. To determine whether the finite size of the proto-neutron star will preclude flavor scintillation, we calculate the correlation of the neutrino flavor transition probabilities through turbulent supernova profiles as a function of the separation δx between the emission points. The correlation will depend upon the power spectrum used for the turbulence, and we consider two cases: when the power spectrum is isotropic, and the more realistic case of a power spectrum which is anisotropic on large scales and isotropic on small scales. Although the result depends on a number of uncalibrated parameters, we show the supernova neutrino source is not of sufficient size to significantly blur flavor scintillation in all mixing channels when using an isotropic spectrum, and the same result holds when using an anisotropic spectrum, except when we greatly reduce the similarity of the turbulence along parallel trajectories separated by ~10 km or less.

  13. Voltage-controlled magnetization switching in MRAMs in conjunction with spin-transfer torque and applied magnetic field

    NASA Astrophysics Data System (ADS)

    Munira, Kamaram; Pandey, Sumeet C.; Kula, Witold; Sandhu, Gurtej S.

    2016-11-01

The voltage-controlled magnetic anisotropy (VCMA) effect has attracted a significant amount of attention in recent years because of its low cell power consumption during the anisotropy modulation of a thin ferromagnetic film. However, the applied voltage or electric field alone is not enough to completely and reliably reverse the magnetization of the free layer of a magnetic random access memory (MRAM) cell from the anti-parallel to the parallel configuration or vice versa. An additional symmetry-breaking mechanism needs to be employed to ensure a deterministic writing process. Combinations of voltage-controlled magnetic anisotropy with spin-transfer torque (STT) and with an applied magnetic field (Happ) were evaluated for switching reliability, time taken to switch with a low error rate, and energy consumption during the switching process. In order to get a low write error rate in an MRAM cell with the VCMA switching mechanism, a spin-transfer torque current or an applied magnetic field comparable to the critical current and field of the free layer is necessary. In the hybrid processes, the VCMA effect shortens the duration during which the more power-hungry secondary mechanism is in place. Therefore, the total energy consumed during the hybrid writing processes, VCMA + STT or VCMA + Happ, is less than the energy consumed during pure spin-transfer torque or applied magnetic field switching.

  14. A multi-state synthetic ferrimagnet with controllable switching near room temperature

    NASA Astrophysics Data System (ADS)

    Franco, A. F.; Landeros, P.

    2018-06-01

Ferrite composites with temperature-induced magnetization reversal, and synthetic ferrimagnets and antiferromagnets, have been of great interest to the scientific community due to their uncommon thermal properties and potential applications in magnetic storage, spintronic devices, and several other fields. One of the advantages of these structures is the strong antiferromagnetic coupling, which stabilizes the magnetization state and gives access to interesting static and dynamical magnetic behaviors. Some of their drawbacks lie in the difficulty of inducing temperature-induced magnetization reversal at room temperature in composites, and in the fact that the strong interaction makes it difficult to induce a parallel magnetization state (and thus a high magnetic moment). In this work, we study numerically the magnetization behaviour of a Cu(1 0 0)/Ni/Pt/[Co/Pt]4 synthetic ferrimagnet and show that it is possible to reverse the sign of its magnetization by varying the temperature in ranges around room temperature. We also show that the four parallel and antiparallel magnetization states are stable at temperatures up to 360 K, and demonstrate that it is possible to change deterministically between these states by increasing the temperature of the device and/or applying a magnetic field, showcasing simultaneous non-hysteretic and hysteretic switching processes induced by temperature. Thus, this structure opens the possibility of reconfigurable magnetic devices with multiple purposes based on the nature of the different switching events and the interplay between them.

  15. Structural effects of radiation-induced volumetric expansion on unreinforced concrete biological shields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Pape, Y.

Limited literature (Pomaro et al., 2011; Mirhosseini et al., 2014; Salomoni et al., 2014; Andreev and Kapliy, 2014) is available on the structural analysis of irradiated concrete biological shields (CBS), although extended operation of nuclear power plants may lead to critical neutron exposure above 1.0 × 10^19 n/cm^2. With the notable exception of Andreev and Kapliy, available structural models do not account for radiation-induced volumetric expansion, although it was found to produce significant linear dimensional change, of the order of 1%, and can lead to significant concrete damage (Le Pape et al., 2015). A 1D-cylindrical model of an unreinforced CBS accounting for temperature and irradiation effects is developed. Irradiated concrete properties are characterized probabilistically using the updated database collected by Oak Ridge National Laboratory (Field et al., 2015). The overstressed concrete ratio (OCR) of the CBS, i.e., the proportion of the wall thickness subject to stresses beyond the resistance of the concrete, is derived by deterministic and probabilistic analysis assuming that irradiated concrete behaves as an elastic material. In the bi-axial compressive zone near the reactor cavity, the OCR is limited to 5.7%, i.e., 8.6 cm (3½ in.), whereas in the tension zone the OCR extends to 72%, i.e., 1.08 m (42½ in.). Finally, we find that these results, valid for a maximum neutron fluence on the concrete surface of 3.1 × 10^19 n/cm^2 (E > 0.1 MeV) obtained after 80 years of operation, give an indication of the potential detrimental effects of prolonged irradiation of concrete in nuclear power plants.

  16. Analysis of C/E results of fission rate ratio measurements in several fast lead VENUS-F cores

    NASA Astrophysics Data System (ADS)

    Kochetkov, Anatoly; Krása, Antonín; Baeten, Peter; Vittiglio, Guido; Wagemans, Jan; Bécares, Vicente; Bianchini, Giancarlo; Fabrizio, Valentina; Carta, Mario; Firpo, Gabriele; Fridman, Emil; Sarotto, Massimo

    2017-09-01

During the GUINEVERE FP6 European project (2006-2011), the zero-power VENUS water-moderated reactor was modified into VENUS-F, a mock-up of a lead-cooled fast-spectrum system with solid components that can be operated in both critical and subcritical modes. The Fast Reactor Experiments for hybrid Applications (FREYA) FP7 project was launched in 2011 to support the designs of the MYRRHA Accelerator Driven System (ADS) and the ALFRED Lead Fast Reactor (LFR). Three VENUS-F critical core configurations simulating the complex MYRRHA core design, and one configuration devoted to the LFR ALFRED core conditions, were investigated in 2015. The MYRRHA-related cores simulated, step by step, design peculiarities such as the BeO reflector and in-pile sections. For all of these cores the fuel assemblies were of a simple design consisting of 30% enriched metallic uranium, lead rodlets to simulate the coolant, and Al2O3 rodlets to simulate the oxide fuel. Fission rate ratios of minor actinides such as Np-237 and Am-241, as well as of Pu-239, Pu-240, Pu-242 and U-238, to U-235 were measured in these VENUS-F critical assemblies with small fission chambers in specially designed locations, to determine the spectral indices under the different neutron spectrum conditions. The measurements have been analyzed using advanced computational tools, including deterministic and stochastic codes, and different nuclear data sets such as JEFF-3.1, JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0. The analysis of the C/E discrepancies will help to improve the nuclear data in the specific energy region of fast neutron reactor spectra.

  17. Structural effects of radiation-induced volumetric expansion on unreinforced concrete biological shields

    DOE PAGES

    Le Pape, Y.

    2015-11-22

Limited literature (Pomaro et al., 2011; Mirhosseini et al., 2014; Salomoni et al., 2014; Andreev and Kapliy, 2014) is available on the structural analysis of irradiated concrete biological shields (CBS), although extended operation of nuclear power plants may lead to critical neutron exposure above 1.0 × 10^19 n/cm^2. With the notable exception of Andreev and Kapliy, available structural models do not account for radiation-induced volumetric expansion, although it was found to produce significant linear dimensional change, of the order of 1%, and can lead to significant concrete damage (Le Pape et al., 2015). A 1D-cylindrical model of an unreinforced CBS accounting for temperature and irradiation effects is developed. Irradiated concrete properties are characterized probabilistically using the updated database collected by Oak Ridge National Laboratory (Field et al., 2015). The overstressed concrete ratio (OCR) of the CBS, i.e., the proportion of the wall thickness subject to stresses beyond the resistance of the concrete, is derived by deterministic and probabilistic analysis assuming that irradiated concrete behaves as an elastic material. In the bi-axial compressive zone near the reactor cavity, the OCR is limited to 5.7%, i.e., 8.6 cm (3½ in.), whereas in the tension zone the OCR extends to 72%, i.e., 1.08 m (42½ in.). Finally, we find that these results, valid for a maximum neutron fluence on the concrete surface of 3.1 × 10^19 n/cm^2 (E > 0.1 MeV) obtained after 80 years of operation, give an indication of the potential detrimental effects of prolonged irradiation of concrete in nuclear power plants.

  18. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random with different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Moreover, these models also consider generic nonuniform stochastic connections and heterogeneous community structure. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
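
    The bounding idea can be sketched in a few lines: for an SIS process the epidemic threshold is the inverse of the adjacency spectral radius, and deleting edges never increases the spectral radius of a nonnegative symmetric matrix, so the deterministic backbone alone yields an upper bound on the threshold of the full network. A minimal sketch (the chain backbone, link probability, and sizes are illustrative assumptions, not the paper's constructed models):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

A_det = np.zeros((n, n))
for i in range(n - 1):               # deterministic chain backbone
    A_det[i, i + 1] = A_det[i + 1, i] = 1.0

extra = np.triu(rng.random((n, n)) < 0.3, 1)    # candidate random links
A_full = np.clip(A_det + extra + extra.T, 0.0, 1.0)

lam_det = np.linalg.eigvalsh(A_det)[-1]     # spectral radius of the backbone
lam_full = np.linalg.eigvalsh(A_full)[-1]   # spectral radius of the full network

# SIS threshold tau_c = 1 / lambda_max; removing edges cannot raise
# lambda_max, so the deterministic part bounds the full threshold from above.
tau_full = 1.0 / lam_full
tau_upper = 1.0 / lam_det
```

The same Rayleigh-quotient argument underlies the paper's inequalities; here it only gives the one-sided bound from the detectable deterministic part.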

  19. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    Entangled light sources are an important resource for developing quantum information technologies such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  20. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    Entangled light sources are an important resource for developing quantum information technologies such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582

  1. The English Revision of The Blegdamsvej Faust

    NASA Astrophysics Data System (ADS)

    Keck, Karen

    2007-03-01

    At the 1932 meeting of quantum physicists at Niels Bohr's Copenhagen Institute, participants staged an updated version of Goethe's Faust with Pauli tempting Ehrenfest to accept a chargeless, massless particle, then called the neutron. The most widely read translation of the anonymous Faust: Eine Historie appears in George Gamow's Thirty Years that Shook Physics; his second wife, Barbara, translated the text. Her work masterfully communicates the parallels between Goethe's original and the anonymous parody, but it also rearranges and adds to the parody to strengthen those similarities and to reflect George Gamow's views. The changes emphasize the international and cooperative aspects of physics.

  2. Gravitation. [Book on general relativity

    NASA Technical Reports Server (NTRS)

    Misner, C. W.; Thorne, K. S.; Wheeler, J. A.

    1973-01-01

    This textbook on gravitation physics (Einstein's general relativity or geometrodynamics) is designed for a rigorous full-year course at the graduate level. The material is presented in two parallel tracks in an attempt to separate key physical ideas from more complex enrichment material to be selected at the discretion of the reader or teacher. The full book is intended to provide competence in the laws of physics in flat space-time, Einstein's geometric framework for physics, applications to pulsars and neutron stars, cosmology, the Schwarzschild geometry and gravitational collapse, gravitational waves, experimental tests of Einstein's theory, and the mathematical concepts of differential geometry.

  3. Investigations into the behaviour of Plasma surrounding Pulsars: DYMPHNA3D

    NASA Astrophysics Data System (ADS)

    Rochford, Ronan; Mc Donald, John; Shearer, Andy

    2011-08-01

    We report on a new 3D fully relativistic, modular, parallel and scalable Particle-In-Cell (PIC) code currently being developed at the Computational Astrophysics Laboratory at the National University of Ireland, Galway, and its initial test applications to the plasma distribution in the vicinity of a rapidly rotating neutron star. We find that plasma remains confined by trapping surfaces close to the star, rather than propagating out to a significant fraction of the light-cylinder distance as predicted by earlier work. We discuss planned future modifications and applications of the developed code.

  4. Radiation sensitivity of graphene field effect transistors and other thin film architectures

    NASA Astrophysics Data System (ADS)

    Cazalas, Edward

    An important contemporary motivation for advancing radiation detection science and technology is the need for interdiction of nuclear and radiological materials, which may be used to fabricate weapons of mass destruction. The detection of such materials by nuclear techniques relies on achieving high sensitivity and selectivity to X-rays, gamma-rays, and neutrons. To be attractive in field-deployable instruments, it is desirable for detectors to be lightweight and inexpensive, to operate at low voltage, and to consume little power. To address the relatively low particle flux in most passive measurements for nuclear security applications, detectors scalable to large areas that can meet the high absolute detection efficiency requirements are needed. Graphene-based and thin-film-based radiation detectors represent attractive technologies that could meet the need for inexpensive, low-power, size-scalable detection architectures sensitive to X-rays, gamma-rays, and neutrons. The utilization of graphene to detect ionizing radiation relies on the modulation of the graphene charge carrier density by changes in the local electric field, i.e., the field effect in graphene. Built on the principle of a conventional field effect transistor, the graphene-based field effect transistor (GFET) utilizes graphene as a channel and a semiconducting substrate as an absorber medium with which the ionizing radiation interacts. A radiation interaction event that deposits energy within the substrate creates electron-hole pairs, which modify the electric field and modulate the graphene charge carrier density. A detection event in a GFET is therefore measured as a change in graphene resistance or current. Thin (micron-scale) films can also be utilized for radiation detection of thermal neutrons, provided nuclides with high neutron absorption cross sections are present with appreciable density.
Detection in thin-film detectors could be realized through the collection of charge carriers generated within the film by slowing-down of neutron capture reaction products. The objective of this dissertation is to develop, characterize, and optimize novel graphene-based and thin-film radiation detectors. The dissertation includes a review of relevant physics, comprehensive descriptions and discussions of the experimental campaigns that were conducted, computational simulations, and detailed analysis of certain processes occurring in graphene-based and thin-film radiation detectors that significantly affect their response characteristics. Experiments have been conducted to characterize the electrical properties of GFETs and their responsivity to radiation of different types, such as visible, ultraviolet, X-ray, and gamma-ray photons, and alpha particles. The nature of graphene hysteretic effects under operational conditions has been studied. Spatially dependent sensitivity of GFETs to irradiation has been experimentally investigated using both a focused laser beam and focused X-ray microbeam. A model has been developed that deterministically simulates the mechanisms of charge transport within the GFET substrate and explains the experimental finding that the effective area of the GFET significantly exceeds the size of graphene. Monte Carlo simulations were also carried out to examine the efficacy of thin-film radiation detectors based on 10B-enriched boron nitride and Gd2O3 for neutron detection.

  5. Improved Hybrid Modeling of Spent Fuel Storage Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibber, Karl van

    This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The "gold standard" method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy at lower computational effort for many problems of energy, security, and economic importance.
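
    The CADIS machinery this abstract builds on can be illustrated compactly: a deterministic adjoint scalar flux acts as an importance map, from which a biased source and consistent birth weights are derived so that the biased source times the weight reproduces the analog source. A one-dimensional sketch (the exponential adjoint shape and the grid are stand-in assumptions, not Denovo/ADVANTG output):

```python
import numpy as np

# 1-D slab: forward source q on the left, detector response on the right.
x = np.linspace(0.0, 10.0, 200)
dx = x[1] - x[0]
q = np.where(x < 1.0, 1.0, 0.0)      # forward source density
phi_adj = np.exp(0.5 * x)            # stand-in adjoint (importance) flux

# CADIS response estimate: R = integral of q * phi_adj
R = np.sum(q * phi_adj) * dx

# Biased source pdf and consistent birth weights (weight-window centers)
q_hat = q * phi_adj / R              # integrates to 1 by construction
w = np.divide(R, phi_adj)            # target weights w = R / phi_adj

# The fair game is preserved: q_hat * w recovers the analog source q.
```

Particles are then born preferentially where the importance is high, but with correspondingly low weight, so the estimator stays unbiased while its variance drops near the detector.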

  6. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervades domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic or random data. It has been shown that such systems exhibit deterministic chaos. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Because of this, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed.
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
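
    The short-term predictability argument above rests on the largest Lyapunov exponent; for a one-dimensional map it can be estimated by averaging log|f'(x)| along an orbit. A minimal sketch using the logistic map as a stand-in system (not the dissertation's code); at r = 4 the exponent is known analytically to be ln 2:

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated by averaging log|f'(x)| = log|r*(1 - 2x)| along an orbit.
def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

lam = lyapunov_logistic(4.0)  # analytic value for r = 4 is ln 2, about 0.693
```

A positive exponent quantifies the divergence rate of nearby states, and its inverse sets the horizon beyond which prediction degrades, which is the quantity the dissertation's Lyapunov-based prediction methodology exploits.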

  7. INDEXING MECHANISM

    DOEpatents

    Kock, L.J.

    1959-09-22

    A device is presented for loading and unloading fuel elements containing material fissionable by neutrons of thermal energy. The device comprises a combination of mechanical features including a base, a lever pivotally attached to the base, an indexing plate on the base parallel to the plane of lever rotation and having a plurality of apertures, the apertures being disposed in rows, each aperture having a keyway, an index pin movably disposed on the lever normal to the plane of lever rotation, a key on the pin, a sleeve on the lever spaced from and parallel to the index pin, a pair of pulleys and a cable disposed between them, an open collar rotatably attached to the sleeve and linked to one of the pulleys, a pin extending from the collar, and a bearing movably mounted in the sleeve and having at least two longitudinal grooves in the outside surface.

  8. Visualization of Pulsar Search Data

    NASA Astrophysics Data System (ADS)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.

  9. High Performance Radiation Transport Simulations on TITAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M

    2012-01-01

    In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute-node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.

  10. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, Jasmina L.

    1993-01-01

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining, for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface; these parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values.
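
    The bookkeeping the patent describes, intersections with zone boundaries and the path lengths between them, can be sketched for the simplest case of nested circular zones crossed by one horizontal ray (a toy stand-in for the combinatorial-geometry module; the radii and ray offset are illustrative):

```python
import math

# A horizontal ray at perpendicular offset y crosses each circle of radius
# r > |y| at x = -sqrt(r^2 - y^2) and x = +sqrt(r^2 - y^2).
def chord_crossings(radii, y):
    xs = []
    for r in radii:
        if r > abs(y):
            half = math.sqrt(r * r - y * y)
            xs.extend([-half, half])
    return sorted(xs)

# Track segments: path lengths between successive boundary crossings.
def path_lengths(radii, y):
    xs = chord_crossings(radii, y)
    return [b - a for a, b in zip(xs, xs[1:])]

# Two nested zones of radii 1 and 2, ray through the center (y = 0):
# crossings at -2, -1, 1, 2 give track segments of length 1, 2, 1.
segs = path_lengths([1.0, 2.0], 0.0)
```

In the patented scheme the outer frame rotates, so this calculation is repeated for each ray orientation and the segment lengths feed the collision/transfer probability integrals.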

  11. Neutron transport analysis for nuclear reactor design

    DOEpatents

    Vujic, J.L.

    1993-11-30

    Replacing regular mesh-dependent ray tracing modules in a collision/transfer probability (CTP) code with a ray tracing module based upon combinatorial geometry of a modified geometrical module (GMC) provides a general geometry transfer theory code in two dimensions (2D) for analyzing nuclear reactor design and control. The primary modification of the GMC module involves generation of a fixed inner frame and a rotating outer frame, where the inner frame contains all reactor regions of interest, e.g., part of a reactor assembly, an assembly, or several assemblies, and the outer frame, with a set of parallel equidistant rays (lines) attached to it, rotates around the inner frame. The modified GMC module allows for determining, for each parallel ray (line), the intersections with zone boundaries, the path length between the intersections, the total number of zones on a track, the zone and medium numbers, and the intersections with the outer surface; these parameters may be used in the CTP code to calculate collision/transfer probability and cross-section values. 28 figures.

  12. Non-Thermal Spectra from Pulsar Magnetospheres in the Full Electromagnetic Cascade Scenario

    NASA Astrophysics Data System (ADS)

    Peng, Qi-Yong; Zhang, Li

    2008-08-01

    We simulated non-thermal emission from a pulsar magnetosphere within the framework of a full polar-cap cascade scenario by taking the acceleration gap into account, using the Monte Carlo method. For a given electric field parallel to open field lines located at some height above the surface of a neutron star, primary electrons were accelerated by parallel electric fields and lost their energies by curvature radiation; these photons were converted to electron-positron pairs, which emitted photons through subsequent quantum synchrotron radiation and inverse Compton scattering, leading to a cascade. In our calculations, the acceleration gap was assumed to be high above the stellar surface (about several stellar radii); the primary and secondary particles and photons emitted during the journey of those particles in the magnetosphere were traced using the Monte Carlo method. In such a scenario, we calculated the non-thermal photon spectra for different pulsar parameters and compared the model results for two normal pulsars and one millisecond pulsar with the observed data.

  13. Performance assessment of KORAT-3D on the ANL IBM-SP computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.

    1999-09-01

    The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D, which is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).

  14. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
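
    The contrast between a plain deterministic model and a delay-deterministic correction can be caricatured in a few lines: a standard haploid selection recursion, with the delayed variant holding the mutant frequency fixed for d generations to mimic the stochastic establishment lag of a new mutation (this toy recursion is our illustration, not the authors' exact model):

```python
# Haploid selection: a variant at frequency x with selection coefficient s
# follows x' = x(1+s) / (1 + s*x) each generation. The delayed version keeps
# x at its initial value for `delay` generations before selection acts.
def trajectory(x0, s, generations, delay=0):
    xs = [x0]
    for t in range(generations):
        x = xs[-1]
        if t >= delay:
            x = x * (1 + s) / (1 + s * x)
        xs.append(x)
    return xs

plain = trajectory(0.01, 0.1, 100)
delayed = trajectory(0.01, 0.1, 100, delay=20)
# Fitting the plain model to delayed data would misplace the variant's origin
# or misestimate s, which is the kind of error the paper's model corrects.
```

Comparing the two trajectories against observed frequency data is a quick way to spot the scenarios, noted in the abstract, where a regular deterministic model is misleading.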

  15. Development of neutron/gamma generators and a polymer semiconductor detector for homeland security applications

    NASA Astrophysics Data System (ADS)

    King, Michael Joseph

    Instrumentation development is essential to the advancement and success of homeland security systems. Active interrogation techniques that scan luggage and cargo containers for shielded special nuclear materials or explosives hold great potential for halting further terrorist attacks. The development of more economical, compact and efficient source and radiation detection devices will facilitate scanning of all containers and luggage while maintaining high throughput and low false-alarm rates. Innovative ion sources were developed for two novel, specialized neutron generating devices and initial generator tests were performed. In addition, a low-energy-acceleration gamma generator was developed and its performance characterized. Finally, an organic semiconductor was investigated for direct fast neutron detection. A main part of the thesis work was the development of ion sources, crucial components of the neutron/gamma generator development. The use of an externally driven radio-frequency antenna allows the ion source to generate high beam currents with high monatomic species fractions while maintaining low operating pressures, parameters advantageous for neutron generators. A dual "S"-shaped induction antenna was developed to satisfy the high-current and large-extraction-area requirements of the high-intensity neutron generator. The dual antenna arrangement generated a suitable current density of 28 mA/cm² at practical RF power levels. The stringent requirements of the Pulsed Fast Neutron Transmission Spectroscopy neutron generator necessitated the development of a specialized ten-window ion source of toroidal shape with a narrow neutron production target at its center. An innovative ten-antenna arrangement with parallel capacitors was developed for driving the multi-antenna arrangement, and uniform coupling of RF power to all ten antennas was achieved.
    To address the desire for low-impact, low-radiation-dose active interrogation systems, research was performed on mono-energetic gamma generators that operate at low acceleration energies and leverage neutron generator technologies. The dissertation focused on the experimental characterization of the generator performance and involved MCNPX simulations to evaluate and analyze the experimental results. The emission of the 11.7 MeV gamma-rays was observed to be slightly anisotropic, and the gamma yield was measured to be 2.0 × 10⁵ gammas/s·mA. The lanthanum hexaboride target suffered beam damage from a high-power-density beam; however, this may be overcome by sweeping the beam across a larger target area. The efficient detection of fast neutrons is vital to active interrogation techniques for the detection of both SNM and explosives. Novel organic semiconductors are air-stable, low-cost materials that demonstrate direct electronic particle detection. As part of the development of a pi-conjugated organic polymer for fast neutron detection, charge generation and collection properties were investigated. By devising a dual thin-film detector test arrangement, charge collection was measured for high-energy protons traversing the dual detector arrangement, which allowed the creation of variable track lengths by tilting the detector. The results demonstrated that an increase in track length resulted in decreased signal collection. This can be understood by assuming charge carrier transport along the track instead of along the field lines, made possible by the filling of traps. However, this charge collection mechanism may be insufficient to generate a useful signal.
This dissertation has explored the viability of a new generation of radiation sources and detectors, where the newly developed ion source technologies and prototype generators will further enhance the capabilities of existing threat detection systems and promote the development of cutting-edge detection technologies.

  16. Coupled Effects of non-Newtonian Rheology and Aperture Variability on Flow in a Single Fracture

    NASA Astrophysics Data System (ADS)

    Di Federico, V.; Felisa, G.; Lauriola, I.; Longo, S.

    2017-12-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing and drilling operations, EOR, environmental remediation, and for understanding magma intrusions. An important step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is spatially variable. A large bibliography exists on Newtonian and non-Newtonian flow in variable-aperture fractures. Ultimately, stochastic or deterministic modeling leads to the flowrate under a given pressure gradient as a function of the parameters describing the aperture variability and the fluid rheology. Typically, analytical or numerical studies are performed adopting a power-law (Ostwald-de Waele) model. Yet the power-law model, routinely used e.g. for hydro-fracturing modeling, does not characterize real fluids at low and high shear rates. A more appropriate rheological model is provided by, e.g., the four-parameter Carreau constitutive equation, which is in turn approximated by the more tractable truncated power-law model. Moreover, fluids of interest may exhibit yield stress, which requires the Bingham or Herschel-Bulkley model. This study employs different rheological models in the context of flow in variable-aperture fractures, with the aim of understanding the coupled effect of rheology and aperture spatial variability with a simplified model. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and either i) perpendicular or ii) parallel to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results for the different rheological models are compared with those obtained for the pure power law. The adoption of the latter model leads to overestimation of the flowrate, more so for large aperture variability. The presence of yield stress also induces significant changes in the resulting flowrate for an assigned external pressure gradient.
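
    The rheological contrast behind the flowrate overestimation is easy to see numerically: the Ostwald-de Waele power law is unbounded at low shear rate and drops below any high-shear plateau, while the Carreau model is bounded at both ends. A sketch with illustrative parameter values (not fitted to any particular fluid):

```python
import numpy as np

# Carreau viscosity: eta(g) = eta_inf + (eta0 - eta_inf)*(1 + (lam*g)^2)^((n-1)/2)
def carreau(gdot, eta0=1.0, eta_inf=1e-3, lam=1.0, n=0.5):
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)

# Ostwald-de Waele power law: eta = m * gdot^(n-1)
def power_law(gdot, m=1.0, n=0.5):
    return m * gdot ** (n - 1.0)

low, high = 1e-4, 1e6
# At low shear the power law blows up while Carreau plateaus at eta0;
# at high shear the power law falls below Carreau's eta_inf floor,
# underestimating viscosity and hence overestimating flowrate there.
```

In a variable-aperture fracture both regimes are sampled simultaneously, which is why the choice of constitutive law interacts with the aperture statistics.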

  17. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási and co-workers speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analyse the effect of degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analyse the controllability of two of these deterministic networks, the (1,3)-flower and the (2,2)-flower, by exact controllability theory in detail and give exact results for the minimum number of driver nodes for the two networks. In simulations, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
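
    Exact controllability theory, as used here, reduces for an unweighted, undirected network to computing the maximum geometric multiplicity of the adjacency eigenvalues: N_D = max over lambda of (N - rank(lambda*I - A)). A sketch on two small graphs (the (x,y)-flower construction itself is omitted; these example graphs are our illustration):

```python
import numpy as np

# Minimum number of driver nodes via exact controllability for an
# undirected, unweighted network: the maximum geometric multiplicity
# mu(lambda) = N - rank(lambda*I - A) over the adjacency spectrum.
def min_drivers(A, tol=1e-6):
    n = A.shape[0]
    best = 1
    for lam in np.linalg.eigvalsh(A):
        mu = n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        best = max(best, mu)
    return best

# Path P3 has an all-distinct spectrum, so one driver suffices; the star
# K_{1,3} has eigenvalue 0 with multiplicity 2, so it needs two drivers.
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
K13 = np.zeros((4, 4))
K13[0, 1:] = 1.0
K13 += K13.T
```

Applying min_drivers to graphs with matching degree sequences is exactly how one exhibits the paper's conclusion that degree distribution alone does not determine controllability.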

  18. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, and the most probable travel times are also lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  19. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a relationship between the safety factor and first-order reliability. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
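
    The safety-factor/safety-index relationship referred to above can be illustrated for the textbook case of independent, normally distributed resistance and stress; the numerical values below are invented for illustration and are not from the report.

    ```python
    import math

    def safety_index(mu_R, sigma_R, mu_S, sigma_S):
        """First-order reliability (safety) index for independent normal
        resistance R and applied stress S:
        beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)."""
        return (mu_R - mu_S) / math.sqrt(sigma_R ** 2 + sigma_S ** 2)

    # Hypothetical values: mean strength 60 ksi, mean applied stress 40 ksi.
    mu_R, sigma_R = 60.0, 3.0
    mu_S, sigma_S = 40.0, 4.0

    beta = safety_index(mu_R, sigma_R, mu_S, sigma_S)
    central_sf = mu_R / mu_S  # the familiar deterministic (central) safety factor
    print(f"central safety factor = {central_sf:.2f}, safety index beta = {beta:.2f}")
    ```

    The same margin (mu_R − mu_S) that sets the central safety factor appears in the numerator of beta, which is the sense in which a safety factor in statistical format embeds in the safety index.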

  20. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler, with a first-come-first-serve (FCFS) scheme as the baseline. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, at varying levels of congestion. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as uncertainty that increases for later aircraft. Results indicate that the deterministic approach consistently performs better than FCFS in both system performance and predictability.
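
    The freeze-and-retime step can be sketched as follows; the aircraft data, wake classes, pairwise separations, and the "optimized" sequence are all hypothetical stand-ins, not the paper's inputs.

    ```python
    def retime(sequence, ready, wclass, sep):
        """Earliest runway times for a frozen sequence, with the minimum
        separation looked up by (leader class, follower class)."""
        times, prev = {}, None
        for ac in sequence:
            if prev is None:
                times[ac] = ready[ac]
            else:
                times[ac] = max(ready[ac], times[prev] + sep[(wclass[prev], wclass[ac])])
            prev = ac
        return times

    ready  = {"AC1": 0, "AC2": 5, "AC3": 8, "AC4": 9}          # earliest availability (s)
    wclass = {"AC1": "H", "AC2": "H", "AC3": "S", "AC4": "S"}  # wake class
    sep = {("H", "H"): 60, ("H", "S"): 120, ("S", "H"): 60, ("S", "S"): 60}

    fcfs   = retime(sorted(ready, key=ready.get), ready, wclass, sep)
    frozen = retime(["AC3", "AC4", "AC1", "AC2"], ready, wclass, sep)  # hypothetical optimized order

    delay = lambda sched: sum(t - ready[ac] for ac, t in sched.items())
    print("FCFS delay:", delay(fcfs), "s;  frozen-sequence delay:", delay(frozen), "s")
    ```

    Because the separations are pair-dependent, reordering to avoid heavy-to-small transitions reduces total delay even though the sequence, once frozen, is retimed by the same simple forward pass as FCFS.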

  1. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  2. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-12-29

    Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with the simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which is itself embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, problem 2 having a straight duct and problem 3 a dog-leg-shaped one. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes with thread-based parallelism among the processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  3. Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Siegel, Andrew R.

    The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width to achieve a vector efficiency greater than 90%. Lastly, when the execution times for events are allowed to vary, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration.
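
    The constant-event-time case can be mimicked with a toy lockstep model: lanes only do useful work while their particle is still alive, so a larger bank keeps the vector fuller for longer. This is an illustrative reconstruction, not the authors' model, and the absorption probability is an assumption.

    ```python
    import math
    import random

    def vector_efficiency(bank_size, vector_width, absorb_prob=0.1, seed=42):
        """Toy lockstep model: each event-iteration processes every surviving
        particle in vector_width-lane chunks; efficiency is the fraction of
        issued lanes that carried a live particle."""
        rng = random.Random(seed)
        alive, useful, total = bank_size, 0, 0
        while alive > 0:
            passes = math.ceil(alive / vector_width)
            useful += alive
            total += passes * vector_width
            # Each particle survives this event with probability 1 - absorb_prob.
            alive = sum(1 for _ in range(alive) if rng.random() > absorb_prob)
        return useful / total

    for bank in (64, 320, 1280, 5120):
        print(f"bank {bank:5d}, width 64 -> efficiency {vector_efficiency(bank, 64):.2f}")
    ```

    Efficiency climbs toward unity as the bank grows relative to the vector width, reproducing the qualitative trend above, though the exact numbers depend on the assumed survival distribution.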

  5. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons; using planar-aligned cholesteric liquid crystal hosts as 1-D photonic-band-gap microcavities tunable to the emitter fluorescence band to increase source efficiency, using liquid crystal technology to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  6. Ultra-small-angle neutron scattering with azimuthal asymmetry

    DOE PAGES

    Gu, X.; Mildner, D. F. R.

    2016-05-16

    Small-angle neutron scattering (SANS) measurements from thin sections of rock samples such as shales demand as great a scattering vector range as possible because the pores cover a wide range of sizes. The limitation of the scattering vector range for pinhole SANS requires slit-smeared ultra-SANS (USANS) measurements that need to be converted to pinhole geometry. The desmearing algorithm is only successful for azimuthally symmetric data. Scattering from samples cut parallel to the plane of bedding is symmetric, exhibiting circular contours on a two-dimensional detector. Samples cut perpendicular to the bedding show elliptically dependent contours with the long axis corresponding to the normal to the bedding plane. A method is given for converting such asymmetric data collected on a double-crystal diffractometer for concatenation with the usual pinhole-geometry SANS data. Furthermore, the aspect ratio from the SANS data is used to modify the slit-smeared USANS data to produce quasi-symmetric contours. Rotation of the sample about the incident beam may result in symmetric data but cannot extract the same information as obtained from pinhole geometry.
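
    The role of the aspect ratio can be pictured by mapping points of an elliptical iso-intensity contour onto an equivalent circular one. The mapping below is only a geometric illustration under the assumption of a single constant aspect ratio; it is not the authors' full conversion procedure, and the contour values are invented.

    ```python
    import math

    def equivalent_q(qx, qy, aspect_ratio):
        """Map a point on an elliptical iso-intensity contour (long axis taken
        along the bedding normal, here y) to the radius of the circle that the
        contour would have after rescaling by the aspect ratio."""
        return math.sqrt(qx ** 2 + (qy / aspect_ratio) ** 2)

    # Illustrative contour with an assumed aspect ratio of 2:
    ar = 2.0
    points = [(0.010, 0.0), (0.0, 0.020), (0.007, 0.01428)]
    for qx, qy in points:
        print(f"({qx:.4f}, {qy:.5f}) -> |Q|_eff = {equivalent_q(qx, qy, ar):.4f}")
    ```

    All three points of the ellipse collapse onto (approximately) the same effective |Q|, which is the sense in which the rescaled data become quasi-symmetric and amenable to desmearing.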

  8. Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, S.

    2002-07-01

    As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only provide the run-time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
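
    The sub-cycle idea can be sketched generically: the thermal-hydraulic code keeps its large step, and the number of neutronics sub-steps is chosen so that a power-change proxy stays below a tolerance per sub-step. The solver stand-in, tolerance, and transient below are assumptions, not the paper's criteria or codes.

    ```python
    import math

    def coupled_march(t_end, dt_th, power_rate, tol=0.05):
        """March thermal-hydraulic (TH) steps of size dt_th; pick the number of
        neutronics sub-steps so the relative power change per sub-step,
        estimated from power_rate(t) (a stand-in for the neutronics solver),
        stays below tol."""
        history = []
        for i in range(round(t_end / dt_th)):
            t = i * dt_th
            rate = abs(power_rate(t))                      # relative change, 1/s
            n_sub = max(1, math.ceil(rate * dt_th / tol))  # sub-cycle count
            for k in range(n_sub):
                history.append((t + (k + 1) * dt_th / n_sub, n_sub))
        return history

    # Hypothetical transient: a sharp reactivity insertion around t = 1.0 s.
    ramp = lambda t: 2.0 if 0.9 <= t < 1.1 else 0.02
    hist = coupled_march(2.0, 0.2, ramp)
    print("sub-cycled steps:", [(round(t, 3), n) for t, n in hist if n > 1])
    ```

    Away from the insertion the coupling runs one neutronics step per TH step (the Leap Frog limit); only the step containing the sudden change is automatically subdivided.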

  9. Dynamical behavior of a single polymer chain under nanometric confinement

    NASA Astrophysics Data System (ADS)

    Lagrené, K.; Zanotti, J.-M.; Daoud, M.; Farago, B.; Judeinstein, P.

    2010-10-01

    We address the dynamical behavior of a single polymer chain under nanometric confinement. We consider a polymer melt made of a mixture of hydrogenated and deuterated high-molecular-mass Poly(Ethylene Oxide) (PEO). The confining material is a membrane of Anodic Aluminum Oxide (AAO), a macroscopically highly ordered confining system made of parallel cylindrical channels. We use Neutron Spin-Echo (NSE) under the Zero Average Contrast (ZAC) condition to, all at once, i) match the intense detrimental elastic SANS (Small Angle Neutron Scattering) contribution of the porous AAO to the total intermediate scattering function I(Q,t) and ii) measure the Q dependence of the dynamical modes of a single chain under confinement. The polymer dynamics is probed over extremely broad spatial ([2.2×10⁻² Å⁻¹, 0.2 Å⁻¹]) and temporal ([0.1 ns, 600 ns]) ranges. We do not detect any influence of confinement on the polymer dynamics. This result is discussed in the framework of the debate on the existence of a "corset effect" recently suggested by NMR relaxometry data.

  10. Neutron powder diffraction and molecular simulation study of the structural evolution of ammonia borane from 15 to 340 K.

    PubMed

    Hess, Nancy J; Schenter, Gregory K; Hartman, Michael R; Daemen, Luc L; Proffen, Thomas; Kathmann, Shawn M; Mundy, Christopher J; Hartl, Monika; Heldebrant, David J; Stowe, Ashley C; Autrey, Tom

    2009-05-14

    The structural behavior of (11)B-, (2)H-enriched ammonia borane, ND(3)(11)BD(3), over the temperature range from 15 to 340 K was investigated using a combination of neutron powder diffraction and ab initio molecular dynamics simulations. In the low-temperature orthorhombic phase, progressive displacement of the borane group under the amine group was observed, leading to alignment of the B-N bond nearly parallel to the c-axis. The orthorhombic-to-tetragonal structural phase transition at 225 K is marked by a dramatic change in the dynamics of both the amine and borane groups. The resulting hydrogen disorder is difficult to extract from the metrics provided by Rietveld refinement but is readily apparent in molecular dynamics simulation and in difference Fourier transform maps. At the phase transition, Rietveld refinement does indicate a disruption of one of the two dihydrogen bonds that link adjacent ammonia borane molecules. Metrics determined by Rietveld refinement are in excellent agreement with those determined from molecular simulation. This study highlights the valuable insights added by coupled experimental and computational studies.

  11. Real-Time, Fast Neutron Coincidence Assay of Plutonium With a 4-Channel Multiplexed Analyzer and Organic Scintillators

    NASA Astrophysics Data System (ADS)

    Joyce, Malcolm J.; Gamage, Kelum A. A.; Aspinall, M. D.; Cave, F. D.; Lavietes, A.

    2014-06-01

    The design, principle of operation and the results of measurements made with a four-channel organic scintillator system are described. The system comprises four detectors and a multiplexed analyzer for the real-time parallel processing of fast neutron events. The function of the real-time, digital multiple-channel pulse-shape discrimination analyzer is described together with the results of laboratory-based measurements with 252Cf, 241Am-Li and plutonium. The analyzer is based on a single-board solution with integrated high-voltage supplies and graphical user interface. It has been developed to meet the requirements of nuclear materials assay of relevance to safeguards and security. Data are presented for the real-time coincidence assay of plutonium in terms of doubles count rate versus mass. This includes an assessment of the limiting mass uncertainty for coincidence assay based on a 100 s measurement period and samples in the range 0-50 g. Measurements of count rate versus order of multiplicity for 252Cf and 241Am-Li and combinations of both are also presented.

  12. Neutron powder diffraction refinement of the nuclear and magnetic structures of HoNi{sub 2}B{sub 2}C at R.T., 10, 5.1, and 2.2 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.; Grigereit, T.E.; Lynn, J.W.

    The nuclear and magnetic structures of HoNi{sub 2}B{sub 2}C have been investigated by neutron powder diffraction at room temperature and at 10, 5.1 and 2.2 K. The compound crystallizes with the symmetry of space group I4/mmm and has room-temperature lattice parameters a = 3.5170(1) and c = 10.5217(3) {angstrom}. No phase transitions of the nuclear structure have been observed in the range of temperatures examined. Magnetic peaks begin to appear at about 8 K. The magnetic structure is the superposition of two configurations, one in which ferromagnetic sheets of holmium spins parallel to the a-b plane are coupled antiferromagnetically along the c-axis, and another in which the ferromagnetic planes are rotated away from the antiparallel configuration to give an incommensurate helicoidal structure with a period approximately equal to twelve times the length of the c-axis. The helicoidal structure competes with superconductivity while the antiferromagnetism coexists with it.

  13. An Update on Improvements to NiCE Support for PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Andrew; McCaskey, Alexander J.; Billings, Jay Jay

    2015-09-01

    The Department of Energy Office of Nuclear Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program has supported the development of the NEAMS Integrated Computational Environment (NiCE), a modeling and simulation workflow environment that provides services and plugins to facilitate tasks such as code execution, model input construction, visualization, and data analysis. This report details the development of workflows for the reactor core neutronics application PROTEUS. This advanced neutronics application (primarily developed at Argonne National Laboratory) aims to improve nuclear reactor design and analysis by providing an extensible and massively parallel finite-element solver for current and advanced reactor fuel neutronics modeling. The integration of PROTEUS-specific tools into NiCE is intended to make the advanced capabilities that PROTEUS provides more accessible to the nuclear energy research and development community. This report will detail the work done to improve existing PROTEUS workflow support in NiCE. We demonstrate and discuss these improvements, including the development of flexible IO services, an improved interface for input generation, and the addition of advanced Fortran development tools natively in the platform.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

    Most high performance computing systems being deployed currently and envisioned for the future are based on heavy parallelism across many computational nodes and many concurrent cores. These heavily parallel systems often have relatively little memory per core but large amounts of computing capability. This places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even more significant in fully coupled multiphysics simulations, which require that simulations of many physical phenomena be carried out concurrently on individual processing nodes, further reducing the amount of memory available for storage of Monte Carlo data. As such, there has been a move toward on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have been previously developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved resonance regime and the unresolved resonance regime, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on-the-fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. This work proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to be able to treat the full S(α,β) kernel for graphite, to assist in high-fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated with the continuous analytic free-gas model.
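
    The rejection step itself is generic and can be sketched with a stand-in kernel; the Maxwellian-like target below is not the graphite S(α,β) distribution, and the envelope bound M is specific to this made-up target/envelope pair.

    ```python
    import math
    import random

    def sample_outgoing_energy(rng, kT=0.0253):
        """Rejection sampling of an outgoing energy (eV) from a stand-in kernel.

        Target: p(E) = (2/sqrt(pi)) sqrt(E) / kT**1.5 * exp(-E/kT), a
        Maxwellian-like placeholder for the S(alpha, beta)-derived distribution.
        Envelope: exponential with mean 2*kT. The ratio target/envelope peaks
        at E = kT with value (4/sqrt(pi)) * exp(-1/2) ~ 1.37, so M = 1.4 bounds it.
        """
        M = 1.4
        while True:
            e = rng.expovariate(1.0 / (2.0 * kT))                 # candidate from envelope
            envelope = math.exp(-e / (2.0 * kT)) / (2.0 * kT)
            target = (2.0 / math.sqrt(math.pi)) * math.sqrt(e) / kT ** 1.5 * math.exp(-e / kT)
            if rng.random() * M * envelope < target:              # accept w.p. target/(M*envelope)
                return e

    rng = random.Random(7)
    samples = [sample_outgoing_energy(rng) for _ in range(20000)]
    mean_e = sum(samples) / len(samples)
    print(f"mean outgoing energy {mean_e:.4f} eV (Maxwellian expectation 1.5 kT = {1.5 * 0.0253:.4f})")
    ```

    In an on-the-fly setting the same loop would evaluate the true kernel at the candidate energy instead of storing the full tabulated distribution, which is the memory trade the abstract describes.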

  15. Neutronics qualification of the Jules Horowitz reactor fuel by interpretation of the VALMONT experimental program - Transposition of the uncertainties on the reactivity of JHR with JEF2.2 and JEFF3.1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leray, O.; Hudelot, J. P.; Antony, M.

    2011-07-01

    The new European material testing Jules Horowitz Reactor (JHR), currently under construction in Cadarache center (CEA France), will use LEU (20% enrichment in {sup 235}U) fuels (U{sub 3}Si{sub 2} for the start up and UMoAl in the future) which are quite different from the industrial oxide fuel, for which an extensive neutronics qualification database has been established. The HORUS3D/N neutronics calculation scheme, used for the design and safety studies of the JHR, is being developed within the framework of a rigorous verification-validation-qualification methodology. In this framework, the experimental VALMONT (Validation of Aluminium Molybdenum uranium fuel for Neutronics) program has beenmore » performed in the MINERVE facility of CEA Cadarache (France), in order to qualify the capability of HORUS3D/N to accurately calculate the reactivity of the JHR reactor. The MINERVE facility using the oscillation technique provides accurate measurements of reactivity effect of samples. The VALMONT program includes oscillations of samples of UAl{sub x}/Al and UMo/Al with enrichments ranging from 0.2% to 20% and Uranium densities from 2.2 to 8 g/cm{sup 3}. The geometry of the samples and the pitch of the experimental lattice ensure maximum representativeness with the neutron spectrum expected for JHR. By comparing the effect of the sample with the one of a known fuel specimen, the reactivity effect can be measured in absolute terms and be compared to computational results. Special attention was paid to the rigorous determination and reduction of the experimental uncertainties. The calculational analysis of the VALMONT results was performed with the French deterministic code APOLLO2. A comparison of the impact of the different calculation methods, data libraries and energy meshes that were tested is presented. 
The interpretation of the VALMONT experimental program allowed the qualification of the JHR fuel UMoAl8 (with an enrichment of 19.75% {sup 235}U) using the MINERVE-dedicated interpretation tool PIMS. The study of energy meshes and evaluations favored the JEFF3.1.1/SHEM scheme, which leads to a better calculation of the reactivity effect of the VALMONT samples. Then, in order to quantify the impact of the uncertainties linked to the basic nuclear data, their propagation from the cross-section measurements to the final computational result was analysed in a rigorous way using a nuclear data re-estimation method based on Gauss-Newton iterations. This study concludes that the prior uncertainties due to nuclear data (uranium, aluminium, beryllium and water) on the reactivity at the Beginning Of Cycle (BOC) of the JHR core reach 1217 pcm at 2{sigma}; the largest single contribution to the uncertainty on the JHR reactivity is due to aluminium. (authors)
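
The propagation of nuclear-data uncertainty to an integral parameter such as reactivity is commonly summarized by the "sandwich" rule, var = S V Sᵀ, with S the sensitivity vector and V the nuclear-data covariance matrix. The sensitivities and covariances below are invented for illustration and are not the VALMONT/JHR values.

```python
import math

def sandwich_variance(S, V):
    """Sandwich rule: var = S V S^T for a sensitivity vector S (pcm per %
    change of each nuclear-data parameter) and covariance matrix V (%^2)."""
    n = len(S)
    return sum(S[i] * V[i][j] * S[j] for i in range(n) for j in range(n))

# Hypothetical sensitivities of core reactivity to three nuclear-data
# parameters, with a correlated pair in the covariance matrix.
S = [250.0, -120.0, 40.0]
V = [[4.0, 1.0, 0.0],
     [1.0, 9.0, 0.0],
     [0.0, 0.0, 1.0]]

sigma = math.sqrt(sandwich_variance(S, V))
print(f"1-sigma reactivity uncertainty = {sigma:.0f} pcm, 2-sigma = {2 * sigma:.0f} pcm")
```

A re-estimation (adjustment) step such as the Gauss-Newton procedure cited above updates S-weighted parameters against the integral measurements, shrinking V and hence the propagated variance.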

  16. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  18. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  19. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explorable with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented with efficient algorithms requiring the shortest possible execution time, to avoid excessively enlarging the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics, DNA transcription regulation. The simulated dynamic profiles of the reagents' abundances and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and of the software tool.
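
The hysteresis classification described above can be sketched directly: in the moderate band a reaction keeps whatever label it had when it entered. The thresholds and waiting times below are illustrative, not MoBioS parameters.

```python
def classify(waiting_time, previous, t_low, t_high):
    """Partition a reaction by its expected waiting time, with hysteresis:
    inside the moderate band [t_low, t_high] the previous label is kept."""
    if waiting_time < t_low:
        return "deterministic"   # fast: model with continuous rate equations
    if waiting_time > t_high:
        return "stochastic"      # slow: model with the Gillespie First Reaction Method
    return previous              # moderate: hysteresis, keep the old label

# Hypothetical thresholds (s) and a reaction drifting across the band:
t_low, t_high = 1e-3, 1.0
label, trace = "stochastic", []
for wt in (5.0, 0.5, 1e-4, 0.5, 5.0):
    label = classify(wt, label, t_low, t_high)
    trace.append(label)
print(trace)
```

Note the middle visits to wt = 0.5 s: the same waiting time is treated stochastically on the way down and deterministically on the way up, which is exactly what prevents rapid back-and-forth switching near a single threshold.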

  20. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
The failed region patterns between the methods are similar; however, differences arise because stress redistribution following element elimination causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and more sensitive to belt loads. The probabilistic failed-region method allows increased postprocessing capability with respect to age. The probabilistic failed-region method predicted more failed regions than the deterministic failed-region method due to force distribution differences.
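The contrast between the two injury criteria can be illustrated with a toy sketch. Only the 1.8% plastic strain threshold comes from the abstract; the logistic failure function and its coefficients are hypothetical stand-ins, not the published GHBMC age-based functions.

```python
import math

def failure_probability(strain, age, beta0=-10.0, beta1=500.0, beta2=0.05):
    """Hypothetical logistic failure function P(fail | strain, age).
    Coefficients are illustrative only, not the published values."""
    z = beta0 + beta1 * strain + beta2 * age
    return 1.0 / (1.0 + math.exp(-z))

def deterministic_failed(strain, threshold=0.018):
    """Deterministic rule: an element fails once 1.8% effective
    plastic strain is exceeded."""
    return strain > threshold

# Compare both criteria for a 45-year-old occupant over a range of strains:
# the deterministic rule flips abruptly at the threshold, while the
# probabilistic criterion grades failure risk continuously.
for strain in (0.010, 0.018, 0.025):
    p = failure_probability(strain, age=45)
    print(f"strain={strain:.3f}  P(fail)={p:.2f}  deterministic={deterministic_failed(strain)}")
```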

  1. Multipurpose epithermal neutron beam on new research station at MARIA research reactor in Swierk-Poland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gryzinski, M.A.; Maciak, M.

    MARIA is an open-pool research reactor, which makes it possible to install a uranium fission converter on the periphery of the core: far enough not to perturb the core reactivity but close enough to produce a high flux of fast neutrons. A special design of the converter is now under construction. It is planned to set up a research stand based on this uranium converter in the near future: in 2015 the MARIA reactor infrastructure should be ready (preparation started in 2013), in 2016 the neutron beam starts, and in 2017 the stand opens for material and biological research or for medical training concerning BNCT. Horizontal channel number H2 at the MARIA research reactor in Poland, unused for many years, is going to be prepared as part of this unique stand. The characteristics of the neutron beam will be a significant advantage of the facility. A high neutron flux at the level of 2×10⁹ cm⁻²s⁻¹ will be obtainable with a uranium neutron converter located 90 cm from the reactor core fuel elements (still inside the reactor core basket, between the so-called core reflectors). Through reactions of core neutrons with the converter's U₃Si₂ material, it will produce a high flux of fast neutrons. After conversion, the neutrons will be collimated and moderated in the channel by a special set of filters and moderators. At the end of the H2 channel, i.e., at the entrance to the research room, the neutron energy will be in the epithermal range with an intensity at least at the level required for BNCT (2×10⁹ cm⁻²s⁻¹). For other purposes the neutron flux density can be smaller. The possibility of changing the type and number of installed filters/moderators, which enables different beam properties (neutron energy spectrum, neutron-gamma ratio, and beam profile and shape), is taken into account. The H2 channel is located in a separate room adjacent to two other empty rooms being prepared as research laboratories (200 m²).
It is planned to create a fully equipped complex facility in which various experiments can be performed on the intense neutron beam. The epithermal neutron beam enables development across the full spectrum of materials research, for example shielding concrete tests or improvement of electronic device construction. In light of recent reports on the construction of accelerators for Boron Neutron Capture Therapy (BNCT), the method has the opportunity to become a useful and successful treatment for brain and other types of cancer not treatable with well-known medical methods. In Europe there is no other epithermal neutron source that could be used throughout the year for training and research by scientists working on BNCT, which makes the stand unique in Europe. Our research group, which specializes in mixed radiation dosimetry around nuclear and medical facilities, would also be able to carry out research on new detectors and methods of measurement for radiological protection and in-beam (therapeutic) dosimetry. Another group of scientists from the National Centre for Nuclear Research, where the MARIA research reactor is located, is involved in research on gamma detector systems. There is an idea to develop Prompt-gamma Single Photon Emission Computed Tomography (Pg-SPECT). This method could be used as an imaging system for compounds emitting gamma rays after nuclear reactions with thermal neutrons, e.g., for boron concentration in BNCT. Inside the room where the H2 channel is located there is another horizontal channel, H1, which is also unused. Simultaneously with the construction of the H2 stand it will be possible to create a special pneumatic sample transfer system inside the H1 channel for irradiating material samples in the vicinity of the core, i.e., in the distal part of the H1 channel. This would expand the scope of research at the planned neutron station.
Secondly, it is planned to equip both stands with a movable positioning system, a video system, and facilities for animal experiments (anaesthesia, vital-signs monitoring, imaging devices, positioning). All of the above make the constructed station unique in the world (a uranium fission converter-based beam) and the only neutron beam of such intensity in Europe. Moreover, implementation of the station would allow the development of research on a number of issues for researchers from all over Europe. One very important advantage of the station is the undisturbed exploitation of the reactor and of the other vertical and horizontal channels. The MARIA reactor operates 6000 hours per year, and that amount of time will be available for research on the neutron station. It has to be underlined that the new neutron station will work in parallel with all other ventures. (authors)

  2. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    According to the strong nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
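The two-stage training idea (global stochastic search followed by local deterministic refinement) can be sketched on a toy one-dimensional objective; the loss function, population size, and learning rate below are stand-ins, not the paper's WNN setup.

```python
import random

def loss(w):
    # Toy 1-D objective standing in for the WNN training error.
    return (w - 3.0) ** 2 + 0.5

def ga_initialise(pop_size=20, generations=30, rng=None):
    """Crude genetic search: keep the best half, mutate to refill.
    Plays the role of the improved GA that supplies initial weights."""
    rng = rng or random.Random(1)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        survivors = pop[: pop_size // 2]
        pop = survivors + [w + rng.gauss(0, 0.5) for w in survivors]
    return min(pop, key=loss)

def gradient_descent(w, lr=0.1, steps=100):
    """Local refinement from the GA's starting point; the gradient of the
    toy loss is known analytically here (d/dw = 2*(w - 3))."""
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w

w0 = ga_initialise()            # global, stochastic stage
w_final = gradient_descent(w0)  # local, deterministic stage
```

The GA stage is robust to poor initialisation but slow to converge precisely; handing its best candidate to gradient descent combines the two strengths, which is the essence of the hybrid training described above.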

  3. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity

    PubMed Central

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-01-01

    Most previous quantum computation schemes make use of only one degree of freedom (DoF) of photons, although an experimental system may possess various DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-NOT (CNOT) gates operating on the polarization and spatial DoFs of two-photon or one-photon systems. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus, half of the quantum simulation resources may be saved in quantum applications when more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to complete various applications, including quantum teleportation and quantum superdense coding. PMID:27424767

  4. Quantum computation based on photonic systems with two degrees of freedom assisted by the weak cross-Kerr nonlinearity.

    PubMed

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong

    2016-07-18

    Most previous quantum computation schemes make use of only one degree of freedom (DoF) of photons, although an experimental system may possess various DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-NOT (CNOT) gates operating on the polarization and spatial DoFs of two-photon or one-photon systems. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus, half of the quantum simulation resources may be saved in quantum applications when more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to complete various applications, including quantum teleportation and quantum superdense coding.

  5. Workshop on Structural Dynamics and Control Interaction of Flexible Structures

    NASA Technical Reports Server (NTRS)

    Davis, L. P.; Wilson, J. F.; Jewell, R. E.

    1987-01-01

    The Hubble Space Telescope features the most exacting line-of-sight jitter requirement thus far imposed on a spacecraft pointing system. Consideration of the fine pointing requirements prompted an attempt to isolate the telescope from the low-level vibration disturbances generated by the attitude control system's reaction wheels. The primary goal was to provide isolation from the axial component of wheel disturbance without compromising the control system bandwidth. A passive isolation system employing metal springs in parallel with viscous fluid dampers was designed, fabricated, and space qualified. Stiffness and damping characteristics are deterministic, controlled independently, and were demonstrated to remain constant over at least five orders of magnitude of input disturbance. The damping remained purely viscous even at the data collection threshold of 0.16 × 10⁻⁶ in of input displacement, a level much lower than the anticipated Hubble Space Telescope disturbance amplitude. Vibration attenuation goals were attained, and ground tests of the vehicle demonstrated that the isolators are transparent to the attitude control system.
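The behavior of a metal spring in parallel with a viscous damper can be illustrated with the textbook single-degree-of-freedom transmissibility formula; this is standard isolator theory under stated assumptions, not the Hubble isolator's actual parameters.

```python
import math

def transmissibility(freq_ratio, zeta):
    """Transmissibility of a spring-plus-viscous-damper (Kelvin-Voigt)
    isolator: r = disturbance frequency / natural frequency,
    zeta = viscous damping ratio."""
    r, z = freq_ratio, zeta
    num = 1.0 + (2.0 * z * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * z * r) ** 2
    return math.sqrt(num / den)

# Below resonance the isolator transmits; well above resonance it
# attenuates, which is where the reaction-wheel disturbances must fall.
print(transmissibility(0.1, 0.2))  # ~1: essentially pass-through
print(transmissibility(5.0, 0.2))  # well below 1: isolation region
```

Because both the stiffness (spring) and damping (fluid damper) terms enter this expression independently, tuning them separately, as the abstract describes, fixes both the natural frequency and the resonance peak height.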

  6. Ion beam figuring of highly steep mirrors with a 5-axis hybrid machine tool

    NASA Astrophysics Data System (ADS)

    Yin, Xiaolin; Tang, Wa; Hu, Haixiang; Zeng, Xuefeng; Wang, Dekang; Xue, Donglin; Zhang, Feng; Deng, Weijie; Zhang, Xuejun

    2018-02-01

    Ion beam figuring (IBF) is an advanced and deterministic method for optical mirror surface processing. The removal function of IBF varies with the incident angle of the ion beam. Therefore, for curved surfaces, especially highly steep ones, the ion beam source (IBS) should be equipped with 5-axis machining capability so that material is removed along the normal direction of the mirror surface, ensuring the stability of the removal function. Based on the 3-RPS parallel mechanism and a two-dimensional displacement platform, a new type of 5-axis hybrid machine tool for IBF is presented. Using this hybrid machine tool, the figuring process of a highly steep fused silica spherical mirror is introduced. The R/# of the mirror is 0.96 and the aperture is 104 mm. The figuring result shows that the PV value of the mirror surface error converged from 121.1 nm to 32.3 nm, and the RMS value from 23.6 nm to 3.4 nm.
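The angle dependence that motivates 5-axis normal tracking can be illustrated with a simple flux-projection (cosine) model; real IBF removal functions are measured experimentally and also include angle-dependent sputter-yield effects, so this is only a sketch.

```python
import math

def removal_rate(normal_rate, incidence_deg):
    """Illustrative flux-projection model: the ion flux density on the
    surface scales as cos(theta), so the removal rate drops as the beam
    tilts away from the surface normal. Measured removal functions are
    more complex; this only shows the trend."""
    return normal_rate * math.cos(math.radians(incidence_deg))

# On a steep mirror, a fixed-orientation beam can strike tens of degrees
# off-normal, changing the removal function; a 5-axis tool keeps the
# incidence angle near zero everywhere on the surface.
print(removal_rate(1.0, 0.0), removal_rate(1.0, 40.0))
```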

  7. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies have indicated that people behave less responsibly after exposure to information containing deterministic statements than after free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity of considering both pro free will and deterministic priming conditions in future studies. Importantly, our results and previous ones indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  8. Pro Free Will Priming Enhances "Risk-Taking" Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies.

    PubMed

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies have indicated that people behave less responsibly after exposure to information containing deterministic statements than after free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity of considering both pro free will and deterministic priming conditions in future studies. Importantly, our results and previous ones indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum.

  9. Helicity-dependent cross sections and double-polarization observable E in η photoproduction from quasifree protons and neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witthauer, L.; Dieterle, M.; Abt, S.

    2017-05-01

    Precise helicity-dependent cross sections and the double-polarization observable E were measured for η photoproduction from quasifree protons and neutrons bound in the deuteron. The η → 2γ and η → 3π0 → 6γ decay modes were used to optimize the statistical quality of the data and to estimate systematic uncertainties. The measurement used the A2 detector setup at the tagged photon beam of the electron accelerator MAMI in Mainz. A longitudinally polarized deuterated butanol target was used in combination with a circularly polarized photon beam from bremsstrahlung of a longitudinally polarized electron beam. The reaction products were detected with the electromagnetic calorimeters Crystal Ball and TAPS, which covered 98% of the full solid angle. The results show that the narrow structure observed earlier in the unpolarized excitation function of η photoproduction off the neutron appears only in reactions with antiparallel photon and nucleon spin (σ1/2). It is absent for reactions with parallel spin orientation (σ3/2) and is thus very probably related to partial waves with total spin 1/2. The behavior of the angular distributions of the helicity-dependent cross sections was analyzed by fitting them with Legendre polynomials. The results are in good agreement with a model from the Bonn-Gatchina group, which uses an interference of P11 and S11 partial waves to explain the narrow structure.
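The Legendre analysis of angular distributions mentioned above can be sketched by projecting a function of cos θ onto P_l using orthogonality; the sample "cross section" below is a made-up toy function, not the measured data.

```python
import math

def legendre_P(l, x):
    """Legendre polynomial P_l(x) via the three-term recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def legendre_coeffs(f, lmax, n=2000):
    """Project f(cos theta) onto P_0..P_lmax using orthogonality:
    A_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx (trapezoid rule)."""
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    h = 2.0 / n
    coeffs = []
    for l in range(lmax + 1):
        vals = [f(x) * legendre_P(l, x) for x in xs]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
        coeffs.append((2 * l + 1) / 2.0 * integral)
    return coeffs

# Toy angular distribution: sigma(x) = 1 + 0.5*P1(x) + 0.2*P2(x).
f = lambda x: 1.0 + 0.5 * x + 0.2 * 0.5 * (3 * x * x - 1)
print([round(c, 3) for c in legendre_coeffs(f, 2)])  # ≈ [1.0, 0.5, 0.2]
```

In the actual analysis one fits measured, binned cross sections rather than projecting an analytic function, but the recovered coefficients play the same role of isolating partial-wave contributions.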

  10. Boron detection from blood samples by ICP-AES and ICP-MS during boron neutron capture therapy.

    PubMed

    Linko, S; Revitzer, H; Zilliacus, R; Kortesniemi, M; Kouri, M; Savolainen, S

    2008-01-01

    The concept of boron neutron capture therapy (BNCT) involves infusion of a ¹⁰B-containing tracer into the patient's bloodstream followed by local neutron irradiation(s). Accurate estimation of the blood boron level for the treatment field before irradiation is required. Boron concentration can be quantified by inductively coupled plasma atomic emission spectrometry (ICP-AES), mass spectrometry (ICP-MS), spectrofluorometric and direct current atomic emission spectrometry (DCP-AES), or by prompt gamma photon detection methods. The blood boron concentrations were analysed and compared using ICP-AES and ICP-MS to ensure congruency of the results if the analysis had to be changed during the treatment, e.g. for technical reasons. The effect of wet-ashing on the results was studied in addition. The mean of all samples analysed with ICP-MS was 5.8% lower than with ICP-AES coupled to wet-ashing (R² = 0.88). Without wet-ashing, the mean of all samples analysed with ICP-MS was 9.1% higher than with ICP-AES (R² = 0.99). Boron concentration analysed from whole blood samples with ICP-AES correlated well with the values of ICP-MS with wet-ashing of the sample matrix, which is generally considered the reference method. When using these methods in parallel at certain intervals during the treatments, the reliability of the blood boron concentration values remains satisfactory, taking into account the required accuracy of dose determination in the irradiation of cancer patients.
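A method-comparison computation like the one reported (mean bias and R² between two instruments) can be sketched as follows; the paired boron readings are hypothetical numbers for illustration, not the study's data.

```python
def r_squared(x, y):
    """Coefficient of determination for a least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

def mean_bias_percent(test, reference):
    """Mean relative difference of a test method against a reference, in %."""
    return 100.0 * (sum(test) / len(test) / (sum(reference) / len(reference)) - 1.0)

# Hypothetical paired boron readings (ppm) from the two instruments.
icp_aes = [10.1, 14.8, 20.3, 24.9, 30.2]
icp_ms = [9.6, 14.0, 19.1, 23.4, 28.6]
print(round(r_squared(icp_aes, icp_ms), 3), round(mean_bias_percent(icp_ms, icp_aes), 1))
```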

  11. Investigation of three-dimensional localisation of radioactive sources using a fast organic liquid scintillator detector

    NASA Astrophysics Data System (ADS)

    Gamage, K. A. A.; Joyce, M. J.; Taylor, G. C.

    2013-04-01

    In this paper we discuss the possibility of locating radioactive sources in space, relative to the three-dimensional location of the detector, using a scanning-based method. The scanning system comprises an organic liquid scintillator detector, a tungsten collimator, and an adjustable equatorial mount. The detector output is connected to a bespoke fast digitiser (Hybrid Instruments Ltd., UK), which streams digital samples to a personal computer. A radioactive source was attached to a vertical wall and data were collected in two stages. In the first case, the scanning system was placed a couple of metres away from the wall; in the second case it was moved a few centimetres from the previous location, parallel to the wall. In each case, data were collected from a grid of measurement points (a set of azimuth angles for each of a set of elevation angles) covering the source on the wall. The discrimination of fast neutrons and gamma rays detected by the organic liquid scintillator is carried out on the basis of pulse gradient analysis. Images are then produced in terms of the angular distribution of events for total counts, gamma rays, and neutrons in both cases. The three-dimensional location of the neutron source can be obtained by considering the relative separation of the centres of the corresponding images of the angular distribution of events. The measurements were made at the National Physical Laboratory, Teddington, Middlesex, UK.
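Pulse gradient analysis can be sketched as a peak-versus-delayed-sample ratio: neutron pulses in organic scintillators carry more delayed light, so the ratio is larger. The delay, threshold, and synthetic pulse shapes below are illustrative assumptions, not the instrument's calibrated values.

```python
import math

def pulse_gradient(pulse, delay=20):
    """Ratio of the amplitude a fixed number of samples after the peak to
    the peak amplitude. Slower-decaying (neutron-like) pulses give a
    larger ratio than fast (gamma-like) pulses."""
    peak_idx = max(range(len(pulse)), key=pulse.__getitem__)
    peak = pulse[peak_idx]
    tail = pulse[min(peak_idx + delay, len(pulse) - 1)]
    return tail / peak

def classify(pulse, threshold=0.15):
    """Events above the threshold ratio are tagged as neutrons; a real
    threshold would be calibrated against known sources."""
    return "neutron" if pulse_gradient(pulse) > threshold else "gamma"

# Synthetic pulses: a fast single exponential (gamma-like) and a pulse
# with an added slow component (neutron-like).
fast = [math.exp(-t / 5.0) for t in range(64)]
slow = [0.7 * math.exp(-t / 5.0) + 0.3 * math.exp(-t / 40.0) for t in range(64)]
print(classify(fast), classify(slow))  # → gamma neutron
```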

  12. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons, and source localization is necessary in order to determine the source activity. A previous study showed that, using six detectors, three on each of two parallel faces of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one, and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one, and a maximum distance of 3.66 cm, for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
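The iterative idea (repositioning a calibrating source until the detector responses reproduce those of the unknown source) can be caricatured with an idealised inverse-square model and a coordinate search; the geometry, step size, and absence of scattering and attenuation are all simplifying assumptions, not the paper's method in detail.

```python
import math

def counts(det, src, strength=1.0):
    """Idealised detector response: inverse-square law, no scattering."""
    d2 = sum((a - b) ** 2 for a, b in zip(det, src))
    return strength / d2

detectors = [(0.0, 0.0), (46.0, 0.0), (0.0, 42.0), (46.0, 42.0)]  # cm
hidden = (30.0, 15.0)                            # unknown source (truth)
target = [counts(d, hidden) for d in detectors]  # "measured" count rates

def mismatch(p):
    """Squared log-ratio between trial-position responses and measurements
    (ratios cancel the unknown source strength in this toy model)."""
    return sum((math.log(counts(d, p)) - math.log(t)) ** 2
               for d, t in zip(detectors, target))

# Coordinate search: nudge the calibrating-source position while the
# mismatch with the measured responses keeps improving.
guess = [23.0, 21.0]
for _ in range(200):
    for axis in (0, 1):
        for step in (0.5, -0.5):
            trial = list(guess)
            trial[axis] += step
            if mismatch(trial) < mismatch(guess):
                guess = trial
print(guess)  # converges near (30.0, 15.0)
```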

  13. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  14. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol for distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic and random keys using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  15. Ion implantation for deterministic single atom devices

    DOE PAGES

    Pacheco, J. L.; Singh, M.; Perry, D. L.; ...

    2017-12-04

    Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  16. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

    By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The indistinguishability of the deterministically separated identical photons is then verified using a conventional Hong-Ou-Mandel quantum interference measurement, which exhibits a near-unity dip visibility of 94 ± 1%, making this quantum splitter useful for various quantum information processing applications.

  17. A wavelet approach to binary black holes with asynchronous multitasking

    NASA Astrophysics Data System (ADS)

    Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice by combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
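The use of interpolating-wavelet coefficients as a local error indicator can be sketched in one dimension: the detail coefficient at an odd grid point is the difference between the sampled value and its linear interpolation from even neighbours, so it vanishes wherever the function is locally linear. This toy sketch uses linear (second-order) prediction; production codes typically use higher-order interpolation.

```python
def wavelet_coefficients(samples):
    """Interpolating-wavelet detail coefficients on a dyadic grid: the
    coefficient at each odd-indexed point is its sampled value minus the
    linear interpolation of its even-indexed neighbours, i.e. the local
    interpolation error."""
    details = []
    for i in range(1, len(samples) - 1, 2):
        predicted = 0.5 * (samples[i - 1] + samples[i + 1])
        details.append(abs(samples[i] - predicted))
    return details

def refine_mask(samples, eps=1e-3):
    """Mark points whose detail coefficient exceeds eps for refinement."""
    return [d > eps for d in wavelet_coefficients(samples)]

# A kink at x = 7/16 produces a large coefficient only next to the kink,
# so collocation points would be added there and nowhere else.
xs = [i / 16 for i in range(17)]
f = [abs(x - 7 / 16) for x in xs]
print(refine_mask(f))  # True only at the kink's position
```

Thresholding these coefficients is what lets the collocation points adapt to sharp features (such as the black-hole punctures) while staying sparse in smooth regions.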

  18. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.

  19. A Sludge Drum in the APNea System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, D.

    1998-11-17

    The assay of sludge drums pushes the APNea System to a definite extreme. Even though it seems clear that neutron-based assay should be the method of choice for sludge drums, the difficulties posed by this matrix push any NDA technique to its limits. Special emphasis is given here to the differential die-away technique, which appears to approach the desired sensitivity. A parallel analysis of ethafoam drums is presented, since the ethafoam matrix fits well within the operating range of the APNea System and, having been part of the early PDP trials, has been assayed by many in the NDA community.

  20. SEALED INSULATOR BUSHING

    DOEpatents

    Carmichael, H.

    1952-11-11

    The manufacture of electrode insulators that are mechanically strong, shock-proof, and vacuum tight, and that are capable of withstanding gas pressures of many atmospheres under intense neutron bombardment, such as may be needed in an ionization chamber, is described. The insulator comprises a bolt within a quartz tube, surrounded by a bushing held in place by two quartz rings and tightened to a pressure of 1,000 pounds per square inch by a nut and washer. Quartz is the superior material for these conditions; however, to withstand this pressure the quartz must be fire polished, lapped to form smooth and parallel surfaces, and again fire polished to form an extremely smooth and fracture-resistant mating surface.
