Sample records for MCNP-based depletion codes

  1. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    DOE PAGES

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress on Advances in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference, the new capabilities addressed included the combined distributed and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H. B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed to maximize available RAM per CPU and to leverage swap space on solid-state hard drives, allowing larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was observed. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport; and (2) the tallying mechanisms specific to burnup in MCNP. To combat the slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.
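
    The first of these strategies amounts to avoiding repeated table retrievals for data that do not change within a transport step. A minimal sketch of that caching idea, in Python with invented names and toy data (MCNP6 itself implements this in its Fortran internals):

    ```python
    # Hypothetical sketch: cache each (nuclide, energy group) cross-section
    # lookup so repeated queries during transport hit memory, not the tables.
    from functools import lru_cache

    XS_TABLE = {("U235", 0): 585.1, ("U235", 1): 1.22}  # toy data, barns

    @lru_cache(maxsize=None)
    def xs_lookup(nuclide, group):
        # Stand-in for an expensive cross-section table retrieval.
        return XS_TABLE[(nuclide, group)]

    # A million repeated queries now cost one real lookup plus cache hits.
    total = sum(xs_lookup("U235", 0) for _ in range(1_000_000))
    ```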

  2. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    DOE PAGES

    Richard, Joshua; Galloway, Jack; Fensin, Michael; ...

    2015-04-04

    A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.
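
    To illustrate the kind of mapping SMITHERS automates, the sketch below distributes material-wise power tallies onto thermal-hydraulic nodes by fractional volume overlap; the names, overlap fractions, and tally values are invented for illustration and are not SMITHERS's actual data model.

    ```python
    # Toy material-wise powers from a transport/depletion tally, in watts.
    power_by_material = {"fuel_1": 1.8e4, "fuel_2": 2.1e4}

    # overlap[material][node]: fraction of that material's volume in each TH node.
    overlap = {"fuel_1": {"node_A": 0.6, "node_B": 0.4},
               "fuel_2": {"node_A": 0.3, "node_B": 0.7}}

    node_power = {}
    for mat, p in power_by_material.items():
        for node, frac in overlap[mat].items():
            node_power[node] = node_power.get(node, 0.0) + p * frac

    print(node_power)  # per-node powers handed to the thermal-hydraulic solver
    ```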

  3. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    PubMed

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods, as implemented in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, also agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.
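
    The quantity tabulated in such studies has a simple definition: the absorbed fraction is the energy deposited in a target organ per unit energy emitted by the source region, and dividing by the target mass gives the specific absorbed fraction. A toy sketch (all numbers invented):

    ```python
    e_emitted = 1.0e6    # MeV emitted by the source organ (toy value)
    e_deposited = 3.2e4  # MeV deposited in the target organ (toy value)
    m_target = 0.31      # kg, target organ mass (toy value)

    af = e_deposited / e_emitted  # absorbed fraction (dimensionless)
    saf = af / m_target           # specific absorbed fraction, kg^-1
    print(f"AF = {af:.4f}, SAF = {saf:.4f} kg^-1")
    ```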

  4. Adjoint-Based Uncertainty Quantification with MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
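
    The propagation step such studies typically rely on is the first-order "sandwich rule": the relative variance of a response is s^T C s, for a sensitivity vector s and a nuclear-data covariance matrix C. A minimal numpy sketch with invented numbers:

    ```python
    import numpy as np

    s = np.array([0.8, -0.2, 0.05])     # relative sensitivities (dR/R)/(dx/x), toy
    C = np.array([[4e-4, 1e-5, 0.0],
                  [1e-5, 9e-4, 0.0],
                  [0.0,  0.0,  1e-4]])  # relative covariance of the data, toy

    rel_var = s @ C @ s                 # sandwich rule
    print(f"relative uncertainty = {np.sqrt(rel_var):.3%}")  # ~1.7%, i.e. < 2%
    ```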

  5. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils, and the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The simulation and the accompanying experiment represent a first effort in Iran to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates obtained by the NAA method and by the MCNP code are compared.
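
    The NAA step rests on the standard activation relation: the measured foil activity is A = φNσ(1 − e^(−λt_i))e^(−λt_d), which is inverted for the fluence rate φ. A sketch with illustrative numbers (only the 197Au cross section and the 198Au half-life are physical constants; everything else is invented):

    ```python
    import math

    A     = 50.0       # Bq, measured 198Au activity (toy value)
    N     = 3.0e19     # number of 197Au atoms in the foil (toy value)
    sigma = 98.65e-24  # cm^2, thermal (n,gamma) cross section of 197Au
    lam   = math.log(2) / (2.695 * 86400)  # 198Au decay constant, 1/s
    t_i   = 3600.0     # s, irradiation time (toy value)
    t_d   = 600.0      # s, decay time before counting (toy value)

    phi = A / (N * sigma * (1 - math.exp(-lam * t_i)) * math.exp(-lam * t_d))
    print(f"thermal fluence rate ~ {phi:.3e} n/cm^2/s")
    ```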

  6. Verification of BWR Turbine Skyshine Dose with the MCNP5 Code Based on an Experiment Made at SHIMANE Nuclear Power Station

    NASA Astrophysics Data System (ADS)

    Tayama, Ryuichi; Wakasugi, Kenichi; Kawanaka, Ikunori; Kadota, Yoshinobu; Murakami, Yasuhiro

    We measured the skyshine dose from turbine buildings at Shimane Nuclear Power Station Unit 1 (NS-1) and Unit 2 (NS-2), and then compared it with the dose calculated with the Monte Carlo transport code MCNP5. The skyshine dose values calculated with the MCNP5 code agreed with the experimental data within a factor of 2.8, when the roof of the turbine building was precisely modeled. We concluded that our MCNP5 calculation was valid for BWR turbine skyshine dose evaluation.

  7. MCNP Version 6.2 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werner, Christopher John; Bull, Jeffrey S.; Solomon, C. J.

    Monte Carlo N-Particle, or MCNP®, is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released in order to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools that facilitate ease of use of MCNP version 6.2 as well as the analysis of results. These release notes serve as a general guide for the new/improved physics, source, data, tallies, unstructured mesh, code enhancements and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual, which can be found at http://mcnp.lanl.gov. This release of MCNP version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues with which users should familiarize themselves (see Appendix).

  8. Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.

    PubMed

    Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W

    1998-05-01

    The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.

  9. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. To smoothly implement the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  10. Comparison study of photon attenuation characteristics of Lead-Boron Polyethylene by MCNP code, XCOM and experimental data

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jia, Mingchun; Gong, Junjun; Xia, Wenming

    2017-08-01

    The linear attenuation coefficient, mass attenuation coefficient and mean free path of various Lead-Boron Polyethylene (PbBPE) samples, which can be used as photon shielding materials in marine reactors, have been simulated using the Monte Carlo N-Particle version 5 (MCNP5) code. The MCNP simulation results are in good agreement with the XCOM values and the reported experimental data for Cesium-137 and Cobalt-60 sources. Thus, this MCNP-based method can be used to simulate the photon attenuation characteristics of various types of PbBPE materials.
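
    The three compared quantities follow from a narrow-beam transmission measurement via the Beer-Lambert law I/I0 = e^(−μx); a sketch with invented counts and an illustrative PbBPE density:

    ```python
    import math

    I0, I = 1000.0, 620.0  # counts without/with the sample (toy values)
    x     = 2.0            # cm, sample thickness (toy value)
    rho   = 3.8            # g/cm^3, assumed PbBPE density (illustrative)

    mu     = -math.log(I / I0) / x  # linear attenuation coefficient, 1/cm
    mu_rho = mu / rho               # mass attenuation coefficient, cm^2/g
    mfp    = 1.0 / mu               # mean free path, cm
    print(mu, mu_rho, mfp)
    ```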

  11. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
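
    The essence of the method is that mesh weight-window lower bounds are set inversely proportional to the adjoint (importance) flux from the deterministic solve, normalized at the source, in the spirit of CADIS; a minimal sketch with an invented four-cell mesh:

    ```python
    import numpy as np

    adjoint_flux = np.array([1.0e-2, 1.0e-1, 1.0, 1.0e1])  # phi-dagger per cell, toy
    source_cell = 0                                        # where particles are born

    # Low importance -> high weight bound (particles roam cheaply);
    # high importance near the detector -> low bound (particles get split).
    ww_lower = adjoint_flux[source_cell] / adjoint_flux
    print(ww_lower)  # [1.e+00 1.e-01 1.e-02 1.e-03]
    ```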

  12. Implementation of a tree algorithm in MCNP code for nuclear well logging applications.

    PubMed

    Li, Fusheng; Han, Xiaogang

    2012-07-01

    The goal of this paper is to develop some modeling capabilities that are missing from the current MCNP code. These capabilities can greatly aid certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: a zone tally, a neutron interaction tally, a gamma-ray index tally and an enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    PubMed

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
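
    For reference, these comparisons concern the TG-43 radial dose function, which normalizes the transverse-axis dose rate by the line-source geometry function G_L to remove inverse-square-like falloff, with r_0 = 1 cm and theta_0 = pi/2:

    ```latex
    g_L(r) = \frac{\dot{D}(r,\theta_0)\, G_L(r_0,\theta_0)}
                  {\dot{D}(r_0,\theta_0)\, G_L(r,\theta_0)}
    ```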

  14. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    PubMed Central

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID: 27074460

  15. MCNP6.1 simulations for low-energy atomic relaxation: Code-to-code comparison with GATEv7.2, PENELOPE2014, and EGSnrc

    NASA Astrophysics Data System (ADS)

    Jung, Seongmoon; Sung, Wonmo; Lee, Jaegi; Ye, Sung-Joon

    2018-01-01

    Emerging radiological applications of gold nanoparticles demand low-energy electron/photon transport calculations that include details of the atomic relaxation process. Recently, MCNP® version 6.1 (MCNP6.1) was released with extended cross-sections for low-energy electrons/photons, subshell photoelectric cross-sections, and more detailed atomic relaxation data than previous versions. However, the atomic relaxation process of MCNP6.1 with its new physics library (eprdata12), which is based on the Evaluated Atomic Data Library (EADL), has not yet been fully tested. In this study, MCNP6.1 was compared with GATEv7.2, PENELOPE2014, and EGSnrc, which have often been used to simulate low-energy atomic relaxation processes. The simulations were performed to acquire both photon and electron spectra produced by interactions of 15 keV electrons or photons with a 10-nm-thick gold nano-slab. The photon-induced fluorescence X-rays from MCNP6.1 agreed fairly well with those from GATEv7.2 and PENELOPE2014, while the electron-induced fluorescence X-rays of the four codes showed some discrepancies. The photon-induced Auger electron spectra simulated by MCNP6.1 and GATEv7.2 coincided. The recent release of MCNP6.1 with eprdata12 can be used to simulate photon-induced atomic relaxation.

  16. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.

  17. The MCNP-DSP code for calculations of time and frequency analysis parameters for subcritical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valentine, T.E.; Mihalczo, J.T.

    1995-12-31

    This paper describes a modified version of the MCNP code, the MCNP-DSP. Variance reduction features were disabled to have strictly analog particle tracking in order to follow fluctuating processes more accurately. Some of the neutron and photon physics routines were modified to better represent the production of particles. Other modifications are discussed.

  18. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, such as the dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with those of the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross section libraries.

  19. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    PubMed

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  20. Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.

    PubMed

    Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C

    2004-01-01

    Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution, in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
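
    The voxel S factor baseline folds the cumulated activity of every source voxel with a dose-per-decay kernel; a minimal sketch using a shift-invariant toy kernel (all values invented, scipy used for brevity):

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    # Cumulated activity map (Bq*s): a single hot voxel at the center, toy data.
    cumulated_activity = np.zeros((9, 9, 9))
    cumulated_activity[4, 4, 4] = 1e6

    # Voxel S-value kernel, Gy per (Bq*s): self-dose dominates, invented numbers.
    s_kernel = np.full((3, 3, 3), 1e-9)
    s_kernel[1, 1, 1] = 5e-8

    dose = convolve(cumulated_activity, s_kernel, mode="constant")  # Gy per voxel
    print(dose[4, 4, 4])
    ```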

  1. Organ dose conversion coefficients based on a voxel mouse model and MCNP code for external photon irradiation.

    PubMed

    Zhang, Xiaomin; Xie, Xiangdong; Cheng, Jie; Ning, Jing; Yuan, Yong; Pan, Jie; Yang, Guoshan

    2012-01-01

    A set of conversion coefficients from kerma free-in-air to organ absorbed dose for external photon beams from 10 keV to 10 MeV is presented, based on a newly developed voxel mouse model, for the purpose of radiation effect evaluation. The voxel mouse model was developed from colour images of successive cryosections of a normal nude male mouse, in which 14 organs or tissues were segmented manually and filled with different colours, with each colour tagged by a specific ID number for implementation of the mouse model in the Monte Carlo N-particle code (MCNP). Monte Carlo simulation with MCNP was carried out to obtain organ dose conversion coefficients for 22 external monoenergetic photon beams between 10 keV and 10 MeV under five different irradiation geometries (left lateral, right lateral, dorsal-ventral, ventral-dorsal, and isotropic). Organ dose conversion coefficients are presented in tables and compared with published data based on a rat model to investigate the effect of body size and weight on the organ dose. The results show that the organ dose conversion coefficients vary with photon energy in a similar manner for most organs, except for the bone and skin, and that the organ dose is sensitive to body size and weight at photon energies below approximately 0.1 MeV.
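
    In use, such coefficients convert a measured or calculated air kerma into an organ dose, interpolating in photon energy; the sketch below uses invented coefficients purely for illustration:

    ```python
    import numpy as np

    energies = np.array([0.01, 0.1, 1.0, 10.0])     # MeV, tabulation grid (toy)
    coeff_liver = np.array([0.02, 0.6, 0.95, 1.1])  # Gy per Gy air kerma, invented

    kerma_air = 2.0e-3  # Gy, kerma free-in-air (toy value)
    e_photon = 0.5      # MeV

    organ_dose = kerma_air * np.interp(e_photon, energies, coeff_liver)
    print(organ_dose)   # Gy, interpolated organ absorbed dose
    ```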

  2. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    DTIC Science & Technology

    2014-03-27

    Thesis presented to the Faculty, Department of Engineering Physics, Air Force Institute of Technology (report no. AFIT-ENP-14-M-05). Distribution Statement A: approved for public release; distribution unlimited.

  3. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  4. Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.

    PubMed

    Yoriyaz, H; Stabin, M G; dos Santos, A

    2001-04-01

    This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distributions based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated, with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over the dose information provided by the standard geometric phantoms used for more than 20 years (which permit reporting of only average organ doses for certain standardized individuals).

  5. Benchmarking study of the MCNP code against cold critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, S.

    1991-01-01

    The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.

  6. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project mainly explains the different methodologies carried out to speed up the calculations in order to apply this code efficiently in radiotherapy treatment planning.

  7. MCNP4A: Features and philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.

    This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is "Quality, Value and New Features." Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs.

  8. Brachytherapy dosimetry of 125I and 103Pd sources using an updated cross section library for the MCNP Monte Carlo transport code.

    PubMed

    Bohm, Tim D; DeLuca, Paul M; DeWerd, Larry A

    2003-04-01

    Permanent implantation of low energy (20-40 keV) photon emitting radioactive seeds to treat prostate cancer is an important treatment option for patients. In order to produce accurate implant brachytherapy treatment plans, the dosimetry of a single source must be well characterized. Monte Carlo based transport calculations can be used for source characterization, but must have up to date cross section libraries to produce accurate dosimetry results. This work benchmarks the MCNP code and its photon cross section library for low energy photon brachytherapy applications. In particular, we calculate the emitted photon spectrum, air kerma, depth dose in water, and radial dose function for both 125I and 103Pd based seeds and compare to other published results. Our results show that MCNP's cross section library differs from recent data primarily in the photoelectric cross section for low energies and low atomic number materials. In water, differences as large as 10% in the photoelectric cross section and 6% in the total cross section occur at 125I and 103Pd photon energies. This leads to differences in the dose rate constant of 3% and 5%, and differences as large as 18% and 20% in the radial dose function for the 125I and 103Pd based seeds, respectively. Using a partially updated photon library, calculations of the dose rate constant and radial dose function agree with other published results. Further, the use of the updated photon library allows us to verify air kerma and depth dose in water calculations performed using MCNP's perturbation feature to simulate updated cross sections. We conclude that in order to most effectively use MCNP for low energy photon brachytherapy applications, we must update its cross section library. Following this update, the MCNP code system will be a very effective tool for low energy photon brachytherapy dosimetry applications.

  9. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  10. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  11. Performance of MCNP4A on seven computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-12-31

    The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 25-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of the performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4-processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set, which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other compute platforms and compilers can be made.

  12. Geometry creation for MCNP by Sabrina and XSM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Riper, K.A.

    The Monte Carlo N-Particle transport code MCNP is based on a surface description of 3-dimensional geometry. Cells are defined in terms of boolean operations on signed quadratic surfaces. MCNP geometry is entered as a card image file containing coefficients of the surface equations and a list of surfaces and operators describing cells. Several programs are available to assist in creation of the geometry specification, among them Sabrina and the new "Smart Editor" code XSM. We briefly describe geometry creation in Sabrina and then discuss XSM in detail. XSM is under development; our discussion is based on the state of XSM as of January 1, 1994.

  13. AN ASSESSMENT OF MCNP WEIGHT WINDOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. HENDRICKS; C. N. CULBERTSON

    2000-01-01

    The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.

  14. An investigation of voxel geometries for MCNP-based radiation dose calculations.

    PubMed

    Zhang, Juying; Bednarz, Bryan; Xu, X George

    2006-11-01

    Voxelized geometries such as those obtained from medical images are increasingly used in Monte Carlo calculations of absorbed doses. One useful application of calculated absorbed dose is the determination of fluence-to-dose conversion factors for different organs. However, confusion still exists about how such a geometry is defined and how the energy deposition is best computed, especially with a popular code, MCNP5. This study investigated two different types of geometries in the MCNP5 code: cell and lattice definitions. A 10 cm x 10 cm x 10 cm test phantom, which contained an embedded 2 cm x 2 cm x 2 cm target at its center, was considered, together with a planar source emitting parallel photons. The results revealed that MCNP5 does not calculate the total target volume for multi-voxel geometries. Therefore, tallies which involve the total target volume must be divided by the total number of voxels by the user to obtain a correct dose result. Also, using planar source areas greater than the phantom size results in the same fluence-to-dose conversion factor.
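
    A sketch of the user-side correction this implies, with toy numbers (the tally value and voxel count are invented):

    ```python
    # Lattice tally reported over a multi-voxel target: per the finding above,
    # the user must divide by the total number of voxels to recover the
    # correctly volume-normalized result.
    tally_value = 2.5e-5  # e.g. MeV/g as reported for the whole target (toy)
    n_voxels = 1000       # 10 x 10 x 10 lattice elements in the tally (toy)

    corrected = tally_value / n_voxels
    print(corrected)
    ```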

  15. Comparisons between MCNP, EGS4 and experiment for clinical electron beams.

    PubMed

    Jeraj, R; Keall, P J; Ostwald, P M

    1999-03-01

    Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.

  16. Multi-threading performance of Geant4, MCNP6, and PHITS Monte Carlo codes for tetrahedral-mesh geometry.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya

    2018-05-04

    In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three different Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes, the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited much longer initialization times than the other codes, especially for the complex phantom (MRCP). The improvement of computation speed due to the use of a multi-threaded code was calculated as the speed-up factor, the ratio of the computation speed on a multi-threaded code to the computation speed on a single-threaded code. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor almost linearly increasing with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, compared to that of the other codes, the memory usage of Geant4 increased more rapidly with the number of threads, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). It is notable that, compared to that of the other codes, the memory usage of PHITS was much lower, regardless of both the complexity of the phantom and the number of threads.
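
    The speed-up factor quoted throughout is simply the ratio of multi-threaded to single-threaded computation speed, i.e. a ratio of run times; a trivial sketch with invented timings:

    ```python
    # Speed-up factor as defined in the abstract: single-threaded run time
    # divided by multi-threaded run time (ideal scaling would give S(N) = N).
    def speed_up(t_single, t_multi):
        return t_single / t_multi

    print(speed_up(3600.0, 120.0))  # 30.0, comparable to Geant4 at N = 40
    ```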

  17. Dose mapping using MCNP code and experiment for SVST-Co-60/B irradiator in Vietnam.

    PubMed

    Tran, Van Hung; Tran, Khac An

    2010-06-01

    By using the MCNP code and ethanol-chlorobenzene (ECB) dosimeters, simulations and measurements of the absorbed dose distribution in a tote-box of the Cobalt-60 irradiator SVST-Co-60/B at VINAGAMMA have been performed. Based on the results, the dose uniformity ratios (DUR), the positions and values of the minimum and maximum dose extremes in a tote-box, and the efficiency of the irradiator for different dummy densities have been obtained. The simulated and experimental results are in good agreement, and they are valuable for the operation of the irradiator. Copyright 2010 Elsevier Ltd. All rights reserved.
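
    The dose uniformity ratio reported for the tote-box is the maximum over the minimum absorbed dose in the mapped volume; a sketch with invented dose values:

    ```python
    # Dose uniformity ratio (DUR) from a set of mapped doses, toy data in kGy.
    doses = [18.2, 22.5, 25.1, 30.4, 27.9]
    dur = max(doses) / min(doses)
    print(f"DUR = {dur:.2f}")
    ```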

  18. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part I: boron neutron capture therapy models.

    PubMed

    Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.

  19. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    PubMed

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.

  20. Image enhancement using MCNP5 code and MATLAB in neutron radiography.

    PubMed

    Tharwat, Montaser; Mohamed, Nader; Mongy, T

    2014-07-01

    This work presents a method that can be used to enhance neutron radiography (NR) images of objects containing highly scattering materials such as hydrogen, carbon and other light materials. The method uses the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered neutron distribution that causes image blur; MATLAB is then used to subtract this scattered neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in Jan. 2013. The MATLAB enhancement method is quite a good technique in the case of static film-based neutron radiography, while in the neutron imaging (NI) technique, image enhancement and quantitative measurement were performed efficiently using ImageJ software. The enhanced image quality and quantitative measurements are presented in this work. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Comparison of the thermal neutron scattering treatment in MCNP6 and GEANT4 codes

    NASA Astrophysics Data System (ADS)

    Tran, H. N.; Marchix, A.; Letourneau, A.; Darpentigny, J.; Menelle, A.; Ott, F.; Schwindling, J.; Chauvin, N.

    2018-06-01

    To ensure the reliability of simulation tools, verification and comparison should be made regularly. This paper describes the work performed to compare the neutron transport treatments of MCNP6.1 and GEANT4-10.3 in the thermal energy range. The work focuses on the thermal neutron scattering processes for several potential materials which would be involved in the neutron source designs of Compact Accelerator-based Neutron Sources (CANS), such as beryllium metal, beryllium oxide, polyethylene, graphite, para-hydrogen, light water, heavy water, aluminium and iron. Both the thermal scattering law and the free gas model, coming from the evaluated data library ENDF/B-VII, were considered. It was observed that the GEANT4.10.03-patch2 version was not able to properly account for the coherent elastic process occurring in crystal lattices. This bug was treated in this work, and the fix should be included in the next release of the code. Cross section sampling and integral tests have been performed for both simulation codes, showing a fair agreement between the two codes for most of the materials except for iron and aluminium.

  2. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Sjenitzer, Bart L.

    2014-06-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. Simulation of practical reactor transients also requires the feedback effect from the thermal-hydraulics to be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.
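
    One of the named variance reduction techniques, forced decay of precursors, can be sketched as follows: each precursor is forced to emit its delayed neutron inside the current time interval, and the emitted weight is multiplied by the interval's true decay probability so the estimate stays unbiased. The decay constant and interval below are invented, and this is only a schematic of the general idea, not the codes' actual implementation:

    ```python
    import math, random

    lam = 0.08         # precursor decay constant, 1/s (invented)
    t0, t1 = 0.0, 0.1  # current time interval, s (invented)
    w = 1.0            # precursor statistical weight

    p = 1.0 - math.exp(-lam * (t1 - t0))  # true decay probability in [t0, t1]
    u = random.random()
    t = t0 - math.log(1.0 - u * p) / lam  # decay time sampled within the interval
    w_delayed = w * p                     # weight of the emitted delayed neutron
    print(t, w_delayed)
    ```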

  3. Review of heavy charged particle transport in MCNP6.2

    NASA Astrophysics Data System (ADS)

    Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.

    2018-04-01

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.

  4. Review of Heavy Charged Particle Transport in MCNP6.2

    DOE PAGES

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George; ...

    2018-01-05

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. Here, this article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models’ theories are included as well.

  5. How to Build MCNP 6.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Jeffrey S.

    This presentation describes how to build MCNP 6.2. MCNP® 6.2 can be compiled on Macs, PCs, and most Linux systems. It can also be built for parallel execution using both OpenMP and Message Passing Interface (MPI) methods. MCNP6 requires Fortran, C, and C++ compilers to build the code.

  6. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems in each code. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.

  7. MCNP6 Fission Multiplicity with FMULT Card

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilcox, Trevor; Fensin, Michael Lorne; Hendricks, John S.

    With the merger of MCNPX and MCNP5 into MCNP6, MCNP6 now provides all the capabilities of both codes, allowing the user to access all the fission multiplicity data sets. Detailed in this paper are: (1) the new FMULT card capabilities for accessing these different data sets; and (2) benchmark calculations, as compared to experiment, detailing the results of selecting these separate data sets for thermal-neutron-induced fission of U-235.

  8. EBR-II Static Neutronic Calculations by PHISICS / MCNP6 codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balestra, Paolo; Parisi, Carlo; Alfonsi, Andrea

    2016-02-01

    The International Atomic Energy Agency (IAEA) launched a Coordinated Research Project (CRP) on the Shutdown Heat Removal Tests (SHRT) performed in the '80s at the Experimental Breeder Reactor II (EBR-II), USA. The scope of the CRP is to improve and validate the simulation tools for the study and design of liquid-metal-cooled fast reactors. Training the next generation of fast reactor analysts is also considered a goal of the CRP. In this framework, a static neutronic model was developed using state-of-the-art neutron transport codes such as SCALE/PHISICS (deterministic solution) and MCNP6 (stochastic solution). A comparison between the two solutions is briefly illustrated in this summary.

  9. MCNP-model for the OAEP Thai Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III

    An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature, which allows a flexible and easy rearrangement of the core components and adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements, all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.

  10. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, and criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; and, recognizing that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify a fissile system for which a diffusion theory solution would be adequate.
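
    One of the "pitfalls" topics, fission-source convergence in KCODE runs, lends itself to a short illustration. A standard diagnostic is the Shannon entropy of the source sites binned on a spatial mesh; the sketch below, using fabricated source sites, shows the kind of entropy trace an analyst inspects before trusting tallies.

      import numpy as np

      def shannon_entropy(sites, bins=(8, 8, 8), box=((-3.0, 3.0),) * 3):
          """Entropy (bits) of fission-source sites binned on a fixed mesh."""
          hist, _ = np.histogramdd(sites, bins=bins, range=box)
          p = hist.ravel() / hist.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(0)
      # Fabricated source sites tightening toward the center cycle by cycle.
      for cycle in range(6):
          spread = 0.2 + 0.8 / (cycle + 1)
          sites = rng.normal(0.0, spread, size=(10_000, 3))
          print(cycle, round(shannon_entropy(sites), 3))
      # A trace that flattens out with cycle index is the usual signal that
      # the inactive cycles can end and tallies may begin.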

  11. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    PubMed

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

    This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures, and to develop methodologies for better assessing and reducing exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity Hp(3). In this study, a set of energy- and angular-dependent conversion coefficients Hp(3)/Ka, in the newly proposed square cylindrical phantom made of ICRU tissue, has been calculated with the Monte Carlo codes PENELOPE and MCNP5. The Hp(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies, mainly due to the lack of electronic equilibrium, especially for small-angle incidence. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma-approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 using two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as known, due to secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed-dose calculation of Hp(3), and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.
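
    The arithmetic behind a tabulated conversion coefficient is simple enough to show directly: the coefficient is the ratio of the tallied Hp(3) estimate to the air kerma, with the two relative Monte Carlo errors combined in quadrature. The numbers below are invented, not values from the study.

      import math

      def conversion_coefficient(dose, rel_err_dose, kerma, rel_err_kerma):
          """Return Hp(3)/Ka and its relative standard uncertainty."""
          return dose / kerma, math.hypot(rel_err_dose, rel_err_kerma)

      coeff, rel_err = conversion_coefficient(1.32e-12, 0.010, 1.05e-12, 0.008)
      print(f"Hp(3)/Ka = {coeff:.3f} Sv/Gy +/- {100 * rel_err:.1f}%")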

  12. Development of the MCNPX depletion capability: A Monte Carlo linked depletion method that automates the coupling between MCNPX and CINDER90 for high fidelity burnup calculations

    NASA Astrophysics Data System (ADS)

    Fensin, Michael Lorne

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained, Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permit in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established
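
    The transport/depletion hand-off described here can be caricatured in a few lines of Python: a transport solve (stubbed out below as a fixed one-group flux) supplies reaction rates, and the Bateman equations are advanced with a matrix exponential over each burn step. This illustrates the coupling pattern only, not the MCNPX/CINDER90 implementation.

      import numpy as np
      from scipy.linalg import expm

      # Toy 3-nuclide chain: nuclide 0 captures into 1, which decays into 2.
      sigma_c0 = 1.0e-24      # capture cross section of nuclide 0 (cm^2)
      lambda_1 = 1.0e-6       # decay constant of nuclide 1 (1/s)

      def burnup_matrix(flux):
          """Build A such that dN/dt = A N for the toy chain."""
          r0 = sigma_c0 * flux
          return np.array([[-r0,        0.0, 0.0],
                           [ r0, -lambda_1, 0.0],
                           [0.0,  lambda_1, 0.0]])

      N = np.array([1.0e22, 0.0, 0.0])   # initial number densities (1/cm^3)
      dt = 30 * 24 * 3600.0              # 30-day burn step (s)

      for step in range(3):
          flux = 1.0e14                  # stand-in for a transport solve
          N = expm(burnup_matrix(flux) * dt) @ N
          print(step, N)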

  13. Production of energetic light fragments in extensions of the CEM and LAQGSM event generators of the Monte Carlo transport code MCNP6 [Production of energetic light fragments in CEM, LAQGSM, and MCNP6]

    DOE PAGES

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.; ...

    2017-03-23

    We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-Particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.

  14. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  15. Calculation of the effective dose from natural radioactivity in soil using MCNP code.

    PubMed

    Krstic, D; Nikezic, D

    2010-01-01

    The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this work. Calculations were done for the most common natural radionuclides in soil: the 238U and 232Th series and 40K. The ORNL human phantoms and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs. The effective dose was calculated according to ICRP 74 recommendations. Conversion factors of effective dose per air kerma were determined. The results obtained here were compared with those of other authors.
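
    The final bookkeeping step of such a calculation, combining organ doses into effective dose with ICRP tissue weighting factors, can be sketched as below. The weights shown are a small subset of the ICRP 60 values, used purely for illustration; a real evaluation uses the full set, which sums to 1, and invented organ doses stand in for tallied results.

      # ICRP 60 weights for four organs only; organ doses are invented.
      organ_dose_sv = {"lungs": 1.2e-9, "stomach": 1.0e-9,
                       "liver": 0.9e-9, "thyroid": 0.8e-9}
      tissue_weight = {"lungs": 0.12, "stomach": 0.12,
                       "liver": 0.05, "thyroid": 0.05}

      effective = sum(tissue_weight[t] * organ_dose_sv[t] for t in organ_dose_sv)
      print(f"partial effective dose: {effective:.2e} Sv")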

  16. Elaborate SMART MCNP Modelling Using ANSYS and Its Applications

    NASA Astrophysics Data System (ADS)

    Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng

    2017-09-01

    An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, which is a deterministic Sn code.

  17. MCNP6 Simulation of Light and Medium Nuclei Fragmentation at Intermediate Energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich; Kerby, Leslie Marie

    2015-05-22

    MCNP6, the latest and most advanced LANL Monte Carlo transport code, representing a merger of MCNP5 and MCNPX, is actually much more than the sum of those two computer codes. MCNP6 is available to the public via RSICC at Oak Ridge, TN, USA. In the present work, MCNP6 was validated and verified (V&V) against different experimental data on intermediate-energy fragmentation reactions, and against results from several other codes, using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.03 and LAQGSM03.03. It was found that MCNP6 using CEM03.03 and LAQGSM03.03 describes well fragmentation reactions induced on light and medium target nuclei by protons and light nuclei at energies around 1 GeV/nucleon and below, and can serve as a reliable simulation tool for different applications, like cosmic-ray-induced single event upsets (SEUs), radiation protection, and cancer therapy with proton and ion beams, to name just a few. Future improvements of the predictive capabilities of MCNP6 for such reactions are possible, and are discussed in this work.

  18. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. Many of the remaining problems were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  19. A comparison of the COG and MCNP codes in computational neutron capture therapy modeling, Part II: gadolinium neutron capture therapy models and therapeutic effects.

    PubMed

    Wangerin, K; Culbertson, C N; Jevremovic, T

    2005-08-01

    The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT)-related modeling. The validity of the COG NCT model had been established previously; here the calculation was extended to analyze the effect of various gadolinium concentrations on the dose distribution and cell-kill effect of the GdNCT modality, and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with those of the widely used MCNP code. The differences between the COG and MCNP predictions were generally small, suggesting that the COG code can be applied to similar research problems in NCT. Results of this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam.

  1. Treating voxel geometries in radiation protection dosimetry with a patched version of the Monte Carlo codes MCNP and MCNPX.

    PubMed

    Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P

    2007-01-01

    The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes were developed to transport radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM-format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed, together with the new input options. Examples are given of the employment of the code in internal and external dosimetry, and comparisons with results from other groups are reported.
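
    A hypothetical sketch of the kind of preprocessing such a patch implies: CT data in Hounsfield units is classified into integer material IDs before being written to the file the patched code reads. The thresholds and material IDs below are illustrative assumptions, not the authors' mapping.

      import numpy as np

      def hu_to_material(hu_volume):
          """Classify each voxel as air (0), soft tissue (1) or bone (2)."""
          mat = np.ones_like(hu_volume, dtype=np.int32)  # default: soft tissue
          mat[hu_volume < -400] = 0                      # air
          mat[hu_volume > 300] = 2                       # bone
          return mat

      ct = np.random.default_rng(1).integers(-1000, 1500, size=(4, 4, 4))
      print(hu_to_material(ct))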

  2. Turtle 24.0 diffusion depletion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference. (auth)
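
    The two-group balance such a code solves can be illustrated in zero dimensions: with no leakage, k-infinity follows directly from the group constants. A short sketch with made-up but representative cross sections:

      # Two-group constants, invented but representative in form (1/cm).
      nu_sig_f = [0.008, 0.135]   # nu * Sigma_f per group
      sig_a    = [0.012, 0.090]   # absorption
      sig_12   = 0.020            # fast-to-thermal downscatter

      # Fast flux normalized to 1; thermal flux fixed by downscatter balance.
      phi_thermal = sig_12 / sig_a[1]
      k_inf = (nu_sig_f[0] + nu_sig_f[1] * phi_thermal) / (sig_a[0] + sig_12)
      print(f"k-infinity = {k_inf:.4f}")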

  3. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validation of the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked against electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.

  4. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  5. Dose conversion coefficients based on the Chinese mathematical phantom and MCNP code for external photon irradiation.

    PubMed

    Qiu, Rui; Li, Junli; Zhang, Zhan; Liu, Liye; Bi, Lei; Ren, Li

    2009-02-01

    A set of conversion coefficients from kerma free-in-air to organ-absorbed dose is presented for external monoenergetic photon beams from 10 keV to 10 MeV, based on the Chinese mathematical phantom, a whole-body mathematical phantom model. The model was developed following the methods of the Oak Ridge National Laboratory mathematical phantom series and data from the Chinese Reference Man and the Reference Asian Man. This work was carried out to obtain conversion coefficients based on a model that represents the characteristics of the Chinese population, as the anatomical parameters of the Chinese differ from those of Caucasians. Monte Carlo simulation with the MCNP code was carried out to calculate the organ dose conversion coefficients. Before the calculation, the effects of the physics model and tally type were investigated, considering both calculation efficiency and precision. The irradiation conditions include anterior-posterior, posterior-anterior, right lateral, left lateral, rotational and isotropic geometries. Conversion coefficients from this study are compared with those recommended in Publication 74 of the International Commission on Radiological Protection (ICRP74), since both sets of data were calculated with mathematical phantoms. Overall, consistency between the two sets of data is observed, and the difference for more than 60% of the data is below 10%. However, significant deviations are also found, mainly for the superficial organs (up to 65.9%) and the bone surface (up to 66%). The large difference in the dose conversion coefficients for the superficial organs at high photon energy can be ascribed to the kerma approximation used for the data in ICRP74. Both anatomical variations between races and the calculation method contribute to the difference in the data for the bone surface.

  6. Calculation of conversion coefficients for clinical photon spectra using the MCNP code.

    PubMed

    Lima, M A F; Silva, A X; Crispim, V R

    2004-01-01

    In this work, the MCNP4B code has been employed to calculate conversion coefficients from air kerma to ambient dose equivalent, H*(10)/Ka, for monoenergetic photons from 10 keV to 50 MeV, assuming the kerma approximation. Also estimated are H*(10)/Ka values for photon beams produced by linear accelerators, such as the Clinac-4 and Clinac-2500, after transmission through primary barriers of radiotherapy treatment rooms. The results for the conversion coefficients for monoenergetic photons, with statistical uncertainty <2%, are compared with those in ICRP Publication 74, and good agreement was obtained. The conversion coefficients calculated for real clinical spectra transmitted through concrete walls 1, 1.5 and 2 m thick are in the range 1.06-1.12 Sv Gy^-1.
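
    For a polychromatic beam, a spectrum-averaged coefficient of this kind is formed by folding the fluence spectrum with the monoenergetic coefficients, weighting each bin by its air-kerma contribution. A toy version with placeholder numbers:

      import numpy as np

      energy  = np.array([0.10, 0.50, 1.00, 3.00, 6.00])   # MeV bin centers
      fluence = np.array([0.05, 0.20, 0.40, 0.25, 0.10])   # relative spectrum
      ka_coef = np.array([0.70, 2.00, 3.50, 8.00, 14.0])   # kerma per fluence
      h_mono  = np.array([1.20, 1.18, 1.15, 1.10, 1.06])   # H*(10)/Ka, Sv/Gy

      ka    = np.sum(fluence * ka_coef)
      hstar = np.sum(fluence * ka_coef * h_mono)
      print(f"spectrum-averaged H*(10)/Ka = {hstar / ka:.3f} Sv/Gy")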

  7. Dosimetric comparison of Monte Carlo codes (EGS4, MCNP, MCNPX) considering external and internal exposures of the Zubal phantom to electron and photon sources.

    PubMed

    Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M

    2005-01-01

    This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed.

  8. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Uranium Metal, Oxide, and Solution Systems on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell

    In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data, under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capabilities in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as keff.

  9. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  10. MCNP-based computational model for the Leksell gamma knife.

    PubMed

    Trnka, Jiri; Novotny, Josef; Kluson, Jaroslav

    2007-01-01

    We have focused on the use of the MCNP code for calculation of Gamma Knife radiation field parameters with a homogeneous polystyrene phantom. We have investigated several parameters of the Leksell Gamma Knife radiation field and compared the results with other studies based on the EGS4 and PENELOPE codes, as well as with the Leksell Gamma Knife treatment planning system Leksell GammaPlan (LGP). The current model describes all 201 radiation beams together and simulates all the sources at the same time. Within each beam, it considers the technical construction of the source, the source holder, the collimator system, the spherical phantom, and the surrounding material. We have calculated output factors for various sizes of scoring volumes, relative dose distributions along the basic planes including linear dose profiles, integral doses in various volumes, and differential dose volume histograms. All the parameters have been calculated for each collimator size and for the isocentric configuration of the phantom. We have found the calculated output factors to be in agreement with other authors' work except in the case of the 4 mm collimator size, where averaging over the scoring volume and statistical uncertainties strongly influence the calculated results. In general, all the results are dependent on the choice of the scoring volume. The calculated linear dose profiles and relative dose distributions also match independent studies and the Leksell GammaPlan, but care must be taken with the fluctuations within the plateau, which can influence the normalization, and with the accuracy in determining the isocenter position, which is important for comparing different dose profiles. The calculated differential dose volume histograms and integral doses have been compared with data provided by the Leksell GammaPlan. The dose volume histograms are in good agreement, as are the integral doses calculated in small calculation matrix volumes. However, deviations in integral doses up to 50% can be observed for large

  11. Lecture Notes on Criticality Safety Validation Using MCNP & Whisper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    Training classes for nuclear criticality safety; MCNP documentation. The need for, and problems surrounding, validation of computer codes and data are considered first. Then some background for MCNP and Whisper is given: best practices for Monte Carlo criticality calculations, neutron spectra, S(α,β) thermal neutron scattering data, nuclear data sensitivities, covariance data, and correlation coefficients. Whisper is computational software designed to assist the nuclear criticality safety analyst with validation studies using the Monte Carlo radiation transport package MCNP. Whisper's methodology (benchmark selection via correlation coefficients ck and weights; extreme value theory for bias and bias uncertainty; margin of subcriticality for nuclear data uncertainty via GLLS) and usage are discussed.
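
    The correlation coefficient ck mentioned above has a compact definition worth spelling out: for sensitivity vectors s_a (application) and s_b (benchmark) and a nuclear-data covariance matrix V, ck = (s_a V s_b) / sqrt((s_a V s_a)(s_b V s_b)). A toy evaluation with invented numbers:

      import numpy as np

      V = np.array([[4.0e-4, 1.0e-4],
                    [1.0e-4, 9.0e-4]])   # toy covariance of two data params
      s_app   = np.array([0.30, 0.10])   # application k-eff sensitivities
      s_bench = np.array([0.25, 0.12])   # benchmark k-eff sensitivities

      def ck(sa, sb, V):
          return (sa @ V @ sb) / np.sqrt((sa @ V @ sa) * (sb @ V @ sb))

      print(f"ck = {ck(s_app, s_bench, V):.3f}")   # 1.0 = fully correlated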

  12. Analysis of JSI TRIGA MARK II reactor physical parameters calculated with TRIPOLI and MCNP.

    PubMed

    Henry, R; Tiselj, I; Snoj, L

    2015-03-01

    A new computational model of the JSI TRIGA Mark II research reactor was built for the TRIPOLI computer code and compared with the existing MCNP model. The same modelling assumptions were used in order to check the differences between the mathematical models of the two Monte Carlo codes. Differences between the TRIPOLI and MCNP predictions of keff were up to 100 pcm. Further validation was performed with analyses of the normalized reaction rates and computations of kinetic parameters for various core configurations.

  13. Verification of MCNP simulation of neutron flux parameters at TRIGA MK II reactor of Malaysia.

    PubMed

    Yavar, A R; Khalafi, H; Kasesaz, Y; Sarmani, S; Yahaya, R; Wood, A K; Khoo, K S

    2012-10-01

    A 3-D model of the 1 MW TRIGA Mark II research reactor was simulated. Neutron flux parameters were calculated using the MCNP-4C code and were compared with experimental results obtained by k0-INAA and the absolute method. The average values of φth, φepi and φfast from the MCNP code were (2.19±0.03)×10^12 cm^-2 s^-1, (1.26±0.02)×10^11 cm^-2 s^-1 and (3.33±0.02)×10^10 cm^-2 s^-1, respectively. These average values were consistent with the experimental results obtained by k0-INAA. The findings show good agreement between the MCNP code results and the experimental results.

  14. Tally and geometry definition influence on the computing time in radiotherapy treatment planning with MCNP Monte Carlo code.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Santos, A; Verdu, G

    2006-01-01

    The present work has simulated the photon and electron transport in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. In order to become computationally more efficient, with a view to practical radiotherapy treatment planning, this work focuses mainly on the analysis of dose results and on the computing time required by the different tallies applied in the model to speed up calculations.
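
    Tally-timing studies like this one are usually judged with the figure of merit, FOM = 1/(R^2 T), where R is the tally's relative error and T the run time; because R^2 T is roughly constant for a given tally, a choice that raises the FOM is a genuine efficiency gain. A small worked comparison with invented values:

      def figure_of_merit(rel_error, minutes):
          return 1.0 / (rel_error ** 2 * minutes)

      runs = {"tally A (invented)": (0.010, 30.0),
              "tally B (invented)": (0.014, 18.0)}
      for name, (r, t) in runs.items():
          print(f"{name:20s} FOM = {figure_of_merit(r, t):8.1f}")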

  15. Simulation of the spatial resolution of an X-ray imager based on zinc oxide nanowires in anodic aluminium oxide membrane by using MCNP and OPTICS codes

    NASA Astrophysics Data System (ADS)

    Samarin, S. N.; Saramad, S.

    2018-05-01

    The spatial resolution of a detector is a very important parameter for x-ray imaging. A bulk scintillation detector does not have good spatial resolution because of the spreading of light inside the scintillator. Nanowire scintillators, because of their wave-guiding behavior, can prevent the spreading of light and can improve the spatial resolution of traditional scintillation detectors. The zinc oxide (ZnO) scintillator nanowire, with its simple construction by electrochemical deposition in the regular hexagonal structure of an aluminium oxide membrane, has many advantages. The three-dimensional absorption of X-ray energy in the ZnO scintillator is simulated by a Monte Carlo transport code (MCNP). The transport, attenuation and scattering of the generated photons are simulated by a general-purpose scintillator light response simulation code (OPTICS). The results are compared with a previous publication which used a simulation code for the passage of particles through matter (Geant4). The results verify that this scintillator nanowire structure has a spatial resolution of less than one micrometre.

  16. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tippayakul, C.; Ivanov, K.; Misu, S.

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated through cooperation between Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version for use as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN replaces the existing ORIGEN-S depletion module in MCOR. Furthermore, online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version, instead of using a burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents comparisons of the results of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. Quite significant improvements of the results were observed in the comparisons, in terms of k-inf, fission rate distributions and isotopic contents. (authors)

  17. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited strong-scaling capability. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random-walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that, despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as they do not to GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.

  18. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark compares the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and the strong neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.

  19. Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.

    1993-07-01

    The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances >300 m, using the 60Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with the experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even 'simple' experiments.

  1. Validation of MCNP: SPERT-D and BORAX-V fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.; Palmer, B.

    1992-11-01

    This report discusses critical experiments involving SPERT-D [1,2] fuel elements and BORAX-V [3-8] fuel which have been modeled, with calculations performed using MCNP. MCNP is a Monte Carlo based transport code. For this study, continuous-energy nuclear data from the ENDF/B-V cross section library were used. The SPERT-D experiments consisted of various arrays of fuel elements moderated and reflected with either water or a uranyl nitrate solution. Some SPERT-D experiments used cadmium as a fixed neutron poison, while others were poisoned with various concentrations of boron in the moderating/reflecting solution. The BORAX-V experiments were arrays of either boiling fuel rod assemblies or superheater assemblies; both types of arrays were moderated and reflected with water. In one boiling fuel experiment, two fuel rods were replaced with borated stainless steel poison rods.

  2. Calculations of the thermal and fast neutron fluxes in the Syrian miniature neutron source reactor using the MCNP-4C code.

    PubMed

    Khattab, K; Sulieman, I

    2009-04-01

    The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). Continuous-energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of the MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method (197Au(n,γ)198Au and 59Co(n,γ)60Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.
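
    The foil-activation method used for the measurements rests on the activation equation: the measured activity, corrected for decay during cooling and for build-up during irradiation, gives the saturation activity A_sat = N·sigma·phi, from which the flux follows. A sketch with a hypothetical gold foil; only the 198Au half-life and the thermal capture cross section are real data, and ideal conditions (no self-shielding, thermal capture only) are assumed.

      import math

      N_A = 6.022e23
      mass_g, molar_mass = 1.0e-3, 197.0    # hypothetical 1 mg gold foil
      sigma = 98.65e-24                     # 197Au thermal capture xs (cm^2)
      lam = math.log(2.0) / (2.695 * 24 * 3600.0)   # 198Au decay constant

      t_irr, t_cool = 3600.0, 600.0         # irradiation / cooling times (s)
      A_measured = 5.0e4                    # Bq at counting time (invented)

      n_atoms = mass_g / molar_mass * N_A
      # Undo decay during cooling and the build-up during irradiation.
      A_sat = A_measured / (math.exp(-lam * t_cool)
                            * (1.0 - math.exp(-lam * t_irr)))
      phi = A_sat / (n_atoms * sigma)
      print(f"thermal flux ~ {phi:.2e} n cm^-2 s^-1")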

  3. TRIPOLI-4® - MCNP5 ITER A-lite neutronic model benchmarking

    NASA Astrophysics Data System (ADS)

    Jaboulay, J.-C.; Cayla, P.-Y.; Fausser, C.; Lee, Y.-K.; Trama, J.-C.; Li-Puma, A.

    2014-06-01

    The aim of this paper is to present the capability of TRIPOLI-4®, the CEA Monte Carlo code, to model a large-scale fusion reactor with a complex neutron source and geometry. In the past, numerous benchmarks were conducted for TRIPOLI-4® assessment on fusion applications. Analyses of experiments (KANT, OKTAVIAN, FNG) and numerical benchmarks (between TRIPOLI-4® and MCNP5) on the HCLL DEMO2007 and ITER models were carried out successively. In this previous ITER benchmark, nevertheless, only the neutron wall loading was analyzed; its main purpose was to present the MCAM (the FDS Team CAD import tool) extension for TRIPOLI-4®. Starting from this work, a more extensive benchmark has been performed on the estimation of the neutron flux, the nuclear heating in the shielding blankets, and the tritium production rate in the European TBMs (HCLL and HCPB), and it is presented in this paper. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model (version 4.1). Simplified TBMs (from KIT) have been integrated in the equatorial port. Comparisons of neutron wall loading, flux, nuclear heating and tritium production rate show good agreement between the two codes. Discrepancies are mainly within the statistical errors of the Monte Carlo codes.

  4. Shielding analysis of the Microtron MT-25 bunker using the MCNP-4C code and NCRP Report 51.

    PubMed

    Casanova, A O; López, N; Gelen, A; Guevara, M V Manso; Díaz, O; Cimino, L; D'Alessandro, K; Melo, J C

    2004-01-01

    A cyclic electron accelerator, the Microtron MT-25, will be installed in Havana, Cuba. Electrons, neutrons and gamma radiation up to 25 MeV can be produced in the MT-25. A detailed shielding analysis for the bunker is carried out in two ways: with the NCRP-51 Report and with the Monte Carlo method (MCNP-4C code). The wall and ceiling thicknesses are estimated with dose constraints of 0.5 and 20 mSv y^-1, respectively, and an area occupancy factor of 1/16. The two results are compared and a preliminary bunker design is shown.
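
    The NCRP-style side of such an analysis reduces to tenth-value-layer arithmetic: the required number of TVLs is log10 of the ratio of the unshielded dose (times occupancy) to the design limit, and the wall thickness follows. A sketch with placeholder inputs, not the MT-25 design values:

      import math

      H_unshielded = 2.0e3    # annual dose outside a bare wall (mSv/y), toy
      occupancy    = 1.0 / 16.0
      H_limit      = 0.5      # design constraint (mSv/y)
      tvl_cm       = 44.0     # tenth-value layer of concrete, toy value

      n_tvl = math.log10(H_unshielded * occupancy / H_limit)
      print(f"wall ~ {n_tvl * tvl_cm:.0f} cm concrete ({n_tvl:.2f} TVLs)")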

  5. Monte Carlo calculations of thermal neutron capture in gadolinium: a comparison of GEANT4 and MCNP with measurements.

    PubMed

    Enger, Shirin A; Munck af Rosenschöld, Per; Rezaei, Arash; Lundqvist, Hans

    2006-02-01

    GEANT4 is a Monte Carlo code originally implemented for high-energy physics applications and is well known for particle transport at high energies. The capacity of GEANT4 to simulate neutron transport in the thermal energy region is not equally well known. The aim of this article is to compare MCNP, a code commonly used in low-energy neutron transport calculations, and GEANT4 with experimental results, and to select the suitable code for gadolinium neutron capture applications. To account for thermal neutron scattering from chemically bound atoms [S(α,β)] in biological materials, a comparison of the thermal neutron fluence in a tissue-like poly(methylmethacrylate) phantom is made between MCNP4B, GEANT4 6.0 patch 1, and measurements from the neutron capture therapy (NCT) facility at Studsvik, Sweden. The fluence measurements agreed with the MCNP results calculated with S(α,β). The location of the thermal neutron peak calculated with MCNP without S(α,β), and with GEANT4, is shifted by about 0.5 cm towards a shallower depth and is 25%-30% lower in amplitude. The dose distribution from the gadolinium neutron capture reaction was then simulated by MCNP and compared with measured data. The simulations made with MCNP agree well with the experimental results. As long as thermal neutron scattering from chemically bound atoms is not included in GEANT4, it is not suitable for NCT applications.

  6. Performance of the MTR core with MOX fuel using the MCNP4C2 code.

    PubMed

    Shaaban, Ismail; Albarhoum, Mohamad

    2016-08-01

    The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and to perform the neutronic analysis for a new fuel, namely a MOX (U3O8 & PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. The new characteristics were compared to the original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the uranium fuel enrichment in 235U and the amount of 235U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively.

  7. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Morgan C.

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class 'u' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability

  8. Accelerating Pseudo-Random Number Generator for MCNP on GPU

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Hu, Qingfeng; Deng, Li; Gong, Zhenghu

    2010-09-01

    Pseudo-random number generators (PRNGs) are intensively used in many stochastic algorithms in particle simulations, artificial neural networks and other scientific computation. The PRNG in the Monte Carlo N-Particle Transport Code (MCNP) requires a long period, high quality, flexible jump-ahead, and high speed. In this paper, we implement such a PRNG for MCNP on NVIDIA's GTX200 Graphics Processing Units (GPU) using the CUDA programming model. Results show that speedups of 3.80 to 8.10 times are achieved compared with 4- to 6-core CPUs, and that more than 679.18 million double-precision random numbers can be generated per second on the GPU.
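
    The requirements listed above (long period, flexible jump) point to linear congruential generators with O(log n) jump-ahead, the family used in MCNP-class codes. The sketch below implements the jump-ahead by repeated squaring of the affine map; the multiplier is one of the published 63-bit LCG constants, but the parameter set should be treated as illustrative rather than as MCNP's documented default.

      M = 1 << 63                      # modulus 2^63
      A = 2806196910506780709          # a published 63-bit LCG multiplier
      C = 1

      def lcg_next(s):
          return (A * s + C) % M

      def lcg_jump(s, n):
          """Advance the state n steps in O(log n) by squaring the map."""
          a, c, an, cn = A, C, 1, 0
          while n:
              if n & 1:
                  an, cn = (an * a) % M, (cn * a + c) % M
              c = (c * (a + 1)) % M
              a = (a * a) % M
              n >>= 1
          return (an * s + cn) % M

      s = stepped = 1
      for _ in range(1000):
          stepped = lcg_next(stepped)
      print(stepped == lcg_jump(s, 1000))   # True: jump matches stepping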

  9. An Electron/Photon/Relaxation Data Library for MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, III, H. Grady

    The capabilities of the MCNP6 Monte Carlo code in simulation of electron transport, photon transport, and atomic relaxation have recently been significantly expanded. The enhancements include not only the extension of existing data and methods to lower energies, but also the introduction of new categories of data and methods. Support of these new capabilities has required major additions to and redesign of the associated data tables. In this paper we present the first complete documentation of the contents and format of the new electron-photon-relaxation data library now available with the initial production release of MCNP6.

  11. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.

  12. Verification of MCNP6.2 for Nuclear Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-05-10

    Several suites of verification/validation benchmark problems were run in early 2017 to verify that the new production release of MCNP6.2 performs correctly for nuclear criticality safety (NCS) applications. MCNP6.2 results for several NCS validation suites were compared to the results from MCNP6.1 [1] and MCNP6.1.1 [2]. MCNP6.1 is the production version of MCNP® released in 2013, and MCNP6.1.1 is the update released in 2014. MCNP6.2 includes all of the standard features for NCS calculations that have been available for the past 15 years, along with new features for sensitivity-uncertainty based methods for NCS validation [3]. Results from the benchmark suites were compared with results from previous verification testing [4-8]. Criticality safety analysts should consider testing MCNP6.2 on their particular problems and validation suites. No further development of MCNP5 is planned. MCNP6.1 is now 4 years old, and MCNP6.1.1 is now 3 years old. In general, released versions of MCNP are supported only for about 5 years, due to resource limitations. All future MCNP improvements, bug fixes, user support, and new capabilities are targeted only to MCNP6.2 and beyond.

  13. Treating electron transport in MCNP™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport infeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
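
    To make the stepping idea concrete, here is a schematic condensed-history loop. It is a sketch only: the power-law stopping power and Gaussian samplers are invented stand-ins for the Goudsmit-Saunderson and Landau/Blunck-Leisegang distributions named above, and none of the constants are MCNP's.

```python
import random

def stopping_power(E):
    """Invented placeholder for the collisional stopping power, MeV/cm."""
    return 2.0 / max(E, 0.01) ** 0.3

def ms_sigma(E, step):
    """Invented placeholder width (rad) of the per-step angular deflection."""
    return 0.05 * (step / max(E, 0.01)) ** 0.5

def condensed_history(E, cutoff=0.01):
    """Walk an electron from E (MeV) down to the cutoff in condensed steps."""
    path, steps = 0.0, 0
    while E > cutoff:
        step = 0.02 * E / stopping_power(E)       # ~2% mean fractional energy loss
        mean_loss = stopping_power(E) * step
        loss = max(random.gauss(mean_loss, 0.2 * mean_loss), 1e-6)  # straggling stand-in
        theta = random.gauss(0.0, ms_sigma(E, step))  # aggregate deflection per step
        E -= loss
        path += step
        steps += 1
    return path, steps

print(condensed_history(1.0))  # (total path length in cm, number of steps)
```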

  14. Using Machine Learning to Predict MCNP Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grechanuk, Pavel Aleksandrovi

    For many real-world applications in radiation transport where simulations are compared to experimental measurements, as in nuclear criticality safety, the bias (simulated minus experimental keff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. These data, coming from 1100+ benchmark cases, are used in this study of ML algorithms for criticality safety bias predictions.
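
    The pipeline implied here is a supervised regression: benchmark sensitivity profiles in, keff bias out. A generic stand-in using scikit-learn is sketched below; the random placeholder features and targets merely mark where Whisper's sensitivity vectors and measured-minus-calculated biases would go, and the model choice is an assumption, not the author's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cases, n_features = 1100, 200              # ~1100 benchmarks, flattened profiles
X = rng.normal(size=(n_cases, n_features))   # placeholder sensitivity vectors
y = 0.003 * rng.normal(size=n_cases)         # placeholder bias, keff units

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"cross-validated |bias| error: {mae * 1e5:.0f} pcm")
```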

  15. Addressing Fission Product Validation in MCNP Burnup Credit Criticality Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don; Bowen, Douglas G; Marshall, William BJ J

    2015-01-01

    The US Nuclear Regulatory Commission (NRC) Division of Spent Fuel Storage and Transportation issued Interim Staff Guidance (ISG) 8, Revision 3 in September 2012. This ISG provides guidance for NRC staff members' review of burnup credit (BUC) analyses supporting transport and dry storage of pressurized water reactor spent nuclear fuel (SNF) in casks. The ISG includes guidance for addressing validation of criticality (keff) calculations crediting the presence of a limited set of fission products and minor actinides (FP&MAs). Based on previous work documented in NRC Regulatory Guide (NUREG) Contractor Report (CR)-7109, the ISG recommends that NRC staff members accept the use of either 1.5 or 3% of the FP&MA worth, in addition to the bias and bias uncertainty resulting from validation of keff calculations for the major actinides in SNF, to conservatively account for the bias and bias uncertainty associated with the specified unvalidated FP&MAs. The ISG recommends (1) use of 1.5% of the FP&MA worth if a modern version of SCALE and its nuclear data are used and (2) 3% of the FP&MA worth for well-qualified, industry-standard code systems other than SCALE with the Evaluated Nuclear Data File, Part B (ENDF/B)-V, ENDF/B-VI, or ENDF/B-VII cross-section libraries. The work presented in this paper provides a basis for extending the use of the 1.5% FP&MA worth bias to BUC criticality calculations performed using the Monte Carlo N-Particle (MCNP) code. The extended use of the 1.5% FP&MA worth bias is shown to be acceptable by comparison of FP&MA worths calculated using SCALE and MCNP with ENDF/B-V, -VI, and -VII based nuclear data. The comparison supports use of the 1.5% FP&MA worth bias when the MCNP code is used for criticality calculations, provided that the cask design is similar to the hypothetical generic BUC-32 cask model and that the credited FP&MA worth is no more than 0.1 Δkeff (ISG-8, Rev. 3, Recommendation 4).
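
    As a worked example of the recommendation (with an invented worth value), the extra bias applied for the unvalidated FP&MAs is simply a fixed fraction of their keff worth:

```python
fpma_worth = 0.08    # credited FP&MA worth in delta-keff (must not exceed 0.1)
fraction = 0.015     # 1.5% for SCALE/modern data; 0.03 for other qualified codes
extra_bias = fraction * fpma_worth
print(f"additional FP&MA bias = {extra_bias:.5f} delta-keff")   # 0.00120
```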

  16. Probabilistic approach for decay heat uncertainty estimation using URANIE platform and MENDEL depletion code

    NASA Astrophysics Data System (ADS)

    Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.

    2014-06-01

    The knowledge of the decay heat quantity and the associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess the decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, which is a CEA uncertainty analysis platform. As preliminary applications, single thermal fission of uranium-235 and plutonium-239 and a PWR UOx spent fuel cell are investigated.
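
    A minimal sketch of such a probabilistic propagation, assuming a toy two-nuclide inventory: decay data are sampled from their evaluated uncertainties, the decay heat is recomputed per sample, and the spread of the results is the uncertainty. All numbers below are invented; URANIE and MENDEL orchestrate the same loop with real evaluations.

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = np.array([1.0e15, 5.0e14])     # atoms of two fission products (invented)
half_life = np.array([30.0, 2.1])   # nominal half-lives, years (invented)
rel_unc = np.array([0.01, 0.05])    # relative 1-sigma uncertainties (invented)
q_mev = np.array([0.6, 1.2])        # mean energy released per decay, MeV (invented)
t = 5.0                             # cooling time, years

heats = []
for _ in range(10_000):
    hl = half_life * (1.0 + rel_unc * rng.standard_normal(2))
    lam = np.log(2.0) / hl                                     # sampled decay constants, 1/yr
    heats.append(np.sum(lam * N0 * np.exp(-lam * t) * q_mev))  # MeV/yr

heats = np.asarray(heats)
print(f"decay heat: {heats.mean():.3e} MeV/yr +/- {100 * heats.std() / heats.mean():.2f}%")
```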

  17. Monte Carlo dose calculations in homogeneous media and at interfaces: a comparison between GEPTS, EGSnrc, MCNP, and measurements.

    PubMed

    Chibani, Omar; Li, X Allen

    2002-05-01

    Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. A good agreement is observed between the calorimetric electron dose measurements and the predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except in the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maxima in comparison with GEPTS and EGSnrc. EGS4 produces overly penetrating electron-dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons in both its elastic multiple-scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed within the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), a good agreement is found between the three

  18. Calculated organ doses for Mayak production association central hall using ICRP and MCNP.

    PubMed

    Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M

    2003-03-01

    As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the time period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo N-Particle (MCNP-4B) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in International Commission on Radiological Protection (ICRP) Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications, indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.
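
    The ICRP-coefficient side of such a comparison is a simple folding of the measured spectrum with fluence-to-dose conversion coefficients, bin by bin. The sketch below shows the operation; the three-bin spectrum and coefficients are invented placeholders, not ICRP 51/74 values.

```python
import numpy as np

flux = np.array([1.0e4, 5.0e3, 2.0e3])   # neutrons/cm^2/s in three energy bins (invented)
h_phi = np.array([10.0, 150.0, 400.0])   # pSv*cm^2 per neutron (placeholder values)

dose_rate_sv_per_h = np.sum(flux * h_phi) * 1e-12 * 3600.0
print(f"equivalent dose rate = {dose_rate_sv_per_h:.2e} Sv/h")
```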

  19. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. The capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
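
    The drift-flux closure at the heart of a three-equation formulation relates the void fraction to the phase superficial velocities. A minimal sketch of the standard Zuber-Findlay relation follows; the coefficient values are generic illustrations, not the correlations PATHS actually implements.

```python
def void_fraction(j_g, j_l, C0=1.13, v_gj=0.24):
    """Zuber-Findlay drift flux: alpha = j_g / (C0*(j_g + j_l) + v_gj).
    j_g, j_l are gas/liquid superficial velocities (m/s); C0 is the
    distribution parameter; v_gj is the drift velocity (m/s)."""
    return j_g / (C0 * (j_g + j_l) + v_gj)

print(f"alpha = {void_fraction(j_g=0.5, j_l=1.5):.3f}")
```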

  20. MCNP Output Data Analysis with ROOT (MODAR)

    NASA Astrophysics Data System (ADS)

    Carasco, C.

    2010-06-01

    MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows the user to take into account the detection system's time resolution (which is not possible with MCNP) as well as detector energy response functions and counting statistics in a straightforward way. Program summary. Program title: MODAR Catalogue identifier: AEGA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 155 373 No. of bytes in distributed program, including test data, etc.: 14 815 461 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PCs Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under Suse Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB. These data are running under ROOT and include consumption by ROOT itself. Classification: 17.6 External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/) Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small but is not efficient when the size of the simulated data is large, for example when time

  1. MCNP5 CALCULATIONS REPLICATING ARH-600 NITRATE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINFROCK SH

    This report serves to extend the previous document, 'MCNP Calculations Replicating ARH-600 Data', by replicating the nitrate curves found in ARH-600. This report includes the MCNP models used, the calculated critical dimension for each analyzed parameter set, and the resulting data libraries for use with the CritView code. As with the ARH-600 data, this report is not meant to replace the analysis of the fissile systems by qualified criticality personnel. The MCNP data is presented without accounting for the statistical uncertainty (although this is typically less than 0.001) or bias and, as such, the application of a reasonable safety margin is required. The data that follows pertains to uranyl nitrate and plutonium nitrate spheres, infinite cylinders, and infinite slabs of varying isotopic composition, reflector thickness, and molarity. Each of the cases was modeled in MCNP (version 5.1.40), using the ENDF/B-VI cross-section set. Given a molarity, isotopic composition, and reflector thickness, the fissile concentration and diameter (or thicknesses in the case of the slab geometries) were varied. The diameter for which k-effective equals 1.00 for a given concentration could then be calculated and graphed. These graphs are included in this report. The pages that follow describe the regions modeled, formulas for calculating the various parameters, a list of cross sections used in the calculations, a description of the automation routine and data, and finally the data output. The data of most interest are the critical dimensions of the various systems analyzed. This is presented graphically, and in table format, in Appendix B. Appendix C provides a text listing of the same data in a format that is compatible with the CritView code. Appendices D and E provide listings of example Template files and MCNP input files (these are discussed further in Section 4). Appendix F is a complete listing of all of the output data (i.e., all of the analyzed dimensions and
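
    The critical-dimension search the report automates can be pictured as a one-dimensional root find on k-effective. The sketch below bisects on diameter; keff_of() is a fake monotone stand-in for a scripted MCNP run so the example is self-contained.

```python
def keff_of(diameter_cm):
    """Stand-in for running MCNP on a sphere of this diameter."""
    return 1.0 + 0.01 * (diameter_cm - 40.0)    # invented monotone response

def critical_diameter(lo=10.0, hi=100.0, tol=1e-3):
    """Bisect until the diameter giving keff = 1.00 is bracketed within tol cm."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if keff_of(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"critical diameter ~ {critical_diameter():.3f} cm")   # ~40 cm here
```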

  2. MCNP and GADRAS Comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.

    To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have previously been utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time-efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STRs. These comparisons serve to identify regions of applicability for each code and provide estimates of uncertainty that may be anticipated. Furthermore, while performing these comparisons, the sensitivity of the results to modeling assumptions was also examined. These investigations serve to enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand potential limitations of utilizing specific codes for different aspects of the STR challenges.

  3. Development of Multi-physics (Multiphase CFD + MCNP) simulation for generic solution vessel power calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung Jun; Buechler, Cynthia Eileen

    The current study aims to predict the steady-state power of a generic solution vessel and to develop a corresponding heat transfer coefficient correlation for a Moly99 production facility by conducting a fully coupled multi-physics simulation. A prediction of steady-state power for the current application is inherently interconnected between the thermal-hydraulic characteristics (i.e., multiphase computational fluid dynamics, solved by ANSYS-Fluent 17.2) and the corresponding neutronic behavior (i.e., particle transport, solved by MCNP6.2) in the solution vessel. Thus, the development of a coupling methodology is vital to understand the system behavior for a variety of system designs and postulated operating scenarios. In this study, we report on the k-effective (keff) calculation for the baseline solution vessel configuration with a selected solution concentration using MCNP K-code modeling. The associated correlations of thermal properties (e.g., density, viscosity, thermal conductivity, specific heat) at the selected solution concentration are developed based on existing experimental measurements in the open literature. The numerical coupling methodology between multiphase CFD and MCNP is successfully demonstrated, and the detailed coupling procedure is documented. In addition, improved coupling methods capturing realistic physics in the solution vessel thermal-neutronic dynamics are proposed and tested further (i.e., dynamic height adjustment, mull-cell approach). As a key outcome of the current study, a multi-physics coupling methodology between MCFD and MCNP is demonstrated and tested for four different operating conditions, which are determined based on the neutron source strength at a fixed geometry condition. The steady-state powers for the generic solution vessel at various operating conditions are reported, and a generalized correlation of the heat transfer coefficient for the current application is discussed. The assessment of
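
    At its core, such a coupling methodology is a fixed-point (Picard) iteration between the two solvers. The sketch below shows that loop with cheap analytic stand-ins for the MCNP and CFD calls; the relaxation-free update and the placeholder physics are assumptions for illustration only.

```python
def run_neutronics(T):
    """Stand-in for an MCNP power evaluation at temperature T (K)."""
    return 100.0 / (1.0 + 1e-3 * (T - 300.0))    # power falls as T rises (invented)

def run_thermal_hydraulics(P):
    """Stand-in for a CFD solve returning temperature for power P (kW)."""
    return 300.0 + 0.5 * P                        # T rises with power (invented)

def coupled_solve(tol=1e-6, max_iter=100):
    power, temperature = 0.0, 300.0
    for it in range(max_iter):
        new_power = run_neutronics(temperature)
        temperature = run_thermal_hydraulics(new_power)
        if abs(new_power - power) < tol * max(abs(new_power), 1.0):
            return new_power, temperature, it
        power = new_power
    raise RuntimeError("Picard iteration did not converge")

print(coupled_solve())   # converges to ~95.4 kW, ~348 K for these stand-ins
```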

  4. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahler, Albert Comstock

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  5. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

    Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used for the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing the ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and the sampling algorithms in MCNP and Geant4 are also reviewed.

  6. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, along with data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  7. MCNP modelling of the wall effects observed in tissue-equivalent proportional counters.

    PubMed

    Hoff, J L; Townsend, L W

    2002-01-01

    Tissue-equivalent proportional counters (TEPCs) utilise tissue-equivalent materials to depict homogeneous microscopic volumes of human tissue. Although both the walls and the gas simulate the same medium, they respond to radiation differently. Density differences between the two materials cause distortions, or wall effects, in measurements, with the most dominant effect caused by delta rays. This study uses a Monte Carlo transport code, MCNP, to simulate the transport of secondary electrons within a TEPC. The Rudd model, a singly differential cross section with no dependence on electron direction, is used to describe the energy spectrum obtained by the impact of two iron beams on water. Based on the models used in this study, a wall-less TEPC had a higher lineal energy (keV·μm⁻¹) as a function of impact parameter than a solid-wall TEPC for the iron beams under consideration. An important conclusion of this study is that MCNP has the ability to model the wall effects observed in TEPCs.

  8. MCNP6 Status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goorley, John T.

    2012-06-25

    We, the development teams for MCNP, NJOY, and parts of ENDF, would like to invite you to a proposed 3-day workshop on October 30-31 and November 1, 2012, to be held at Los Alamos National Laboratory. At this workshop, we will review new and developing missions that MCNP6 and the underlying nuclear data are being asked to address. LANL will also present its internal plans to address these missions and recent advances in these three capabilities, and we will be interested to hear your input on these topics. Additionally, we are interested in hearing from you about further technical advances, missions, concerns, and other issues that we should be considering for both the short term (1-3 years) and the long term (4-6 years). What are the additional existing capabilities and methods that we should be investigating? The goal of the workshop is to refine priorities for MCNP6 transport methods, algorithms, physics, data and processing as they relate to the intersection of MCNP, NJOY and ENDF.

  9. Numerical Tests for the Problem of U-Pu Fuel Burnup in Fuel Rod and Polycell Models Using the MCNP Code

    NASA Astrophysics Data System (ADS)

    Muratov, V. G.; Lopatkin, A. V.

    An important aspect in the verification of the engineering techniques used in the safety analysis of MOX-fuelled reactors is the preparation of test calculations to determine nuclide composition variations under irradiation, together with the analysis of burnup problem errors resulting from various factors, such as the effect of nuclear data uncertainties on nuclide concentration calculations. So far, no universally recognized tests have been devised. A calculation technique has been developed for solving the problem using up-to-date calculation tools and the latest versions of nuclear libraries. Initially, in 1997, a code was drawn up in an effort under ISTC Project No. 116 to calculate the burnup in one VVER-1000 fuel rod, using the MCNP code. Later on, the authors developed a computation technique which allows calculating fuel burnup in models of a fuel rod, a fuel assembly, or the whole reactor. It became possible to apply it to fuel burnup in all types of nuclear reactors and subcritical blankets.
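
    Between transport calculations, such burnup tests reduce to solving the Bateman equations dN/dt = A N, with the matrix A built from decay constants and flux-weighted reaction rates. A two-nuclide sketch using a matrix exponential follows; the chain and the rates are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1.0e-9, 5.0e-7     # decay constants, 1/s (invented)
cap = 2.0e-9                    # capture rate sigma*phi, 1/s (invented)

# Nuclide 1 is destroyed by decay and capture; capture feeds nuclide 2.
A = np.array([[-(lam1 + cap), 0.0],
              [cap,          -lam2]])
N0 = np.array([1.0e24, 0.0])    # initial inventory, atoms
dt = 30.0 * 86400.0             # one-month depletion step, s

N = expm(A * dt) @ N0
print(N)                        # inventory after one step
```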

  10. Comparison of scientific computing platforms for MCNP4A Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-04-01

    The performance of seven computer platforms is evaluated with the widely used and internationally available MCNP4A Monte Carlo radiation transport code. All results are reproducible and are presented in such a way as to enable comparison with computer platforms not in the study. The authors observed that the HP/9000-735 workstation runs MCNP 50% faster than the Cray YMP 8/64. Compared with the Cray YMP 8/64, the IBM RS/6000-560 is 68% as fast, the Sun Sparc10 is 66% as fast, the Silicon Graphics ONYX is 90% as fast, the Gateway 2000 model 4DX2-66V personal computer is 27% as fast, and the Sun Sparc2 is 24% as fast. In addition to comparing the timing performance of the seven platforms, the authors observe that changes in compilers and software over the past 2 yr have resulted in only modest performance improvements, hardware improvements have enhanced performance by less than a factor of approximately 3, timing studies are very problem dependent, and MCNP4A runs about as fast as MCNP4.

  11. MCNP/X TRANSPORT IN THE TABULAR REGIME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HUGHES, H. GRADY

    2007-01-08

    The authors review the transport capabilities of the MCNP and MCNPX Monte Carlo codes in the energy regimes in which tabular transport data are available. Giving special attention to neutron tables, they emphasize the measures taken to improve the treatment of a variety of difficult aspects of the transport problem, including unresolved resonances, thermal issues, and the availability of suitable cross-section sets. They also briefly touch on the current situation in regard to photon, electron, and proton transport tables.

  12. On the effect of updated MCNP photon cross section data on the simulated response of the HPA TLD.

    PubMed

    Eakins, Jonathan

    2009-02-01

    The relative response of the new Health Protection Agency thermoluminescence dosimeter (TLD) has been calculated for Narrow Series X-ray distributions and 137Cs photon sources using the Monte Carlo code MCNP5, and the results compared with those obtained during its design stage using the predecessor code, MCNP4c2. The results agreed at intermediate energies (approximately 0.1 MeV to 137Cs), but differed at low energies (<0.1 MeV) by up to approximately 10%. This disparity has been ascribed to differences in the default photon interaction data used by the two codes, and derives ultimately from the effect on absorbed dose of the recent updates to the photoelectric cross sections. The sources of these data have been reviewed.

  13. Element analysis and calculation of the attenuation coefficients for gold, bronze and water matrixes using MCNP, WinXCom and experimental data

    NASA Astrophysics Data System (ADS)

    Esfandiari, M.; Shirmardi, S. P.; Medhat, M. E.

    2014-06-01

    In this study, element analysis and the mass attenuation coefficients for matrixes of gold, bronze and water with various impurities and concentrations of heavy metals (Cu, Mn, Pb and Zn) are evaluated and calculated with the MCNP simulation code for photons emitted from barium-133 and americium-241 sources, with energies between 1 and 100 keV. The MCNP data are compared with experimental data and with WinXCom results simulated by Medhat. The results showed that the results for the bronze and gold matrixes are in good agreement with the other methods for energies above 40 and 60 keV, respectively. However, for water matrixes with various impurities, there is good agreement between the three methods (MCNP, WinXCom and experiment) at low and high energies.
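
    The tabulated quantity connects directly to transmission through the Beer-Lambert law, I/I0 = exp(-(mu/rho) * rho * t). A one-line check with invented values (not WinXCom data):

```python
import math

mu_over_rho = 0.15   # mass attenuation coefficient, cm^2/g (invented placeholder)
rho = 8.9            # density, g/cm^3 (bronze-like, illustrative)
t = 0.5              # thickness, cm

print(f"I/I0 = {math.exp(-mu_over_rho * rho * t):.4f}")
```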

  14. DXRaySMCS: a user-friendly interface developed for prediction of diagnostic radiology X-ray spectra produced by Monte Carlo (MCNP-4C) simulation.

    PubMed

    Bahreyni Toossi, M T; Moradi, H; Zare, H

    2008-01-01

    In this work, the general-purpose Monte Carlo N-Particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'Diagnostic X-ray Spectra by Monte Carlo Simulation (DXRaySMCS)' was developed to facilitate the application of the MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78 spectra. Generally, there is good agreement between the two sets of spectra. No statistically significant differences have been observed between the IPEM78 reported spectra and the simulated spectra generated in this study.
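
    The three services the interface provides map naturally onto a small automation wrapper. The sketch below shows the pattern; the template token, the executable name, and the tally parsing are all hypothetical, since the actual DXRaySMCS internals are not described in the abstract.

```python
import subprocess

def run_case(template_text, kvp, infile="case.i", outfile="case.o"):
    """(i) edit the input deck, (ii) launch MCNP, (iii) reduce the output."""
    deck = template_text.replace("@KVP@", str(kvp))   # hypothetical placeholder token
    with open(infile, "w") as f:
        f.write(deck)
    # executable name and argument style are assumptions, not DXRaySMCS's actual call
    subprocess.run(["mcnp4c", f"i={infile}", f"o={outfile}"], check=True)
    return parse_spectrum(outfile)

def parse_spectrum(path):
    """Schematic reduction of a tally block to (energy, relative count) pairs."""
    spectrum = []
    with open(path) as f:
        for line in f:
            pass   # locate the tally print block and collect bin values here
    return spectrum
```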

  15. SABRINA: an interactive three-dimensional geometry-modeling program for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T. III

    SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.

  16. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristics. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration fields, (b) a primary 60Co calibration beam, and (c) 6-MV and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS mode closely resembled the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using the ITS mode had excellent agreement with the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work gives a better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with the ITS mode is recognized by this work as the most suitable tool.

  17. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from a basic assumption the two codes share: both assume a "free" electron model for Compton interactions. This assumption underestimates the results and invalidates comparisons between predicted and experimental spectra.
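
    The "free electron" treatment both codes used is the Klein-Nishina formula, which neglects electron binding and Doppler broadening, exactly the approximations that break down at the low photon energies relevant to X-ray fluorescence. A direct transcription of the differential cross section:

```python
import math

R_E = 2.8179403262e-13      # classical electron radius, cm
MEC2 = 0.51099895           # electron rest energy, MeV

def klein_nishina(E_mev, theta):
    """Free-electron Compton cross section d(sigma)/d(Omega), cm^2/sr."""
    k = E_mev / MEC2
    r = 1.0 / (1.0 + k * (1.0 - math.cos(theta)))   # scattered/incident energy ratio
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - math.sin(theta)**2)

print(klein_nishina(0.1, math.pi / 4))   # ~5e-26 cm^2/sr at 100 keV, 45 degrees
```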

  18. Shielding calculations for industrial 5/7.5MeV electron accelerators using the MCNP Monte Carlo Code

    NASA Astrophysics Data System (ADS)

    Peri, Eyal; Orion, Itzhak

    2017-09-01

    High-energy X-rays from accelerators are used to irradiate food ingredients to prevent the growth and development of unwanted biological organisms in food, and thereby extend the shelf life of the products. The X-rays are produced by accelerating 5 MeV electrons and bombarding them onto a heavy (high-Z) target. Since 2004, the FDA has approved using 7.5 MeV energy, providing higher production rates with lower treatment costs. In this study we calculated all the essential data needed for a straightforward concrete shielding design of typical food accelerator rooms. The following evaluations were done using the MCNP Monte Carlo code system: (1) angular dependence (0-180°) of the photon dose rate for 5 MeV and 7.5 MeV electron beams bombarding iron, aluminum, gold, tantalum, and tungsten targets; (2) angular dependence (0-180°) of the simulated bremsstrahlung spectral distributions for gold, tantalum, and tungsten bombarded by 5 MeV and 7.5 MeV electron beams; (3) concrete attenuation calculations at several photon emission angles for the 5 MeV and 7.5 MeV electron beams bombarding a tantalum target. Based on the simulations, we calculated the expected increase in dose rate for facilities intending to increase the energy from 5 MeV to 7.5 MeV, and the additional concrete thickness needed to keep the existing dose rate unchanged.

  19. MCNP output data analysis with ROOT (MODAR)

    NASA Astrophysics Data System (ADS)

    Carasco, C.

    2010-12-01

    MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows the user to take into account the detection system's time resolution (which is not possible with MCNP) as well as detector energy response functions and counting statistics in a straightforward way. New version program summary. Program title: MODAR Catalogue identifier: AEGA_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 150 927 No. of bytes in distributed program, including test data, etc.: 4 981 633 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PCs Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under Suse Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB. These data are running under ROOT and include consumption by ROOT itself. Classification: 17.6 Catalogue identifier of previous version: AEGA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1161 External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/) Does the new version supersede the previous version?: Yes Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII

  20. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    PubMed

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

    To make shields against neutrons and gamma rays compact and lightweight, a method optimizing the shield structure and material components together was established, employing a genetic algorithm and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were made to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance.
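
    A compact sketch of the GA-plus-transport loop described above, assuming invented three-layer materials: each chromosome encodes layer thicknesses, and the fitness combines shield weight with a dose penalty. The exponential transmission stand-in replaces the MCNP tally the paper actually uses.

```python
import math
import random

MU = [0.20, 0.05, 0.12]    # per-layer attenuation coefficients, 1/cm (invented)
RHO = [11.3, 0.95, 2.3]    # densities, g/cm^3 (invented)

def fitness(thicknesses):
    """Weight per unit area plus a large penalty if the dose target is missed."""
    dose = math.exp(-sum(mu * t for mu, t in zip(MU, thicknesses)))  # MCNP stand-in
    weight = sum(rho * t for rho, t in zip(RHO, thicknesses))        # g/cm^2
    return weight + 1e4 * max(dose - 1e-3, 0.0)

def evolve(pop_size=40, generations=200):
    pop = [[random.uniform(0.0, 60.0) for _ in MU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if random.random() < 0.2:                             # Gaussian mutation
                i = random.randrange(len(child))
                child[i] = max(0.0, child[i] + random.gauss(0.0, 2.0))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print([round(t, 1) for t in best], round(fitness(best), 2))
```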

  1. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, owing to the variety of resources and potentials they offer to carry out dose calculations, several aspects such as the physical models, cross sections, and numerical approximations used in the simulations still remain objects of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. The maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.

  2. Conversion coefficients for determination of dispersed photon dose during radiotherapy: NRUrad input code for MCNP.

    PubMed

    Shahmohammadi Beni, Mehrdad; Ng, C Y P; Krstic, D; Nikezic, D; Yu, K N

    2017-01-01

    Radiotherapy is a common cancer treatment modality, in which a certain amount of dose is delivered to the targeted organ, usually by photons generated by linear accelerator units. However, radiation scattering within the patient's body and the surrounding environment leads to dose dispersion to healthy tissues which are not targets of the primary radiation. Determination of the dispersed dose is important for assessing the risk and biological consequences in different organs or tissues. In the present work, the concept of a conversion coefficient (F) for the dispersed dose was developed, in which F = Dd/Dt, where Dd is the dispersed dose in a non-targeted tissue and Dt is the absorbed dose in the targeted tissue. To quantify Dd and Dt, a comprehensive model was developed using the Monte Carlo N-Particle (MCNP) package to simulate the linear accelerator head, the human phantom, the treatment couch and the radiotherapy treatment room. The present work also demonstrated the feasibility and power of parallel computing through the use of the Message Passing Interface (MPI) version of MCNP5.

  3. Conversion coefficients for determination of dispersed photon dose during radiotherapy: NRUrad input code for MCNP

    PubMed Central

    Krstic, D.; Nikezic, D.

    2017-01-01

    Radiotherapy is a common cancer treatment modality, in which a certain amount of dose is delivered to the targeted organ, usually by photons generated by linear accelerator units. However, radiation scattering within the patient's body and the surrounding environment leads to dose dispersion to healthy tissues which are not targets of the primary radiation. Determination of the dispersed dose is important for assessing the risk and biological consequences in different organs or tissues. In the present work, the concept of a conversion coefficient (F) for the dispersed dose was developed, in which F = Dd/Dt, where Dd is the dispersed dose in a non-targeted tissue and Dt is the absorbed dose in the targeted tissue. To quantify Dd and Dt, a comprehensive model was developed using the Monte Carlo N-Particle (MCNP) package to simulate the linear accelerator head, the human phantom, the treatment couch and the radiotherapy treatment room. The present work also demonstrated the feasibility and power of parallel computing through the use of the Message Passing Interface (MPI) version of MCNP5. PMID:28362837

  4. SUMCOR: Cascade summing correction for volumetric sources applying MCNP6.

    PubMed

    Dias, M S; Semmler, R; Moreira, D S; de Menezes, M O; Barros, L F; Ribeiro, R V; Koskinas, M F

    2018-04-01

    The main features of the code SUMCOR, developed for cascade summing correction for volumetric sources, are described. MCNP6 is used to track histories starting from individual points inside the volumetric source, for each set of cascade transitions from the radionuclide. Total and full-energy-peak (FEP) efficiencies are calculated for all gamma rays and X-rays involved in the cascade. The cascade summing correction is based on the matrix formalism developed by Semkow et al. (1990). Results are presented applying the experimental data sent to the participants of two intercomparisons organized by the ICRM-GSWG and coordinated by Dr. Marie-Christine Lépy from the Laboratoire National Henri Becquerel (LNE-LNHB), CEA, in 2008 and 2010 respectively, and are compared with those of the other participants in the intercomparisons.

  5. MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.

    PubMed

    Hendriks, P H G M; Maucec, M; de Meijer, R J

    2002-09-01

    Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the 232Th and 238U series are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.

  6. Determination of neutron flux distribution in an Am-Be irradiator using the MCNP.

    PubMed

    Shtejer-Diaz, K; Zamboni, C B; Zahn, G S; Zevallos-Chávez, J Y

    2003-10-01

    A neutron irradiator has been assembled at IPEN facilities to perform qualitative-quantitative analysis of many materials using thermal and fast neutrons outside the nuclear reactor premises. To establish the prototype specifications, the neutron flux distribution and the absorbed dose rates were calculated using the MCNP computer code. These theoretical predictions then allow one to discuss the optimum irradiator design and its performance.

  7. An MCNP-based model of a medical linear accelerator x-ray photon beam.

    PubMed

    Ajaj, F A; Ghassal, N M

    2003-09-01

    The major components in the x-ray photon beam path of the treatment head of the VARIAN Clinac 2300 EX medical linear accelerator were modeled and simulated using the Monte Carlo N-Particle radiation transport computer code (MCNP). Simulated components include the x-ray target, primary conical collimator, x-ray beam flattening filter and secondary collimators. X-ray photon energy spectra and angular distributions were calculated using the model. The x-ray beam emerging from the secondary collimators was scored by considering the total x-ray spectrum from the target as the source of x-rays at the target position. The depth dose distributions and dose profiles at different depths and field sizes have been calculated at a nominal operating potential of 6 MV and found to be within acceptable limits. It is concluded that accurate specification of the component dimensions, composition and nominal accelerating potential gives a good assessment of the x-ray energy spectra.

  8. Parameter dependence of the MCNP electron transport in determining dose distributions.

    PubMed

    Reynaert, N; Palmans, H; Thierens, H; Jeraj, R

    2002-10-01

    In this paper, a detailed study of the electron transport in MCNP is performed, separating the effects of the energy binning technique on the energy loss rate, the scattering angles, and the sub-step length as a function of energy. As this problem is already well known, we focus here on explaining why the default mode of MCNP can lead to large deviations. The resolution dependence was investigated as well. An error in the MCNP code in the energy binning technique in the default mode (DBCN 18 card = 0) was revealed, more specifically in the updating of cross sections when a sub-step corresponding to a high energy loss is performed. This updating error is not present in the ITS mode (DBCN 18 card = 1) and leads to a systematically lower dose deposition rate in the default mode. The effect is present for all energies studied (0.5-10 MeV) and depends on the geometrical resolution of the scoring regions and the energy grid resolution. The effect of the energy binning technique is of the same order as that of the updating error for energies below 2 MeV, and becomes less important for higher energies. For a 1 MeV point source surrounded by homogeneous water, the deviation of the default MCNP results at short distances attains 9% and remains approximately the same for all energies. This effect could be corrected by not completing an energy step each time an electron changes energy bins during a sub-step. Another solution consists of performing all calculations in the ITS mode. A further problem is the resolution dependence, which is present even in the ITS mode: the higher the resolution chosen (the smaller the scoring regions), the faster the energy is deposited along the electron track. It is proven that this is caused by starting a new energy step when crossing a surface. The resolution effect should be investigated for every specific case when calculating dose distributions around beta sources. The resolution should not be higher than 0.85*(1-EFAC

  9. MCNP calculations for container inspection with tagged neutrons

    NASA Astrophysics Data System (ADS)

    Boghen, G.; Donzella, A.; Filippini, V.; Fontana, A.; Lunardon, M.; Moretto, S.; Pesente, S.; Zenoni, A.

    2005-12-01

    We are developing an innovative tagged neutron inspection system (TNIS) for cargo containers: the system will allow us to assay the chemical composition of suspect objects previously identified by a standard X-ray radiography. The operation of the system is being extensively simulated using the MCNP Monte Carlo code to study different inspection geometries, cargo loads and hidden threat materials. Preliminary simulations evaluating the signal and the signal-to-background ratio expected as a function of the system parameters are presented. The results for a selection of cases are briefly discussed and demonstrate that the system can operate successfully in different filling conditions.

  10. Delta-ray Production in MCNP 6.2.0

    NASA Astrophysics Data System (ADS)

    Anderson, C.; McKinney, G.; Tutt, J.; James, M.

    Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has been extended to all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. This paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.

  11. Status Report on the MCNP 2020 Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan

    2017-10-02

    The discussion below provides a status report on the MCNP 2020 initiative. It includes discussion of the history of MCNP 2020, accomplishments during 2013-17, priorities for near-term development, other related efforts, a brief summary, and a list of references for the plans and work accomplished.

  12. Application of the MCNP5 code to the modeling of vaginal and intra-uterine applicators used in intracavitary brachytherapy: a first approach

    NASA Astrophysics Data System (ADS)

    Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.

    2008-02-01

    Brachytherapy is a radiotherapy treatment in which encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool to simulate sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used in combination with other treatments to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both the applicator and the HDR source in order to evaluate the dose at a reference point as well as the effect of the materials constituting the applicators on the near-field dose. The MCNP5 code, based on the Monte Carlo method, has been used for the simulation. Dose calculations have been performed with the *F8 energy deposition tally, taking into account photons and electrons. Results from the simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has been underlined.

  13. Shielding properties of 80TeO2-5TiO2-(15-x) WO3-xAnOm glasses using WinXCom and MCNP5 code

    NASA Astrophysics Data System (ADS)

    Dong, M. G.; El-Mallawany, R.; Sayyed, M. I.; Tekin, H. O.

    2017-12-01

    The gamma-ray shielding properties of 80TeO2-5TiO2-(15-x)WO3-xAnOm glasses, where AnOm is Nb2O5 = 0.01 or 5, Nd2O3 = 3 or 5, and Er2O3 = 5 mol%, have been evaluated. The shielding parameters (mass attenuation coefficients, half-value layers, and macroscopic effective removal cross sections for fast neutrons) have been computed using the WinXCom program and the MCNP5 Monte Carlo code. In addition, exposure buildup factor values were calculated using the geometric progression (G-P) method. Variations of the shielding parameters with rare-earth oxide addition to the glasses and with photon energy are discussed.
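
    Two of the listed parameters have simple closed forms worth noting: the half-value layer is HVL = ln(2)/mu for linear attenuation coefficient mu, and the simplified G-P buildup form is B(x) = 1 + (b-1)(K^x - 1)/(K-1) for K != 1 (linear in x when K = 1). The coefficients below are invented, not fitted values for these glasses.

```python
import math

def hvl(mu_per_cm):
    """Half-value layer (cm) from the linear attenuation coefficient (1/cm)."""
    return math.log(2.0) / mu_per_cm

def gp_buildup(x_mfp, b=1.8, K=0.9):
    """Simplified geometric-progression buildup factor at x mean free paths."""
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x_mfp
    return 1.0 + (b - 1.0) * (K**x_mfp - 1.0) / (K - 1.0)

print(f"HVL = {hvl(0.58):.3f} cm, B(5 mfp) = {gp_buildup(5.0):.3f}")
```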

  14. Possible Improvements to MCNP6 and its CEM/LAQGSM Event-Generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan Georgievich

    2015-08-04

    This report is intended for the MCNP6 developers and sponsors of MCNP6. It presents a set of suggested possible future improvements to MCNP6 and to its CEM03.03 and LAQGSM03.03 event generators. A few suggested modifications of MCNP6 are quite simple, aimed at avoiding possible problems with running MCNP6 on various computers; these changes are not expected to change or improve any results, but should make the use of MCNP6 easier, and are expected to require limited manpower resources. On the other hand, several other suggested improvements require serious further development of nuclear reaction models and are expected to improve significantly the predictive power of MCNP6 for a number of nuclear reactions; such developments, however, require several years of work by real experts on nuclear reactions.

  15. Delta-ray Production in MCNP 6.2.0

    DOE PAGES

    Anderson, Casey Alan; McKinney, Gregg Walter; Tutt, James Robert; ...

    2017-10-26

    Secondary electrons in the form of delta-rays, also referred to as knock-on electrons, have been a feature of MCNP for electron and positron transport for over 20 years. While MCNP6 now includes transport for a suite of heavy ions and charged particles from its integration with MCNPX, the production of delta-rays was still limited to electron and positron transport. In the newest release of MCNP6, version 6.2.0, delta-ray production has now been extended to all energetic charged particles. The basis of this production is the analytical formulation from Rossi and ICRU Report 37. As a result, this paper discusses the MCNP6 heavy charged-particle implementation and provides production results for several benchmark/test problems.

  16. MCNP simulation of a Theratron 780 radiotherapy unit.

    PubMed

    Miró, R; Soler, J; Gallardo, S; Campayo, J M; Díez, S; Verdú, G

    2005-01-01

    A Theratron 780 (MDS Nordion) 60Co radiotherapy unit has been simulated with the Monte Carlo code MCNP. The unit has been realistically modelled: the cylindrical source capsule and its housing, the rectangular collimator system, both the primary and secondary jaws and the air gaps between the components. Different collimator openings, ranging from 5 x 5 cm2 to 20 x 20 cm2 (narrow and broad beams) at a source-surface distance equal to 80 cm have been used during the study. In the present work, we have calculated spectra as a function of field size. A study of the variation of the electron contamination of the 60Co beam has also been performed.

  17. Monte Carlo calculation for the development of a BNCT neutron source (1 eV-10 keV) using MCNP code.

    PubMed

    El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H

    2008-09-01

    Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumours such as glioblastoma multiforme (GBM). For this purpose, we have studied three different neutron moderators (H(2)O, D(2)O and BeO) and their combinations, four reflectors (Al(2)O(3), C, Bi, and Pb) and two filters (Cd and Bi). The calculations showed that the best assembly configuration corresponds to the combination of the three moderators H(2)O, BeO and D(2)O with the Al(2)O(3) reflector and the two filters Cd+Bi; it optimizes the epithermal fraction of the neutron spectrum to 72% and minimizes the thermal fraction to 4%, and thus can be used to treat deep brain tumours. The calculations have been performed by means of the Monte Carlo N-Particle code (MCNP5). Our results strongly encourage further study of irradiation of the head with epithermal neutron fields.

  18. Assessment of background hydrogen by the Monte Carlo computer code MCNP-4A during measurements of total body nitrogen.

    PubMed

    Ryde, S J; al-Agel, F A; Evans, C J; Hancock, D A

    2000-05-01

    The use of a hydrogen internal standard to enable the estimation of absolute mass during measurement of total body nitrogen by in vivo neutron activation is an established technique. Central to the technique is a determination of the H prompt gamma-ray counts arising from the subject. In practice, interference counts from other sources (e.g., neutron shielding) are included. This study reports the use of the Monte Carlo computer code MCNP-4A to investigate the interference counts arising from shielding, both with and without a phantom containing a urea solution. Over a range of phantom sizes (depth 5 to 30 cm, width 20 to 40 cm), the counts arising from shielding increased by between 4% and 32% compared with the counts without a phantom. For any given depth, the counts increased approximately linearly with width. For any given width, there was little increase for depths exceeding 15 cm. The shielding counts comprised between 15% and 26% of those arising from the urea phantom. These results, although specific to the Swansea apparatus, suggest that extraneous hydrogen counts can be considerable and depend strongly on the subject's size.

  19. SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Amit R.; Ganesan, S.; Trkov, A.

    2005-05-24

    A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been evolved for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF-formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures have been applied to the SEFOR fast reactor benchmark problem (a sodium-cooled fast reactor benchmark described in the ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE-computed results. The SIGACE code and the recipe are further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments, as reported by Mosteller and Eisenhart. The SIGACE code, which has been tested with several FENDL/MC files, will be available, free of cost, upon request, from the Nuclear Data Section of the IAEA.
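
    The thinning step mentioned above lends itself to a simple greedy scheme: drop every interior grid point that linear interpolation between the retained neighbours already reproduces to within a tolerance. A minimal Python sketch of that idea (not the actual SIGACE routine):

      import numpy as np

      def thin(E, sigma, rtol=0.001):
          """Remove grid points that linear interpolation between the
          retained neighbours reproduces to within rtol; E, sigma are
          ascending numpy arrays."""
          keep = [0]
          i = 0
          while i < len(E) - 1:
              j = i + 2
              while j < len(E):
                  approx = np.interp(E[i+1:j], [E[i], E[j]], [sigma[i], sigma[j]])
                  if not np.allclose(approx, sigma[i+1:j], rtol=rtol):
                      break
                  j += 1
              keep.append(j - 1)
              i = j - 1
          return E[keep], sigma[keep]

      E = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
      s = np.array([5.0, 4.0, 3.0, 2.0, 2.0])
      print(thin(E, s))   # interior points on the linear run are dropped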

  20. Improved radial dose function estimation using current version MCNP Monte-Carlo simulation: Model 6711 and ISC3500 125I brachytherapy sources.

    PubMed

    Duggan, Dennis M

    2004-12-01

    Improved cross-sections in a new version of the Monte-Carlo N-particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte-Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.
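
    For context, the TG-43 radial dose function is obtained from a simulated transverse-axis dose-rate profile by dividing out the geometry function. A minimal sketch using the point-source approximation G(r) = 1/r^2 and hypothetical tally values (a line-source geometry function would replace the r^2 factor):

      import numpy as np

      def radial_dose_function(r, dose_rate, r0=1.0):
          """TG-43 radial dose function g(r) from a transverse-axis
          dose-rate profile, point-source geometry function G(r)=1/r**2,
          normalised at the reference distance r0 (cm)."""
          d0 = np.interp(r0, r, dose_rate)
          return (dose_rate / d0) * (r / r0) ** 2

      r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # cm
      d = np.array([4.3, 1.00, 0.22, 0.085, 0.018])    # hypothetical tallies
      print(radial_dose_function(r, d))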

  1. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion-relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion-relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6, and 84p if using MCNP-5.
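
    A statement such as "within 1σ of the MCNP statistical uncertainty" amounts to combining the two tallies' relative errors in quadrature. A minimal sketch, with hypothetical tally values:

      import math

      def tallies_agree(x1, r1, x2, r2, n_sigma=1.0):
          """Test whether two MCNP tally means agree within n_sigma
          combined standard deviations; r1, r2 are the MCNP relative
          errors (sigma/mean) reported with each tally."""
          sigma = math.hypot(x1 * r1, x2 * r2)
          return abs(x1 - x2) <= n_sigma * sigma

      # e.g. photon heating with two libraries (hypothetical values):
      print(tallies_agree(3.42e-6, 0.010, 3.47e-6, 0.012))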

  2. MCNP HPGe detector benchmark with previously validated Cyltran model.

    PubMed

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated following the methodology used in previous experimental measurements and simulations of a 280 cm(3) HPGe detector. Below 1000 keV the MCNP data correlated with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  3. Simulations of neutron transport at low energy: a comparison between GEANT and MCNP.

    PubMed

    Colonna, N; Altieri, S

    2002-06-01

    The use of the simulation tool GEANT for neutron transport at energies below 20 MeV is discussed, in particular with regard to shielding and dose calculations. The reliability of the GEANT/MICAP package for neutron transport has been verified by comparing the results of simulations performed with this package over a wide energy range with the predictions of MCNP-4B, a code commonly used for neutron transport at low energy. A reasonable agreement between the results of the two codes is found for the neutron flux through a slab of material (iron and ordinary concrete), as well as for the dose released in soft tissue by neutrons. These results justify the use of the GEANT/MICAP code for neutron transport in a wide range of applications, including health physics problems.

  4. Inter-comparison of Dose Distributions Calculated by FLUKA, GEANT4, MCNP, and PHITS for Proton Therapy

    NASA Astrophysics Data System (ADS)

    Yang, Zi-Yi; Tsai, Pi-En; Lee, Shao-Chun; Liu, Yen-Chiang; Chen, Chin-Cheng; Sato, Tatsuhiko; Sheu, Rong-Jiun

    2017-09-01

    The dose distributions from proton pencil beam scanning were calculated with FLUKA, GEANT4, MCNP, and PHITS in order to investigate their applicability to proton radiotherapy. The first case studied was the integrated depth dose curves (IDDCs) from a 100 and a 226-MeV proton pencil beam impinging on a water phantom. The calculated IDDCs agree with each other as long as each code employs 75 eV for the ionization potential of water. The second case considered a condition similar to the first, but with proton energies in a Gaussian distribution. The comparison to measurement indicates that the inter-code differences might be due not only to different stopping powers but also to different nuclear physics models. How the physics parameter settings affect the computation time is also discussed. In the third case, the applicability of each code to pencil beam scanning was confirmed by delivering a uniform volumetric dose distribution based on the treatment plan; the results showed general agreement between the codes, the treatment plan, and the measurement, except for some deviations in the penumbra region. This study has demonstrated that the selected codes are all capable of performing dose calculations for therapeutic scanning proton beams with proper physics settings.
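
    The role of the 75 eV ionization potential enters through the Bethe stopping-power formula. A first-order Python sketch (density and shell corrections neglected, W_max taken in the heavy-particle limit) reproduces the expected ~7.3 MeV cm^2/g for 100 MeV protons in water:

      import math

      ME_C2 = 0.510998950   # electron rest energy, MeV
      K_CONST = 0.307075    # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2/mol

      def bethe_stopping(T_MeV, M_MeV=938.272, z=1, Z_over_A=0.5551, I_eV=75.0):
          """First-order Bethe mass stopping power (MeV cm^2/g) for a
          heavy charged particle; defaults are protons in water."""
          gamma = 1.0 + T_MeV / M_MeV
          beta2 = 1.0 - 1.0 / gamma**2
          bg2 = beta2 * gamma**2
          w_max = 2.0 * ME_C2 * bg2      # valid while 2*gamma*m_e/M << 1
          I = I_eV * 1e-6                # MeV
          log_term = 0.5 * math.log(2.0 * ME_C2 * bg2 * w_max / I**2)
          return K_CONST * z**2 * Z_over_A / beta2 * (log_term - beta2)

      print(bethe_stopping(100.0))   # ~7.3 MeV cm^2/g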

  5. Acceleration of MCNP calculations for small pipe configurations by using Weight Window importance cards created by the SN-3D ATTILA

    NASA Astrophysics Data System (ADS)

    Castanier, Eric; Paterne, Loic; Louis, Céline

    2017-09-01

    In nuclear engineering, one has to manage both time and precision. Especially in shielding design, one has to be accurate and efficient to reduce cost (shielding thickness optimization), and for this, 3D codes are used. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that go through large concrete walls. We assess the impact of the weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative and manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computing time + modelling time) and on the effort required from the engineer.
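
    The figure of merit used in such comparisons is FOM = 1/(R^2*T), so halving the relative error at a fixed runtime is worth a factor-of-four speedup. A trivial sketch with hypothetical numbers:

      def figure_of_merit(rel_err, minutes):
          """MCNP figure of merit, FOM = 1/(R^2 * T); a variance-reduction
          scheme only pays off if it raises the FOM."""
          return 1.0 / (rel_err**2 * minutes)

      # Hypothetical comparison for a small-pipe streaming problem:
      print(figure_of_merit(0.05, 600.0))   # manual MCNP weight windows
      print(figure_of_merit(0.02, 480.0))   # ATTILA-generated weight windows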

  6. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).

    PubMed

    Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M

    2007-02-21

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
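
    The S-factor itself is simple arithmetic once the Monte Carlo absorbed fraction is available: energy emitted per decay, times the absorbed fraction, divided by the target mass. A sketch combining the beta self-absorbed fraction quoted above with a hypothetical organ mass and mean beta energy:

      MEV_TO_J = 1.602176634e-13

      def s_factor(mean_energy_MeV, absorbed_fraction, target_mass_g):
          """MIRD-type S factor (Gy Bq-1 s-1) for a single emission:
          energy per decay times absorbed fraction over target mass."""
          return (mean_energy_MeV * MEV_TO_J * absorbed_fraction
                  / (target_mass_g * 1e-3))

      # Thyroid self-dose using the 0.616 beta self-absorbed fraction
      # quoted above; 0.19 MeV mean beta energy and 3 mg organ mass are
      # hypothetical illustration values:
      print(s_factor(0.19, 0.616, 0.003))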

  7. Validation of the MCNP computational model for neutron flux distribution with the neutron activation analysis measurement

    NASA Astrophysics Data System (ADS)

    Tiyapun, K.; Chimtin, M.; Munsorn, S.; Somchit, S.

    2015-05-01

    The objective of this work is to demonstrate a method for validating the prediction of calculation methods for the neutron flux distribution in the irradiation tubes of the TRIGA research reactor (TRR-1/M1) using an MCNP computer code model. The reaction rates used in the experiment include the 27Al(n, α)24Na and 197Au(n, γ)198Au reactions. Aluminium (99.9 wt%) and gold (0.1 wt%) foils, as well as gold foils covered with cadmium, were irradiated in 9 locations in the core, referred to as CT, C8, C12, F3, F12, F22, F29, G5, and G33. The experimental results were compared to calculations performed using MCNP with a detailed geometrical model of the reactor core. The experimental and calculated normalized reaction rates in the reactor core are in good agreement for both reactions, showing that the material and geometrical properties of the reactor core are modelled very well. The results indicated that the difference between the experimental measurements and the calculation using the MCNP geometrical model was below 10%. In conclusion, the MCNP computational model used to calculate the neutron flux and reaction rate distributions in the reactor core can be used for other reactor core parameters, including neutron spectra, dose rates, power peaking factors and optimization of research reactor utilization in the future, with confidence in the accuracy and reliability of the calculation.

  8. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  9. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ˜50 Myr to ˜4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  10. Depletion-based techniques for super-resolution imaging of NV-diamond

    NASA Astrophysics Data System (ADS)

    Jaskula, Jean-Christophe; Trifonov, Alexei; Glenn, David; Walsworth, Ronald

    2012-06-01

    We discuss the development and application of depletion-based techniques for super-resolution imaging of NV centers in diamond: stimulated emission depletion (STED), metastable ground state depletion (GSD), and dark state depletion (DSD). NV centers in diamond do not bleach under optical excitation, are not biotoxic, and have long-lived electronic spin coherence and spin-state-dependent fluorescence. Thus NV-diamond has great potential as a fluorescent biomarker and as a magnetic biosensor.

  11. Calculation of self-shielding factor for neutron activation experiments using GEANT4 and MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero-Barrientos, Jaime; Molina, F.

    2016-07-07

    The neutron self-shielding factor G as a function of the neutron energy was obtained for 14 pure metallic samples in 1000 isolethargic energy bins from 1·10^-5 eV to 2·10^7 eV using Monte Carlo simulations in GEANT4 and MCNP6. The comparison of these two Monte Carlo codes shows small differences in the final self-shielding factor, mostly due to the different cross-section databases that each program uses.
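
    A convenient analytic cross-check for such Monte Carlo G(E) values is the parallel-beam slab result G = (1 - exp(-tau))/tau for a purely absorbing sample. A sketch with representative (not the paper's) numbers:

      import math

      def slab_self_shielding(sigma_t_cm2, n_per_cm3, thickness_cm):
          """Flux-averaged self-shielding factor of a purely absorbing
          slab in a parallel beam; tau is the optical thickness."""
          tau = sigma_t_cm2 * n_per_cm3 * thickness_cm
          return 1.0 if tau < 1e-12 else (1.0 - math.exp(-tau)) / tau

      # Representative values: 6 b total cross section, metal-like
      # atom density, 1 mm foil:
      print(slab_self_shielding(6e-24, 8.5e22, 0.1))   # ~0.975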

  12. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
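
    Analytic test problems of this kind are verified against closed-form answers; for example, a one-group infinite medium gives k_inf = nu*Sigma_f/Sigma_a. A trivial sketch with hypothetical one-group data:

      def k_infinity(nu, sigma_f, sigma_a):
          """One-group infinite-medium multiplication factor,
          k_inf = nu * Sigma_f / Sigma_a, the kind of closed-form
          answer an analytic MCNP test problem is checked against."""
          return nu * sigma_f / sigma_a

      # Hypothetical one-group macroscopic cross sections (1/cm):
      print(k_infinity(nu=2.5, sigma_f=0.05, sigma_a=0.10))   # 1.25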

  13. An analysis of MCNP cross-sections and tally methods for low-energy photon emitters.

    PubMed

    Demarco, John J; Wallace, Robert E; Boedeker, Kirsten

    2002-04-21

    Monte Carlo calculations are frequently used to analyse a variety of radiological science applications using low-energy (10-1000 keV) photon sources. This study seeks to create a low-energy benchmark for the MCNP Monte Carlo code by simulating the absolute dose rate in water and the air-kerma rate for monoenergetic point sources with energies between 10 keV and 1 MeV. The analysis compares four cross-section datasets as well as the tally method for collision kerma versus absorbed dose. The total photon attenuation coefficient cross-section for low atomic number elements has changed significantly as cross-section data have changed between 1967 and 1989. Differences of up to 10% are observed in the photoelectric cross-section for water at 30 keV between the standard MCNP cross-section dataset (DLC-200) and the most recent XCOM/NIST tabulation. At 30 keV, the absolute dose rate in water at 1.0 cm from the source increases by 7.8% after replacing the DLC-200 photoelectric cross-sections for water with those from the XCOM/NIST tabulation. The differences in the absolute dose rate are analysed when calculated with either the MCNP absorbed dose tally or the collision kerma tally. Significant differences between the collision kerma tally and the absorbed dose tally can occur when using the DLC-200 attenuation coefficients in conjunction with a modern tabulation of mass energy-absorption coefficients.
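
    The collision kerma versus absorbed dose distinction matters because the kerma tally is just fluence times energy times (mu_en/rho), which equals absorbed dose only under charged-particle equilibrium. A sketch using an approximate NIST mu_en/rho value for 30 keV photons in water:

      def collision_kerma_Gy(fluence_per_cm2, E_MeV, mu_en_over_rho_cm2_g):
          """Collision kerma K = Phi * E * (mu_en/rho), in Gy; equals the
          absorbed dose under charged-particle equilibrium."""
          MEV_TO_J = 1.602176634e-13
          return fluence_per_cm2 * E_MeV * MEV_TO_J * mu_en_over_rho_cm2_g * 1e3

      # 30 keV photons in water, mu_en/rho ~ 0.156 cm^2/g (approximate
      # NIST value), hypothetical fluence of 1e9 photons/cm^2:
      print(collision_kerma_Gy(1e9, 0.030, 0.156))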

  14. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety (NCS) validation. It uses the sensitivity profile data for an application as computed by MCNP6, along with covariance files for the nuclear data, to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  15. An Assessment of the Detection of Highly Enriched Uranium and its Use in an Improvised Nuclear Device using the Monte Carlo Computer Code MCNP-5

    NASA Astrophysics Data System (ADS)

    Cochran, Thomas

    2007-04-01

    In 2002 and again in 2003, an investigative journalist unit at ABC News transported a 6.8 kilogram metallic slug of depleted uranium (DU) via shipping container from Istanbul, Turkey to Brooklyn, NY and from Jakarta, Indonesia to Long Beach, CA. Targeted inspection of these shipping containers by Department of Homeland Security (DHS) personnel, which included the use of gamma-ray imaging, portal monitors and hand-held radiation detectors, did not uncover the hidden DU. Monte Carlo analysis of the gamma-ray intensity and spectrum of a DU slug and one consisting of highly enriched uranium (HEU) showed that DU was a proper surrogate for testing the ability of DHS to detect the illicit transport of HEU. Our analysis using MCNP-5 illustrated the ease of fully shielding an HEU sample to avoid detection. The assembly of an Improvised Nuclear Device (IND), a crude atomic bomb, from sub-critical pieces of HEU metal was then examined via Monte Carlo criticality calculations. Nuclear explosive yields of such an IND as a function of the speed of assembly of the sub-critical HEU components were derived. A comparison was made between the more rapid assembly of sub-critical pieces of HEU in the "Little Boy" (Hiroshima) weapon's gun barrel and gravity assembly (i.e., dropping one sub-critical piece of HEU on another from a specified height). Based on the difficulty of detecting HEU and the straightforward construction of an IND utilizing HEU, current U.S. government policy must be modified to more urgently prioritize eliminating and securing the global inventories of HEU.

  16. Monte Carlo simulation of x-ray spectra in diagnostic radiology and mammography using MCNP4C

    NASA Astrophysics Data System (ADS)

    Ay, M. R.; Shahriari, M.; Sarkar, S.; Adib, M.; Zaidi, H.

    2004-11-01

    The general purpose Monte Carlo N-particle radiation transport computer code (MCNP4C) was used for the simulation of x-ray spectra in diagnostic radiology and mammography. The electrons were transported until they slowed down and stopped in the target. Both bremsstrahlung and characteristic x-ray production were considered in this work. We focus on the simulation of various target/filter combinations to investigate the effect of tube voltage, target material and filter thickness on x-ray spectra in the diagnostic radiology and mammography energy ranges. The simulated x-ray spectra were compared with experimental measurements and spectra calculated by IPEM report number 78. In addition, the anode heel effect and off-axis x-ray spectra were assessed for different anode angles and target materials, and the results were compared with EGS4-based Monte Carlo simulations and measured data. Quantitative evaluation of the differences between our Monte Carlo simulated and comparison spectra was performed using Student's t-test statistical analysis. Generally, there is good agreement between the simulated and comparison spectra, although there are systematic differences, especially in the intensity of the K-characteristic x-rays. Nevertheless, no statistically significant differences have been observed between the IPEM spectra and the simulated spectra. It is shown that the difference between the MCNP simulated spectra and the IPEM spectra in the low energy range is the result of the overestimation of characteristic photons following the normalization procedure. The transmission curves produced by MCNP4C agree well with the IPEM report, especially for tube voltages of 50 kV and 80 kV. The systematic discrepancy for higher tube voltages is the result of systematic differences between the corresponding spectra.

  17. Fission products detection in irradiated TRIGA fuel by means of gamma spectroscopy and MCNP calculation.

    PubMed

    Cagnazzo, M; Borio di Tigliole, A; Böck, H; Villa, M

    2018-05-01

    The aim of this work was the detection of the fission product activity distribution along the axial dimension of irradiated fuel elements (FEs) at the TRIGA Mark II research reactor of the Technische Universität (TU) Wien. The activity distribution was measured by means of a customized fuel gamma scanning device, which includes a vertical lifting system to move the fuel rod along its vertical axis. For each investigated FE, a gamma spectrum was measured along the vertical axis, in steps of 1 cm, in order to determine the axial distribution of the fission products. After the fuel elements underwent a relatively short cooling-down period, different fission products were detected. The activity concentration was determined by calibrating the gamma detector with a standard calibration source of known activity and by MCNP6 simulations for the evaluation of self-absorption and geometric effects. Given the specific TRIGA fuel composition, a correction procedure is developed and used in this work for the measurement of the fission product Zr-95. This measurement campaign is part of a more extended project aiming at modelling the TU Wien TRIGA reactor by means of different calculation codes (MCNP6, Serpent): the experimental results presented in this paper will subsequently be used to benchmark the models developed with these codes.

  18. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    PubMed

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has concrete shielding on the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates verified that the legal limits (<1 mSv a(-1)) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations, and additionally for benchmarking of computer codes, if the discussed critical aspects with respect to the source can be addressed adequately.

  19. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and gets rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, the implementation of this method in the TRIPOLI-4® code is discussed, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion calculations by nearly an order of magnitude.
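
    The correlated-sampling idea can be shown in miniature: a single set of histories drawn from the unperturbed density is reused for the perturbed problem by reweighting each history with the density ratio, so the two estimates share the same random noise. A toy Python sketch (not the TRIPOLI-4 implementation):

      import numpy as np

      rng = np.random.default_rng(1)

      def f(x):                       # unperturbed density on [0, inf)
          return np.exp(-x)

      def f_p(x):                     # "depleted" (perturbed) density
          return 1.05 * np.exp(-1.05 * x)

      x = rng.exponential(1.0, 100_000)   # histories sampled from f
      score = x**2                        # any tally of interest
      unperturbed = score.mean()
      perturbed = (score * f_p(x) / f(x)).mean()   # same histories, reweighted
      print(unperturbed, perturbed)   # exact answers: 2.0 and 2/1.05**2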

  20. Simulation of irradiation exposure of electronic devices due to heavy ion therapy with Monte Carlo Code MCNP6

    NASA Astrophysics Data System (ADS)

    Lapins, Janis; Guilliard, Nicole; Bernnat, Wolfgang; Buck, Arnulf

    2017-09-01

    During heavy-ion irradiation therapy the patient has to be located exactly at the right position to make sure that the Bragg peak occurs in the tumour. The patient has to be moved in the range of millimetres to scan the diseased tissue. For that reason a special table was developed which allows exact positioning. The electronic control can be located outside the treatment room, but that has some disadvantages for the construction. To keep the system compact it would be much more convenient to put the electronic control inside the treatment room. As many high-energy secondary particles are produced during the therapy, causing a high dose in the room, it is important to find positions with low dose rates. Therefore, investigations are needed into where the electronic devices should be located to receive a minimum of radiation, to help prevent the failure of sensitive devices. The dose rate was calculated for carbon ions of different initial energies and for protons over the entire therapy room with Monte Carlo particle tracking using MCNP6. The types of secondary particles were identified and the dose rate for a thin silicon layer and an electronic mixture material was determined. In addition, the shielding effect of several selected material layers was calculated using MCNP6.

  1. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important to address dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

  2. SABRINA - an interactive geometry modeler for MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.; Murphy, J.

    One of the most difficult tasks when analyzing a complex three-dimensional system with Monte Carlo is geometry model development. SABRINA attempts to make the modeling process more user-friendly and less of an obstacle. It accepts both combinatorial solid bodies and MCNP surfaces and produces MCNP cells. The model development process in SABRINA is highly interactive and gives the user immediate feedback on errors. Users can view their geometry from arbitrary perspectives while the model is under development and interactively find and correct modeling errors. An example of a SABRINA display is shown. It represents a complex three-dimensional shape.

  3. Efficiency of whole-body counter for various body size calculated by MCNP5 software.

    PubMed

    Krstic, D; Nikezic, D

    2012-11-01

    The efficiency of a whole-body counter for (137)Cs and (40)K was calculated using the MCNP5 code. ORNL phantoms of the human body of different sizes were applied in a sitting position in front of the detector. The aim was to investigate the dependence of the efficiency on body size (age) and on the detector position with respect to the body, and to estimate the accuracy of real measurements. The calculations presented here relate to the NaI detector available in the Serbian whole-body counter facility at the Vinca Institute.

  4. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) a heterogeneous model for two-dimensional transport theory calculations; and (2) a microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data from critical experiments and Monte Carlo calculations, verifying its high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.

  5. Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver

    2017-10-01

    The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulations can now be done within the SSUBPIC code. Recent SSUBPIC modifications include implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and implementation of a neutral particle tracking mode with charge-exchange and electron-impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.

  6. Doppler Temperature Coefficient Calculations Using Adjoint-Weighted Tallies and Continuous Energy Cross Sections in MCNP6

    NASA Astrophysics Data System (ADS)

    Gonzales, Matthew Alejandro

    The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process as well as require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before runtime and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research
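
    Per nuclide and energy point, the OTF representation reduces to evaluating a stored polynomial in temperature and, when needed, its analytic temperature derivative. A minimal sketch with hypothetical cross-section values:

      import numpy as np
      from numpy.polynomial import Polynomial

      # Pre-fit sigma(T) on a temperature mesh; evaluate sigma and
      # d(sigma)/dT at an arbitrary temperature during the random walk.
      T = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0])   # K
      sigma = np.array([12.1, 9.8, 8.9, 8.4, 8.1])          # hypothetical barns

      fit = Polynomial.fit(T, sigma, deg=3)   # stored polynomial coefficients
      dfit = fit.deriv()                      # analytic derivative in T

      T_query = 742.0
      print(fit(T_query), dfit(T_query))   # cross section and its T-derivative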

  7. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of configurations of shielding modification. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. CADIS utilises source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly, by more than two orders of magnitude, and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations, and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional acceleration, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
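
    In CADIS the deterministic adjoint flux fixes both the biased source and the consistent starting weights: q_hat = q*phi_adj/R and w = R/phi_adj, where R is the adjoint-weighted source integral. A one-dimensional sketch with hypothetical values:

      import numpy as np

      def cadis_parameters(q, phi_adj):
          """CADIS in one spatial variable: biased source q*phi_adj
          (normalised) and consistent starting weights R/phi_adj,
          with R the adjoint-weighted source integral."""
          R = np.sum(q * phi_adj)
          q_biased = q * phi_adj / R
          start_weight = R / phi_adj
          return q_biased, start_weight

      q = np.array([0.4, 0.3, 0.2, 0.1])             # source distribution
      phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1])   # importance from TORT
      print(cadis_parameters(q, phi_adj))
      # particles are preferentially born near the detector, at low weight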

  8. V&V of MCNP 6.1.1 Beta Against Intermediate and High-Energy Experimental Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashnik, Stepan G

    This report presents a set of validation and verification (V&V) results for MCNP 6.1.1 beta, calculated in parallel with MPI using its event generators at intermediate and high energies, compared against various experimental data. It also contains several examples of results using the models at energies below 150 MeV, down to 10 MeV, where data libraries are normally used. This report can be considered the fourth part of a set of MCNP6 Testing Primers, after the first (LA-UR-11-05129), second (LA-UR-11-05627), and third (LA-UR-26944) publications, but is devoted to V&V with the latest, 1.1 beta version of MCNP6. The MCNP6 test problems discussed here are presented in the /VALIDATION_CEM/ and /VALIDATION_LAQGSM/ subdirectories in the MCNP6/Testing/ directory. README files that contain short descriptions of every input file, the experiment, the quantity of interest that the experiment measures and its description in the MCNP6 output files, and the publication reference of that experiment are presented for every test problem. Templates for plotting the corresponding results with xmgrace as well as pdf files with figures representing the final results of our V&V efforts are also provided. Several technical "bugs" in MCNP 6.1.1 beta were discovered during our current V&V of MCNP6 while running it in parallel with MPI using its event generators; these "bugs" are to be fixed in the following version of MCNP6. Our results show that MCNP 6.1.1 beta, using its CEM03.03, LAQGSM03.03, Bertini, and INCL+ABLA event generators, describes, as a rule, different intermediate- and high-energy measured data reasonably well. This primer isn't meant to be read from cover to cover. Readers may skip some sections and go directly to any test problem in which they are interested.

  9. Comparison of penumbra regions produced by ancient Gamma knife model C and Gamma ART 6000 using Monte Carlo MCNP6 simulation.

    PubMed

    Banaee, Nooshin; Asgari, Sepideh; Nedaie, Hassan Ali

    2018-07-01

    The accuracy of penumbral measurements in radiotherapy is pivotal because dose-planning computers require accurate data to adequately model the beams, which in turn are used to calculate patient dose distributions. Gamma Knife is a non-invasive intracranial technique based on principles of the Leksell stereotactic system for open deep-brain surgeries, invented and developed by Professor Lars Leksell. The aim of this study is to compare the penumbra widths of the Leksell Gamma Knife model C and Gamma ART 6000. Initially, the structures of both systems were simulated using the Monte Carlo MCNP6 code and, after validating the accuracy of the simulation, beam profiles of different collimators were plotted. MCNP6 beam-profile calculations showed that the penumbra values of the Leksell Gamma Knife model C and Gamma ART 6000 for the 18, 14, 8 and 4 mm collimators are 9.7, 7.9, 4.3, 2.6 and 8.2, 6.9, 3.6, 2.4 mm, respectively. The results of this study showed that since Gamma ART 6000 has a larger solid angle than Gamma Knife model C, it produces better beam-profile penumbras in the direct plane.
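
    Penumbra values such as those above are typically read off a normalised profile as the 20%-80% distance on one edge. A sketch on a synthetic profile (the logistic edge below is illustrative, not MCNP6 output):

      import numpy as np

      def penumbra_width(x_mm, profile, lo=0.2, hi=0.8):
          """Distance between the 80% and 20% levels on the positive-x
          falling edge of a normalised beam profile."""
          p = profile / profile.max()
          half = x_mm >= 0
          x, p = x_mm[half], p[half]
          order = np.argsort(p)             # np.interp needs ascending xp
          return np.interp(lo, p[order], x[order]) - np.interp(hi, p[order], x[order])

      x = np.linspace(-20, 20, 401)
      prof = 1.0 / (1.0 + np.exp((np.abs(x) - 9.0) / 1.2))   # synthetic field
      print(penumbra_width(x, prof))   # ~3.3 mm for this synthetic edge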

  10. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs; results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) A preliminary assessment of the nuclear data library currently used with the code and of libraries provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) A DRAGON workshop held to discuss the code capabilities for modeling the VHTR.

  11. Deplete! Deplete! Deplete!

    NASA Astrophysics Data System (ADS)

    Woodson, J.

    2017-12-01

    Deplete is intended to demonstrate by analogy the harmful effect that greenhouse gases (GHGs) such as CO2 and H2O vapor are causing to the ozone layer. Increasing temperatures from human activities are contributing to the depletion of ozone.

  12. A CT and MRI scan to MCNP input conversion program.

    PubMed

    Van Riper, Kenneth A

    2005-01-01

    We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
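
    The central mapping step (associating intensity ranges with materials, then computing per-material volume fractions on a coarser voxel grid) can be sketched in a few lines; the grey-level cut points and block size below are hypothetical, not the program's interface:

      import numpy as np

      # Stand-in tomographic volume; a real workflow would load scan data.
      ct = np.random.default_rng(0).integers(0, 256, (64, 64, 64))

      bounds = [0, 40, 120, 256]              # hypothetical grey-level cuts
      labels = np.digitize(ct, bounds[1:-1])  # 0=air, 1=soft tissue, 2=bone

      # Material volume fractions on a coarser 4x4x4-voxel dose grid:
      coarse = labels.reshape(16, 4, 16, 4, 16, 4)
      for m in range(3):
          frac = (coarse == m).mean(axis=(1, 3, 5))   # fraction per block
          print(f"material {m}: mean fraction {frac.mean():.3f}")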

  13. An improved MCNP version of the NORMAN voxel phantom for dosimetry studies.

    PubMed

    Ferrari, P; Gualdrini, G

    2005-09-21

    In recent years voxel phantoms have been developed on the basis of tomographic data of real individuals, allowing new sets of conversion coefficients for effective dose to be calculated. Progress in radiation studies led ICRP to revise its recommendations, and a new report, already circulated in draft form, is expected to change the current effective dose evaluation method. In the present paper the voxel phantom NORMAN, developed at HPA (formerly NRPB), was employed with the MCNP Monte Carlo code. A modified version of the phantom, NORMAN-05, was developed to take into account the new set of tissues and weighting factors proposed in the cited ICRP draft. Air kerma to organ equivalent dose and effective dose conversion coefficients for antero-posterior and postero-anterior parallel photon beam irradiations, from 20 keV to 10 MeV, have been calculated and compared with data obtained in other laboratories using different numerical phantoms. The results obtained are in good agreement with published data, with some differences in the effective dose calculated using the proposed new set of tissue weighting factors compared with previous evaluations based on the ICRP 60 report.

  14. Simplification of an MCNP model designed for dose rate estimation

    NASA Astrophysics Data System (ADS)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  15. Calibration with MCNP of NaI detector for the determination of natural radioactivity levels in the field.

    PubMed

    Cinelli, Giorgia; Tositti, Laura; Mostacci, Domiziano; Baré, Jonathan

    2016-05-01

    In view of assessing natural radioactivity with on-site quantitative gamma spectrometry, efficiency calibration of NaI(Tl) detectors is investigated. A calibration based on Monte Carlo simulation of the detector response is proposed, to render reliable quantitative analysis practicable in field campaigns. The method is developed with reference to contact geometry, in which measurements are taken by placing the NaI(Tl) probe directly against the solid source to be analyzed. The Monte Carlo code used for the simulations was MCNP. Experimental verification of the calibration quality is obtained by comparison with appropriate standards, as reported. On-site measurements yield a quick quantitative assessment of the natural radioactivity levels present ((40)K, (238)U and (232)Th). On-site gamma spectrometry can prove particularly useful insofar as it provides information on materials from which samples cannot be taken.
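
    Once the Monte Carlo full-energy-peak efficiency is known, activity follows from A = N/(eps*t*I_gamma). A sketch for the 1460.8 keV line of (40)K, with a hypothetical simulated efficiency:

      def activity_Bq(net_counts, live_time_s, efficiency, gamma_yield):
          """Activity from a net photopeak area using the simulated
          full-energy-peak efficiency: A = N / (eps * t * I_gamma)."""
          return net_counts / (efficiency * live_time_s * gamma_yield)

      # (40)K, 1460.8 keV line, gamma yield ~0.1066; the contact-geometry
      # efficiency of 1.2e-2 is a hypothetical MCNP result:
      print(activity_Bq(5000, 3600, 1.2e-2, 0.1066))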

  16. Neutron and photon shielding benchmark calculations by MCNP on the LR-0 experimental facility.

    PubMed

    Hordósy, G

    2005-01-01

    In the framework of the REDOS project, the space-energy distribution of the neutron and photon flux has been calculated over the pressure vessel simulator thickness of the LR-0 experimental reactor, Rez, Czech Republic. The results calculated by the Monte Carlo code MCNP4C are compared with the measurements performed at the Nuclear Research Institute, Rez. The spectra have been measured at the barrel, and in front of, inside and behind the pressure vessel in different configurations. The neutron measurements were performed in the energy range 0.1-10 MeV. This work has been done in the framework of the 5th Framework Programme of the European Community 1998-2002.

  17. Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle

    NASA Astrophysics Data System (ADS)

    Jaradat, Safwan Qasim Mohammad

    The molten salt reactor (MSR) is one of six reactor concepts selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is an MSR concept based on the thorium fuel cycle. The LFTR uses liquid fluoride salts as nuclear fuel, with 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salts of these nuclides are dissolved in a mixed carrier salt of lithium and beryllium (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. In order to achieve this objective, the applicability of the Monte Carlo N-Particle transport code (MCNP) to MSR modeling was first verified: the MCNP code was used to study the reactor physics characteristics of the FUJI-U3 reactor, and the results were compared with those obtained for the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2. The results were comparable with each other, so MCNP was found to be a reliable code to model a small thermal LFTR and to study all the related reactor physics characteristics. A conceptual small thermal LFTR was then prescribed and the relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The results of this study were promising and successful in demonstrating a preliminary small commercial LFTR design. The outcome of using a small reactor core with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF-BeF2-ThF4-UF4 with 233U/232Th = 2.01% was the candidate fuel for this reactor core.

  18. Efficient ultrafiltration-based protocol to deplete extracellular vesicles from fetal bovine serum

    PubMed Central

    Kornilov, Roman; Puhka, Maija; Mannerström, Bettina; Hiidenmaa, Hanna; Peltoniemi, Hilkka; Siljander, Pia; Seppänen-Kaijansinkko, Riitta; Kaur, Sippy

    2018-01-01

    Fetal bovine serum (FBS) is the most commonly used supplement in studies involving cell-culture experiments. However, FBS contains large numbers of bovine extracellular vesicles (EVs), which hamper the analyses of secreted EVs from the cell type of preference and, thus, also the downstream analyses. Therefore, a prior elimination of EVs from FBS is crucial. However, the current methods of EV depletion by ultracentrifugation are cumbersome and the commercial alternatives expensive. In this study, our aim was to develop a protocol to completely deplete EVs from FBS, which may have wide applicability in cell-culture applications. We investigated different EV-depleted FBS prepared by our novel ultrafiltration-based protocol, by conventionally used overnight ultracentrifugation, or commercially available depleted FBS, and compared them with regular FBS. All sera were characterized by nanoparticle tracking analysis, electron microscopy, Western blotting and RNA quantification. Next, adipose-tissue mesenchymal stem cells (AT-MSCs) and cancer cells were grown in the media supplemented with the three different EV-depleted FBS and compared with cells grown in regular FBS media to assess the effects on cell proliferation, stress, differentiation and EV production. The novel ultrafiltration-based protocol depleted EVs from FBS clearly more efficiently than ultracentrifugation and commercial methods. Cell proliferation, stress, differentiation and EV production of AT-MSCs and cancer cell lines were similarly maintained in all three EV-depleted FBS media up to 96 h. In summary, our ultrafiltration protocol efficiently depletes EVs, is easy to use and maintains cell growth and metabolism. Since the method is also cost-effective and easy to standardize, it could be used in a wide range of cell-culture applications helping to increase comparability of EV research results between laboratories. PMID:29410778

  19. An approach to design a 90Sr radioisotope thermoelectric generator using analytical and Monte Carlo methods with ANSYS, COMSOL, and MCNP.

    PubMed

    Khajepour, Abolhasan; Rahmani, Faezeh

    2017-01-01

    In this study, a 90Sr radioisotope thermoelectric generator (RTG) with a power output in the milliwatt range was designed to operate over a specified temperature range (300-312 K). For this purpose, a combination of analytical and Monte Carlo methods was used, with the ANSYS and COMSOL software as well as the MCNP code. The designed RTG contains 90Sr as a radioisotope heat source (RHS) and 127 coupled thermoelectric modules (TEMs) based on bismuth telluride. Kapton (2.45 mm in thickness) and Cryotherm sheets (0.78 mm in thickness) were selected as the thermal insulators of the RHS, and a stainless steel container was used as the generator chamber. The initial design of the RHS geometry was based on the amount of radioactive material (strontium titanate) as well as heat transfer calculations and mechanical strength considerations. According to the Monte Carlo simulation performed with the MCNP code, approximately 0.35 kCi of 90Sr is sufficient to generate the required heat power in the RHS. To determine the optimal design of the RTG, the temperature distribution as well as the dissipated heat and the input power to the modules were calculated in different parts of the generator using ANSYS. The output voltage corresponding to the temperature distribution on the TEMs was calculated using COMSOL. The dimensions of the RHS and heat insulator were optimized to match the average hot-plate temperature of the TEMs to the specified hot-side value. The designed RTG generates 8 mW of power with an efficiency of 1%. The proposed combined approach can be used for the precise design of various types of RTGs.
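
    As a rough plausibility check on a source strength of this kind, the thermal power of a 90Sr source can be estimated from its activity and the mean energy deposited per decay. A minimal sketch, assuming the 90Y daughter is in secular equilibrium and using approximate mean beta energies (all values assumed here, not taken from the paper):

      # Back-of-the-envelope 90Sr/90Y thermal power from activity (illustrative only)
      BQ_PER_CI = 3.7e10       # decays per second per curie
      J_PER_MEV = 1.602e-13    # joules per MeV

      activity_ci = 350.0      # ~0.35 kCi, as quoted in the abstract
      e_mean_sr90 = 0.196      # assumed mean beta energy per 90Sr decay [MeV]
      e_mean_y90 = 0.93        # assumed mean beta energy per 90Y decay [MeV]

      decays_per_s = activity_ci * BQ_PER_CI
      thermal_watts = decays_per_s * (e_mean_sr90 + e_mean_y90) * J_PER_MEV
      print(f"estimated thermal power: {thermal_watts:.1f} W")  # on the order of a few watts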

  20. Bias estimates used in lieu of validation of fission products and minor actinides in MCNP keff calculations for PWR burnup credit casks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Don E.; Marshall, William J.; Wagner, John C.

    The U.S. Nuclear Regulatory Commission (NRC) Division of Spent Fuel Storage and Transportation recently issued Interim Staff Guidance (ISG) 8, Revision 3. This ISG provides guidance for burnup credit (BUC) analyses supporting transport and storage of pressurized water reactor (PWR) fuel in casks. Revision 3 includes guidance for addressing validation of criticality (keff) calculations crediting the presence of a limited set of fission products and minor actinides (FP&MA). Based on previous work documented in NUREG/CR-7109, recommendation 4 of ISG-8, Rev. 3, includes a recommendation to use 1.5 or 3% of the FP&MA worth to conservatively cover the bias due to the specified FP&MAs. This bias is supplementary to the bias and bias uncertainty resulting from validation of keff calculations for the major actinides in SNF and does not address extension to actinides and fission products beyond those identified herein. The work described in this report involves comparison of FP&MA worths calculated using SCALE and MCNP with ENDF/B-V, -VI, and -VII based nuclear data, and supports use of the 1.5% FP&MA worth bias when either the SCALE or MCNP codes are used for criticality calculations, provided the other conditions of recommendation 4 are met. The method used in this report may also be applied to demonstrate the applicability of the 1.5% FP&MA worth bias to other codes using ENDF/B-V, -VI, or -VII based nuclear data. The method involves use of the applicant's computational method to generate FP&MA worths for a reference SNF cask model using specified spent fuel compositions. The applicant's FP&MA worths are then compared to reference values provided in this report; they should not exceed the reference results by more than 1.5% of the reference FP&MA worths.
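
    The acceptance test described above reduces to a simple numerical comparison. A schematic sketch (function and variable names hypothetical, not from the report): the FP&MA worth is the reactivity removed when the FP&MA set is added to the major-actinide composition, and the applicant's worth must not exceed the reference by more than 1.5% of the reference worth.

      def fpma_worth(k_major_only, k_with_fpma):
          # Worth of the fission products and minor actinides: the drop in keff
          # when the FP&MA set is added to the major-actinide-only composition.
          return k_major_only - k_with_fpma

      def within_bias(applicant_worth, reference_worth, fraction=0.015):
          # Applicant's worth may not exceed the reference by more than
          # 1.5% of the reference FP&MA worth (recommendation 4 condition).
          return applicant_worth <= reference_worth * (1.0 + fraction)

      # Example with made-up keff values from two hypothetical cask calculations
      print(within_bias(fpma_worth(0.9400, 0.9010), 0.0388))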

  1. Thorium-based mixed oxide fuel in a pressurized water reactor: A feasibility analysis with MCNP

    NASA Astrophysics Data System (ADS)

    Tucker, Lucas Powelson

    This dissertation investigates techniques for spent fuel monitoring and assesses the feasibility of using a thorium-based mixed oxide fuel in a conventional pressurized water reactor for plutonium disposition. Both non-paralyzing and paralyzing dead-time calculations were performed for the Portable Spectroscopic Fast Neutron Probe (N-Probe), which can be used for spent fuel interrogation. Also, a Canberra 3He neutron detector's dead-time was estimated using a combination of subcritical assembly measurements and MCNP simulations. Next, a multitude of fission products were identified as candidates for burnup and spent fuel analysis of irradiated mixed oxide fuel. The best isotopes for these applications were identified by investigating half-life, photon energy, fission yield, branching ratios, production modes, thermal neutron absorption cross section and fuel matrix diffusivity. 132I and 97Nb were identified as good candidates for MOX fuel on-line burnup analysis. In the second, and most important, part of this work, the feasibility of utilizing ThMOX fuel in a pressurized water reactor (PWR) was first examined under steady-state, beginning-of-life conditions. Using a three-dimensional MCNP model of a Westinghouse-type 17x17 PWR, several fuel compositions and configurations of a one-third ThMOX core were compared to a 100% UO2 core. A blanket-type arrangement of 5.5 wt% PuO2 was determined to be the best candidate for further analysis. Next, the safety of the ThMOX configuration was evaluated through three cycles of burnup using the following metrics: axial and radial nuclear hot channel factors, moderator and fuel temperature coefficients, delayed neutron fraction, and shutdown margin. Additionally, the performance of the ThMOX configuration was assessed by tracking cycle length, plutonium destroyed, and fission product poison concentration.
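
    The two dead-time models mentioned above have standard textbook forms relating the true event rate n to the measured rate m for a dead time tau: m = n/(1 + n*tau) for a non-paralyzing detector and m = n*exp(-n*tau) for a paralyzing one. A minimal illustration of both (not the dissertation's code):

      import math

      def measured_rate_nonparalyzing(n, tau):
          # Non-paralyzing model: m = n / (1 + n*tau)
          return n / (1.0 + n * tau)

      def measured_rate_paralyzing(n, tau):
          # Paralyzing model: m = n * exp(-n*tau)
          return n * math.exp(-n * tau)

      # Example: true rate 1e5 counts/s, dead time 2 microseconds (assumed values)
      n, tau = 1.0e5, 2.0e-6
      print(measured_rate_nonparalyzing(n, tau))  # ~8.3e4 counts/s
      print(measured_rate_paralyzing(n, tau))     # ~8.2e4 counts/s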

  2. A common class of transcripts with 5'-intron depletion, distinct early coding sequence features, and N1-methyladenosine modification.

    PubMed

    Cenik, Can; Chua, Hon Nian; Singh, Guramrit; Akef, Abdalla; Snyder, Michael P; Palazzo, Alexander F; Moore, Melissa J; Roth, Frederick P

    2017-03-01

    Introns are found in 5' untranslated regions (5'UTRs) for 35% of all human transcripts. These 5'UTR introns are not randomly distributed: genes that encode secreted, membrane-bound and mitochondrial proteins are less likely to have them. Curiously, transcripts lacking 5'UTR introns tend to harbor specific RNA sequence elements in their early coding regions. To model and understand the connection between coding-region sequence and 5'UTR intron status, we developed a classifier that can predict 5'UTR intron status with >80% accuracy using only sequence features in the early coding region. Thus, the classifier identifies transcripts with 5' proximal-intron-minus-like coding regions ("5IM" transcripts). Unexpectedly, we found that the early coding sequence features defining 5IM transcripts are widespread, appearing in 21% of all human RefSeq transcripts. The 5IM class of transcripts is enriched for non-AUG start codons, more extensive secondary structure both preceding the start codon and near the 5' cap, greater dependence on eIF4E for translation, and association with ER-proximal ribosomes. 5IM transcripts are bound by the exon junction complex (EJC) at noncanonical 5' proximal positions. Finally, N1-methyladenosines are specifically enriched in the early coding regions of 5IM transcripts. Taken together, our analyses point to the existence of a distinct 5IM class comprising ∼20% of human transcripts. This class is defined by depletion of 5' proximal introns, presence of specific RNA sequence features associated with low translation efficiency, N1-methyladenosines in the early coding region, and enrichment for noncanonical binding by the EJC.
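
    To make the classification idea concrete, here is a hedged sketch of one plausible setup (not the authors' pipeline, and the sequences and labels are placeholders): character k-mer counts from the early coding region fed to a logistic-regression classifier.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical data: early coding-region sequences and 5'UTR-intron labels.
      seqs = ["ATGGCGGCGCTG", "ATGAAAGTGACC"]   # placeholder sequences
      has_5utr_intron = [0, 1]                   # placeholder labels

      # 3-mer composition as features; the paper's actual feature set differs.
      model = make_pipeline(
          CountVectorizer(analyzer="char", ngram_range=(3, 3)),
          LogisticRegression(max_iter=1000),
      )
      model.fit(seqs, has_5utr_intron)
      print(model.predict(["ATGGCGGCGACC"]))    # classify a new sequence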

  3. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I-III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo
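
    As a flavor of the "random numbers and sampling" topic in these lectures: the free-flight distance to the next collision in a uniform medium follows an exponential distribution, sampled by inverting its CDF as s = -ln(xi)/Sigma_t for a uniform random number xi. A minimal sketch (cross-section value assumed):

      import math
      import random

      def sample_path_length(sigma_t):
          # Invert the exponential CDF: s = -ln(xi) / Sigma_t
          xi = 1.0 - random.random()  # in (0, 1], avoids log(0)
          return -math.log(xi) / sigma_t

      # The sample mean should approach the mean free path 1/Sigma_t.
      sigma_t = 0.5  # macroscopic total cross section [1/cm], assumed value
      samples = [sample_path_length(sigma_t) for _ in range(100_000)]
      print(sum(samples) / len(samples))  # ~2.0 cm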

  4. Multi-group Fokker-Planck proton transport in MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.

    1997-11-01

    MCNP has been enhanced to perform proton transport using a multigroup Fokker-Planck (MGFP) algorithm, with primary emphasis on proton radiography simulations. The new method solves the Fokker-Planck approximation to the Boltzmann transport equation for the small-angle multiple scattering portion of proton transport. Energy loss is accounted for by applying a group-averaged stopping power over each transport step. Large-angle scatter and non-inelastic events are treated as extinction. Comparisons with the more rigorous LAHET code show agreement to a few percent for the total transmitted currents. The angular distributions through copper and low-Z compounds show good agreement between LAHET and MGFP, with the MGFP method being slightly less forward peaked and without the large-angle tails apparent in the LAHET simulation. Suitability of this method for proton radiography simulations is shown for a simple problem of a hole in a copper slab. LAHET and MGFP calculations of position, angle and energy through more complex objects are presented.
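
    The energy-loss treatment described above amounts to a continuous-slowing-down update applied once per transport step. A schematic sketch under assumed values (the group structure and stopping powers below are invented for illustration, not taken from MCNP):

      def step_energy(e_mev, step_cm, stopping_power):
          # Apply a group-averaged stopping power S(E) [MeV/cm] over one
          # transport step, as in the MGFP energy-loss treatment.
          s = stopping_power(e_mev)
          return max(e_mev - s * step_cm, 0.0)

      def group_stopping_power(e_mev):
          # Hypothetical two-group stopping power for protons in some material.
          return 0.05 if e_mev > 100.0 else 0.2  # MeV/cm, assumed values

      print(step_energy(800.0, 10.0, group_stopping_power))  # 799.5 MeV after a 10 cm step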

  5. Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms

    PubMed Central

    Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.

    2013-01-01

    Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N-Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom; in the presence of heterogeneities, agreement was in the range of 1-3%, generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162

  6. Comparison of CdZnTe neutron detector models using MCNP6 and Geant4

    NASA Astrophysics Data System (ADS)

    Wilson, Emma; Anderson, Mike; Prendergasty, David; Cheneler, David

    2018-01-01

    The production of accurate detector models is of high importance in the development and use of detectors. Initially, MCNP and Geant were developed to specialise in neutral particle models and accelerator models, respectively; there is now a greater overlap of the capabilities of both, and it is therefore useful to produce comparative models to evaluate detector characteristics. In a collaboration between Lancaster University, UK, and Innovative Physics Ltd., UK, models have been developed in both MCNP6 and Geant4 of Cadmium Zinc Telluride (CdZnTe) detectors developed by Innovative Physics Ltd. Herein, a comparison is made of the relative strengths of MCNP6 and Geant4 for modelling neutron flux and secondary γ-ray emission. Given the increasing overlap of the modelling capabilities of MCNP6 and Geant4, it is worthwhile to comment on differences in results for simulations which have similarities in terms of geometries and source configurations.

  7. An MCNP-based model for the evaluation of the photoneutron dose in high energy medical electron accelerators.

    PubMed

    Carinou, Eleutheria; Stamatelatos, Ion Evangelos; Kamenopoulou, Vassiliki; Georgolopoulou, Paraskevi; Sandilos, Panayotis

    The development of a computational model for the treatment head of a medical electron accelerator (Elekta/Philips SL-18) with the Monte Carlo code MCNP-4C2 is discussed. The model includes the major components of the accelerator head and a PMMA phantom representing the patient body. Calculations were performed for a 14 MeV electron beam impinging on the accelerator target and a 10 cm × 10 cm beam area at the isocentre. The model was used to predict the neutron ambient dose equivalent at the isocentre level as well as the neutron absorbed dose distribution within the phantom. Calculations were validated against experimental measurements performed with gold foil activation detectors. The results of this study indicated that the equivalent dose to tissues or organs adjacent to the treatment field due to photoneutrons could be up to 10% of the total peripheral dose, for the specific accelerator characteristics examined. Therefore, photoneutrons should be taken into account when accurate dose calculations are required for sensitive tissues adjacent to the therapeutic X-ray beam. The method described can be extended to other accelerators and collimation configurations as well, upon specification of the treatment head component dimensions, composition and nominal accelerating potential.

  8. Evaluation and Testing of the ADVANTG Code on SNM Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Pacific Northwest National Laboratory (PNNL) has been tasked with evaluating the effectiveness of ORNL's new hybrid transport code, ADVANTG, on scenarios of interest to our NA-22 sponsor, specifically detection of diversion of special nuclear material (SNM). PNNL staff determined that acquisition and installation of ADVANTG were relatively straightforward for a code in its phase of development, but probably not yet sufficient for mass distribution to the general user. PNNL staff also determined that, with little effort, ADVANTG generated weight windows that typically worked for the problems and produced results consistent with MCNP. With slightly greater effort in choosing a finer mesh around detectors or sample reaction tally regions, the figure of merit (FOM) could be further improved in most cases; this does take some limited knowledge of deterministic transport methods. The FOM could also be increased by limiting the energy range for a tally to the energy region of greatest interest. It was then found that an MCNP run with the full energy range for the tally showed improved statistics in the region used for the ADVANTG run. The specific case of interest chosen by the sponsor is the CIPN project from Los Alamos National Laboratory (LANL), which is an active interrogation, non-destructive assay (NDA) technique to quantify the fissile content in a spent fuel assembly and is also sensitive to cases of material diversion. Unfortunately, weight windows for the CIPN problem cannot currently be properly generated with ADVANTG due to inadequate accommodations for source definition. ADVANTG requires that a fixed neutron source be defined within the problem and cannot account for neutron multiplication. As such, it is rendered useless in active interrogation scenarios. It is also interesting to note that this is a difficult problem to solve and that the automated weight windows generator in MCNP actually slowed down the problem. Therefore, PNNL had
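
    The figure of merit used in this evaluation is the standard Monte Carlo efficiency metric, FOM = 1/(R^2 T), where R is the tally relative error and T the run time; a higher FOM means the variance-reduction scheme is more efficient. A one-function sketch with assumed numbers:

      def figure_of_merit(relative_error, run_time_min):
          # FOM = 1 / (R^2 * T): roughly constant for a well-behaved tally,
          # so it isolates the efficiency of the variance-reduction scheme.
          return 1.0 / (relative_error**2 * run_time_min)

      baseline = figure_of_merit(0.05, 60.0)  # analog run (assumed numbers)
      with_ww = figure_of_merit(0.01, 60.0)   # same time with weight windows
      print(with_ww / baseline)               # 25x efficiency gain in this example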

  9. Benchmark of neutron production cross sections with Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun

    2018-02-01

    Aiming to provide critical information in the fields of heavy ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes - PHITS, FLUKA, and MCNP6 - were systematically benchmarked against fifteen sets of experimental data for neutron production cross sections, covering various combinations of 12C, 20Ne, 40Ar, 84Kr and 132Xe projectiles and natLi, natC, natAl, natCu, and natPb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the specific projectile energy per nucleon, LAQGSM03.03 in MCNP6, JQMD/JQMD-2.0 in PHITS, and RQMD-2.4 in FLUKA all show better agreement with data in heavy-projectile systems than in light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons, whose energies are below 60% of the projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends toward underestimation. PHITS with JQMD shows a mild tendency toward underestimation, but the JQMD-2.0 model, with a modified physics description for central collisions, generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 underestimates more systems than it overestimates. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to the trends seen in the dominant energy region. Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first

  10. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.

  11. Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki

    Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds, including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate

  12. Depletion optimization of lumped burnable poisons in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodah, Z.H.

    1982-01-01

    Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes - LEOPARD, XSDRN, and EXTERMINATOR-2 - were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied: a solid cylindrical poison rod, and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons which were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Also, expressions for overall thermal self-shielding factors for one or more size groups of poison spheroids inside solid and annular poison rods were derived and studied. Poison spheroids deplete at a slower rate than the uniform poison mixture because each spheroid exhibits some self-shielding of its own; the larger the spheroid, the higher the self-shielding effect due to the increase in local poison concentration.
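
    For orientation on why larger spheroids shield more, a lump's thermal self-shielding can be approximated with the mean chord length l = 4V/S and a rational approximation of the Wigner type, f ~ 1/(1 + Sigma_a*l). This is a textbook-style approximation sketched under assumed values, not the dissertation's derivation:

      def mean_chord_sphere(radius_cm):
          # Mean chord length of a convex body is 4V/S; for a sphere, 4r/3.
          return 4.0 * radius_cm / 3.0

      def self_shielding_rational(sigma_a, radius_cm):
          # Wigner-type rational approximation to the flux depression in a
          # poison spheroid: f ~ 1 / (1 + Sigma_a * l_bar).
          return 1.0 / (1.0 + sigma_a * mean_chord_sphere(radius_cm))

      # Assumed absorption cross section of 5.0 /cm; two spheroid sizes.
      print(self_shielding_rational(5.0, 0.05))  # small spheroid, mild shielding
      print(self_shielding_rational(5.0, 0.30))  # larger spheroid, stronger shielding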

  13. Gas Core Reactor Numerical Simulation Using a Coupled MHD-MCNP Model

    NASA Technical Reports Server (NTRS)

    Kazeminezhad, F.; Anghaie, S.

    2008-01-01

    This report provides analysis of using two head-on magnetohydrodynamic (MHD) shocks to achieve supercritical nuclear fission in an axially elongated cylinder filled with UF4 gas, as an energy source for deep space missions. The motivation for each aspect of the design is explained and supported by theory and numerical simulations. A subsequent report will provide detail on relevant experimental work to validate the concept. Here the focus is on the theory of and simulations for the proposed gas core reactor conceptual design, from the onset of shock generation to the supercritical state achieved when the shocks collide. The MHD model is coupled to a standard nuclear code (MCNP) to observe the neutron flux and fission power attributed to the supercritical state brought about by the shock collisions. Throughout the modeling, realistic parameters are used for the initial ambient gaseous state and currents to ensure a resulting supercritical state upon shock collision.

  14. Nuclear Fuel Depletion Analysis Using Matlab Software

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Nematollahi, M. R.

    Coupled first-order initial value problems (IVPs) arise frequently in many parts of engineering and the sciences. In this article, we present a code comprising three computer programs, used in conjunction with the Matlab software, to solve and plot the solutions of first-order coupled stiff or non-stiff IVPs. Some engineering and scientific problems related to IVPs are given, and fuel depletion (production of the 239Pu isotope) in a pressurized water reactor (PWR) is computed by the present code.
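
    In its simplest form, the 239Pu production problem mentioned above reduces to a linear pair of coupled first-order ODEs: 238U is depleted by capture, and 239Pu is produced by those captures and removed by absorption (the intermediate 239U/239Np decays are neglected here). A sketch in Python with scipy rather than the article's Matlab code, using assumed one-group data:

      from scipy.integrate import solve_ivp

      # One-group constants (illustrative assumed values, not the paper's data)
      phi = 3.0e13           # neutron flux [n/cm^2/s]
      sig_c_u238 = 2.7e-24   # 238U capture cross section [cm^2]
      sig_a_pu239 = 1.0e-21  # 239Pu absorption cross section [cm^2]

      def rhs(t, y):
          n_u238, n_pu239 = y
          # 238U captures feed 239Pu; 239Pu is removed by absorption.
          return [-sig_c_u238 * phi * n_u238,
                  sig_c_u238 * phi * n_u238 - sig_a_pu239 * phi * n_pu239]

      t_end = 3.15e7  # one year of irradiation [s]
      sol = solve_ivp(rhs, (0.0, t_end), [1.0e22, 0.0], rtol=1e-8)
      print(sol.y[1, -1])  # 239Pu atoms at end of irradiation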

  15. Characterizing scintillator detector response for correlated fission experiments with MCNP and associated packages

    DOE PAGES

    Andrews, M. T.; Rising, M. E.; Meierbachtol, K.; ...

    2018-06-15

    When multiple neutrons are emitted in a fission event they are correlated in both energy and their relative angle, which may impact the design of safeguards equipment and other instrumentation for non-proliferation applications. The most recent release of MCNP6.2 contains the capability to simulate correlated fission neutrons using the event generators CGMF and FREYA. These radiation transport simulations will be post-processed by the detector response code DRiFT and compared directly to correlated fission measurements. DRiFT has been previously compared to single detector measurements, and its capabilities have been recently expanded with correlated fission simulations in mind. Finally, this paper details updates to DRiFT specific to correlated fission measurements, including tracking the source particle energy of all detector events (and non-events), expanded output formats, and digitizer waveform generation.

  17. Characterization of Filters Loaded With Reactor Strontium Carbonate - 13203

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josephson, Walter S.; Steen, Franciska H.

    A collection of three highly radioactive filters containing reactor strontium carbonate was being prepared for disposal. All three filters were approximately characterized at the time of manufacture by gravimetric methods. The first filter had been partially emptied, and the quantity of residual activity was uncertain. Dose rate to activity modeling using the Monte Carlo N-Particle (MCNP) code was selected to confirm the gravimetric characterization of the full filters, and to fully characterize the partially emptied filter. Although dose rate to activity modeling using MCNP is a common technique, it is not often used for Bremsstrahlung-dominant materials such as reactor strontium. As a result, different MCNP modeling options were compared to determine the optimum approach. This comparison indicated that the accuracy of the results was heavily dependent on the MCNP modeling details and the location of the dose rate measurement point. The optimum model utilized a photon spectrum generated by the Oak Ridge Isotope Generation and Depletion (ORIGEN) code and dose rates measured at 30 cm. Results from the optimum model agreed with the gravimetric estimates within 15%. It was demonstrated that dose rate to activity modeling can be successful for Bremsstrahlung-dominant radioactive materials; however, the degree of success is heavily dependent on the choice of modeling techniques.
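
    Dose-rate-to-activity characterization of this kind rests on a linearity argument: the MCNP model gives a dose rate per unit activity at the measurement point, and the measured dose rate then fixes the activity. A schematic sketch (all numbers hypothetical):

      def activity_from_dose_rate(measured_dose_rate, model_dose_rate_per_ci):
          # Photon dose rate scales linearly with source activity, so the
          # measured rate divided by the modeled rate-per-curie gives curies.
          return measured_dose_rate / model_dose_rate_per_ci

      # e.g. 12 R/h measured at 30 cm; model predicts 0.008 R/h per Ci there
      print(activity_from_dose_rate(12.0, 0.008))  # ~1500 Ci (illustrative)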

  18. CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning

    NASA Astrophysics Data System (ADS)

    Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime

    2018-05-01

    CESAR stands in French for "simplified depletion applied to reprocessing". The current version, 5.3, caps 30 years of development in a long-lasting cooperation with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most standard PWR power plant fuel to the most unusual gas-cooled, graphite-moderated legacy research facilities. Each type of fuel can also cover numerous ranges of composition, such as UOX, MOX, LEU or HEU. Such versatility comes from a broad catalog of cross-section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also provide an evaluation of structural material activation. The cross-section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross-section library benefits from the most recent CEA recommendations for deterministic physics options. The resulting cross sections are organized as a function of burnup and initial fuel enrichment, which allows this costly process to be condensed into a series of Legendre polynomials. The final outcome is a fast, accurate and compact CESAR cross-section library. Each library is fully validated, against a stochastic transport code (CEA TRIPOLI 4) if needed, and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise implemented in cross-section library generation. It is based on top-quality nuclear data (JEFF3.1.1 for ~400 isotopes) and includes up-to-date Bateman equation solving algorithms. However, defining a CESAR computation case can be very straightforward. Most results are only 3 steps away from any beginner's ambition: initial composition, in-core depletion, and pool decay scenario. On top of a simple utilization architecture
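
    The condensation step described above - representing a library's burnup dependence as a short Legendre expansion instead of a full table - can be illustrated with numpy. The tabulated cross section below is invented for illustration, not a CESAR library entry:

      import numpy as np
      from numpy.polynomial import legendre

      # Hypothetical one-group cross section tabulated against burnup [GWd/t]
      burnup = np.linspace(0.0, 60.0, 25)
      sigma = 50.0 + 4.0 * np.exp(-burnup / 20.0)  # made-up smooth trend

      # Map burnup onto [-1, 1] and fit a low-order Legendre series.
      x = 2.0 * burnup / burnup[-1] - 1.0
      coeffs = legendre.legfit(x, sigma, deg=4)

      # Compact storage: evaluate the series instead of keeping the table.
      sigma_at_30 = legendre.legval(2.0 * 30.0 / 60.0 - 1.0, coeffs)
      print(sigma_at_30)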

  19. Testing the Delayed Gamma Capability in MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weldon, Robert A.; Fensin, Michael L.; McKinney, Gregg W.

    systems. We examine five different decay chains (two-stage decay to stable) and show the predictability of the MCNP6 delayed gamma feature. Results do show that while the default delayed gamma calculations available in the MCNP6 1.0 release can give accurate results for some isotopes (e.g., 137Ba), the percent differences between the closed-form analytic solutions and the MCNP6 calculations were often >40% (28Mg, 28Al, 42K, 47Ca, 47Sc, 60Co). With the MCNP6 1.1 Beta release, the tenth entry on the DBCN card allows improved calculations, within <5% of the closed-form analytic solutions for immediate parent emissions and transient equilibrium systems. While the tenth entry on the DBCN card for MCNP6 1.1 gives much better results for transient equilibrium systems and parent emissions in general, it does little to improve daughter emissions of secular equilibrium systems. Finally, hypotheses were presented as to why daughter emissions of secular equilibrium systems might be mispredicted in some cases and not in others.
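
    The closed-form references for a two-stage chain A -> B -> stable are the Bateman solutions; with no daughter present at t = 0, the daughter activity is A2(t) = N1(0) * lam1*lam2/(lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t)). A minimal sketch of this reference (not the paper's test suite; half-lives and initial inventory are illustrative, and decay branching is neglected):

      import math

      def daughter_activity(n1_0, lam1, lam2, t):
          # Bateman solution for A -> B -> stable, no daughter at t = 0:
          # A2(t) = lam2*N2(t) = N1(0)*lam1*lam2/(lam2-lam1)*(e^-lam1*t - e^-lam2*t)
          return n1_0 * lam1 * lam2 / (lam2 - lam1) * (
              math.exp(-lam1 * t) - math.exp(-lam2 * t))

      # Example: 137Cs -> 137mBa -> 137Ba (branching fraction neglected)
      lam1 = math.log(2) / (30.08 * 365.25 * 86400)  # 137Cs [1/s]
      lam2 = math.log(2) / 153.0                     # 137mBa, ~2.55 min [1/s]
      # After ~1 h the daughter activity approaches the parent's (secular equilibrium)
      print(daughter_activity(1.0e20, lam1, lam2, 3600.0))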

  1. Maintaining a Critical Spectra within Monteburns for a Gas-Cooled Reactor Array by Way of Control Rod Manipulation

    DOE PAGES

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.; ...

    2016-10-01

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite moderated gas cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and the depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach - where the amount of fissile material in a set configuration is slowly altered until criticality is attained - in estimating the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that consistency in prediction of power densities as well as uranium and plutonium isotopics was mutual among the methods within the CRPE tool that predicted the critical position consistently well. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
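
    Of the four criticality-search strategies described, the polynomial-fit variant is the easiest to sketch: run keff at a few rod positions, fit a quadratic, and solve for the position where keff = 1. All numbers below are hypothetical; the real tool drives MCNP through Monteburns rather than using canned values:

      import numpy as np

      # Hypothetical (rod position [cm], keff) pairs from three MCNP runs
      positions = np.array([0.0, 40.0, 80.0])
      keff = np.array([1.0212, 1.0034, 0.9831])

      # Second-order fit of keff(z), then solve keff(z) = 1 for the position.
      a, b, c = np.polyfit(positions, keff, 2)
      roots = np.roots([a, b, c - 1.0])
      critical = [z.real for z in roots if np.isreal(z) and 0.0 <= z.real <= 80.0]
      print(critical[0] if critical else "no root within rod travel range")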

  2. SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, J.T.; Murphy, J.

    SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS.

  3. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters as calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the doses calculated by the three codes can differ by over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4; the two calculations agree within 5% for radial distances <6 mm.

  4. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulics modeling capability of the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures for the cross-section modeling in the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the temperature feedback.
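
    In outline, the feedback scheme described above is a Picard (quasi-fixed-point) iteration between the neutronics and thermal-hydraulics solvers. A schematic sketch of the loop; the solver callables and dummy models below are stand-ins, not the TRIGSIMS-TH interfaces:

      def coupled_iteration(solve_transport, solve_th, t_guess, tol=1.0, max_iter=20):
          # Quasi-fixed-point coupling: transport yields a power shape for the
          # current temperatures; the TH solver returns updated temperatures;
          # iterate until the temperature field stops changing.
          temps, power = list(t_guess), None
          for _ in range(max_iter):
              power = solve_transport(temps)   # stand-in for MCNP/ADMARC-H
              new_temps = solve_th(power)      # stand-in for CTF
              if max(abs(a - b) for a, b in zip(new_temps, temps)) < tol:
                  return new_temps, power
              temps = new_temps
          return temps, power

      # Dummy solvers with a known fixed point, for illustration only.
      temps, power = coupled_iteration(
          solve_transport=lambda t: [100.0 / (1.0 + 0.001 * x) for x in t],
          solve_th=lambda p: [300.0 + 0.5 * q for q in p],
          t_guess=[300.0, 300.0],
      )
      print(temps, power)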

  5. The design of a multisource americium-beryllium (Am-Be) neutron irradiation facility using MCNP for the neutronic performance calculation.

    PubMed

    Sogbadji, R B M; Abrefah, R G; Nyarko, B J B; Akaho, E H K; Odoi, H C; Attakorah-Birinkorang, S

    2014-08-01

    The americium-beryllium neutron irradiation facility at the National Nuclear Research Institute (NNRI), Ghana, was re-designed with four 20 Ci sources using the Monte Carlo N-Particle (MCNP) code to investigate the maximum flux that can be produced by the combined sources. The results were compared with the single-source Am-Be irradiation facility. The main objective was to harness the maximum flux for the optimization of neutron activation analysis and to enable smaller samples to be irradiated. Using MCNP for the design construction and neutronic performance calculation, the single-source Am-Be design produced a thermal neutron flux of (1.8±0.0007)×10^6 n/cm^2·s and the four-source Am-Be design produced a thermal neutron flux of (5.4±0.0007)×10^6 n/cm^2·s, a 3.5-fold increase over the single-source design. The effective multiplication factors, keff, of the single-source and four-source Am-Be designs were found to be 0.00115±0.0008 and 0.00143±0.0008, respectively.

  6. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    …also simulated in the models. Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle)… ('input deck') for the MCNP, Monte Carlo N-Particle, radiation transport code. MCNP is a general-purpose code designed to simulate neutron, photon…

  7. Assessing local planning to control groundwater depletion: California as a microcosm of global issues

    NASA Astrophysics Data System (ADS)

    Nelson, Rebecca L.

    2012-01-01

    Groundwater pumping has caused excessive groundwater depletion around the world, yet regulating pumping remains a profound challenge. California uses more groundwater than any other U.S. state, and serves as a microcosm of the adverse effects of pumping felt worldwide—land subsidence, impaired water quality, and damaged ecosystems, all against the looming threat of climate change. The state largely entrusts the control of depletion to the local level. This study uses internationally accepted water resources planning theories systematically to investigate three key aspects of controlling groundwater depletion in California, with an emphasis on local-level action: (a) making decisions and engaging stakeholders; (b) monitoring groundwater; and (c) using mandatory, fee-based and voluntary approaches to control groundwater depletion (e.g., pumping restrictions, pumping fees, and education about water conservation, respectively). The methodology used is the social science-derived technique of content analysis, which involves using a coding scheme to record these three elements in local rules and plans, and State legislation, then analyzing patterns and trends. The study finds that Californian local groundwater managers rarely use, or plan to use, mandatory and fee-based measures to control groundwater depletion. Most use only voluntary approaches or infrastructure to attempt to reduce depletion, regardless of whether they have more severe groundwater problems, or problems which are more likely to have irreversible adverse effects. The study suggests legal reforms to the local groundwater planning system, drawing upon its empirical findings. Considering the content of these recommendations may also benefit other jurisdictions that use a local groundwater management planning paradigm.

  8. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    NASA Astrophysics Data System (ADS)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of specific dose distributions in water is evaluated in MCNP6.1 with the regular condensed-history algorithm (the "detailed electron energy-loss straggling logic") and with the newly proposed electron transport algorithm (the "single-event algorithm"). Dose point kernels (DPKs) are calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc and Penelope, is performed. When the detailed electron energy-loss straggling logic is used with default settings (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (the ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish completely, showing a possible transport treatment for energies below about one hundred keV, in accordance with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and thus the current unreliability of the method, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those

  9. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce the software development cost and time. The use of formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to component based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition efforts. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component composition. We use the code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  10. Modeling and Simulations for the High Flux Isotope Reactor Cycle 400

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Chandler, David; Ade, Brian J

    2015-03-01

    A concerted effort over the past few years has been focused on enhancing the core model for the High Flux Isotope Reactor (HFIR), as part of a comprehensive study for HFIR conversion from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel. At this time, the core model used to perform analyses in support of HFIR operation is an MCNP model for the beginning of Cycle 400, which was documented in detail in a 2005 technical report. A HFIR core depletion model based on current state-of-the-art methods and nuclear data was needed to serve as reference for the design of an LEU fuel for HFIR. The recent enhancements in modeling and simulations for HFIR discussed in the present report include: (1) revision of the 2005 MCNP model for the beginning of Cycle 400 to improve the modeling data and assumptions as necessary, based on appropriate primary reference sources (HFIR drawings and reports); (2) improvement of the fuel region model, including an explicit representation of the involute fuel plate geometry that is characteristic of HFIR fuel; and (3) revision of the Monte Carlo-based depletion model for HFIR, in use since 2009 but never documented in detail, with the development of a new depletion model for the HFIR explicit fuel plate representation. The new HFIR models for Cycle 400 are used to determine various metrics of relevance to reactor performance and safety assessments. The calculated metrics are compared, where possible, with measurement data from preconstruction critical experiments at HFIR, data included in the current HFIR safety analysis report, and/or data from previous calculations performed with different methods or codes. The results of the analyses show that the models presented in this report provide a robust and reliable basis for HFIR analyses.

  11. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism, capable of evaluation in general geometry, is described. Benchmarking will help quantify uncertainty against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of 3DHZETRN to general geometry will be discussed.

  12. Los Alamos and Lawrence Livermore National Laboratories Code-to-Code Comparison of Inter Lab Test Problem 1 for Asteroid Impact Hazard Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Robert P.; Miller, Paul; Howley, Kirsten

    The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.

  13. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  14. Evaluation of RAPID for a UNF cask benchmark problem

    NASA Astrophysics Data System (ADS)

    Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.

    2017-09-01

    This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining the eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study source convergence based on an analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these regions, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on 1 core versus 9.5 days on 16 cores.

  15. Rapid Acute Dose Assessment Using MCNP6

    NASA Astrophysics Data System (ADS)

    Owens, Andrew Steven

    Acute radiation doses due to physical contact with a high-activity radioactive source have proven to be an occupational hazard. Multiple radiation injuries have been reported after manipulating a radioactive source with bare hands or placing a radioactive source inside a shirt or pants pocket. An effort to reconstruct the radiation dose must be performed to properly assess and medically manage the potential biological effects of such doses. Using the reference computational phantoms defined by the International Commission on Radiological Protection (ICRP) and the Monte Carlo N-Particle transport code (MCNP6), dose rate coefficients are calculated for common acute exposure scenarios involving beta and photon radiation sources. The research investigates doses due to carrying a radioactive source in either a breast pocket or a back pants pocket. The dose rate coefficients are calculated for discrete energies and can be interpolated for any given photon or beta emission energy. The dose rate coefficients allow for quick calculation of whole-body dose, organ dose, and/or skin dose if the source, activity, and time of exposure are known. Doses calculated with the dose rate coefficients are compared to results from International Atomic Energy Agency (IAEA) reports on the accidents that occurred in Gilan, Iran and Yanango, Peru. Skin and organ doses calculated with the dose rate coefficients appear to agree, but there is a large discrepancy between whole-body doses assessed using biodosimetry and those assessed using the dose rate coefficients.
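
    The dose-reconstruction workflow described above reduces to interpolating tabulated dose rate coefficients at the emission energy and multiplying by activity and exposure time. The sketch below illustrates that arithmetic in Python; the coefficient values, energy grid, and function names are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical organ dose rate coefficients (Gy/h per Bq) tabulated at
# discrete photon energies (MeV), standing in for the study's tables.
ENERGIES = np.array([0.1, 0.3, 0.662, 1.0, 1.5])
COEFFS = np.array([2.1e-16, 6.0e-16, 1.3e-15, 1.9e-15, 2.8e-15])

def dose_rate_coefficient(energy_mev):
    """Log-log interpolation between the tabulated discrete energies."""
    return np.exp(np.interp(np.log(energy_mev),
                            np.log(ENERGIES), np.log(COEFFS)))

def dose_gy(energy_mev, activity_bq, exposure_h):
    """Dose = coefficient(E) x source activity x time of exposure."""
    return dose_rate_coefficient(energy_mev) * activity_bq * exposure_h

# Example: 0.662 MeV photons, 3.7e10 Bq source, 30 minutes in a pocket.
print(dose_gy(0.662, 3.7e10, 0.5))
```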

  16. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Plutonium Metals, Oxides, and Solutions on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; Gough, Sean T.

    This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous-energy cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.

  17. An evaluation of a manganese bath system having a new geometry through MCNP modelling.

    PubMed

    Khabaz, Rahim

    2012-12-01

    In this study, an approximately symmetric cylindrical manganese bath system with equal diameter and height was appraised using Monte Carlo simulation. For nine sizes of the tank filled with MnSO4·H2O solution at three different concentrations, the correction factors involved in the absolute measurement of neutron emission rate were determined by detailed modelling with the MCNP4C code and the ENDF/B-VII.0 neutron cross section data library. The results were also used to determine the optimum dimensions of the bath for each solution concentration in the calibration of 241Am-Be and 252Cf sources. In addition, the amount of gamma radiation produced by the (n,γ) reaction with the nuclei of the manganese sulphate solution and escaping from the boundary of each tank was evaluated. This gamma radiation can be important for the background in NaI(Tl) detectors and for radiation protection concerns.

  18. Self-Regulatory Capacities Are Depleted in a Domain-Specific Manner

    PubMed Central

    Zhang, Rui; Stock, Ann-Kathrin; Rzepus, Anneka; Beste, Christian

    2017-01-01

    Performing an act of self-regulation such as making decisions has been suggested to deplete a common limited resource, which impairs all subsequent self-regulatory actions (ego depletion theory). It has however remained unclear whether self-referred decisions truly impair behavioral control even in seemingly unrelated cognitive domains, and which neurophysiological mechanisms are affected by these potential depletion effects. In the current study, we therefore used an inter-individual design to compare two kinds of depletion, namely a self-referred choice-based depletion and a categorization-based switching depletion, to a non-depleted control group. We used a backward inhibition (BI) paradigm to assess the effects of depletion on task switching and associated inhibition processes. It was combined with EEG and source localization techniques to assess both behavioral and neurophysiological depletion effects. The results challenge the ego depletion theory in its current form: Opposing the theory's prediction of a general limited resource, which should have yielded comparable effects in both depletion groups, or maybe even a larger depletion in the self-referred choice group, there were stronger performance impairments following a task domain-specific depletion (i.e., the switching-based depletion) than following a depletion based on self-referred choices. This suggests at least partly separate and independent resources for various cognitive control processes rather than just one joint resource for all self-regulation activities. The implications are crucial to consider for people making frequent far-reaching decisions, e.g., in law or economics. PMID:29033798

  20. Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems

    NASA Astrophysics Data System (ADS)

    Favorite, Jeffrey

    2017-09-01

    The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
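
    In the differential operator method, the PERT tallies return the first- and second-order Taylor coefficients of the response with respect to the perturbed parameter, so a single unperturbed run plus two PERT cards span an entire density study. A minimal sketch of that reconstruction, with illustrative numbers (not values from the paper):

```python
def perturbed_response(r0, c1, c2, delta_rho):
    """Second-order Taylor reconstruction of a tally response:
    R(rho0 + d) = R0 + c1*d + c2*d**2, where c1 and c2 are the
    first- and second-order terms returned by two PERT cards."""
    return r0 + c1 * delta_rho + c2 * delta_rho ** 2

# Illustrative values: unperturbed detector efficiency and PERT terms.
r0, c1, c2 = 1.23e-4, 4.0e-5, -6.0e-6
for d in (-0.10, -0.05, 0.0, 0.05, 0.10):   # density shifts, g/cm^3
    print(f"delta_rho = {d:+.2f}: R = {perturbed_response(r0, c1, c2, d):.4e}")
```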

  1. Source terms, shielding calculations and soil activation for a medical cyclotron.

    PubMed

    Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E

    2016-12-01

    Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes, as well as from data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation calculations were performed using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv/h and thus significantly below the reference value of 0.5 µSv/h (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq/g.

  2. Implementation and testing of the on-the-fly thermal scattering Monte Carlo sampling method for graphite and light water in MCNP6

    DOE PAGES

    Pavlou, Andrew T.; Ji, Wei; Brown, Forrest B.

    2016-01-23

    Here, a proper treatment of thermal neutron scattering requires accounting for chemical binding through a scattering law S(α,β,T). Monte Carlo codes sample the secondary neutron energy and angle after a thermal scattering event from probability tables generated from S(α,β,T) tables at discrete temperatures, requiring a large amount of data for multiscale and multiphysics problems with detailed temperature gradients. We have previously developed a method to handle this temperature dependence on-the-fly during the Monte Carlo random walk, using polynomial expansions in 1/T to directly sample the secondary energy and angle. In this paper, the on-the-fly method is implemented into MCNP6 and tested in both graphite-moderated and light water-moderated systems. The on-the-fly method is compared with the thermal ACE libraries that come standard with MCNP6, yielding good agreement with integral reactor quantities like the k-eigenvalue and differential quantities like single-scatter secondary energy and angle distributions. The simulation runtimes are comparable between the two methods (on the order of 5–15% difference for the problems tested) and the on-the-fly fit coefficients only require 5–15 MB of total data storage.
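
    The on-the-fly idea is that sampled quantities are stored not as tables at discrete temperatures but as short expansions in powers of 1/T, evaluated at the local material temperature when a collision occurs. The sketch below shows that evaluation for an inverse-CDF secondary-energy sample; the coefficient table, grid, and function names are illustrative assumptions, not the MCNP6 data format.

```python
import numpy as np

rng = np.random.default_rng()

def eval_1_over_t(coeffs, temperature_k):
    """Evaluate f(T) = a0 + a1/T + a2/T**2 + ... at the local temperature."""
    x = 1.0 / temperature_k
    return sum(a * x ** n for n, a in enumerate(coeffs))

def sample_secondary_energy(coeff_table, temperature_k):
    """Each point of a secondary-energy CDF grid carries its own 1/T
    expansion; evaluate them all at T, then invert the CDF at a random
    number (illustrative inverse-CDF form of on-the-fly sampling)."""
    cdf_grid = np.linspace(0.0, 1.0, len(coeff_table))
    energies = np.array([eval_1_over_t(c, temperature_k) for c in coeff_table])
    return np.interp(rng.random(), cdf_grid, energies)

# Hypothetical coefficients (eV-scale energies) at five CDF points:
TABLE = [(0.01, 2.0), (0.03, 5.0), (0.06, 9.0), (0.12, 14.0), (0.25, 20.0)]
print(sample_secondary_energy(TABLE, 600.0))
```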

  3. Comparison of GATE/GEANT4 with EGSnrc and MCNP for electron dose calculations at energies between 15 keV and 20 MeV.

    PubMed

    Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V

    2011-02-07

    The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.

  4. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
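
    The core of the CADIS methodology is turning an adjoint (importance) flux into consistent source biasing and weight windows: source cells are sampled in proportion to q·φ†, and particles are born at the matching statistical weight R/φ†. A space-only sketch under those assumptions (illustrative, not the ADVANTG implementation):

```python
import numpy as np

def cadis_parameters(source, adjoint_flux, ww_ratio=5.0):
    """Space-only CADIS sketch on a common mesh.

    source       : forward source strength per cell, q_i
    adjoint_flux : adjoint flux per cell, phi+_i (importance to the detector)
    Returns the biased source pdf, target weights, and weight-window
    lower bounds consistent with the biased source."""
    q = np.asarray(source, dtype=float)
    phi = np.asarray(adjoint_flux, dtype=float)
    response = np.sum(q * phi)                   # estimated detector response R
    q_biased = q * phi / response                # sample source cells by importance
    w_target = response / phi                    # birth weight consistent with q_biased
    w_lower = 2.0 * w_target / (ww_ratio + 1.0)  # window centred on w_target
    return q_biased, w_target, w_lower

# Four-cell toy problem with importance increasing toward the detector:
qb, wt, wl = cadis_parameters([1.0, 1.0, 0.0, 0.0], [1e-4, 1e-3, 1e-2, 1e-1])
print(qb, wl)
```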

  5. Validation of absolute axial neutron flux distribution calculations with MCNP with 197Au(n,γ)198Au reaction rate distribution measurements at the JSI TRIGA Mark II reactor.

    PubMed

    Radulović, Vladimir; Štancar, Žiga; Snoj, Luka; Trkov, Andrej

    2014-02-01

    The calculation of axial neutron flux distributions with the MCNP code at the JSI TRIGA Mark II reactor has been validated against experimental measurements of the 197Au(n,γ)198Au reaction rate. The calculated absolute reaction rate values, scaled according to the reactor power and corrected for the flux redistribution effect, are in good agreement with the experimental results. The effect of different cross-section libraries on the calculations has been investigated and shown to be minor.

  6. A Patch to MCNP5 for Multiplication Inference: Description and User Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Jr., Clell J.

    2014-05-05

    A patch to MCNP5 has been written to allow generation of multiple neutrons from a spontaneous-fission event and generate list-mode output. This report documents the implementation and usage of this patch.
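
    The essential ingredient of such a patch is sampling an event-by-event neutron multiplicity from a spontaneous-fission P(ν) distribution rather than using the mean. A sketch of that sampling step in Python, with rounded 252Cf-like multiplicity probabilities that are illustrative only (not the patch's data):

```python
import numpy as np

rng = np.random.default_rng()

# Rounded, illustrative spontaneous-fission multiplicity distribution P(nu).
P_NU = np.array([0.002, 0.026, 0.127, 0.273, 0.304, 0.185, 0.066, 0.015, 0.002])
P_NU = P_NU / P_NU.sum()

def sample_multiplicities(n_events):
    """Number of neutrons emitted in each spontaneous-fission event,
    the quantity a list-mode output records event by event."""
    return rng.choice(len(P_NU), size=n_events, p=P_NU)

events = sample_multiplicities(10)
print(events, "mean nu =", events.mean())
```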

  7. MCNP modelling of vaginal and uterine applicators used in intracavitary brachytherapy and comparison with radiochromic film measurements

    NASA Astrophysics Data System (ADS)

    Ceccolini, E.; Gerardy, I.; Ródenas, J.; van Dycke, M.; Gallardo, S.; Mostacci, D.

    Brachytherapy is an advanced cancer treatment that is minimally invasive, minimising radiation exposure to the surrounding healthy tissues. Microselectron (Nucletron) devices with a 192Ir source can be used for gynaecological brachytherapy in patients with vaginal or uterine cancer. Isodose curves have been measured in a PMMA phantom and compared with Monte Carlo calculations and treatment planning system (TPS) evaluations (Plato BPS 14.2, Nucletron). The isodose measurements were performed with radiochromic films (Gafchromic EBT). The dose matrix was obtained after digitisation, using a dose calibration curve obtained with a 6 MV photon beam from a medical linear accelerator. The measured matrix was compared with the dose matrix calculated with the MCNP5 Monte Carlo code (F4MESH tally).

  8. MCNP simulations of material exposure experiments (u)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, Brian A

    2010-12-08

    Simulations of proposed material exposure experiments were performed using MCNP6. The experiments will expose ampules containing different materials of interest to radiation in order to observe the chemical breakdown of the materials. Simulations were performed to map out the dose in the materials as a function of distance from the source, dose variation between materials, dose variation due to ampule orientation, and dose variation due to different source energies. This write-up provides an overview of the simulations and guidance on how to use the data in the accompanying spreadsheet.

  9. Analysis of MCNP simulated gamma spectra of CdTe detectors for boron neutron capture therapy.

    PubMed

    Winkler, Alexander; Koivunoro, Hanna; Savolainen, Sauli

    2017-06-01

    The next step in boron neutron capture therapy (BNCT) is real-time imaging of the boron concentration in healthy and tumour tissue. Monte Carlo simulations are employed to predict the detector response required to realise single-photon emission computed tomography in BNCT, but have so far failed to reproduce measured data for cadmium telluride detectors. In this study we tested the gamma production cross-section data tables of libraries commonly used with the Monte Carlo code MCNP against measurements. The cross section data table TENDL-2008-ACE reproduces measured data best, whilst the commonly used ENDL92 and the other studied libraries do not include correct tables for gamma production from the cadmium neutron capture reaction occurring inside the detector. Furthermore, we discuss the size of the annihilation peaks in spectra obtained with cadmium telluride and germanium detectors.

  10. Design of boron carbide-shielded irradiation channel of the outer irradiation channel of the Ghana Research Reactor-1 using MCNP.

    PubMed

    Abrefah, R G; Sogbadji, R B M; Ampomah-Amoako, E; Birikorang, S A; Odoi, H C; Nyarko, B J B

    2011-01-01

    The MCNP model for the Ghana Research Reactor-1 (GHARR-1) was redesigned to incorporate a boron carbide-shielded irradiation channel in one of the outer irradiation channels. Extensive investigations were made before arriving at the final design with only one boron carbide-covered outer irradiation channel, as none of the other designs considered gave desirable neutronic performance. The purpose of redesigning the MCNP model to include a boron carbide-shielded channel is to equip the GHARR-1 with the means of performing efficient epithermal neutron activation analysis. After the simulation, the results from the original MCNP model of the GHARR-1 were compared with those of the redesigned boron carbide-shielded model. The final effective multiplication factor of the original MCNP model of the GHARR-1 was 1.00402, while that of the new boron carbide design was 1.00282. A final prompt neutron lifetime of 1.5245 × 10^-4 s was recorded for the new boron carbide design, compared with 1.5571 × 10^-7 s for the original MCNP design of the GHARR-1.

  11. Optimization of Neutron Spectrum in Northwest Beam Tube of Tehran Research Reactor for BNCT, by MCNP Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamani, M.; Kasesaz, Y.

    2015-07-01

    In order to obtain a neutron spectrum with components suitable for BNCT, it is necessary to design a Beam Shaping Assembly (BSA), consisting of a moderator, collimator, reflector, gamma filter and thermal neutron filter, placed in front of the initial radiation beam from the source. According to MCNP4C simulation results, the Northwest beam tube has the most favourable neutron flux of the three north beam tubes of the Tehran Research Reactor (TRR), so it was chosen for this purpose. Simulation of the BSA was carried out in the four stages mentioned above; in each stage, the ten best configurations of materials with different lengths and widths were selected as candidates for the next stage. The final BSA configuration comprises: 78 centimeters of air as an empty space; 40 centimeters of iron plus 52 centimeters of heavy water as moderator; 30 centimeters of water or 90 centimeters of aluminum oxide as reflector; 1 millimeter of lithium (Li) as the thermal neutron filter; and finally 3 millimeters of bismuth (Bi) as the gamma radiation filter. The calculations show that with this BSA configuration in the TRR Northwest beam tube, the best neutron flux and spectrum for BNCT will be achieved. (authors)

  12. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the low probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design and ease of modifying the number of users are equally notable qualities of the code, in contrast to the Optical Orthogonal Codes (OOC) implemented earlier for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  13. Assessment of doses caused by electrons in thin layers of tissue-equivalent materials, using MCNP.

    PubMed

    Heide, Bernd

    2013-10-01

    Absorbed doses caused by electron irradiation were calculated with the Monte Carlo N-Particle transport code (MCNP) for thin layers of tissue-equivalent materials. The layers were so thin that the calculation of energy deposition was at the border of MCNP's scope; therefore, this article discusses the application of three different methods of calculating energy deposition. This was done by means of two scenarios: in the first, electrons were emitted from the centre of a sphere of water and also recorded in that sphere; in the second, an irradiation with the PTB Secondary Standard BSS2 was modelled, where electrons were emitted from a 90Sr/90Y area source and recorded inside a cuboid phantom made of tissue-equivalent material. The speed and accuracy of the different methods were of interest. While a significant difference in accuracy was visible for one method in the first scenario, the differences in accuracy of the three methods were insignificant for the second. Considerable differences in speed were found for both scenarios. In order to demonstrate the need for calculating the dose in small, thin zones, a third scenario was constructed and simulated as well. The third scenario was nearly identical to the second, but with a spike of lead assumed to be inside the phantom. A dose enhancement (caused by the lead spike) of ∼113% was recorded for a thin hollow cylinder at a depth of 0.007 cm, which corresponds to the basal skin layer. Dose enhancements between 68 and 88% were found for a slab with a radius of 0.09 cm at all depths. All dose enhancements were hardly noticeable for a slab with a cross-sectional area of 1 cm², which is usually applied in operational radiation protection.

  14. Simulation of the GCR spectrum in the Mars curiosity rover's RAD detector using MCNP6

    NASA Astrophysics Data System (ADS)

    Ratliff, Hunter N.; Smith, Michael B. R.; Heilbronn, Lawrence

    2017-08-01

    The paper presents results from MCNP6 simulations of galactic cosmic ray (GCR) propagation down through the Martian atmosphere to the surface, and a comparison with RAD measurements made there. This effort is part of a collaborative space radiation modeling workshop hosted by Southwest Research Institute (SwRI). All modeling teams were tasked with simulating the GCR spectrum through the Martian atmosphere and the Radiation Assessment Detector (RAD) on board the Curiosity rover. The detector had two separate particle acceptance angles, 4π and 30° off zenith. All ions with Z = 1 through Z = 28 were tracked in both scenarios, while some additional secondary particles were only tracked in the 4π cases. The MCNP6 4π absorbed dose rate was 307.3 ± 1.3 μGy/day, while RAD measured 233 μGy/day. Using the ICRP-60 dose equivalent conversion factors built into MCNP6, the simulated 4π dose equivalent rate was found to be 473.1 ± 2.4 μSv/day, while RAD reported 710 μSv/day.

  15. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder.

  16. Intrinsic Radiation Source Generation with the ISC Package: Data Comparisons and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Clell J. Jr.

    The characterization of radioactive emissions from unstable isotopes (intrinsic radiation) is necessary for shielding and radiological-dose calculations for radioactive materials. While most radiation transport codes, e.g., MCNP [X-5 Monte Carlo Team, 2003], provide the capability to input user-prescribed source definitions, such as radioactive emissions, they do not provide the capability to calculate the correct radioactive-source definition from the material compositions. Special modifications to MCNP have been developed in the past to allow the user to specify an intrinsic source, but these modifications have not been implemented into the primary source base [Estes et al., 1988]. To facilitate the description of the intrinsic radiation source from a material with a specific composition, the Intrinsic Source Constructor library (LIBISC) and MCNP Intrinsic Source Constructor (MISC) utility have been written; the combination of LIBISC and MISC is herein referred to as the ISC package. LIBISC is a statically linkable C++ library that provides the functionality necessary to construct the intrinsic-radiation source generated by a material. Furthermore, LIBISC allows the use of different particle-emission, radioactive-decay, and natural-abundance databases, giving the user flexibility in the specification of the source if one database is preferred over others. LIBISC also provides functionality for aging materials and for producing a thick-target bremsstrahlung photon source approximation from the electron emissions. The MISC utility links to LIBISC and facilitates the description of intrinsic-radiation sources in a format directly usable with the MCNP transport code. Through a series of input keywords and arguments, the MISC user can specify the material, age the material if desired, and produce a source description of the radioactive emissions from the material in an MCNP-readable format. Further details of using the MISC utility are documented in the report.
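
    Aging a material before constructing its intrinsic source amounts to solving the decay equations for each chain. As a minimal sketch of that step, the two-member Bateman solution below ages a parent/daughter pair; the isotope data and function names are illustrative assumptions, not the LIBISC API.

```python
import numpy as np

def age_two_member_chain(n_parent0, lam_parent, lam_daughter, t_s):
    """Bateman solution for parent -> daughter -> (stable):
    returns the parent and daughter atom counts after t_s seconds."""
    n_p = n_parent0 * np.exp(-lam_parent * t_s)
    n_d = (n_parent0 * lam_parent / (lam_daughter - lam_parent)
           * (np.exp(-lam_parent * t_s) - np.exp(-lam_daughter * t_s)))
    return n_p, n_d

YEAR = 3.156e7  # seconds per year
# Illustrative half-lives: a 14.3 y parent feeding a 0.38 y daughter.
lam_p = np.log(2.0) / (14.3 * YEAR)
lam_d = np.log(2.0) / (0.38 * YEAR)
print(age_two_member_chain(1.0e20, lam_p, lam_d, 10.0 * YEAR))
```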

  17. Measured and calculated fast neutron spectra in a depleted uranium and lithium hydride shielded reactor

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.; Mueller, R. A.

    1973-01-01

    Measurements of MeV neutrons were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered; these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse-height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; the observed discrepancies in absolute flux may be due to cross section errors.
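
    For an idealized hydrogen-recoil scintillator, a monoenergetic neutron produces a rectangular proton recoil distribution, so the neutron spectrum follows from differentiating the measured pulse-height distribution: φ(E) ∝ -E·(dN/dE)/(σ(E)·ε(E)). A sketch of that unfolding step, assuming ideal response (no resolution smearing or carbon recoils); the toy data are illustrative.

```python
import numpy as np

def spectrum_from_recoils(e_mev, proton_counts, sigma_np, efficiency):
    """Differentiate a proton pulse-height distribution N(E) to get the
    neutron spectrum shape; all arrays share the same energy grid."""
    dn_de = np.gradient(proton_counts, e_mev)
    return -e_mev * dn_de / (sigma_np * efficiency)

# Toy data: a recoil distribution falling linearly to zero at 14 MeV.
e = np.linspace(0.5, 14.0, 200)
counts = np.maximum(14.0 - e, 0.0)
phi = spectrum_from_recoils(e, counts, np.ones_like(e), np.ones_like(e))
```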

  18. Estimation of coolant void reactivity for CANDU-NG lattice using DRAGON and validation using MCNP5 and TRIPOLI-4.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, R.; Tellier, R. L.; Hebert, A.

    2006-07-01

    The Coolant Void Reactivity (CVR) is an important safety parameter that needs to be estimated at the design stage of a nuclear reactor. It provides a priori knowledge of the behavior of the system during a transient initiated by a loss of coolant. In the present paper, we estimate the CVR for a CANDU New Generation (CANDU-NG) lattice, as proposed at an early stage of the Advanced CANDU Reactor (ACR) development. The CVR was estimated with a development version of the DRAGON code, using the method of characteristics. DRAGON incorporates several advanced self-shielding models, each of them compatible with the method of characteristics. This study brings into focus the performance of these self-shielding models, especially under voiding of such a tight lattice. We have also performed assembly calculations in a 2 × 2 pattern for the CANDU-NG fuel, with special emphasis on checkerboard voiding. The results obtained have been validated against the Monte Carlo codes MCNP5 and TRIPOLI-4.3. (authors)
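
    The quantity itself is just the static reactivity difference between the voided and cooled lattice states, computed from the two eigenvalues. A one-function sketch (the eigenvalues shown are illustrative):

```python
def coolant_void_reactivity_mk(k_cooled, k_voided):
    """CVR as a reactivity difference in milli-k:
    rho_void - rho_cool = (k_void - k_cool) / (k_void * k_cool)."""
    return (k_voided - k_cooled) / (k_voided * k_cooled) * 1000.0

# Example with illustrative lattice eigenvalues:
print(coolant_void_reactivity_mk(k_cooled=1.1250, k_voided=1.1025))  # mk
```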

  19. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations, and core models of the Atucha-2 PHWR were therefore developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  20. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of calculation methods and models was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, benchmark comparisons were made with Monte Carlo calculations using MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  1. Soil nutrients, aboveground productivity and vegetative diversity after 10 years of experimental acidification and base cation depletion

    Treesearch

    Mary Beth Adams; James A. Burger

    2010-01-01

    Soil acidification and base cation depletion are concerns for those wishing to manage central Appalachian hardwood forests sustainably. In this research, 2 experiments were established in 1996 and 1997 in two forest types common in the central Appalachian hardwood forests, to examine how these important forests respond to depletion of nutrients such as calcium and...

  2. Incorporating Code-Based Software in an Introductory Statistics Course

    ERIC Educational Resources Information Center

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  3. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
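
    The mechanism MBC relies on is simple to sketch: try a family of DCT coders of increasing rate on each block and keep the cheapest one whose distortion passes the threshold. The toy below models each "coder" as keeping a low-frequency corner of the block's DCT; the rates, threshold, and selection rule are illustrative stand-ins for the paper's vector-quantized coders.

```python
import numpy as np
from math import sqrt
from scipy.fft import dctn, idctn

def mbc_encode_block(block, rates=(0.25, 0.5, 1.0), threshold=5.0):
    """Pick the lowest-rate DCT coder whose reconstruction RMSE is
    below the threshold (threshold-driven distortion criterion)."""
    coeffs = dctn(block, norm='ortho')
    for rate in rates:
        k = max(1, int(round(block.shape[0] * sqrt(rate))))  # keep k x k corner
        kept = np.zeros_like(coeffs)
        kept[:k, :k] = coeffs[:k, :k]
        recon = idctn(kept, norm='ortho')
        if np.sqrt(np.mean((recon - block) ** 2)) < threshold:
            break
    return rate, recon

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
rate, recon = mbc_encode_block(block)
print("selected coder rate:", rate)
```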

  4. Severe accident skyshine radiation analysis by MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eurajoki, T.

    1994-12-31

    If a severe accident with considerable core damage occurs at a nuclear power plant whose containment top is remarkably thin compared with the walls, the radiation transported through the top and scattered in air may cause high dose rates in the power plant area. Noble gases and other fission products released to the containment act as sources. The dose rates caused by skyshine have been calculated with MCNP3A for the Loviisa nuclear power plant (two-unit, 445-MW VVER) for the outside area and inside some buildings, taking into account the attenuation in the roofs of the buildings.

  5. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except that there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.

  6. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method from the CSI-based modified JPEG and standard JPEG under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  7. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, those from the IRDF-2002 multi-group library, and those from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees within 2% in an overall sense and within 5% on a specific reaction and dosimetry location basis. Except for the neptunium dosimetry, the individual foil calculation-to-experiment ratios usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  8. Evaluation of computational models and cross sections used by MCNP6 for simulation of characteristic X-ray emission from thick targets bombarded by kiloelectronvolt electrons

    NASA Astrophysics Data System (ADS)

    Poškus, A.

    2016-09-01

    This paper evaluates the accuracy of the single-event (SE) and condensed-history (CH) models of electron transport in MCNP6.1 when simulating characteristic Kα, total K (=Kα + Kβ) and Lα X-ray emission from thick targets bombarded by electrons with energies from 5 keV to 30 keV. It is shown that the MCNP6.1 implementation of the CH model for K-shell impact ionization leads to underestimation of the K yield by 40% or more for elements with atomic numbers Z < 15, and to overestimation of the Kα yield by more than 40% for elements with Z > 25. The Lα yields are underestimated by more than an order of magnitude in CH mode, because MCNP6.1 neglects X-ray emission caused by electron-impact ionization of the L, M and higher shells in that mode (the Lα yields calculated in CH mode reflect only X-ray fluorescence, which is mainly caused by photoelectric absorption of bremsstrahlung photons). The X-ray yields calculated by MCNP6.1 in SE mode (using ENDF/B-VII.1 library data) are more accurate: the differences between the calculated and experimental K yields are within the experimental uncertainties for the elements C, Al and Si, the calculated Kα yields are typically underestimated by (20-30)% for elements with Z > 25, and the Lα yields are underestimated by (60-70)% for elements with Z > 49. It is also shown that agreement of the experimental X-ray yields with those calculated in SE mode is further improved by replacing the ENDF/B inner-shell electron-impact ionization cross sections with the set of cross sections obtained from the distorted-wave Born approximation (DWBA), which are also used in the PENELOPE code system. The latter replacement reduces the average relative difference between the experimental X-ray yields and the SE-mode simulation results to approximately 10%, which is similar to the accuracy achieved with PENELOPE. This confirms that the DWBA inner-shell impact ionization cross sections are significantly more accurate.

  9. Four year-olds use norm-based coding for face identity.

    PubMed

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-05-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills.

  10. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    PubMed

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary.

  11. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The HTGR-10 reactor has a cylindrical core fuelled with TRISO coated fuel particle kernels in spherical pebbles, with a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR-type reactor is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO-coated UO2 kernel particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo code MCNP5v1.6. The double heterogeneity of the TRISO-coated kernel fuel particles and the spherical fuel pebbles in the HTGR-10 core is modelled well with MCNP5v1.6. Neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The calculated neutron dose rates for radiation workers in the radial direction outside the RPV (radial position = 220 cm from the core centre) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for enrichments of 10% and 17%, respectively. These values comply with BAPETEN Chairman's Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit on the average effective dose for radiation workers at 20 mSv/year (10 μSv/h); radiation workers are thus adequately protected from this radiation source.
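
    Folding a binned flux tally with flux-to-dose conversion factors is a single dot product. The sketch below illustrates it; the conversion values and energy grid are rough illustrative numbers standing in for the ICRP-74 tables.

```python
import numpy as np

# Coarse, illustrative neutron flux-to-dose factors (pSv*cm^2) vs energy (MeV).
E_GRID = np.array([1e-8, 1e-6, 1e-3, 1e-1, 1.0, 10.0])
H_FACTORS = np.array([10.0, 12.0, 8.0, 100.0, 400.0, 500.0])

def dose_rate_usv_per_h(binned_flux):
    """Fold a binned flux (n/cm^2/s per bin, same grid as E_GRID) with
    the conversion factors and convert pSv/s to uSv/h."""
    psv_per_s = float(np.dot(np.asarray(binned_flux), H_FACTORS))
    return psv_per_s * 3600.0 * 1.0e-6

print(dose_rate_usv_per_h([5.0, 3.0, 2.0, 0.5, 0.05, 0.005]))  # uSv/h
```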

  12. Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination

    PubMed Central

    Liu, B; Xu, J; Liu, T; Ouyang, X

    2012-01-01

    Objective: To simulate the neutron-based sterilisation of anthrax contamination with the Monte Carlo N-particle (MCNP) 4C code. Methods: Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a 252Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D-D neutron generator can create neutrons at up to 10^13 n/s with current technology. All these enable an effective and low-cost method of killing anthrax spores. Results: Increasing the reflector beyond its saturation thickness has no further effect on neutron energy deposition in the anthrax sample. Among the three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulations also showed that the MCNP-simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimations. Conclusion: The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g 252Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D-D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D-D neutron generator output >10^13 n/s should be attainable in the near future, meaning a D-D neutron generator could sterilise anthrax contamination within several seconds. PMID:22573293
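
    For a rough feel of the exposure times quoted above, an isotropic point source with no scatter credit delivers fluence Φ = S·t/(4πr²), so the required irradiation time is t = Φ·4πr²/S. A back-of-the-envelope sketch in which the target fluence and distance are illustrative assumptions, not values from the study:

```python
import math

def irradiation_time_s(target_fluence, source_rate, distance_cm):
    """t = Phi * 4*pi*r^2 / S for a bare isotropic point source
    (a saturated reflector, as in the study, would shorten this)."""
    return target_fluence * 4.0 * math.pi * distance_cm ** 2 / source_rate

# Illustrative: 1e11 n/cm^2 target fluence, 5 cm from a D-D generator
# emitting 1e13 n/s -> a few seconds, consistent in spirit with the abstract.
print(irradiation_time_s(1.0e11, 1.0e13, 5.0), "seconds")
```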

  14. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best-estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies of less than 25 pcm for the k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  15. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

    Based on an optimized and improved construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate better meets the high-rate requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC (6561, 6240) code with a BCH (127, 120) code of code rate 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH (127, 120) + SCG-LDPC (6561, 6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS (255, 239) code and the SCG-LDPC (6561, 6240) code, respectively, at a bit error rate (BER) of 10⁻⁷.
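
    A quick arithmetic check of the quoted rates (ours, from the stated code parameters): for a serial concatenation, the overall code rate is the product of the component rates. A minimal sketch:

        # Code rates quoted in the abstract, and the overall rate of the
        # serial concatenation (product of the component rates).
        r_ldpc = 6240 / 6561   # SCG-LDPC(6561, 6240) -> ~0.951
        r_bch = 120 / 127      # BCH(127, 120)        -> ~0.945
        print(f"LDPC rate: {r_ldpc:.3f}")
        print(f"BCH rate : {r_bch:.3f}")
        print(f"overall rate of the concatenation: {r_ldpc * r_bch:.3f}")  # ~0.899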

  16. Four Year-Olds Use Norm-Based Coding for Face Identity

    ERIC Educational Resources Information Center

    Jeffery, Linda; Read, Ainsley; Rhodes, Gillian

    2013-01-01

    Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…

  17. Radiation shielding evaluation of the BNCT treatment room at THOR: a TORT-coupled MCNP Monte Carlo simulation study.

    PubMed

    Chen, A Y; Liu, Y-W H; Sheu, R J

    2008-01-01

    This study investigates the radiation shielding design of the treatment room for boron neutron capture therapy at the Tsing Hua Open-pool Reactor using the "TORT-coupled MCNP" method. With this method, the computational efficiency is improved by two to three orders of magnitude compared to the analog Monte Carlo MCNP calculation, which makes the calculation feasible on a single CPU in less than 1 day. Further optimization of the photon weight windows leads to an additional 50-75% improvement in the overall computational efficiency.

  18. Ovarian cancer therapeutic potential of glutamine depletion based on GS expression.

    PubMed

    Furusawa, Akiko; Miyamoto, Morikazu; Takano, Masashi; Tsuda, Hitoshi; Song, Yong Sang; Aoki, Daisuke; Miyasaka, Naoyuki; Inazawa, Johji; Inoue, Jun

    2018-05-28

    Amino acids (AAs) are biologically important nutrient compounds necessary for the survival of any cell. Of the 20 AAs, cancer cells depend on the uptake of several extracellular AAs for survival. However, which extracellular AA is indispensable for the survival of cancer cells and the molecular mechanism involved have not been fully defined. In this study, we found that the reduction of cell survival caused by glutamine (Gln) depletion is inversely correlated with the expression level of glutamine synthetase (GS) in ovarian cancer (OVC) cells. GS expression was downregulated in 45 of 316 OVC cases (14.2%). The depletion of extracellular Gln by treatment with L-asparaginase, in addition to inhibiting Gln uptake via the knockdown of a Gln transporter, led to the inhibition of cell growth in OVC cells with low expression of GS (GS-low OVC cells). Furthermore, the re-expression of GS in GS-low OVC cells induced the inhibition of tumor growth in vitro and in vivo. Thus, these findings provide novel insight into the development of an OVC therapy based on the requirement of Gln.

  19. EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE

    EPA Science Inventory

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...

  20. Simulation of the GCR spectrum in the Mars curiosity rover's RAD detector using MCNP6.

    PubMed

    Ratliff, Hunter N; Smith, Michael B R; Heilbronn, Lawrence

    2017-08-01

    The paper presents results from MCNP6 simulations of galactic cosmic ray (GCR) propagation down through the Martian atmosphere to the surface and compares them with RAD measurements made there. This effort is part of a collaborative space-radiation modeling workshop hosted by the Southwest Research Institute (SwRI). All modeling teams were tasked with simulating the GCR spectrum through the Martian atmosphere and the Radiation Assessment Detector (RAD) on board the Curiosity rover. The detector had two separate particle acceptance angles, 4π and 30° off zenith. All ions with Z = 1 through Z = 28 were tracked in both scenarios, while some additional secondary particles were only tracked in the 4π cases. The MCNP6 4π absorbed dose rate was 307.3 ± 1.3 µGy/day while RAD measured 233 µGy/day. Using the ICRP-60 dose-equivalent conversion factors built into MCNP6, the simulated 4π dose equivalent rate was found to be 473.1 ± 2.4 µSv/day while RAD reported 710 µSv/day.

  1. Visual coding of human bodies: perceptual aftereffects reveal norm-based, opponent coding of body identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J

    2013-04-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.
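
    To make the norm-based, opponent-coding prediction concrete, here is a toy two-pool model (an illustrative construction of ours, not the authors' model): two channels tuned to opposite directions from the norm are adapted in proportion to adaptor extremity, so a test stimulus at the norm is decoded away from the adaptor, and the shift grows with extremity:

        def perceived(test, adaptor, k=0.1, baseline=0.5):
            """Decode position along an identity trajectory (norm = 0) from two
            opponent pools; adaptation reduces the gain of the pool the adaptor
            drives. Toy model with made-up gain and baseline parameters."""
            gain_pos = 1.0 - k * max(adaptor, 0.0)   # pool tuned toward "Rose"
            gain_neg = 1.0 - k * max(-adaptor, 0.0)  # pool tuned toward "anti-Rose"
            r_pos = gain_pos * (baseline + max(test, 0.0))
            r_neg = gain_neg * (baseline + max(-test, 0.0))
            return r_pos - r_neg  # > 0 reads "Rose-like", < 0 "anti-Rose-like"

        # A test at the norm is perceived as increasingly "Rose-like" after
        # adapting to increasingly extreme "anti-Rose" adaptors:
        for extremity in (1.0, 2.0, 4.0):
            print(extremity, round(perceived(0.0, -extremity), 3))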

  2. The use of tetrahedral mesh geometries in Monte Carlo simulation of applicator based brachytherapy dose distributions

    NASA Astrophysics Data System (ADS)

    Paiva Fonseca, Gabriel; Landry, Guillaume; White, Shane; D'Amours, Michel; Yoriyaz, Hélio; Beaulieu, Luc; Reniers, Brigitte; Verhaegen, Frank

    2014-10-01

    Accounting for brachytherapy applicator attenuation is part of the recommendations in the recent report of AAPM Task Group 186. To do so, model-based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators such as the Fletcher Williamson gynaecological applicator, or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), the latter may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries, MG). Recent versions of the Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for 192Ir sources and 50 kV EBS were studied: a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG-derived dose distributions were considered as reference and used to validate the MG models by comparing dose distribution ratios. In general, agreement within 1% was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required computation times longer by a factor of at least 2 for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than Geant4 in some cases. In conclusion, we presented methods allowing for high-fidelity modelling with results equivalent to CSG. To the best of our knowledge, MG offers the most accurate representation of an irregular APBI balloon applicator.

  3. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis with a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotically optimum error performance is reduced by approximately 1/3, which significantly reduces the computational complexity. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
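
    As an illustration of the Chase-type, least-reliable-positions idea mentioned above, here is a minimal sketch (a toy construction of ours, not the authors' algorithm) on the (7,4) Hamming code: test patterns are applied to the least reliable positions, each pattern is decoded algebraically, and the candidate codeword with the best correlation metric is kept:

        from itertools import product

        def hamming74_decode(r):
            """Algebraic (syndrome) decoder for the (7,4) Hamming code whose
            parity-check columns are the binary numbers 1..7: the syndrome,
            read as an integer, is the error position (0 = no error)."""
            s = 0
            for i, bit in enumerate(r, start=1):
                if bit:
                    s ^= i
            c = list(r)
            if s:
                c[s - 1] ^= 1  # correct a single bit error
            return c

        def chase_decode(y, p=2):
            """Flip test patterns on the p least reliable positions, decode each
            algebraically, keep the candidate best correlated with y."""
            hard = [0 if v > 0 else 1 for v in y]  # BPSK: bit 0 -> +1, bit 1 -> -1
            lrp = sorted(range(len(y)), key=lambda i: abs(y[i]))[:p]
            best, best_metric = None, float("-inf")
            for pattern in product((0, 1), repeat=p):
                trial = list(hard)
                for pos, flip in zip(lrp, pattern):
                    trial[pos] ^= flip
                cand = hamming74_decode(trial)
                metric = sum((1 - 2 * b) * v for b, v in zip(cand, y))
                if metric > best_metric:
                    best, best_metric = cand, metric
            return best

        # All-zero codeword sent; two positions received weakly, one in error:
        y = [0.9, 0.1, -0.2, 1.1, 0.8, 1.0, 0.7]
        print(chase_decode(y))  # -> [0, 0, 0, 0, 0, 0, 0]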

  4. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    PubMed

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on the trellis structure of block codes, by contrast, remained inactive for a long period, for two major reasons. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes, providing the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  6. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    PubMed

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
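
    Cohen's κ, used above to quantify agreement between OSCAR and the expert coder, corrects observed agreement for chance agreement: κ = (p_o − p_e)/(1 − p_e). A minimal sketch with hypothetical SOC codes (illustrative values, not UK Biobank data):

        def cohens_kappa(pairs):
            """Cohen's kappa for two raters coding the same items.
            pairs: list of (code_rater1, code_rater2) tuples."""
            n = len(pairs)
            cats = {c for pair in pairs for c in pair}
            p_obs = sum(a == b for a, b in pairs) / n
            p_exp = sum(
                (sum(a == c for a, _ in pairs) / n)
                * (sum(b == c for _, b in pairs) / n)
                for c in cats
            )
            return (p_obs - p_exp) / (1 - p_exp)

        # Hypothetical 4-digit SOC codes for four job titles:
        pairs = [("2315", "2315"), ("5434", "5434"), ("2315", "2319"), ("9233", "9233")]
        print(round(cohens_kappa(pairs), 2))  # -> 0.67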

  7. A new definition of maternal depletion syndrome.

    PubMed Central

    Winkvist, A; Rasmussen, K M; Habicht, J P

    1992-01-01

    BACKGROUND. Although the term "maternal depletion syndrome" has been commonly used to explain poor maternal and infant health, whether such a syndrome actually exists remains unclear. This uncertainty may be due to the lack of a clear definition of the syndrome and the absence of theoretical frameworks that account for the many factors related to reproductive nutrition. METHODS. We propose a new definition of maternal depletion syndrome within a framework that accounts for potential confounding factors. RESULTS. Our conceptual framework distinguishes between childbearing pattern and inadequate diet as causes of poor maternal health; hence, our definition of maternal depletion syndrome has both biological and practical meaning. The new definition is based on overall change in maternal nutritional status over one reproductive cycle in relation to possible depletion and repletion phases and in relation to initial nutritional status. CONCLUSIONS. The empirical application of this approach should permit the testing of the existence of maternal depletion syndrome in the developing world, and the distinction between populations where family planning will alleviate maternal depletion and those in which an improved diet is also necessary. PMID:1566948

  8. MCNP simulation to optimise in-pile and shielding parts of the Portuguese SANS instrument.

    PubMed

    Gonçalves, I F; Salgado, J; Falcão, A; Margaça, F M A; Carvalho, F G

    2005-01-01

    A Small Angle Neutron Scattering instrument is being installed at one end of the tangential beam tube of the Portuguese Research Reactor. The instrument is fed using a neutron scatterer positioned in the middle of the beam tube. The scatterer consists of circulating H2O contained in a hollow disc of Al. The in-pile shielding components and the shielding installed around the neutron selector have been the object of an MCNP simulation study. The quantities calculated were the neutron and gamma-ray fluxes in different positions, the energy deposited in the material by the neutron and gamma-ray fields, the material activation resulting from the neutron field and radiation doses at the exit wall of the shutter and around the shielding. The MCNP results are presented and compared with results of an analytical approach and with experimental data collected after installation.

  9. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    NASA Astrophysics Data System (ADS)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  10. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.

  11. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate savings compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements to each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.

  12. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
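
    The selection criterion stated above (minimize the ratio of maximum to minimum pairwise Hamming distance) is easy to evaluate by enumeration for small codes. A minimal sketch on a toy constant-weight code (not the 64 × 64 design discussed in the paper):

        from itertools import combinations

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def constant_weight_words(n, w):
            """All length-n binary words of Hamming weight w."""
            return [tuple(1 if i in ones else 0 for i in range(n))
                    for ones in combinations(range(n), w)]

        def margin_criterion(code):
            """Ratio of maximum to minimum pairwise Hamming distance
            (smaller is better for the memory's voltage margin)."""
            d = [hamming(a, b) for a, b in combinations(code, 2)]
            return max(d) / min(d)

        code = constant_weight_words(5, 2)  # toy code for illustration
        print(len(code), margin_criterion(code))  # -> 10 2.0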

  13. ZEPrompt: An Algorithm for Rapid Estimation of Building Attenuation for Prompt Radiation from a Nuclear Detonation

    DTIC Science & Technology

    2014-01-01

    … and 50 kT, to within 30% of the first-principles code (MCNP) for complicated cities and 10% for simpler cities. Subject terms: radiation transport; use of MCNP for dose calculations; MCNP open-field absorbed dose calculations; the MCNP urban model.

  14. Downtown Waterfront Form-Based Code Workshop

    EPA Pesticide Factsheets

    This document is a description of a Smart Growth Implementation Assistance for Coastal Communities project in Marquette, Michigan, to develop a form-based code that would attract and support vibrant development.

  15. Provably secure identity-based identification and signature schemes from code assumptions

    PubMed Central

    Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives supposed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure. PMID:28809940

  16. Provably secure identity-based identification and signature schemes from code assumptions.

    PubMed

    Song, Bo; Zhao, Yiming

    2017-01-01

    Code-based cryptography is one of the few alternatives supposed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated or challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, through a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes achieve not only preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.

  17. New Approach For Prediction Groundwater Depletion

    NASA Astrophysics Data System (ADS)

    Moustafa, Mahmoud

    2017-01-01

    Current approaches to quantifying groundwater depletion involve water balance and satellite gravimetry. However, the water-balance technique includes uncertain estimation of parameters such as evapotranspiration and runoff, and the satellite method consumes time and effort. The work reported in this paper proposes using failure theory in a novel way to predict the depletion of groundwater saturated thickness. An important issue in the proposed failure theory is determining the failure point (the depletion case). The proposed technique uses depth of water, as the net result of recharge/discharge processes in the aquifer, to calculate the saturated thickness remaining under the applied pumping rates in an area and thereby evaluate groundwater depletion. The Weibull function and Bayes analysis were used to model and analyze data collected from 1962 to 2009. The proposed methodology was tested in a nonrenewable aquifer with no recharge; consequently, the continuous decline in water depth has been the main criterion used to estimate depletion. The value of the proposed approach is to predict the probable effect of the currently applied pumping rates on the saturated thickness, based on the remaining saturated-thickness data. The limitation of the suggested approach is that it assumes the applied management practices are constant during the prediction period. The study predicted an 80% probability that the saturated aquifer would be depleted after 300 years. Lifetime (failure) theory can thus give a simple alternative way to predict the depletion of the remaining saturated thickness without time-consuming processes or sophisticated software.
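
    A minimal sketch of the failure-theory calculation described above: a Weibull lifetime model gives the probability that depletion has occurred by time t. The shape and scale parameters below are hypothetical, chosen only to reproduce the headline ~80% figure at 300 years; the paper fits its own parameters to the 1962-2009 records:

        import math

        def weibull_cdf(t, shape, scale):
            """Probability that failure (depletion) has occurred by time t."""
            return 1.0 - math.exp(-((t / scale) ** shape))

        # Hypothetical parameters for illustration only:
        shape, scale = 1.5, 220.0
        print(f"P(depleted by 300 yr) = {weibull_cdf(300.0, shape, scale):.2f}")  # ~0.80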

  18. Semiconductor-based photoelectrochemical water splitting at the limit of very wide depletion region

    DOE PAGES

    Liu, Mingzhao; Lyons, John L.; Yan, Danhua H.; ...

    2015-11-23

    In semiconductor-based photoelectrochemical (PEC) water splitting, carrier separation and delivery largely relies on the depletion region formed at the semiconductor/water interface. As a Schottky junction device, the trade-off between photon collection and minority carrier delivery remains a persistent obstacle for maximizing the performance of a water splitting photoelectrode. Here, it is demonstrated that the PEC water splitting efficiency for an n-SrTiO3 (n-STO) photoanode is improved very significantly despite its weak indirect band gap optical absorption (α < 10⁴ cm⁻¹), by widening the depletion region through engineering its doping density and profile. Graded doped n-SrTiO3 photoanodes are fabricated with their bulk heavily doped with oxygen vacancies but their surface lightly doped over a tunable depth of a few hundred nanometers, through a simple low temperature re-oxidation technique. The graded doping profile widens the depletion region to over 500 nm, thus leading to very efficient charge carrier separation and high quantum efficiency (>70%) for the weak indirect transition. As a result, this simultaneous optimization of the light absorption, minority carrier (hole) delivery, and majority carrier (electron) transport by means of a graded doping architecture may be useful for other indirect band gap photocatalysts that suffer from a similar problem of weak optical absorption.
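
    The widening of the depletion region with lighter surface doping follows from the standard one-sided Schottky depletion-width relation W = sqrt(2·ε·V/(q·N_d)). A minimal sketch, assuming a relative permittivity of ~300 for SrTiO3 and 1 V of total band bending (both illustrative values, not taken from the paper):

        import math

        Q = 1.602e-19      # elementary charge, C
        EPS0 = 8.854e-12   # vacuum permittivity, F/m

        def depletion_width_m(n_d_cm3, v_total=1.0, eps_r=300.0):
            """One-sided Schottky depletion width W = sqrt(2*eps*V / (q*Nd))."""
            n_d = n_d_cm3 * 1e6  # cm^-3 -> m^-3
            return math.sqrt(2 * eps_r * EPS0 * v_total / (Q * n_d))

        # Lighter surface doping -> wider depletion region:
        for n_d in (1e18, 1e17):
            print(f"Nd = {n_d:.0e} cm^-3 -> W = {depletion_width_m(n_d) * 1e9:.0f} nm")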

  19. Satellite-based estimates of groundwater depletion in India.

    PubMed

    Rodell, Matthew; Velicogna, Isabella; Famiglietti, James S

    2009-08-20

    Groundwater is a primary source of fresh water in many parts of the world. Some regions are becoming overly dependent on it, consuming groundwater faster than it is naturally replenished and causing water tables to decline unremittingly. Indirect evidence suggests that this is the case in northwest India, but there has been no regional assessment of the rate of groundwater depletion. Here we use terrestrial water storage-change observations from the NASA Gravity Recovery and Climate Experiment satellites and simulated soil-water variations from a data-integrating hydrological modelling system to show that groundwater is being depleted at a mean rate of 4.0 ± 1.0 cm yr⁻¹ equivalent height of water (17.7 ± 4.5 km³ yr⁻¹) over the Indian states of Rajasthan, Punjab and Haryana (including Delhi). During our study period of August 2002 to October 2008, groundwater depletion was equivalent to a net loss of 109 km³ of water, which is double the capacity of India's largest surface-water reservoir. Annual rainfall was close to normal throughout the period and we demonstrate that the other terrestrial water storage components (soil moisture, surface waters, snow, glaciers and biomass) did not contribute significantly to the observed decline in total water levels. Although our observational record is brief, the available evidence suggests that unsustainable consumption of groundwater for irrigation and other anthropogenic uses is likely to be the cause. If measures are not taken soon to ensure sustainable groundwater usage, the consequences for the 114,000,000 residents of the region may include a reduction of agricultural output and shortages of potable water, leading to extensive socioeconomic stresses.
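
    The two depletion-rate figures quoted above are mutually consistent, as a quick arithmetic check (ours) shows; the implied area is roughly that of the combined study region:

        rate_km3_per_yr = 17.7    # volumetric depletion rate from the abstract
        height_m_per_yr = 0.040   # 4.0 cm/yr equivalent water height

        implied_area_km2 = rate_km3_per_yr * 1e9 / height_m_per_yr / 1e6
        print(f"implied region area: {implied_area_km2:,.0f} km^2")  # ~442,500

        # Net loss over the ~6.2-year record (Aug 2002 - Oct 2008):
        print(f"net loss: {rate_km3_per_yr * 6.2:.0f} km^3")  # ~110, vs 109 quoted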

  20. Monte Carlo modelling of large scale NORM sources using MCNP.

    PubMed

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances.

  1. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the target board than in software simulation during program development, mainly as a result of the user's improper use and incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization; the processor achieves its best performance when these optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  2. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    DOE PAGES

    Pizzocri, D.; Cappia, F.; Luzzi, L.; ...

    2017-01-31

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
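
    A minimal sketch of the kind of exponential grain-size reduction the abstract describes; the functional form and all parameter values below are illustrative placeholders, not the paper's fitted model:

        import math

        def avg_grain_size_um(burnup_gwd_tu, d0=10.0, d_inf=0.25, b_char=20.0):
            """Exponential reduction of average grain size with local effective
            burnup: d0 = initial size, d_inf = fully restructured size,
            b_char = characteristic burnup (all placeholder values)."""
            return d_inf + (d0 - d_inf) * math.exp(-burnup_gwd_tu / b_char)

        for bu in (0, 40, 80, 120):
            print(f"{bu:3d} GWd/tU -> {avg_grain_size_um(bu):5.2f} um")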

  3. Ego depletion in visual perception: Ego-depleted viewers experience less ambiguous figure reversal.

    PubMed

    Wimmer, Marina C; Stirk, Steven; Hancock, Peter J B

    2017-10-01

    This study examined the effects of ego depletion on ambiguous figure perception. Adults (N = 315) received an ego depletion task and were subsequently tested on their inhibitory control abilities that were indexed by the Stroop task (Experiment 1) and their ability to perceive both interpretations of ambiguous figures that was indexed by reversal (Experiment 2). Ego depletion had a very small effect on reducing inhibitory control (Cohen's d = .15) (Experiment 1). Ego-depleted participants had a tendency to take longer to respond in Stroop trials. In Experiment 2, ego depletion had small to medium effects on the experience of reversal. Ego-depleted viewers tended to take longer to reverse ambiguous figures (duration to first reversal) when naïve of the ambiguity and experienced less reversal both when naïve and informed of the ambiguity. Together, findings suggest that ego depletion has small effects on inhibitory control and small to medium effects on bottom-up and top-down perceptual processes. The depletion of cognitive resources can reduce our visual perceptual experience.

  4. Design of ACM system based on non-greedy punctured LDPC codes

    NASA Astrophysics Data System (ADS)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains increasingly significant coding gain at higher throughput.
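
    Rate compatibility via puncturing works by transmitting fewer coded bits from a fixed mother code: R = k/(n − p) for p punctured bits. A minimal sketch with a hypothetical mother code (the paper's actual code lengths are not quoted in the abstract):

        # Rate-compatible puncturing: transmit only n - p of the mother code's
        # n coded bits, giving rate R = k / (n - p).
        n, k = 1200, 800  # hypothetical rate-2/3 mother code
        for target in (2/3, 3/4, 4/5, 5/6):
            p = round(n - k / target)
            print(f"target R = {target:.3f}: puncture {p:3d} bits -> "
                  f"actual R = {k / (n - p):.3f}")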

  5. The X6XS.0 cross section library for MCNP-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruvost, N.L.; Seamon, R.E.; Rombaugh, C.T.

    1991-06-01

    This report documents the work done by X-6, HSE-6, and CTR Technical Services to produce a comprehensive working cross-section library for MCNP-4 suitable for SUN workstations and similar environments. The resulting library consists of a total of 436 files (one file for each ZAID). The library is 152 Megabytes in Type 1 format and 32 Megabytes in Type 2 format. Type 2 can be used when porting the library from one computer to another of the same make. Otherwise, Type 1 must be used to ensure portability between different computer systems. Instructions for installing the library and adding ZAIDs to it are included here. Also included is a description of the steps necessary to install and test version 4 of MCNP. To improve readability of this report, certain commands and filenames are given in uppercase letters. The actual command or filename on the SUN workstation, however, must be specified in lowercase letters. Any questions regarding the data contained in the library should be directed to X-6 and any questions regarding the installation of the library and the testing that was performed should be directed to HSE-6. 9 refs., 7 tabs.

  6. Testing of ENDF71x: A new ACE-formatted neutron data library based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardiner, S. J.; Conlin, J. L.; Kiedrowski, B. C.

    The ENDF71x library [1] is the most thoroughly tested set of ACE-format data tables ever released by the Nuclear Data Team at Los Alamos National Laboratory (LANL). It is based on ENDF/B-VII.1, the most recently released set of evaluated nuclear data files produced by the US Cross Section Evaluation Working Group (CSEWG). A variety of techniques were used to test and verify the ENDF71x library before its public release. These include the use of automated checking codes written by members of the Nuclear Data Team, visual inspections of key neutron data, MCNP6 calculations designed to test data for every included combination of isotope and temperature as comprehensively as possible, and direct comparisons between ENDF71x and previous ACE library releases. Visual inspection of some of the most important neutron data revealed energy balance problems and unphysical discontinuities in the cross sections for some nuclides. Doppler broadening of the total cross sections with increasing temperature was found to be qualitatively correct. Test calculations performed using MCNP prompted two modifications to the MCNP6 source code and also exposed bad secondary neutron yields for 231,233Pa that are present in both ENDF/B-VII.1 and ENDF/B-VII.0. A comparison of ENDF71x with its predecessor ACE library, ENDF70, showed that dramatic changes have been made in the neutron cross section data for a number of isotopes between ENDF/B-VII.0 and ENDF/B-VII.1. Based on the results of these verification tests and the validation tests performed by Kahler, et al. [2], the ENDF71x library is recommended for use in all Monte Carlo applications. (authors)

  7. MCNP5 evaluation of photoneutron production from the Alexandria University 15 MV Elekta Precise medical LINAC.

    PubMed

    Abou-Taleb, W M; Hassan, M H; El Mallah, E A; Kotb, S M

    2018-05-01

    Photoneutron production, and the associated dose equivalent, in the head assembly of the 15 MV Elekta Precise medical linac operating at the Faculty of Medicine at Alexandria University were estimated with the MCNP5 code. Photoneutron spectra were calculated in air and at different depths inside a water phantom as a function of the radiation field size. The maximum neutron fluence is 3.346×10⁻⁹ n/cm²-e for a 30×30 cm² field size at 2-4 cm depth in the phantom. The dose equivalent due to fast neutrons increases as the field size increases, reaching a maximum of 0.912 ± 0.05 mSv/Gy at a depth between 2 and 4 cm in the water phantom for a 40×40 cm² field size. Photoneutron fluence and dose equivalent are larger at 100 cm from the isocenter than at 35 cm from the treatment room wall.

  8. Experimental Acidification Causes Soil Base-Cation Depletion at the Bear Brook Watershed in Maine

    Treesearch

    Ivan J. Fernandez; Lindsey E. Rustad; Stephen A. Norton; Jeffrey S. Kahl; Bernard J. Cosby

    2003-01-01

    There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to elevated N and S deposition through bimonthly additions of (NH4)2SO4. Quantitative soil...

  9. Water Depletion Threatens Agriculture

    NASA Astrophysics Data System (ADS)

    Brauman, K. A.; Richter, B. D.; Postel, S.; Floerke, M.; Malsy, M.

    2014-12-01

    Irrigated agriculture is the human activity that has by far the largest impact on water, constituting 85% of global water consumption and 67% of global water withdrawals. Much of this water use occurs in places where water depletion, the ratio of water consumption to water availability, exceeds 75% for at least one month of the year. Although only 17% of global watershed area experiences depletion at this level or more, nearly 30% of total cropland and 60% of irrigated cropland are found in these depleted watersheds. Staple crops are particularly at risk, with 75% of global irrigated wheat production and 65% of irrigated maize production found in watersheds that are at least seasonally depleted. Of importance to textile production, 75% of cotton production occurs in the same watersheds. For crop production in depleted watersheds, we find that one half to two-thirds of production occurs in watersheds that have not just seasonal but annual water shortages, suggesting that re-distributing water supply over the course of the year cannot be an effective solution to shortage. We explore the degree to which irrigated production in depleted watersheds reflects limitations in supply, a byproduct of the need for irrigation in perennially or seasonally dry landscapes, and identify heavy irrigation consumption that leads to watershed depletion in more humid climates. For watersheds that are not depleted, we evaluate the potential impact of an increase in irrigated production. Finally, we evaluate the benefits of irrigated agriculture in depleted and non-depleted watersheds, quantifying the fraction of irrigated production going to food production, animal feed, and biofuels.

  10. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  11. Transient Treg depletion enhances therapeutic anti-cancer vaccination.

    PubMed

    Fisher, Scott A; Aston, Wayne J; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L; Solin, Jessica N; Ma, Shaokang; Lesterhuis, W Joost; Dick, Ian; Holt, Robert A; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A

    2017-03-01

    Regulatory T cells (Treg) play an important role in suppressing anti-tumor immunity and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti-cancer immunity, we used a Diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Tumor-bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival were monitored. DTX specifically depleted Treg in a transient, dose-dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs, as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only when CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor-peptide vaccination. BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti-tumor immunity. DTX-mediated Treg depletion is transient, dose-dependent, and leads to strong anti-tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor-specific vaccination at low doses. Together these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies.

  12. Investigation of some possible changes in Am-Be neutron source configuration in order to increase the thermal neutron flux using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Basiri, H.; Tavakoli-Anbaran, H.

    2018-01-01

    The Am-Be neutron source is based on the (α, n) reaction and generates neutrons in the energy range of 0-11 MeV. Since thermal neutrons are widely used in different fields, in this work we investigate how to improve the source configuration in order to increase the thermal flux. The suggested changes include a spherical moderator instead of the common cylindrical geometry, a reflector layer, and an appropriate selection of materials, in order to achieve the maximum thermal flux. All calculations were done using the MCNP Monte Carlo code. Our final results indicate that a spherical paraffin moderator with a beryllium reflector layer can efficiently increase the thermal neutron flux of the Am-Be source.

  13. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    PubMed Central

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion. PMID:25009523

  14. Network Coding in Relay-based Device-to-Device Communications

    PubMed Central

    Huang, Jun; Gharavi, Hamid; Yan, Huifang; Xing, Cong-cong

    2018-01-01

    Device-to-Device (D2D) communications has been realized as an effective means to improve network throughput, reduce transmission latency, and extend cellular coverage in 5G systems. Network coding is a well-established technique known for its capability to reduce the number of retransmissions. In this article, we review state-of-the-art network coding in relay-based D2D communications, in terms of application scenarios and network coding techniques. We then apply two representative network coding techniques to dual-hop D2D communications and present an efficient relay node selecting mechanism as a case study. We also outline potential future research directions, according to the current research challenges. Our intention is to provide researchers and practitioners with a comprehensive overview of the current research status in this area and hope that this article may motivate more researchers to participate in developing network coding techniques for different relay-based D2D communications scenarios. PMID:29503504

  15. Integral experiments on thorium assemblies with D-T neutron source

    NASA Astrophysics Data System (ADS)

    Liu, Rong; Yang, Yiwei; Feng, Song; Zheng, Lei; Lai, Caifeng; Lu, Xinxin; Wang, Mei; Jiang, Li

    2017-09-01

    To validate nuclear data and codes for the neutronics design of a hybrid reactor with thorium, integral experiments on two kinds of benchmark thorium assemblies with a D-T fusion neutron source have been performed. The first kind, a set of 1D assemblies, consists of polyethylene and depleted uranium shells; the second kind, a set of 2D assemblies, consists of three thorium oxide cylinders. The capture reaction rates, fission reaction rates, and (n, 2n) reaction rates in 232Th in the assemblies are measured with ThO2 foils. The leakage neutron spectra from the ThO2 cylinders are measured with a liquid scintillation detector. The experimental uncertainties in all the results are analyzed. The measured results are compared to those calculated with the MCNP code and ENDF/B-VII.0 library data.

  16. Determining the mass attenuation coefficient, effective atomic number, and electron density of raw wood and binderless particleboards of Rhizophora spp. by using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh

    Rhizophora spp. wood has the potential to serve as a solid water- or tissue-equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 attenuation parameters for the Rhizophora spp. samples were plotted against photon energy and discussed in terms of their relative differences from those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicate that the same MCNP5 procedure can be followed to determine the attenuation of gamma rays at several photon energies in other materials.
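
    The mass attenuation coefficient μ/ρ enters dosimetry through the narrow-beam attenuation law I/I0 = exp(−(μ/ρ)·ρ·t). A minimal sketch with illustrative numbers (not the paper's measured values):

        import math

        def transmitted_fraction(mu_rho_cm2_g, density_g_cm3, thickness_cm):
            """Narrow-beam attenuation I/I0 = exp(-(mu/rho) * rho * t)."""
            return math.exp(-mu_rho_cm2_g * density_g_cm3 * thickness_cm)

        # Illustrative values only: mu/rho = 0.2 cm^2/g at some keV energy,
        # board density 1.0 g/cm^3, 2 cm thick sample.
        print(f"I/I0 = {transmitted_fraction(0.2, 1.0, 2.0):.3f}")  # -> 0.670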

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zieb, Kristofer James Ekhart; Hughes, Henry Grady III; Xu, X. George

    The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This article discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles, together with explanations of the underlying theories.

  18. Transient Treg depletion enhances therapeutic anti‐cancer vaccination

    PubMed Central

    Aston, Wayne J.; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L.; Solin, Jessica N.; Ma, Shaokang; Lesterhuis, W. Joost; Dick, Ian; Holt, Robert A.; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A.

    2016-01-01

    Abstract Introduction Regulatory T cells (Treg) play an important role in suppressing anti‐tumor immunity and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti‐cancer immunity, we used a Diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Methods Tumor‐bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival monitored. Results DTX specifically depleted Treg in a transient, dose‐dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only when CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor‐peptide vaccination. Conclusions BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti‐tumor immunity. DTX‐mediated Treg depletion is transient, dose‐dependent, and leads to strong anti‐tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor‐specific vaccination at low doses. Together these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies. PMID:28250921

  19. Simulation of image detectors in radiology for determination of scatter-to-primary ratios using Monte Carlo radiation transport code MCNP/MCNPX.

    PubMed

    Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde

    2010-05-01

    The purpose of this study was to compare and validate, in a time-efficient way, three methods of simulating radiographic image detectors with the Monte Carlo software MCNP/MCNPX. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. In addition to the radiography tally, two alternative stochastic detector models were developed: a perfect energy-integrating detector and a detector based on the energy absorbed in the detector material. The three image detector models were validated by comparing calculated scatter-to-primary ratios (SPRs) with published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy-integrating detector and the blur-free absorbed-energy detector model were, on average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy-integrating detector were, on average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed-energy detector model, the calculated SPRs were, on average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy-integrating detector model and the blur-free energy-absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy-absorbing detector model is the most appropriate. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
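
    The difference between the two stochastic detector models comes down to how each arriving photon is weighted when the scatter-to-primary ratio (SPR) is formed. A minimal toy sketch (illustrative photon list of ours, not simulation output):

        # Each photon reaching the detector: (incident keV, absorbed keV, scattered?)
        photons = [
            (20.0, 20.0, False), (20.0, 12.0, False),  # primary photons
            (12.0, 12.0, True), (8.0, 8.0, True),      # scattered photons
        ]

        def spr(photons, signal):
            scatter = sum(signal(p) for p in photons if p[2])
            primary = sum(signal(p) for p in photons if not p[2])
            return scatter / primary

        # Perfect energy-integrating detector: signal ~ incident energy.
        print(f"{spr(photons, lambda p: p[0]):.2f}")  # -> 0.50
        # Absorbed-energy detector: signal ~ energy actually deposited, which
        # weights fully absorbed low-energy scatter more heavily:
        print(f"{spr(photons, lambda p: p[1]):.2f}")  # -> 0.62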

  20. 12. VIEW OF DEPLETED URANIUM INGOT AND MOLDS. DEPLETED URANIUM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. VIEW OF DEPLETED URANIUM INGOT AND MOLDS. DEPLETED URANIUM CASTING OPERATIONS CEASED IN 1988. (11/14/57) - Rocky Flats Plant, Non-Nuclear Production Facility, South of Cottonwood Avenue, west of Seventh Avenue & east of Building 460, Golden, Jefferson County, CO

  1. A Bioinformatics-Based Alternative mRNA Splicing Code that May Explain Some Disease Mutations Is Conserved in Animals.

    PubMed

    Qu, Wen; Cingolani, Pablo; Zeeberg, Barry R; Ruden, Douglas M

    2017-01-01

    Deep sequencing of cDNAs made from spliced mRNAs indicates that most coding genes in many animals and plants have pre-mRNA transcripts that are alternatively spliced. In pre-mRNAs, in addition to invariant exons that are present in almost all mature mRNA products, there are at least 6 additional types of exons, such as exons from alternative promoters or with alternative polyA sites, mutually exclusive exons, skipped exons, and exons with alternative 5' or 3' splice sites. Our bioinformatics-based hypothesis is that, in analogy to the genetic code, there is an "alternative-splicing code" in introns and flanking exon sequences that directs alternative splicing of many of the 36 types of introns. In humans, we identified 42 different consensus sequences that are each present in at least 100 human introns. 37 of the 42 top consensus sequences are significantly enriched or depleted in at least one of the 36 types of introns. We further supported our hypothesis by showing that 96 out of 96 analyzed human disease mutations that affect RNA splicing, and change alternative splicing from one class to another, can be partially explained by the mutation altering a consensus sequence from one type of intron to that of another type. Some of the alternative-splicing consensus sequences, and presumably their small-RNA or protein targets, are evolutionarily conserved across 50 plant and animal species. We also noticed that the introns within a gene usually share the same splicing codes, arguing that one sub-type of spliceosome might process all (or most) of the introns in a given gene. Our work sheds new light on a possible mechanism for generating the tremendous diversity in protein structure by alternative splicing of pre-mRNAs.
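    As a rough sketch of the kind of enrichment/depletion test described above (a simple two-by-two Fisher's exact test on invented counts; requires SciPy and is not the authors' pipeline):

        # Toy enrichment/depletion test of one consensus sequence in one intron
        # class versus all other introns (all counts are made up).
        from scipy.stats import fisher_exact

        with_motif_in_class, without_motif_in_class = 40, 160       # this intron class
        with_motif_elsewhere, without_motif_elsewhere = 300, 4500   # all other introns

        odds, p = fisher_exact([[with_motif_in_class, without_motif_in_class],
                                [with_motif_elsewhere, without_motif_elsewhere]])
        print(f"odds ratio {odds:.2f}, p = {p:.2e}")  # odds > 1: enriched; < 1: depleted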

  2. Preparation and Immunoaffinity Depletion of Fresh Frozen Tissue Homogenates for Mass Spectrometry-Based Proteomics in the Context of Drug Target/Biomarker Discovery.

    PubMed

    Prieto, DaRue A; Chan, King C; Johann, Donald J; Ye, Xiaoying; Whitely, Gordon; Blonder, Josip

    2017-01-01

    The discovery of novel drug targets and biomarkers via mass spectrometry (MS)-based proteomic analysis of clinical specimens has proven challenging. The wide dynamic range of protein concentrations in clinical specimens and the high background originating from highly abundant proteins in tissue homogenates and serum/plasma constitute two major analytical obstacles. Immunoaffinity depletion of highly abundant blood-derived proteins from serum/plasma is a well-established approach adopted by numerous researchers; however, this technique has not previously been applied to immunodepletion of tissue homogenates obtained from fresh frozen clinical specimens. We first developed immunoaffinity depletion of highly abundant blood-derived proteins from tissue homogenates, using renal cell carcinoma as a model disease, and followed this study by applying the method to different tissue types. Immunoaffinity depletion of highly abundant proteins from tissue homogenates may be just as important as the recognized need for depletion of serum/plasma, enabling more sensitive MS-based discovery of novel drug targets and/or clinical biomarkers from complex clinical samples. Provided is a detailed protocol designed to guide the researcher through the preparation and immunoaffinity depletion of fresh frozen tissue homogenates for two-dimensional liquid chromatography, tandem mass spectrometry (2D-LC-MS/MS)-based molecular profiling of tissue specimens in the context of drug target and/or biomarker discovery.

  3. Understanding the Haling power depletion (HPD) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, S.; Blyth, T.; Ivanov, K.

    2012-07-01

    The Pennsylvania State Univ. (PSU) is using the university version of the Studsvik Scandpower Code System (CMS) for research and education purposes. Preparations have been made to incorporate the CMS into the PSU Nuclear Engineering graduate course 'Nuclear Fuel Management'. The information presented in this paper was developed during the preparation of the material for the course, in which the Haling Power Depletion (HPD) was presented for the first time. The HPD method has been criticized as invalid by many in the field, even though it has been applied successfully at PSU for the past 20 years. It was noticed that the radial power distribution (RPD) of low leakage cores during depletion remained similar to that of the HPD during most of the cycle; thus, the HPD may conveniently be used mainly for low leakage cores. Studies were then made to better understand the HPD, and the results are presented in this paper. Many different core configurations can be computed quickly with the HPD, without using Burnable Poisons (BP), to produce several excellent low leakage core configurations that are viable for power production. Once an HPD core configuration is chosen for further analysis, techniques are available for establishing the BP design to prevent violating any of the safety constraints in such HPD-calculated cores. In summary, this paper shows that the HPD method can be used to guide the design of low leakage cores. (authors)
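    A minimal sketch of the Haling idea, holding the radial power shape constant over the cycle and iterating until it equals the end-of-cycle shape it produces; the one-node-per-region reactivity model below is invented for illustration and is far cruder than anything in CMS:

        import numpy as np

        # Haling-style fixed point: deplete with a *constant* power shape and
        # iterate until that shape matches the end-of-cycle (EOC) shape it yields.
        # Toy model: node reactivity k(B) falls linearly with burnup B, and the
        # power shape is taken proportional to k (no diffusion; illustrative only).
        k0 = np.array([1.10, 1.20, 1.15, 1.05])   # BOC multiplication per node
        slope = 0.02                               # reactivity loss per unit burnup
        cycle_burnup = 10.0                        # core-average cycle burnup

        shape = np.ones_like(k0) / k0.size         # initial guess: flat power shape
        for _ in range(50):
            eoc_k = k0 - slope * cycle_burnup * shape * k0.size  # EOC reactivities
            new_shape = eoc_k / eoc_k.sum()        # EOC power shape in this toy model
            if np.allclose(new_shape, shape, atol=1e-10):
                break
            shape = new_shape
        print("Haling power shape (held constant over the cycle):", shape)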

  4. How Ego Depletion Affects Sexual Self-Regulation: Is It More Than Resource Depletion?

    PubMed

    Nolet, Kevin; Rouleau, Joanne-Lucine; Benbouriche, Massil; Carrier Emond, Fannie; Renaud, Patrice

    2015-12-21

    Rational thinking and decision making are impacted when in a state of sexual arousal. The inability to self-regulate arousal can be linked to numerous problems, such as sexual risk taking, infidelity, and sexual coercion. Studies have shown that most men are able to exert voluntary control over their sexual excitation with varying levels of success. Both situational and dispositional factors can influence self-regulation achievement. The goal of this research was to investigate how ego depletion, a state of low self-control capacity, interacts with personality traits (propensities for sexual excitation and inhibition) and cognitive absorption to cause sexual self-regulation failure. The sexual responses of 36 heterosexual males were assessed using penile plethysmography. They were asked to control their sexual arousal in two conditions, with and without ego depletion. Results suggest that ego depletion has opposite effects depending on trait sexual inhibition: moderately inhibited individuals showed an increase in performance, while highly inhibited ones showed a decrease. These results challenge the limited-resource model of self-regulation and point to the importance of considering how people adapt to acute and highly challenging conditions.

  5. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils

    NASA Astrophysics Data System (ADS)

    Dixon, David A.; Hughes, H. Grady

    2017-09-01

    This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured downrange of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the maximum of the distribution is reduced by 1/e) is then determined from the angular distribution and compared with experiment. The multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement, and the calculated distributions were generally broader than the measured ones. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a well-understood deficiency in the computation of the underlying angular distributions. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied.
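    A small sketch of how a characteristic angle can be extracted from an angular distribution under the 1/e definition quoted above (toy Gaussian profile and linear interpolation; not the authors' analysis code):

        import numpy as np

        # Extract the characteristic angle: the angle at which the angular
        # distribution falls to 1/e of its maximum (toy Gaussian profile here).
        theta = np.linspace(0.0, 20.0, 401)            # degrees
        f = np.exp(-(theta / 6.0) ** 2)                # toy angular distribution

        target = f.max() / np.e
        # first point below the target, then linear interpolation to the crossing
        i = np.argmax(f < target)
        t = (target - f[i - 1]) / (f[i] - f[i - 1])
        theta_char = theta[i - 1] + t * (theta[i] - theta[i - 1])
        print(f"characteristic angle = {theta_char:.2f} deg")  # 6.00 for this toy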

  6. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
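    A minimal sketch of covariance-driven random sampling in the XSUSA spirit: correlated multigroup cross-section perturbations are drawn via a Cholesky factor of a covariance matrix and pushed through a stand-in lattice response (the covariance values and the lattice_kinf function below are invented; HELIOS2 itself is not called):

        import numpy as np

        rng = np.random.default_rng(42)

        # Relative covariance of a 3-group cross section (made-up numbers).
        cov = np.array([[0.0025, 0.0010, 0.0000],
                        [0.0010, 0.0040, 0.0015],
                        [0.0000, 0.0015, 0.0060]])
        sigma0 = np.array([1.50, 0.80, 0.30])    # nominal multigroup cross sections

        L = np.linalg.cholesky(cov)              # factor for correlated sampling

        def lattice_kinf(sigma):
            """Stand-in for a HELIOS2 lattice run; any smooth response works here."""
            return 1.30 - 0.05 * sigma.sum()

        samples = []
        for _ in range(1000):
            rel = L @ rng.standard_normal(3)     # correlated relative perturbations
            samples.append(lattice_kinf(sigma0 * (1.0 + rel)))

        print(f"k-inf = {np.mean(samples):.5f} +/- {np.std(samples, ddof=1):.5f}")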

  7. Auto Code Generation for Simulink-Based Attitude Determination Control System

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to automatically generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used to carry out hardware-in-the-loop testing of satellite components in a convenient manner, with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation; the execution order of these models can change as a result. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process can be called a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be used effectively by any desired platform as long as it meets the specific memory requirements established in the Simulink model.

  8. Performance tradeoff between lateral and interdigitated doping patterns for high speed carrier-depletion based silicon modulators.

    PubMed

    Yu, Hui; Pantouvaki, Marianna; Van Campenhout, Joris; Korn, Dietmar; Komorowska, Katarzyna; Dumon, Pieter; Li, Yanlu; Verheyen, Peter; Absil, Philippe; Alloatti, Luca; Hillerkuss, David; Leuthold, Juerg; Baets, Roel; Bogaerts, Wim

    2012-06-04

    Carrier-depletion based silicon modulators with lateral and interdigitated PN junctions are compared systematically on the same fabrication platform. The interdigitated diode is shown to outperform the lateral diode in achieving a low VπLπ of 0.62 V∙cm with comparable propagation loss, at the expense of a higher depletion capacitance. The low VπLπ of the interdigitated PN junction is employed to demonstrate 10 Gbit/s modulation with 7.5 dB extinction ratio from a 500 µm long device whose static insertion loss is 2.8 dB. In addition, up to 40 Gbit/s modulation is demonstrated for a 3 mm long device comprising a lateral diode and a co-designed traveling-wave electrode.

  9. MCNP simulation of the dose distribution in liver cancer treatment for BNC therapy

    NASA Astrophysics Data System (ADS)

    Krstic, Dragana; Jovanovic, Zoran; Markovic, Vladimir; Nikezic, Dragoslav; Urosevic, Vlade

    2014-10-01

    Boron Neutron Capture Therapy (BNCT) is based on the selective uptake of boron in tumour tissue compared to the surrounding normal tissue. Infusion of boron-containing compounds is followed by irradiation with neutrons. Neutron capture on 10B, which gives rise to an alpha particle and a recoiling 7Li ion, enables the therapeutic dose to be delivered to tumour tissue while healthy tissue is spared. Here, the therapeutic abilities of BNCT were studied for possible treatment of liver cancer using thermal and epithermal neutron beams. MCNP software was used for the neutron transport, and doses in organs of interest in the ORNL phantom were evaluated. Phantom organs were filled with voxels in order to obtain depth-dose distributions within them. The results suggest that BNCT using an epithermal neutron beam could be applied to liver cancer treatment.

  10. The Toxicity of Depleted Uranium

    PubMed Central

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low-dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed. PMID:20195447

  11. It Is Chloride Depletion Alkalosis, Not Contraction Alkalosis

    PubMed Central

    Galla, John H.

    2012-01-01

    Maintenance of metabolic alkalosis generated by chloride depletion is often attributed to volume contraction. In balance and clearance studies in rats and humans, we showed that chloride repletion in the face of persisting alkali loading, volume contraction, and potassium and sodium depletion completely corrects alkalosis by a renal mechanism. Nephron segment studies strongly suggest the corrective response is orchestrated in the collecting duct, which has several transporters integral to acid-base regulation, the most important of which is pendrin, a luminal Cl/HCO3− exchanger. Chloride depletion alkalosis should replace the notion of contraction alkalosis. PMID:22223876

  12. Optical image encryption based on real-valued coding and subtracting with the help of QR code

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng

    2015-08-01

    A novel optical image encryption scheme based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and results collected with an actual smartphone show that the method is feasible and has strong tolerance to noise, phase differences, and variations in the ratio between the intensities of the two decryption light beams.
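    The following toy (Python/NumPy) mimics only the subtraction-based recovery step, splitting a binary QR-like pattern into two real-valued shares whose difference "intensity" restores it; the actual optical encoding into phase-only masks is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy analogue of the subtraction step: split a binary QR-like pattern
        # into two real-valued shares whose pointwise difference recovers it.
        qr = rng.integers(0, 2, size=(21, 21)).astype(float)  # stand-in QR modules

        share1 = rng.uniform(0.0, 1.0, qr.shape)   # ciphertext 1 (random cover)
        share2 = share1 - qr                        # ciphertext 2 hides the pattern

        recovered = np.abs(share1 - share2) ** 2    # "intensity" of the subtraction
        assert np.array_equal(recovered.round(), qr)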

  13. Evaluation of three high abundance protein depletion kits for umbilical cord serum proteomics

    PubMed Central

    2011-01-01

    Background High abundance protein depletion is a major challenge in the study of serum/plasma proteomics. Prior to this study, most commercially available kits for depletion of highly abundant proteins had only been tested and evaluated in adult serum/plasma, while their depletion efficiency on umbilical cord serum/plasma had not been clarified. Structural differences between some adult and fetal proteins (such as albumin) make it likely that depletion approaches for adult and umbilical cord serum/plasma will differ. Therefore, the primary purposes of the present study were to investigate the efficiencies of several commonly used commercial kits for high abundance protein depletion from umbilical cord serum and to determine which kit yields the most effective and reproducible results for further proteomics research on umbilical cord serum. Results The immunoaffinity-based kits (PROTIA-Sigma and 5185-Agilent) displayed higher depletion efficiency than the immobilized-dye-based kit (PROTBA-Sigma) in umbilical cord serum samples. Both the PROTIA-Sigma and 5185-Agilent kits maintained high depletion efficiency when used three consecutive times. Depletion with the PROTIA-Sigma kit improved 2DE gel quality by reducing the smeared bands produced by the presence of high abundance proteins and increasing the intensity of other protein spots. During image analysis using identical detection parameters, 411 ± 18 spots were detected in crude serum gels, while 757 ± 43 spots were detected in depleted serum gels. Eight spots unique to depleted serum gels were identified by MALDI-TOF/TOF MS, seven of which were low abundance proteins. Conclusions The immunoaffinity-based kits exceeded the immobilized-dye-based kit in high abundance protein depletion of umbilical cord serum samples and dramatically improved 2DE gel quality for detection of trace biomarkers. PMID:21554704

  14. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via a single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is an NP-complete problem. To address this intractability, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we therefore propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current high-complexity coding-based schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
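    A sketch of a greedy heuristic in the spirit described above: lost packets are grouped so that each receiver misses at most one packet per XOR-ed combination and can therefore decode it from packets it already holds (toy loss maps; not the authors' algorithm):

        # Greedy XOR grouping for multicast loss recovery: each retransmission
        # XORs a set of lost packets such that every receiver misses at most one
        # packet in the set (so all receivers can decode). Toy loss maps below.
        lost = {                      # receiver -> set of packet ids it did not get
            "r1": {1, 4},
            "r2": {2},
            "r3": {1, 3},
        }
        all_lost = sorted(set().union(*lost.values()))

        retransmissions = []
        remaining = set(all_lost)
        while remaining:
            combo = []
            for p in sorted(remaining):
                # p may join the XOR only if no receiver would miss two of its packets
                if all(len(s & (set(combo) | {p})) <= 1 for s in lost.values()):
                    combo.append(p)
            retransmissions.append(combo)
            remaining -= set(combo)

        print(retransmissions)   # [[1, 2], [3, 4]]: two XOR retransmissions suffice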

  15. Reference View Selection in DIBR-Based Multiview Coding.

    PubMed

    Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-04-01

    Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views, which are used to estimate the other images when there is high correlation in the data set. In such coding schemes, two questions become fundamental: 1) how many reference views should be chosen to keep good reconstruction quality under coding cost constraints? and 2) where should these key views be placed in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this problem with a shortest-path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
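    A toy version of the shortest-path formulation: views are nodes, an edge (i, j) means i and j are consecutive reference views, and its weight combines a fixed reference coding cost with a synthesis-distortion term; the cost model below is invented, and only the shortest-path structure follows the description above:

        import heapq

        # Toy shortest-path selection of reference views. The first and last
        # views are forced to be references (source and target of the path).
        N = 8                      # views 0..7
        REF_RATE = 10.0            # invented per-reference coding cost

        def edge_cost(i, j):
            gap = j - i - 1        # views synthesized between the two references
            return REF_RATE + 1.5 * gap * gap   # invented distortion model

        dist, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
        while heap:
            d, i = heapq.heappop(heap)
            if d > dist.get(i, float("inf")):
                continue
            for j in range(i + 1, N):
                nd = d + edge_cost(i, j)
                if nd < dist.get(j, float("inf")):
                    dist[j], prev[j] = nd, i
                    heapq.heappush(heap, (nd, j))

        refs, v = [], N - 1        # walk back from the last view to the first
        while True:
            refs.append(v)
            if v == 0:
                break
            v = prev[v]
        print(sorted(refs))        # reference positions minimizing the total cost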

  16. 26 CFR 1.613-7 - Application of percentage depletion rates provided in section 613(b) to certain taxable years...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TAXES (CONTINUED) Natural Resources § 1.613-7 Application of percentage depletion rates provided in... Code). In the case of mines, wells, or other natural deposits listed in section 613(b), the election...

  17. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    PubMed

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli, and it too has been found robustly in a large body of research. However, no study to date has examined modality effects on ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task while those in the other group completed an auditory attention regulation task; all participants then completed the handgrip task again. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. Control of the Low-energy X-rays by Using MCNP5 and Numerical Analysis for a New Concept Intra-oral X-ray Imaging System

    NASA Astrophysics Data System (ADS)

    Huh, Jangyong; Ji, Yunseo; Lee, Rena

    2018-05-01

    An X-ray control algorithm to modulate the X-ray intensity distribution over the FOV (field of view) has been developed using numerical analysis and MCNP5, a particle transport simulation code based on the Monte Carlo method. X-rays, which are widely used in medical diagnostic imaging, should be controlled in order to maximize the performance of the X-ray imaging system; however, X-rays cannot be transported the way a liquid or a gas is conveyed through a physical structure such as a pipe. In the present study, an X-ray control algorithm and technique to make the X-ray intensity projected on the image sensor uniform were developed using a flattening filter and a collimator, in order to alleviate the anisotropy of the X-ray distribution caused by intrinsic features of the X-ray generator. The proposed method, which combines MCNP5 modeling and numerical analysis, optimizes the flattening filter and the collimator for a uniform distribution of X-rays; their size and shape were estimated with the method. The simulation and the experimental results both showed that the method yielded an intensity distribution over an X-ray field of 6×4 cm2, at an SID (source to image-receptor distance) of 5 cm, with a uniformity of more than 90% when the flattening filter and the collimator were mounted on the system. The proposed algorithm and technique are not confined to flattening filter development and can also be applied to other X-ray related research and development efforts.
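    A small sketch of the uniformity check, assuming the common (max-min)/(max+min) definition (an assumption, since the abstract does not spell out its formula):

        import numpy as np

        # Uniformity of a simulated intensity map over the field of view
        # (made-up detector pixel values).
        intensity = np.array([[0.95, 0.98, 1.00, 0.97],
                              [0.96, 1.00, 0.99, 0.95],
                              [0.94, 0.97, 0.98, 0.96]])

        u = (1.0 - (intensity.max() - intensity.min())
             / (intensity.max() + intensity.min())) * 100.0
        print(f"uniformity = {u:.1f}%")   # >90% would meet the criterion above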

  19. Calculated criticality for 235U/graphite systems using the VIM Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, P.J.; Grasseschi, G.L.; Olsen, D.N.

    1992-01-01

    Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched 235U/graphite systems; the experimental data, used in the development of more approximate methods, date back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios span the range from 2,000, representative of the new production MHTGR, to 10,000 in the fuel of TREAT. Calculations used the ENDF/B-V data.

  20. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.

  1. Tuning donut profile for spatial resolution in stimulated emission depletion microscopy.

    PubMed

    Neupane, Bhanu; Chen, Fang; Sun, Wei; Chiu, Daniel T; Wang, Gufeng

    2013-04-01

    In stimulated emission depletion (STED)-based or up-conversion depletion-based super-resolution optical microscopy, the donut-shaped depletion beam profile is of critical importance to the resolution. In this study, we investigate the transformation of the donut-shaped depletion beam focused by a high numerical aperture (NA) microscope objective, and model the STED point spread function (PSF) as a function of the donut beam profile. We show experimentally that the intensity profile of the dark kernel of the donut can be approximated by a parabolic function, whose slope is determined by the donut beam size before the objective back aperture, i.e., the effective NA. Based on this, we derive a mathematical expression for the continuous-wave (CW) STED PSF as a function of the focal-plane donut and excitation beam profiles, as well as dye properties. We find that the effective NA and the residual intensity at the center are critical factors for STED imaging quality and resolution. The effective NA is critical for STED resolution in that it determines not only the donut shape but also the area over which the depletion laser power is spread; an improperly expanded depletion beam will yield negligible improvement in resolution. The polarization of the depletion beam also plays an important role, as it affects the residual intensity in the center of the donut. Finally, we construct a CW STED microscope operating at 488 nm excitation and 592 nm depletion with a resolution of 70 nm. Our study provides detailed insight into the properties of the donut beam and the parameters that are important for the optimal performance of STED microscopes, and should provide a useful guide for the construction and future development of STED microscopes.
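    For reference, the resolution scaling widely quoted in the STED literature, consistent with a parabolic depletion minimum of the kind measured above (a standard result, not a formula taken from this paper), is

        d \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I_{\max}/I_{s}}},

    where \lambda is the depletion wavelength, I_{\max} the peak donut intensity, and I_{s} the dye's saturation intensity; letting I_{\max}/I_{s} \to 0 recovers the diffraction limit \lambda/(2\,\mathrm{NA}).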

  2. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes had been reported, and they were largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
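    A compact sketch of the hard-decision bit-flipping decoding mentioned above, run here on the (7,4) Hamming code rather than a finite-geometry LDPC code:

        import numpy as np

        # Gallager-style hard-decision bit flipping on a tiny parity-check matrix.
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def bit_flip_decode(y, H, max_iters=20):
            y = y.copy()
            for _ in range(max_iters):
                syndrome = H @ y % 2
                if not syndrome.any():
                    return y                      # all parity checks satisfied
                # count, for each bit, how many failed checks it participates in
                fails = H.T @ syndrome
                y[fails == fails.max()] ^= 1      # flip the most-suspect bit(s)
            return y

        received = np.array([0, 0, 1, 0, 0, 0, 0])   # all-zero codeword, 1 bit error
        print(bit_flip_decode(received, H))          # recovers the all-zero codeword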

  3. Groundwater depletion embedded in international food trade.

    PubMed

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J

    2017-03-29

    Recent hydrological modelling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world's food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world's population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.
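    A toy version of the embedded-depletion accounting: traded tonnage multiplied by a crop-specific non-renewable groundwater abstraction per tonne (all values below are invented; the study's actual inputs are global, crop-specific estimates and trade data):

        # Groundwater depletion embedded in trade = tonnes traded x depletion
        # intensity per tonne (illustrative numbers only).
        gw_intensity = {"rice": 500.0, "wheat": 300.0}   # m3 of depletion per tonne

        exports = [   # (exporter, crop, tonnes exported)
            ("Pakistan", "rice", 4.0e6),
            ("USA",      "wheat", 25.0e6),
            ("India",    "rice", 10.0e6),
        ]

        embedded = {(c, k): t * gw_intensity[k] for c, k, t in exports}
        total = sum(embedded.values())
        for (country, crop), volume in embedded.items():
            print(f"{country:8s} {crop:5s} {volume:.2e} m3 ({100*volume/total:.0f}%)")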

  6. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Litsyn algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  7. An investigation of MCNP6.1 beryllium oxide S(α, β) cross sections

    DOE PAGES

    Sartor, Raymond F.; Glazener, Natasha N.

    2016-03-08

    In MCNP6.1, materials are constructed by identifying the constituent isotopes (or, in a few cases, elements) individually. This list selects the corresponding microscopic cross sections, calculated with the free-gas model, which are combined to form the material macroscopic cross sections. The free-gas model, and hence the resulting material macroscopic cross sections, assumes that interactions between atoms do not affect the nuclear cross sections.
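    The sentence above amounts to forming macroscopic cross sections as number-density-weighted sums of microscopic ones, Sigma = sum_i N_i * sigma_i; a one-energy-point sketch with invented values:

        # Macroscopic cross section of a material from its constituents,
        # Sigma = sum_i N_i * sigma_i  (illustrative numbers, single energy point).
        BARN = 1.0e-24                     # cm^2

        constituents = [                   # (atom density atoms/cm^3, micro xs barn)
            (1.2e23, 6.0),                 # e.g. Be in BeO
            (1.2e23, 3.8),                 # e.g. O in BeO
        ]

        sigma_macro = sum(n * xs * BARN for n, xs in constituents)   # cm^-1
        print(f"Sigma = {sigma_macro:.3f} cm^-1")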

  8. Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortions that reduce signal energy. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with the input without preprocessing.
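    A toy of the linear-regression flavor of JNQD: a JND level is fit as a linear function of the quantization step plus one stand-in "handcrafted" feature (all numbers invented; the paper's actual features are not reproduced):

        import numpy as np

        # Fit JND level ~ a*q_step + b*feature + c by least squares (toy data).
        X = np.array([[22.0, 0.30], [27.0, 0.41], [32.0, 0.55],
                      [37.0, 0.62], [42.0, 0.80]])       # [q_step, feature]
        y = np.array([1.1, 1.6, 2.3, 2.9, 3.8])          # observed JND levels

        A = np.hstack([X, np.ones((len(X), 1))])         # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        q_step, feature = 30.0, 0.5
        jnd = coef @ np.array([q_step, feature, 1.0])
        print(f"predicted JND level at q={q_step}: {jnd:.2f}")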

  9. Cross-site comparison of ribosomal depletion kits for Illumina RNAseq library construction.

    PubMed

    Herbert, Zachary T; Kershner, Jamie P; Butty, Vincent L; Thimmapuram, Jyothi; Choudhari, Sulbha; Alekseyev, Yuriy O; Fan, Jun; Podnar, Jessica W; Wilcox, Edward; Gipson, Jenny; Gillaspy, Allison; Jepsen, Kristen; BonDurant, Sandra Splinter; Morris, Krystalynne; Berkeley, Maura; LeClerc, Ashley; Simpson, Stephen D; Sommerville, Gary; Grimmett, Leslie; Adams, Marie; Levine, Stuart S

    2018-03-15

    Ribosomal RNA (rRNA) comprises at least 90% of total RNA extracted from mammalian tissue or cell line samples. Informative transcriptional profiling using massively parallel sequencing technologies requires either enrichment of mature poly-adenylated transcripts or targeted depletion of the rRNA fraction. The latter method is of particular interest because it is compatible with degraded samples such as those extracted from FFPE material and also captures transcripts that are not poly-adenylated, such as some non-coding RNAs. Here we provide a cross-site study that evaluates the performance of ribosomal RNA removal kits from Illumina, Takara/Clontech, Kapa Biosystems, Lexogen, New England Biolabs and Qiagen on intact and degraded RNA samples. We find that all of the kits are capable of performing significant ribosomal depletion, though there are differences in their ease of use. All kits were able to reduce ribosomal RNA to below 20% in intact RNA and to identify ~14,000 protein-coding genes from the Universal Human Reference RNA sample at >1 FPKM. Analysis of differentially detected genes between kits suggests that transcript length may be a key factor in library production efficiency. These results provide a roadmap for labs on the strengths of each of these methods and how best to utilize them.

  10. A Clustering-Based Approach to Enriching Code Foraging Environment.

    PubMed

    Niu, Nan; Jin, Xiaoyu; Niu, Zhendong; Cheng, Jing-Ru C; Li, Ling; Kataev, Mikhail Yu

    2016-09-01

    Developers often spend valuable time navigating and seeking relevant code during software maintenance. Currently, there is a lack of theoretical foundations to guide tool design and evaluation toward shaping the code base for developers. This paper contributes a unified code navigation theory in light of optimal food-foraging principles. We further develop a novel framework for automatically assessing the foraging mechanisms in the context of program investigation. We use the framework to examine to what extent the clustering of software entities affects code foraging. Our quantitative analysis of long-lived open-source projects suggests that clustering enriches the software environment and improves foraging efficiency. Our qualitative inquiry reveals concrete insights into real developers' behavior. Our research opens the avenue toward building a new set of ecologically valid code navigation tools.

  11. Ego depletion increases risk-taking.

    PubMed

    Fischer, Peter; Kastenmüller, Andreas; Asal, Kathrin

    2012-01-01

    We investigated how the availability of self-control resources affects risk-taking inclinations and behaviors. We proposed that risk-taking often results from suboptimal decision processes and heuristic information processing (e.g., when a smoker suppresses or neglects information about the health risks of smoking). Research has revealed that depleted self-regulation resources are associated with reduced intellectual performance and reduced abilities to regulate spontaneous and automatic responses (e.g., to control aggressive responses in the face of frustration). The present studies transferred these ideas to the area of risk-taking. We propose that risk-taking is increased when individuals find themselves in a state of reduced cognitive self-control resources (ego depletion). Four studies supported these ideas. In Study 1, ego-depleted participants reported higher levels of sensation seeking than non-depleted participants. In Study 2, ego-depleted participants showed higher levels of risk-tolerance in critical road traffic situations than non-depleted participants. In Study 3, we ruled out two alternative explanations for these results: neither cognitive load nor feelings of anger mediated the effect of ego depletion on risk-taking. Finally, Study 4 clarified the underlying psychological process: ego-depleted participants feel more cognitively exhausted than non-depleted participants and are thus more willing to take risks. The discussion focuses on the theoretical and practical implications of these findings.

  12. ELEMENTAL DEPLETIONS IN THE MAGELLANIC CLOUDS AND THE EVOLUTION OF DEPLETIONS WITH METALLICITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tchernyshyov, Kirill; Meixner, Margaret; Seale, Jonathan

    2015-10-01

    We present a study of the composition of gas and dust in the Large and Small Magellanic Clouds (LMC and SMC) using UV absorption spectroscopy. We measure P II and Fe II along 84 spatially distributed sightlines toward the MCs using archival Far Ultraviolet Spectroscopic Explorer observations. For 16 of those sightlines, we also measure Si II, Cr II, and Zn II from new Hubble Space Telescope Cosmic Origins Spectrograph observations. We analyze these spectra using a new spectral line analysis technique based on a semi-parametric Voigt profile model. We have combined these measurements with H I and H2 column densities and reference stellar abundances from the literature to derive gas-phase abundances, depletions, and gas-to-dust ratios (GDRs). Of our 84 P and 16 Zn measurements, 80 and 13, respectively, are depleted by more than 0.1 dex, suggesting that P and Zn abundances are not accurate metallicity indicators at and above the metallicity of the SMC. Si, Cr, and Fe are systematically less depleted in the SMC than in the Milky Way (MW) or LMC. The minimum Si depletion in the SMC is consistent with zero. We find GDR ranges of 190–565 in the LMC and 480–2100 in the SMC, which is broadly consistent with GDRs from the literature. These ranges represent actual location-to-location variation and are evidence of dust destruction and/or growth in the diffuse neutral phase of the interstellar medium. Where they overlap in metallicity, the gas-phase abundances of the MW, LMC, and SMC and damped Lyα systems evolve similarly with metallicity.

  13. MCNP6 unstructured mesh application to estimate the photoneutron distribution and induced activity inside a linac bunker

    NASA Astrophysics Data System (ADS)

    Juste, B.; Morató, S.; Miró, R.; Verdú, G.; Díez, S.

    2017-08-01

    Unwanted neutrons in radiation therapy treatments are typically generated by photonuclear reactions. High-energy beams emitted by medical linear accelerators (linacs) interact with high atomic number materials situated in the accelerator head and release neutrons. Since neutrons have a high relative biological effectiveness, even low neutron doses may imply significant exposure of patients. It is also important to study the radioactivity induced by these photoneutrons when they interact with the different materials and components of the treatment head and the shielding room walls, since persons not present during irradiation (e.g. medical staff) may be exposed even when the accelerator is not operating. These problems are studied in this work in order to contribute to improving radiation protection at such treatment facilities. The work has been performed by simulation using the state-of-the-art Monte Carlo code MCNP6. To that end, a detailed model of particle transport inside the bunker and treatment head has been built using a meshed geometry. The linac studied is an Elekta Precise accelerator with a treatment photon energy of 15 MeV, used at the Hospital Clinic Universitari de Valencia, Spain.

  14. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, both experimental and theoretical studies are needed to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP codes for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. Future development of PHITS includes better parameterization in the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle up to 4 degrees. © 2005 Published by Elsevier Ltd on behalf of COSPAR.

  15. Depleting high-abundant and enriching low-abundant proteins in human serum: An evaluation of sample preparation methods using magnetic nanoparticle, chemical depletion and immunoaffinity techniques.

    PubMed

    de Jesus, Jemmyson Romário; da Silva Fernandes, Rafael; de Souza Pessôa, Gustavo; Raimundo, Ivo Milton; Arruda, Marco Aurélio Zezzi

    2017-08-01

    The efficiency of three different depletion methods for removing the most abundant proteins and enriching low-abundance human serum proteins is evaluated, with the aim of making the search for and discovery of biomarkers more efficient. These methods utilize magnetic nanoparticles (MNPs), chemical reagents (sequential application of dithiothreitol and acetonitrile, DTT/ACN), and a commercial apparatus based on immunoaffinity (ProteoMiner, PM). The comparison between methods shows significant removal of abundant proteins, which remain in the supernatant at concentrations of 4.6±0.2, 3.6±0.1, and 3.3±0.2 µg µL-1 (n=3) for MNPs, DTT/ACN and PM, respectively, from a total protein content of 54 µg µL-1. Using GeLC-MS/MS analysis, MNP depletion shows good efficiency in removing high molecular weight proteins (>80 kDa). Due to the synergic effect between the reagents DTT and ACN, DTT/ACN-based depletion offers good performance in the depletion of thiol-rich proteins, such as albumin and transferrin (DTT action), as well as of high molecular weight proteins (ACN action). Furthermore, PM equalization confirms its efficiency in concentrating low-abundance proteins, decreasing the dynamic range of protein levels in human serum. Direct comparison between the treatments reveals 72 proteins identified when using MNP depletion (43 of them exclusively by this method), but only 20 proteins using DTT/ACN (seven exclusively by this method). Additionally, after PM treatment 30 proteins were identified, seven exclusively by this method. Thus, MNP and DTT/ACN depletion can be simple, quick, cheap, and robust alternatives to immunochemistry-based protein depletion, providing a potential strategy in the search for disease biomarkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overheads, for both 16-APSK and 64-APSK. Field programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  17. Bond rupture between colloidal particles with a depletion interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, Kathryn A.; Furst, Eric M., E-mail: furst@udel.edu

    The force required to break the bonds of a depletion gel is measured by dynamically loading pairs of colloidal particles suspended in a solution of a nonadsorbing polymer. Sterically stabilized poly(methyl methacrylate) colloids 2.7 μm in diameter are brought into contact in a solvent mixture of cyclohexane-cyclohexyl bromide and polystyrene polymer depletant. The particle pairs are subjected to a tensile load at a constant loading rate over many approach-retraction cycles. The stochastic nature of the thermal rupture events results in a distribution of bond rupture forces whose average magnitude and variance increase with increasing depletant concentration. The measured force distribution is described by the flux of particle pairs sampling the energy barrier of the bond interaction potential based on the Asakura–Oosawa depletion model. A transition state model demonstrates the significance of lubrication hydrodynamic interactions and the effect of the applied loading rate on the rupture force of bonds in a depletion gel.

  18. Terrestrial Ozone Depletion Due to a Milky Way Gamma-Ray Burst

    NASA Technical Reports Server (NTRS)

    Thomas, Brian C.; Jackman, Charles H.; Melott, Adrian L.; Laird, Claude M.; Stolarski, Richard S.; Gehrels, Neil; Cannizzo, John K.; Hogan, Daniel P.

    2005-01-01

    Based on cosmological rates, it is probable that at least once in the last billion years the Earth has been irradiated by a gamma-ray burst in our Galaxy from within 2 kpc. Using a two-dimensional atmospheric model we have computed the effects of one such burst upon the Earth's atmosphere. A ten-second burst delivering 100 kJ/m2 to the Earth results in globally averaged ozone depletion of 35%, with depletion reaching 55% at some latitudes. Significant global depletion persists for over 5 years after the burst. This depletion would have dramatic implications for life, since a 50% decrease in ozone column density results in approximately three times the normal UVB flux. Widespread extinctions are likely, based on extrapolation from the UVB sensitivity of modern organisms.

  19. A Multilab Preregistered Replication of the Ego-Depletion Effect.

    PubMed

    Hagger, Martin S; Chatzisarantis, Nikos L D; Alberts, Hugo; Anggono, Calvin Octavianus; Batailler, Cédric; Birt, Angela R; Brand, Ralf; Brandt, Mark J; Brewer, Gene; Bruyneel, Sabrina; Calvillo, Dustin P; Campbell, W Keith; Cannon, Peter R; Carlucci, Marianna; Carruth, Nicholas P; Cheung, Tracy; Crowell, Adrienne; De Ridder, Denise T D; Dewitte, Siegfried; Elson, Malte; Evans, Jacqueline R; Fay, Benjamin A; Fennis, Bob M; Finley, Anna; Francis, Zoë; Heise, Elke; Hoemann, Henrik; Inzlicht, Michael; Koole, Sander L; Koppel, Lina; Kroese, Floor; Lange, Florian; Lau, Kevin; Lynch, Bridget P; Martijn, Carolien; Merckelbach, Harald; Mills, Nicole V; Michirev, Alexej; Miyake, Akira; Mosser, Alexandra E; Muise, Megan; Muller, Dominique; Muzi, Milena; Nalis, Dario; Nurwanti, Ratri; Otgaar, Henry; Philipp, Michael C; Primoceri, Pierpaolo; Rentzsch, Katrin; Ringos, Lara; Schlinkert, Caroline; Schmeichel, Brandon J; Schoch, Sarah F; Schrama, Michel; Schütz, Astrid; Stamos, Angelos; Tinghög, Gustav; Ullrich, Johannes; vanDellen, Michelle; Wimbarti, Supra; Wolff, Wanja; Yusainy, Cleoputri; Zerhouni, Oulmann; Zwienenberg, Maria

    2016-07-01

    Good self-control has been linked to adaptive outcomes such as better health, cohesive personal relationships, success in the workplace and at school, and less susceptibility to crime and addictions. In contrast, self-control failure is linked to maladaptive outcomes. Understanding the mechanisms by which self-control predicts behavior may assist in promoting better regulation and outcomes. A popular approach to understanding self-control is the strength or resource depletion model. Self-control is conceptualized as a limited resource that becomes depleted after a period of exertion, resulting in self-control failure. The model has typically been tested using a sequential-task experimental paradigm, in which people completing an initial self-control task have reduced self-control capacity and poorer performance on a subsequent task, a state known as ego depletion. Although a meta-analysis of ego-depletion experiments found a medium-sized effect, subsequent meta-analyses have questioned the size and existence of the effect and identified instances of possible bias. These analyses served as a catalyst for the current Registered Replication Report of the ego-depletion effect. Multiple laboratories (k = 23, total N = 2,141) conducted replications of a standardized ego-depletion protocol based on a sequential-task paradigm by Sripada et al. Meta-analysis of the studies revealed that the size of the ego-depletion effect was small, with 95% confidence intervals (CIs) that encompassed zero (d = 0.04, 95% CI [-0.07, 0.15]). We discuss implications of the findings for the ego-depletion effect and the resource depletion model of self-control. © The Author(s) 2016.

  20. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discovered a feature that simplifies the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose using double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes, Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding, in compression ratio, and has error tolerance capabilities similar to LZW coding under similar circumstances.
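    To illustrate the flavor of a grammar transform, the following Python sketch repeatedly replaces the most frequent symbol pair with a new grammar variable (a Re-Pair-style reduction). It is a simplified stand-in, not the Yang-Kieffer irreducible grammar transform or its arithmetic-coding stage.

        # Re-Pair-style pair replacement: a toy grammar transform.
        from collections import Counter

        def grammar_transform(data, max_rules=16):
            seq, rules, next_var = list(data), {}, 256  # variables start above byte values
            for _ in range(max_rules):
                pairs = Counter(zip(seq, seq[1:]))
                if not pairs:
                    break
                pair, count = pairs.most_common(1)[0]
                if count < 2:
                    break  # no pair repeats: nothing left to factor out
                rules[next_var] = pair
                out, i = [], 0
                while i < len(seq):
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                        out.append(next_var)  # substitute the new variable
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq, next_var = out, next_var + 1
            return seq, rules

        seq, rules = grammar_transform(b"abababab")
        print(seq, rules)  # [257, 257] with rules 256 -> (97, 98), 257 -> (256, 256)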

  1. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure for generating cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  2. Transequatorial Propagation and Depletion Precursors

    NASA Astrophysics Data System (ADS)

    Miller, E. S.; Bust, G. S.; Kaeppler, S. R.; Frissell, N. A.; Paxton, L. J.

    2014-12-01

    The bottomside equatorial ionosphere in the afternoon and evening sector frequently evolves rapidly from smoothly stratified to violently unstable, with large wedges of depleted plasma growing through to the topside on timescales of a few tens of minutes. These depletions have numerous practical impacts on radio propagation, including amplitude scintillation, field-aligned irregularity scatter, HF blackouts, and long-distance transequatorial propagation at frequencies above the MUF. Practical impacts notwithstanding, the pathways and conditions under which depletions form remain a topic of vigorous inquiry some 80 years after their first report. Structuring of the pre-sunset ionosphere (the morphology of the equatorial anomalies and long-wavelength undulations of the isodensity contours on the bottomside) is likely to hold some clues to the conditions that are conducive to depletion formation. The Conjugate Depletion Experiment is an upcoming transequatorial forward-scatter HF/VHF experiment to investigate pre-sunset undulations and their connection with depletion formation. We will present initial results from the Conjugate Depletion Experiment, as well as a companion analysis of a massive HF propagation data set.

  3. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
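    As a concrete companion to the discussion above, here is a minimal hard-decision Viterbi decoder in Python for the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal; this is an illustrative sketch, not the chapter's sectionalized or compare-select-add implementations.

        # Rate-1/2, K=3 convolutional code (generators 7, 5 octal), hard decisions.
        def encode(bits):
            out, reg = [], 0
            for b in bits:
                reg = ((reg << 1) | b) & 0b111            # 3-bit window: new bit + 2 past
                out += [bin(reg & 0b111).count("1") & 1,  # g1 = 111
                        bin(reg & 0b101).count("1") & 1]  # g2 = 101
            return out

        def viterbi(received):
            survivors = {0: (0, [])}                      # per 2-bit state: (metric, bits)
            for i in range(0, len(received), 2):
                r, nxt = received[i:i + 2], {}
                for s, (metric, path) in survivors.items():
                    for b in (0, 1):                      # add: branch metrics
                        win = ((s << 1) | b) & 0b111
                        o = [bin(win & 0b111).count("1") & 1,
                             bin(win & 0b101).count("1") & 1]
                        cost = metric + (o[0] != r[0]) + (o[1] != r[1])
                        ns = win & 0b11
                        if ns not in nxt or cost < nxt[ns][0]:
                            nxt[ns] = (cost, path + [b])  # compare-select survivor
                survivors = nxt
            return min(survivors.values())[1]             # decoded bits of best path

        msg = [1, 0, 1, 1, 0, 0]
        assert viterbi(encode(msg)) == msg                # error-free channel round trip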

  4. Methods used to calculate doses resulting from inhalation of Capstone depleted uranium aerosols.

    PubMed

    Miller, Guthrie; Cheng, Yung Sung; Traub, Richard J; Little, Tom T; Guilmette, Raymond A

    2009-03-01

    The methods used to calculate radiological and toxicological doses to hypothetical persons inside either a U.S. Army Abrams tank or Bradley Fighting Vehicle that has been perforated by depleted uranium munitions are described. Data from time- and particle-size-resolved measurements of depleted uranium aerosol as well as particle-size-resolved measurements of aerosol solubility in lung fluids for aerosol produced in the breathing zones of the hypothetical occupants were used. The aerosol was approximated as a mixture of nine monodisperse (single particle size) components corresponding to particle size increments measured by the eight stages plus the backup filter of the cascade impactors used. A Markov Chain Monte Carlo Bayesian analysis technique was employed, which straightforwardly calculates the uncertainties in doses. Extensive quality control checking of the various computer codes used is described.
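    A heavily hedged sketch of the Metropolis-style sampling such a Bayesian analysis rests on: the retention model, likelihood, data, and step size below are all invented placeholders standing in for the report's actual biokinetic models and bioassay data.

        # Toy Metropolis sampler for a one-parameter intake estimate.
        import math, random

        data = [(1.0, 0.8), (3.0, 0.45), (7.0, 0.2)]   # (days, measurement) -- fake
        def predicted(intake, t, lam=0.3):             # toy exponential retention model
            return intake * math.exp(-lam * t)

        def log_post(intake, sigma=0.1):
            if intake <= 0:
                return -math.inf                       # positivity prior
            return -sum((m - predicted(intake, t)) ** 2 for t, m in data) / (2 * sigma ** 2)

        x, chain = 1.0, []
        for _ in range(20000):
            prop = x + random.gauss(0, 0.05)           # symmetric random-walk proposal
            if math.log(random.random()) < log_post(prop) - log_post(x):
                x = prop                               # accept
            chain.append(x)
        post = chain[5000:]                            # discard burn-in
        print(sum(post) / len(post))                   # posterior-mean intake

    The spread of the retained chain directly gives the dose uncertainty, which is the "straightforward" uncertainty quantification the abstract refers to.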

  5. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes, such as PHITS, FLUKA, and MCNP, after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
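    The scheduling pattern such a framework provides can be sketched as a demand-driven master-worker loop; the Python/mpi4py stand-in below is illustrative only (the actual module is C++ with Fortran-callable interface functions, whose names are not reproduced here).

        # Master-worker batch scheduling sketch (run under mpirun with >= 2 ranks).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        N_BATCHES = 100                           # histories split into batches

        if rank == 0:                             # scheduler: hand out batches on demand
            status, next_batch, done = MPI.Status(), 0, 0
            while done < size - 1:
                comm.recv(source=MPI.ANY_SOURCE, status=status)
                worker = status.Get_source()
                if next_batch < N_BATCHES:
                    comm.send(next_batch, dest=worker)
                    next_batch += 1
                else:
                    comm.send(None, dest=worker)  # no work left
                    done += 1
        else:                                     # worker: request work, run, repeat
            while True:
                comm.send(rank, dest=0)
                batch = comm.recv(source=0)
                if batch is None:
                    break
                # ... run transport histories for this batch and tally results ...

    Handing out work on demand, rather than statically partitioning it, is what avoids the load-imbalance problems mentioned above; a checkpoint would periodically serialize next_batch together with the accumulated tallies.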

  6. Background-Source Cosmic-Photon Elevation Scaling and Cosmic-Neutron/Photon Date Scaling in MCNP6

    NASA Astrophysics Data System (ADS)

    Tutt, J.; Anderson, C.; McKinney, G.

    Cosmic neutron and photon fluxes are known to scale exponentially with elevation. Consequently, cosmic neutron elevation scaling was implemented for use with the background-source option shortly after its introduction into MCNP6, whereby the neutron flux weight factor was adjusted by the elevation scaling factor when the user-specified elevation differed from the selected background.dat grid-point elevation. At the same time, an elevation scaling factor was suggested for the cosmic photon flux; however, cosmic photon elevation scaling is complicated by the fact that the photon background consists of two components: cosmic and terrestrial. Previous versions of the background.dat file did not provide any way to separate these components. With Rel. 4 of this file in 2015, two new columns were added that provide the energy grid and differential cosmic photon flux separately from the total photon flux. Here we show that the cosmic photon flux component can now be scaled independently and combined with the terrestrial component to form the total photon flux at a user-specified elevation in MCNP6. Cosmic background fluxes also scale with the solar cycle due to solar modulation. This modulation has been shown to be nearly sinusoidal over time, with an inverse effect: increased modulation leads to a decrease in cosmic fluxes. This effect was initially included with the cosmic source option in MCNP6 and has now been extended for use with the background source option when (1) the date is specified in the background.dat file, and (2) the user specifies a date on the source definition card. A description of the cosmic-neutron/photon date scaling feature will be presented along with scaling results for past and future date extrapolations.
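    A hedged sketch of the exponential elevation dependence referred to above (generic functional form only; the constants actually used by MCNP6 and background.dat are not reproduced here):

        \[
          \phi(h) \approx \phi(h_0)\, e^{(h - h_0)/\lambda},
        \]

    where \phi is the cosmic flux, h_0 the background.dat grid-point elevation, h the user-specified elevation, and \lambda an effective attenuation length; the weight factor applied to source particles is then the ratio \phi(h)/\phi(h_0).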

  7. Background-Source Cosmic-Photon Elevation Scaling and Cosmic-Neutron/Photon Date Scaling in MCNP6

    DOE PAGES

    Tutt, James Robert; Anderson, Casey Alan; McKinney, Gregg Walter

    2017-10-26

    Here, cosmic neutron and photon fluxes are known to scale exponentially with elevation. Consequently, cosmic neutron elevation scaling was implemented for use with the background-source option shortly after its introduction into MCNP6, whereby the neutron flux weight factor was adjusted by the elevation scaling factor when the user-specified elevation differed from the selected background.dat grid-point elevation. At the same time, an elevation scaling factor was suggested for the cosmic photon flux; however, cosmic photon elevation scaling is complicated by the fact that the photon background consists of two components: cosmic and terrestrial. Previous versions of the background.dat file did not provide any way to separate these components. With Rel. 4 of this file in 2015, two new columns were added that provide the energy grid and differential cosmic photon flux separately from the total photon flux. Here we show that the cosmic photon flux component can now be scaled independently and combined with the terrestrial component to form the total photon flux at a user-specified elevation in MCNP6.

  8. Background-Source Cosmic-Photon Elevation Scaling and Cosmic-Neutron/Photon Date Scaling in MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tutt, James Robert; Anderson, Casey Alan; McKinney, Gregg Walter

    Here, cosmic neutron and photon fluxes are known to scale exponentially with elevation. Consequently, cosmic neutron elevation scaling was implemented for use with the background-source option shortly after its introduction into MCNP6, whereby the neutron flux weight factor was adjusted by the elevation scaling factor when the user-specified elevation differed from the selected background.dat grid-point elevation. At the same time, an elevation scaling factor was suggested for the cosmic photon flux; however, cosmic photon elevation scaling is complicated by the fact that the photon background consists of two components: cosmic and terrestrial. Previous versions of the background.dat file did not provide any way to separate these components. With Rel. 4 of this file in 2015, two new columns were added that provide the energy grid and differential cosmic photon flux separately from the total photon flux. Here we show that the cosmic photon flux component can now be scaled independently and combined with the terrestrial component to form the total photon flux at a user-specified elevation in MCNP6.

  9. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh-water-moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  10. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system in quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and that the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  11. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral databases. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to cadastral change is put forward, and a coding format composed of street code, block code, father parcel code, child parcel code, and grandchild parcel code is worked out within the county administrative area, as sketched below. The coding rules have been applied in the development of an urban cadastral information system called "ReGIS", which is not only able to generate the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies, to some extent, the feasibility and effectiveness of the coding rules.
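    A minimal Python sketch of composing and parsing such a hierarchical code; the field widths are assumptions for illustration, not the ReGIS specification.

        # Compose/parse a street|block|father|child|grandchild cadastral code.
        FIELDS = [("street", 3), ("block", 3), ("father", 5), ("child", 3), ("grandchild", 3)]

        def compose(**parts):
            # Zero-pad every field to its fixed width and concatenate.
            return "".join(str(parts[name]).zfill(width) for name, width in FIELDS)

        def parse(code):
            out, i = {}, 0
            for name, width in FIELDS:
                out[name] = int(code[i:i + width])
                i += width
            return out

        code = compose(street=12, block=7, father=1034, child=2, grandchild=0)
        print(code)         # 01200701034002000
        print(parse(code))  # {'street': 12, 'block': 7, 'father': 1034, ...}

    Fixed-width hierarchical fields keep every ancestor's identity embedded in the child's code, which is what makes the kinship of a parcel recoverable from the code alone.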

  12. Human podocyte depletion in association with older age and hypertension.

    PubMed

    Puelles, Victor G; Cullen-McEwen, Luise A; Taylor, Georgina E; Li, Jinhua; Hughson, Michael D; Kerr, Peter G; Hoy, Wendy E; Bertram, John F

    2016-04-01

    Podocyte depletion plays a major role in the development and progression of glomerulosclerosis. Many kidney diseases are more common in older age and often coexist with hypertension. We hypothesized that podocyte depletion develops in association with older age and is exacerbated by hypertension. Kidneys from 19 adult Caucasian American males without overt renal disease were collected at autopsy in Mississippi. Demographic data were obtained from medical and autopsy records. Subjects were categorized by age and hypertension as potential independent and additive contributors to podocyte depletion. Design-based stereology was used to estimate individual glomerular volume and total podocyte number per glomerulus, which allowed the calculation of podocyte density (number per volume). Podocyte depletion was defined as a reduction in podocyte number (absolute depletion) or podocyte density (relative depletion). The cortical location of glomeruli (outer or inner cortex) and presence of parietal podocytes were also recorded. Older age was an independent contributor to both absolute and relative podocyte depletion, featuring glomerular hypertrophy, podocyte loss, and thus reduced podocyte density. Hypertension was an independent contributor to relative podocyte depletion by exacerbating glomerular hypertrophy, mostly in glomeruli from the inner cortex. However, hypertension was not associated with podocyte loss. Absolute and relative podocyte depletion were exacerbated by the combination of older age and hypertension. The proportion of glomeruli with parietal podocytes increased with age but not with hypertension alone. These findings demonstrate that older age and hypertension are independent and additive contributors to podocyte depletion in white American men without kidney disease. Copyright © 2016 the American Physiological Society.

  13. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes

    NASA Astrophysics Data System (ADS)

    Marvian, Milad; Lidar, Daniel A.

    2017-01-01

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  14. Error Suppression for Hamiltonian-Based Quantum Computation Using Subsystem Codes.

    PubMed

    Marvian, Milad; Lidar, Daniel A

    2017-01-20

    We present general conditions for quantum error suppression for Hamiltonian-based quantum computation using subsystem codes. This involves encoding the Hamiltonian performing the computation using an error detecting subsystem code and the addition of a penalty term that commutes with the encoded Hamiltonian. The scheme is general and includes the stabilizer formalism of both subspace and subsystem codes as special cases. We derive performance bounds and show that complete error suppression results in the large penalty limit. To illustrate the power of subsystem-based error suppression, we introduce fully two-local constructions for protection against local errors of the swap gate of adiabatic gate teleportation and the Ising chain in a transverse field.

  15. Calculation of Absorbed Dose in Target Tissue and Equivalent Dose in Sensitive Tissues of Patients Treated by BNCT Using MCNP4C

    NASA Astrophysics Data System (ADS)

    Zamani, M.; Kasesaz, Y.; Khalafi, H.; Pooya, S. M. Hosseini

    Boron Neutron Capture Therapy (BNCT) is used for the treatment of many diseases, including brain tumors, in many medical centers. In this method, a target area (e.g., the head of the patient) is irradiated by an optimized and suitable neutron field, such as that from a research nuclear reactor. Aiming at the protection of healthy tissues located in the vicinity of the irradiated tissue, and based on the ALARA principle, it is required to prevent unnecessary exposure of these vital organs. In this study, using a numerical simulation method (the MCNP4C code), the absorbed dose in the target tissue and the equivalent dose in different sensitive tissues of a patient treated by BNCT are calculated. For this purpose, we have used the parameters of the MIRD standard phantom. The equivalent dose in 11 sensitive organs located in the vicinity of the target, and the total equivalent dose in the whole body, have been calculated. The results show that the absorbed doses in the tumor and in normal brain tissue are equal to 30.35 Gy and 0.19 Gy, respectively. Also, the total equivalent dose in the 11 sensitive organs, other than the tumor and normal brain tissue, is equal to 14 mSv. The maximum equivalent doses in organs other than the brain and tumor occur in the lungs and thyroid and are equal to 7.35 mSv and 3.00 mSv, respectively.

  16. Work plan for improving the DARWIN2.3 depleted material balance calculation of nuclides of interest for the fuel cycle

    NASA Astrophysics Data System (ADS)

    Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain

    2017-09-01

    DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes the deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), as well as the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the calculation of the depleted concentration can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.
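    For context, the Bateman equations that a depletion solver such as DARWIN/PEPIN2 integrates take the general form (standard notation, not quoted from the paper):

        \[
          \frac{dN_i}{dt} = \sum_{j \neq i} \left( \ell_{ij}\,\lambda_j + f_{ij}\,\sigma_j\,\phi \right) N_j
                            - \left( \lambda_i + \sigma_i\,\phi \right) N_i ,
        \]

    where N_i is the concentration of nuclide i, \lambda_i its decay constant, \sigma_i its one-group cross section, \phi the neutron flux from the transport solution, and \ell_{ij}, f_{ij} the decay-branching and reaction-yield fractions feeding nuclide i from nuclide j; the coupling to the Boltzmann equation enters through \phi and the collapsed cross sections.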

  17. [Acute tryptophan depletion in eating disorders].

    PubMed

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rationale justifying the use of the acute tryptophan depletion technique in eating disorders (ED), and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies, and it makes it possible to evaluate brain serotonin activity; it is therefore used in the investigation of hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship of dysfunctions of serotonin activity with impulsive symptoms, the technique may be useful in the biological differentiation of the different subtypes of ED, that is, restrictive and bulimic. 57 female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally to patients and controls after a two-day low-tryptophan diet. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of the basic personality characteristics and impulsivity traits was also done. The relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. The test was effective in considerably reducing plasma tryptophan from baseline levels (76%) within five hours in the global sample. The test was well tolerated, and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value in studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  18. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume-induced base convective heating and plume radiation. It should be considered an approximate method for evaluating trends as a function of configuration variables, because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension of the code to strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be treated precisely. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory behind the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  19. Dietary arginine depletion reduces depressive-like responses in male, but not female, mice.

    PubMed

    Workman, Joanna L; Weber, Michael D; Nelson, Randy J

    2011-09-30

    Previous behavioral studies have manipulated nitric oxide (NO) production either by pharmacological inhibition of its synthetic enzyme, nitric oxide synthase (NOS), or by deletion of the genes that code for NOS. However, manipulation of dietary intake of the NO precursor, L-arginine, has been understudied in regard to behavioral regulation. L-Arginine is a common amino acid present in many mammalian diets and is essential during development. In the brain, L-arginine is converted into NO and citrulline by the enzyme neuronal NOS (nNOS). In Experiment 1, paired mice were fed an L-arginine-depleted, L-arginine-supplemented, or standard diet during pregnancy. Offspring were continuously fed the same diets and were tested in adulthood in elevated plus maze, forced swim, and resident-intruder aggression tests. L-Arginine depletion reduced depressive-like responses in male, but not female, mice and failed to significantly alter anxiety-like or aggressive behaviors. Arginine depletion throughout life reduced body mass overall and eliminated the sex difference in body mass. Additionally, arginine depletion significantly increased corticosterone concentrations, which negatively correlated with time spent floating. In Experiment 2, adult mice were fed arginine-defined diets two weeks prior to and during behavioral testing, and were again tested in the aforementioned tests. Arginine depletion reduced depressive-like responses in the forced swim test, but did not alter behavior in the elevated plus maze or the resident-intruder aggression test. Corticosterone concentrations were not altered by arginine diet manipulation in adulthood. These results indicate that arginine depletion throughout development, as well as during a discrete period in adulthood, ameliorates depressive-like responses. These results may yield new insights into the etiology and sex differences of depression. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Effects of L-histidine depletion and L-tyrosine/L-phenylalanine depletion on sensory and motor processes in healthy volunteers

    PubMed Central

    van Ruitenbeek, P; Sambeth, A; Vermeeren, A; Young, SN; Riedel, WJ

    2009-01-01

    Background and purpose: Animal studies show that histamine plays a role in cognitive functioning and that histamine H3-receptor antagonists, which increase histaminergic function through presynaptic receptors, improve cognitive performance in models of clinical cognitive deficits. In order to test such new drugs in humans, a model for cognitive impairments induced by low histaminergic function would be useful. Studies with histamine H1-receptor antagonists have shown limitations as a model. Here we evaluated whether depletion of L-histidine, the precursor of histamine, was effective in altering measures associated with histamine in humans, and we examined its behavioural and electrophysiological (event-related potential) effects. Experimental approach: Seventeen healthy volunteers completed a three-way, double-blind, crossover study with L-histidine depletion, L-tyrosine/L-phenylalanine depletion (active control), and placebo as treatments. Interactions with task manipulations in a choice reaction time task were studied. Task demands were increased using visual stimulus degradation and increased response complexity. In addition, subjective and objective measures of sedation and critical tracking task performance were assessed. Key results: Measures of sedation and critical tracking task performance were not affected by treatment. L-histidine depletion was effective and enlarged the effect of response complexity as measured by the onset latency of the response-locked lateralized readiness potential. Conclusions and implications: L-histidine depletion affected response-related but not stimulus-related processes, in contrast to the effects of H1-receptor antagonists, which were previously found to affect primarily stimulus-related processes. L-histidine depletion is promising as a model for histamine-based cognitive impairment. However, these effects need to be confirmed by further studies. PMID:19413574

  1. A neutron spectrum unfolding computer code based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner sphere spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to irradiate the spheres sequentially, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple, because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on artificial intelligence, mainly artificial neural networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural network technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural Networks unfolding code, was designed with a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use, friendly, and intuitive for the user. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature of the code is that the only input data required for unfolding the neutron spectrum are the seven count rates measured with the seven Bonner spheres; simultaneously, the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. This code generates a full report with all information of the unfolding in
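    A hedged sketch of the unfolding step described above: a small feed-forward network mapping the seven count rates to 60 spectrum bins. The architecture and the (random) weights are placeholders; the actual code embeds a trained network optimized with the robust-design methodology.

        # Toy forward pass: 7 Bonner-sphere count rates -> 60-bin spectrum.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(10, 7)), np.zeros(10)   # hidden layer (assumed size)
        W2, b2 = rng.normal(size=(60, 10)), np.zeros(60)  # 60 output energy bins

        def unfold(count_rates):
            h = np.tanh(W1 @ count_rates + b1)            # hidden activations
            return np.maximum(W2 @ h + b2, 0.0)           # fluences are non-negative

        rates = np.array([120., 340., 510., 460., 300., 150., 60.])  # fake readings
        print(unfold(rates).shape)                        # (60,) unfolded spectrum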

  2. Estimates of radiological risk from depleted uranium weapons in war scenarios.

    PubMed

    Durante, Marco; Pugliese, Mariagabriella

    2002-01-01

    Several weapons used during the recent conflict in Yugoslavia contain depleted uranium, including missiles and armor-piercing incendiary rounds. Health concerns are related to the use of these weapons because of the heavy-metal toxicity and radioactivity of uranium. Although chemical toxicity is considered the more important source of health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict, and uranium munitions are a possible source of contamination in the environment. Actual measurements of radioactive contamination are needed to assess the risk. In this paper, a computer simulation is proposed to estimate the radiological risk related to different exposure scenarios. The dose caused by inhalation of radioactive aerosols and the ground contamination induced by a Tomahawk missile impact are simulated using a Gaussian plume model (HOTSPOT code). Environmental contamination and the committed dose to the population resident in contaminated areas are predicted by a food-web model (RESRAD code). Small values of committed effective dose equivalent appear to be associated with missile impacts (50-y CEDE < 5 mSv) or with population exposure by water-independent pathways (50-y CEDE < 80 mSv). The greatest hazard is related to water contamination in conditions of effective leaching of uranium into the groundwater (50-y CEDE < 400 mSv). Even in this worst-case scenario, the chemical toxicity largely predominates over the radiological risk. These computer simulations suggest that little radiological risk is associated with the use of depleted uranium weapons.
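    For reference, the steady-state Gaussian plume concentration that codes such as HOTSPOT are built around has the standard textbook form (generic expression, not HOTSPOT's exact implementation):

        \[
          \chi(x,y,z) = \frac{Q}{2\pi\,\sigma_y \sigma_z\, u}
          \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
          \left[ \exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
               + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right) \right],
        \]

    with Q the source strength, u the wind speed, H the effective release height, and \sigma_y(x), \sigma_z(x) stability-dependent dispersion parameters; inhalation dose then follows from \chi via a breathing rate and a dose coefficient.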

  3. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. Implementation via the lifting-steps scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region-of-interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performance. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
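    To illustrate the lifting mechanism mentioned above, here is the simplest integer-to-integer lifting pair (the Haar/S transform) in Python; the paper's actual filters are longer wavelets applied separably in three dimensions.

        # One-level integer Haar (S) transform via lifting: predict, then update.
        def haar_lift_forward(x):                 # x: list of ints, even length
            d = [x[2*i + 1] - x[2*i] for i in range(len(x) // 2)]   # predict step
            s = [x[2*i] + (d[i] >> 1) for i in range(len(x) // 2)]  # update step
            return s, d                           # coarse averages, detail coefficients

        def haar_lift_inverse(s, d):
            x = []
            for si, di in zip(s, d):
                even = si - (di >> 1)             # undo update
                x += [even, even + di]            # undo predict
            return x

        s, d = haar_lift_forward([10, 12, 9, 9, 30, 2])
        assert haar_lift_inverse(s, d) == [10, 12, 9, 9, 30, 2]  # exactly invertible

    Because each lifting step is inverted by subtracting exactly what was added, the rounding inside the step does not break perfect reconstruction, which is what enables the lossless mode.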

  4. A novel construction method of QC-LDPC codes based on CRT for optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10⁻⁷, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is 2.06 dB, 1.36 dB, 0.53 dB, and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group, respectively. Furthermore, all five of these codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance and is well suited to optical transmission systems.
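    For readers unfamiliar with the CRT machinery underlying such constructions: for pairwise-coprime moduli it gives a bijection between residue tuples and residues modulo the product, which is what allows short component codes to be combined into a longer code. A minimal Python sketch (generic CRT only, not the paper's combining construction):

        # Solve x = r_i (mod m_i) for pairwise-coprime moduli (Python 3.8+).
        from math import prod

        def crt(residues, moduli):
            M, x = prod(moduli), 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m): modular inverse
            return x % M

        print(crt([2, 3], [5, 7]))  # 17, since 17 % 5 == 2 and 17 % 7 == 3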

  5. Ego depletion impairs implicit learning.

    PubMed

    Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  6. Ego Depletion Impairs Implicit Learning

    PubMed Central

    Thompson, Kelsey R.; Sanchez, Daniel J.; Wesley, Abigail H.; Reber, Paul J.

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent. PMID:25275517

  7. SU-F-T-140: Assessment of the Proton Boron Fusion Reaction for Practical Radiation Therapy Applications Using MCNP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adam, D; Bednarz, B

    Purpose: The proton boron fusion reaction describes the creation of three alpha particles as the result of the interaction of a proton incident upon a 11B target. Theoretically, the proton boron fusion reaction is a desirable reaction for radiation therapy applications in that, with an appropriate boron delivery agent, it could potentially combine the localized dose delivery that protons exhibit (the Bragg peak) with the local deposition of high-LET alpha particles in cancerous sites. Previous efforts have shown significant dose enhancement using the proton boron fusion reaction; the overarching purpose of this work is to validate previous Monte Carlo results for the proton boron fusion reaction. Methods: The proton boron fusion reaction, 11B(p, 3α), is investigated using MCNP6 to assess its viability for potential use in radiation therapy. Simple simulations of a proton pencil beam incident upon both a water phantom and a water phantom with an axial region containing 100 ppm boron were modeled with MCNP6 in order to determine the extent of the impact boron had upon the calculated energy deposition. Results: The maximum dose increase calculated was 0.026% for the incident 250 MeV proton beam scenario. The MCNP simulations performed demonstrated that the proton boron fusion reaction rate at clinically relevant boron concentrations is too small to have any measurable impact on the absorbed dose. Conclusion: For all MCNP6 simulations conducted, the increase in absorbed dose in a simple water phantom due to the 11B(p, 3α) reaction was found to be inconsequential. In addition, it was determined that there are no good evaluations of the 11B(p, 3α) reaction for use in MCNPX/6, and further work should be conducted on cross-section evaluations in order to definitively evaluate the feasibility of the proton boron fusion reaction for use in radiation therapy applications.

  8. Exposure to nature counteracts aggression after depletion.

    PubMed

    Wang, Yan; She, Yihan; Colarelli, Stephen M; Fang, Yuan; Meng, Hui; Chen, Qiuju; Zhang, Xin; Zhu, Hongwei

    2018-01-01

    Acts of self-control are more likely to fail after previous exertion of self-control, known as the ego depletion effect. Research has shown that depleted participants behave more aggressively than non-depleted participants, especially after being provoked. Although exposure to nature (e.g., a walk in the park) has been predicted to replenish resources common to executive functioning and self-control, the extent to which exposure to nature may counteract the depletion effect on aggression has yet to be determined. The present study investigated the effects of exposure to nature on aggression following depletion. Aggression was measured by the intensity of noise blasts participants delivered to an ostensible opponent in a competition reaction-time task. As predicted, an interaction occurred between depletion and environmental manipulations for provoked aggression. Specifically, depleted participants behaved more aggressively in response to provocation than non-depleted participants in the urban condition. However, provoked aggression did not differ between depleted and non-depleted participants in the natural condition. Moreover, within the depletion condition, participants in the natural condition had lower levels of provoked aggression than participants in the urban condition. This study suggests that a brief period of nature exposure may restore self-control and help depleted people regain control over aggressive urges. © 2017 Wiley Periodicals, Inc.

  9. Valence-dependent influence of serotonin depletion on model-based choice strategy

    PubMed Central

    Worbe, Y; Palminteri, S; Savulich, G; Daw, N D; Fernandez-Egea, E; Robbins, T W; Voon, V

    2016-01-01

    Human decision-making arises from both reflective and reflexive mechanisms, which underpin goal-directed and habitual behavioural control. Computationally, these two systems of behavioural control have been described by different learning algorithms, model-based and model-free learning, respectively. Here, we investigated the effect of diminished serotonin (5-hydroxytryptamine) neurotransmission using dietary tryptophan depletion (TD) in healthy volunteers on the performance of a two-stage decision-making task, which allows discrimination between model-free and model-based behavioural strategies. A novel version of the task was used, which not only examined choice balance for monetary reward but also for punishment (monetary loss). TD impaired goal-directed (model-based) behaviour in the reward condition, but promoted it under punishment. This effect on appetitive and aversive goal-directed behaviour is likely mediated by alteration of the average reward representation produced by TD, which is consistent with previous studies. Overall, the major implication of this study is that serotonin differentially affects goal-directed learning as a function of affective valence. These findings are relevant for a further understanding of psychiatric disorders associated with breakdown of goal-directed behavioural control such as obsessive-compulsive disorders or addictions. PMID:25869808

  10. Comparison of methods for evaluating ground-contact copper preservative depletion

    Treesearch

    Stan Lebow; Steven Halverson

    2008-01-01

    Depletion of the biocide(s) used to treat wood has a major influence on service life and environmental concerns. However, little is known about how the extent of depletion depends on the specific leaching method employed. Wood treated with two types of copper-based preservatives was leached using three different methods: field stakes (American Wood Protection Association (AWPA...

  11. Abundances and Depletions of Neutron-capture Elements in the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Ritchey, A. M.; Federman, S. R.; Lambert, D. L.

    2018-06-01

    We present an extensive analysis of the gas-phase abundances and depletion behaviors of neutron-capture elements in the interstellar medium (ISM). Column densities (or upper limits to the column densities) of Ga II, Ge II, As II, Kr I, Cd II, Sn II, and Pb II are determined for a sample of 69 sight lines with high- and/or medium-resolution archival spectra obtained with the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope. An additional 59 sight lines with column density measurements reported in the literature are included in our analysis. Parameters that characterize the depletion trends of the elements are derived according to the methodology developed by Jenkins. (In an appendix, we present similar depletion results for the light element B.) The depletion patterns exhibited by Ga and Ge comport with expectations based on the depletion results obtained for many other elements. Arsenic exhibits much less depletion than expected, and its abundance in low-depletion sight lines may even be supersolar. We confirm a previous finding by Jenkins that the depletion of Kr increases as the overall depletion level increases from one sight line to another. Cadmium shows no such evidence of increasing depletion. We find a significant amount of scatter in the gas-phase abundances of Sn and Pb. For Sn, at least, the scatter may be evidence of real intrinsic abundance variations due to s-process enrichment combined with inefficient mixing in the ISM.
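    For context, the Jenkins parameterization referred to above models each element's gas-phase depletion as a linear function of a sight-line depletion strength factor F_* (standard form from Jenkins 2009, summarized here rather than quoted from this paper):

        \[
          \left[ X_{\mathrm{gas}}/\mathrm{H} \right] = B_X + A_X \left( F_* - z_X \right),
        \]

    where A_X is the depletion slope of element X, B_X its depletion at F_* = z_X, and z_X a reference value chosen to minimize the covariance of the errors in A_X and B_X.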

  12. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently, the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
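    A hedged sketch of the coding-and-classification step: encode a test sample as a regularized linear combination of meta-samples and assign the class with the smallest reconstruction residual. A simple ridge (l2) coder is used as a stand-in; the actual MRRCC weighting scheme and distribution assumptions are not reproduced here.

        # Regularized coding over meta-samples + minimum class residual.
        import numpy as np

        def classify(test, meta, labels, lam=0.1):
            # meta: genes x k matrix whose columns are meta-samples.
            a = np.linalg.solve(meta.T @ meta + lam * np.eye(meta.shape[1]),
                                meta.T @ test)            # ridge coding coefficients
            best, best_res = None, np.inf
            for c in set(labels):
                mask = np.array([lab == c for lab in labels])
                res = np.linalg.norm(test - meta[:, mask] @ a[mask])  # class residual
                if res < best_res:
                    best, best_res = c, res
            return best

        rng = np.random.default_rng(1)
        meta = rng.normal(size=(50, 6))                   # 6 fake meta-samples, 50 genes
        labels = [0, 0, 0, 1, 1, 1]
        test = meta[:, 1] + 0.01 * rng.normal(size=50)    # near a class-0 meta-sample
        print(classify(test, meta, labels))               # expected: 0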

  13. Evaluation of playground injuries based on ICD, E codes, international classification of external cause of injury codes (ICECI), and abbreviated injury scale coding systems.

    PubMed

    Tan, N C; Ang, A; Heng, D; Chen, J; Wong, H B

    2007-01-01

    This survey aims to describe the epidemiology of playground-related injuries in Singapore based on the ICD-9, AIS/ISS, and PTS scoring systems, and the mechanisms and causes of such injuries according to E codes and ICECI codes. A cross-sectional questionnaire survey examined children (< 16 years old) who sought treatment for, or died of, unintentional injuries in the emergency departments of three hospitals, two primary care centers, and the sole Forensic Medicine Department of Singapore. A data dictionary was compiled using guidelines from the CDC/WHO. The ISS, AIS, PTS, ICD-9, ICECI v1, and E codes were used to describe the details of the injuries. 19,094 childhood injuries were recorded in the database, of which 1,617 were playground injuries (8.5%). The injured children (mean age = 6.8 years, SD 2.9 years) were predominantly male (M:F ratio = 1.71:1). Falls were the most frequent injuries (70.7%) according to ICECI coding. 25.0% of injuries involved radial and ulnar fractures (ICD-9 code). 99.4% of these injuries were minor, with PTS scores of 9-12. Children aged 6-10 years were prone to upper limb injuries (71.1%) based on AIS. The use of international coding systems in injury surveillance facilitated the standardization of description and comparison of playground injuries.

  14. Analysis of protein-coding genetic variation in 60,706 humans.

    PubMed

    Lek, Monkol; Karczewski, Konrad J; Minikel, Eric V; Samocha, Kaitlin E; Banks, Eric; Fennell, Timothy; O'Donnell-Luria, Anne H; Ware, James S; Hill, Andrew J; Cummings, Beryl B; Tukiainen, Taru; Birnbaum, Daniel P; Kosmicki, Jack A; Duncan, Laramie E; Estrada, Karol; Zhao, Fengmei; Zou, James; Pierce-Hoffman, Emma; Berghout, Joanne; Cooper, David N; Deflaux, Nicole; DePristo, Mark; Do, Ron; Flannick, Jason; Fromer, Menachem; Gauthier, Laura; Goldstein, Jackie; Gupta, Namrata; Howrigan, Daniel; Kiezun, Adam; Kurki, Mitja I; Moonshine, Ami Levy; Natarajan, Pradeep; Orozco, Lorena; Peloso, Gina M; Poplin, Ryan; Rivas, Manuel A; Ruano-Rubio, Valentin; Rose, Samuel A; Ruderfer, Douglas M; Shakir, Khalid; Stenson, Peter D; Stevens, Christine; Thomas, Brett P; Tiao, Grace; Tusie-Luna, Maria T; Weisburd, Ben; Won, Hong-Hee; Yu, Dongmei; Altshuler, David M; Ardissino, Diego; Boehnke, Michael; Danesh, John; Donnelly, Stacey; Elosua, Roberto; Florez, Jose C; Gabriel, Stacey B; Getz, Gad; Glatt, Stephen J; Hultman, Christina M; Kathiresan, Sekar; Laakso, Markku; McCarroll, Steven; McCarthy, Mark I; McGovern, Dermot; McPherson, Ruth; Neale, Benjamin M; Palotie, Aarno; Purcell, Shaun M; Saleheen, Danish; Scharf, Jeremiah M; Sklar, Pamela; Sullivan, Patrick F; Tuomilehto, Jaakko; Tsuang, Ming T; Watkins, Hugh C; Wilson, James G; Daly, Mark J; MacArthur, Daniel G

    2016-08-18

    Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation; identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, with 72% of these genes having no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.

  15. The Impact of an Ego Depletion Manipulation on Performance-Based and Self-Report Assessment Measures.

    PubMed

    Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L

    2016-10-01

    We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures.

  16. Detecting the borders between coding and non-coding DNA regions in prokaryotes based on recursive segmentation and nucleotide doublets statistics

    PubMed Central

    2012-01-01

    Background Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequences. However, the accuracy of previous border-finding methods based on entropy segmentation still needs to be improved. Methods In this study, we first applied a new recursive entropic segmentation method to DNA sequences to obtain preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop-codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results Compared with previous segmentation methods, experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions This paper presents a new segmentation method for prokaryotes based on the Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared to the A12_JR method, our method raises the accuracy of finding the borders between protein-coding and non-coding regions in DNA sequences. PMID:23282225
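
    The split criterion can be illustrated with a small sketch; the Jensen-Shannon divergence is used here as a stand-in for the paper's Jensen-Rényi measure, and the 22-symbol recoding of doublets and stop codons is assumed to have been applied upstream:

        import numpy as np
        from collections import Counter

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def dist(seq, alphabet):
            c = Counter(seq)
            return np.array([c[a] for a in alphabet], float) / max(len(seq), 1)

        def best_cut(seq, alphabet):
            """Cut point maximizing the weighted divergence between the
            symbol distributions of the two resulting segments."""
            n, best, best_d = len(seq), None, -1.0
            for i in range(1, n):
                pl, pr = dist(seq[:i], alphabet), dist(seq[i:], alphabet)
                w = i / n
                d = (entropy(w * pl + (1 - w) * pr)
                     - w * entropy(pl) - (1 - w) * entropy(pr))
                if d > best_d:
                    best, best_d = i, d
            return best, best_d

    Recursing on each half while the divergence remains significant yields the preliminary cuts described above.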

  17. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  18. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
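
    The protograph idea itself is compact enough to sketch: a small base matrix is "lifted" by replacing each entry with a Z x Z circulant block. The shift values below are arbitrary placeholders for illustration, not a JPL design:

        import numpy as np

        def expand_protograph(base, Z):
            """Lift a base matrix into a binary parity-check matrix:
            entry -1 -> Z x Z zero block; entry s >= 0 -> identity
            cyclically shifted by s columns."""
            m, n = len(base), len(base[0])
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    s = base[i][j]
                    if s >= 0:
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
            return H

        base = [[0, 1, -1, 2],
                [2, -1, 0, 1]]
        H = expand_protograph(base, Z=4)   # an 8 x 16 parity-check matrix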

  19. Trellis-coded CPM for satellite-based mobile communications

    NASA Technical Reports Server (NTRS)

    Abrishamkar, Farrokh; Biglieri, Ezio

    1988-01-01

    Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals is considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.

  20. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to encode ROIs and non-ROIs at different fidelities. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  1. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to encode ROIs and non-ROIs at different fidelities. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  2. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network for codebook design. During the encoding process, the correlation of the address is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
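
    The core VQ encode/decode loop described above is easy to sketch; a toy k-means codebook trainer stands in for the generalized Lloyd algorithm:

        import numpy as np

        def train_codebook(vectors, k, iters=20, seed=0):
            """Toy k-means codebook training on a (num_vectors, dim) array."""
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
            for _ in range(iters):
                d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                idx = d.argmin(axis=1)             # nearest-codeword assignment
                for c in range(k):
                    if (idx == c).any():
                        codebook[c] = vectors[idx == c].mean(axis=0)
            return codebook

        def vq_encode(vectors, codebook):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)                # indices sent to the channel

        def vq_decode(indices, codebook):
            return codebook[indices]               # table-lookup reconstruction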

  3. Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.

    PubMed

    Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S

    2015-10-14

    The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.

  4. Depletion of mesospheric sodium during extended period of pulsating aurora

    NASA Astrophysics Data System (ADS)

    Takahashi, T.; Hosokawa, K.; Nozawa, S.; Tsuda, T. T.; Ogawa, Y.; Tsutsumi, M.; Hiraki, Y.; Fujiwara, H.; Kawahara, T. D.; Saito, N.; Wada, S.; Kawabata, T.; Hall, C.

    2017-01-01

    We quantitatively evaluated the Na density depletion due to charge transfer reactions between Na atoms and molecular ions produced by high-energy electron precipitation during a pulsating aurora (PsA). An extended period of PsA was captured by an all-sky camera at the European Incoherent Scatter (EISCAT) radar Tromsø site (69.6°N, 19.2°E) during a 2 h interval from 00:00 to 02:00 UT on 25 January 2012. During this period, using the EISCAT very high frequency (VHF) radar, we detected three intervals of intense ionization below 100 km that were probably caused by precipitation of high-energy electrons during the PsA. In these intervals, the sodium lidar at Tromsø observed characteristic depletion of Na density at altitudes between 97 and 100 km. These Na density depletions lasted for 8 min and represented 5-8% of the background Na layer. To examine the cause of this depletion, we modeled the depletion rate based on charge transfer reactions with NO+ and O2+ while changing the R value, defined as the ratio of NO+ to O2+ densities, from 1 to 10. The correlation coefficients between observed and modeled Na density depletion calculated with the typical value R = 3 for time intervals T1, T2, and T3 were 0.66, 0.80, and 0.67, respectively. The observed Na density depletion rates fall within the range of modeled depletion rates calculated with R from 1 to 10. This suggests that the charge transfer reactions triggered by auroral impact ionization at low altitudes are the predominant process responsible for Na density depletion during PsA intervals.
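
    The modeled loss term reduces to first-order decay of Na against the two molecular ions; a sketch with placeholder rate coefficients (the published values are not reproduced here) shows how R enters:

        import numpy as np

        # Placeholder rate coefficients (cm^3 s^-1) for Na + NO+ -> Na+ + NO
        # and Na + O2+ -> Na+ + O2; illustrative values only.
        K_NO, K_O2 = 8.0e-10, 3.0e-10

        def na_remaining(na0, n_ion, R, t):
            """Na density after t seconds, where dNa/dt = -(k_NO*[NO+]
            + k_O2*[O2+]) * Na, n_ion = [NO+] + [O2+] and R = [NO+]/[O2+]."""
            n_o2 = n_ion / (1.0 + R)
            n_no = n_ion - n_o2
            return na0 * np.exp(-(K_NO * n_no + K_O2 * n_o2) * t)

        # fractional depletion over an 8 min ionization interval with R = 3
        print(1.0 - na_remaining(1.0, 1.0e4, R=3.0, t=480.0))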

  5. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  6. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

    A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes, which can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  7. Induced nanoparticle aggregation for short nucleic acid quantification by depletion isotachophoresis.

    PubMed

    Marczak, Steven; Senapati, Satyajyoti; Slouka, Zdenek; Chang, Hsueh-Chia

    2016-12-15

    A rapid (<20 min) gel-membrane biochip platform for the detection and quantification of short nucleic acids is presented, based on a sandwich assay with probe-functionalized gold nanoparticles and their separation into concentrated bands by depletion-generated gel isotachophoresis. The platform sequentially exploits the enrichment and depletion phenomena of an ion-selective cation-exchange membrane created under an applied electric field. Enrichment is used to concentrate the nanoparticles and targets at a localized position at the gel-membrane interface for rapid hybridization. The depletion generates an isotachophoretic zone without the need for different conductivity buffers, and is used to separate linked nanoparticles from isolated ones in the gel medium and then by field-enhanced aggregation of only the linked particles at the depletion front. The selective field-induced aggregation of the linked nanoparticles during the subsequent depletion step produces two lateral-flow-like bands within 1 cm for easy visualization and quantification, as the aggregates have negligible electrophoretic mobility in the gel and the isolated nanoparticles are isotachophoretically packed against the migrating depletion front. The detection limit for 69-base single-stranded DNA targets is 10 pM (about 10 million copies for our sample volume), with high selectivity against nontargets and a three-decade linear range for quantification. The selectivity and signal intensity are maintained in heterogeneous mixtures where the nontargets outnumber the targets 10,000 to 1. The selective field-induced aggregation of DNA-linked nanoparticles at the ion depletion front is attributed to their trailing position at the isotachophoretic front with a large field gradient.

  8. LSB-Based Steganography Using Reflected Gray Code

    NASA Astrophysics Data System (ADS)

    Chen, Chang-Chu; Chang, Chin-Chen

    Steganography aims to hide secret data in an innocuous cover-medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods used to embed secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which can be applied to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always equal to the secret bits, and experiments show that the differences are up to almost 50%. According to mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
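
    The transforming rule can be made concrete in a few lines. This is a sketch of the Gray-coded LSB idea as described above, not necessarily the authors' exact procedure:

        def to_gray(v):                    # reflected binary -> Gray code
            return v ^ (v >> 1)

        def embed_bit(pixel, bit):
            """Keep the pixel if its Gray-coded LSB already equals the secret
            bit; otherwise flip the pixel's LSB (which flips the Gray LSB)."""
            return pixel if (to_gray(pixel) & 1) == bit else pixel ^ 1

        def extract_bit(pixel):
            return to_gray(pixel) & 1

        stego = embed_bit(154, 1)          # embed one secret bit
        assert extract_bit(stego) == 1 and abs(stego - 154) <= 1
        # note: the stego pixel's plain LSB is 0, not the secret bit 1,
        # illustrating why stego LSBs need not equal the secret bits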

  9. Benchmark study for charge deposition by high energy electrons in thick slabs

    NASA Technical Reports Server (NTRS)

    Jun, I.

    2002-01-01

    The charge deposition profiles created when high-energy (1, 10, and 100 MeV) electrons impinge on a thick slab of elemental aluminum, copper, and tungsten are presented in this paper. The charge deposition profiles were computed using existing representative Monte Carlo codes: TIGER3.0 (the 1D module of ITS3.0) and MCNP version 4B. The results showed that TIGER3.0 and MCNP4B agree very well (within 20% of each other) over the majority of the problem geometry. The TIGER results were considered to be accurate based on previous studies. Thus, it was demonstrated that MCNP, with its powerful geometry capability and flexible source and tally options, could be used in calculations of electron charging in high-energy electron-rich space radiation environments.

  10. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies

    PubMed Central

    Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.

    2016-01-01

    Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
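
    The scoring step lends itself to a small sketch; the weights below are placeholders, not the empirical weights fitted to the 14,983 expert-coded jobs:

        import math

        W0, W_TITLE, W_TASK, W_IND = -2.0, 3.0, 1.5, 0.8   # illustrative only

        def soc_score(title_sim, task_sim, industry_prev):
            """Logistic combination of the three classifier scores for one
            candidate SOC code."""
            z = W0 + W_TITLE * title_sim + W_TASK * task_sim + W_IND * industry_prev
            return 1.0 / (1.0 + math.exp(-z))

        def assign_soc(candidates):
            """candidates: {soc_code: (title_sim, task_sim, industry_prev)}.
            Returns the best code and its score, so low-scoring assignments
            can be flagged for manual review."""
            scored = {c: soc_score(*f) for c, f in candidates.items()}
            best = max(scored, key=scored.get)
            return best, scored[best]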

  11. Comparative evaluation of rRNA depletion procedures for the improved analysis of bacterial biofilm and mixed pathogen culture transcriptomes

    PubMed Central

    Petrova, Olga E.; Garcia-Alcalde, Fernando; Zampaloni, Claudia; Sauer, Karin

    2017-01-01

    Global transcriptomic analysis via RNA-seq is often hampered by the high abundance of ribosomal (r)RNA in bacterial cells. To remove rRNA and enrich coding sequences, subtractive hybridization procedures have become the approach of choice prior to RNA-seq, with their efficiency varying in a manner dependent on sample type and composition. Yet, despite an increasing number of RNA-seq studies, comparative evaluation of bacterial rRNA depletion methods has remained limited. Moreover, no such study has utilized RNA derived from bacterial biofilms, which have potentially higher rRNA:mRNA ratios and higher rRNA carryover during RNA-seq analysis. Presently, we evaluated the efficiency of three subtractive hybridization-based kits in depleting rRNA from samples derived from biofilm, as well as planktonic cells of the opportunistic human pathogen Pseudomonas aeruginosa. Our results indicated different rRNA removal efficiency for the three procedures, with the Ribo-Zero kit yielding the highest degree of rRNA depletion, which translated into enhanced enrichment of non-rRNA transcripts and increased depth of RNA-seq coverage. The results indicated that, in addition to improving RNA-seq sensitivity, efficient rRNA removal enhanced detection of low abundance transcripts via qPCR. Finally, we demonstrate that the Ribo-Zero kit also exhibited the highest efficiency when P. aeruginosa/Staphylococcus aureus co-culture RNA samples were tested. PMID:28117413

  12. Ozone Depletion Caused by Rocket Engine Emissions: A Fundamental Limit on the Scale and Viability of Space-Based Geoengineering Schemes

    NASA Astrophysics Data System (ADS)

    Ross, M. N.; Toohey, D.

    2008-12-01

    Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~ one kiloton of payloads is launched into earth orbit annually by the global space industry. Stratospheric ozone depletion from present-day launches is a small fraction of the ~ 4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order-of-magnitude increases in the amount of material launched into earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~ decades) climate control and the loss of short-term (~ years) deep ozone loss. (3) The trade can be properly evaluated only if our

  13. Effects of developer depletion on image quality of Kodak Insight and Ektaspeed Plus films.

    PubMed

    Casanova, M S; Casanova, M L S; Haiter-Neto, F

    2004-03-01

    To evaluate the effect of processing solution depletion on the image quality of F-speed dental X-ray film (Insight), compared with Ektaspeed Plus. The films were exposed with a phantom and developed under manual and automatic conditions, in fresh and progressively depleted solutions. The comparison was based on densitometric analysis and subjective appraisal. The processing solution depletion behaved differently depending on whether the manual or automatic technique was used. The films were distinctly affected by depleted processing solutions. Developer depletion was faster under automatic than manual conditions. Insight film was more resistant than Ektaspeed Plus to the effects of processing solution depletion. In the present study there was agreement between the objective and subjective appraisals.

  14. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  15. Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl

    2016-10-01

    Helicon discharges provide an attractive radio-frequency-driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron impact ionization of neutrals, charge exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as the Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by the NSF CAREER Award PHY-1455210.

  16. Single neuron firing properties impact correlation-based population coding

    PubMed Central

    Hong, Sungho; Ratté, Stéphanie; Prescott, Steven A.; De Schutter, Erik

    2012-01-01

    Correlated spiking has been widely observed but its impact on neural coding remains controversial. Correlation arising from co-modulation of rates across neurons has been shown to vary with the firing rates of individual neurons. This translates into rate and correlation being equivalently tuned to the stimulus; under those conditions, correlated spiking does not provide information beyond that already available from individual neuron firing rates. Such correlations are irrelevant and can reduce coding efficiency by introducing redundancy. Using simulations and experiments in rat hippocampal neurons, we show here that pairs of neurons receiving correlated input also exhibit correlations arising from precise spike-time synchronization. Contrary to rate co-modulation, spike-time synchronization is unaffected by firing rate, thus enabling synchrony- and rate-based coding to operate independently. The type of output correlation depends on whether intrinsic neuron properties promote integration or coincidence detection: “ideal” integrators (with spike generation sensitive to stimulus mean) exhibit rate co-modulation whereas “ideal” coincidence detectors (with spike generation sensitive to stimulus variance) exhibit precise spike-time synchronization. Pyramidal neurons are sensitive to both stimulus mean and variance, and thus exhibit both types of output correlation proportioned according to which operating mode is dominant. Our results explain how different types of correlations arise based on how individual neurons generate spikes, and why spike-time synchronization and rate co-modulation can encode different stimulus properties. Our results also highlight the importance of neuronal properties for population-level coding insofar as neural networks can employ different coding schemes depending on the dominant operating mode of their constituent neurons. PMID:22279226

  17. Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone.

    PubMed

    Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling

    2017-03-01

    K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances to detector technology, an XRF device utilizing the advantages of both systems should be feasible. In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and detector operable at room temperature. We first validated the use of Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 µg g−1 bone mineral using a cadmium zinc telluride detector. In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF for metal exposure assessment.

  18. Theoretical modeling of a portable x-ray tube based KXRF system to measure lead in bone

    PubMed Central

    Specht, Aaron J; Weisskopf, Marc G; Nie, Linda Huiling

    2017-01-01

    Objective K-shell x-ray fluorescence (KXRF) techniques have been used to identify health effects resulting from exposure to metals for decades, but the equipment is bulky and requires significant maintenance and licensing procedures. A portable x-ray fluorescence (XRF) device was developed to overcome these disadvantages, but introduced a measurement dependency on soft tissue thickness. With recent advances to detector technology, an XRF device utilizing the advantages of both systems should be feasible. Approach In this study, we used Monte Carlo simulations to test the feasibility of an XRF device with a high-energy x-ray tube and detector operable at room temperature. Main Results We first validated the use of Monte Carlo N-particle transport code (MCNP) for x-ray tube simulations, and found good agreement between experimental and simulated results. Then, we optimized x-ray tube settings and found the detection limit of the high-energy x-ray tube based XRF device for bone lead measurements to be 6.91 μg g−1 bone mineral using a cadmium zinc telluride detector. Significance In conclusion, this study validated the use of MCNP in simulations of x-ray tube physics and XRF applications, and demonstrated the feasibility of a high-energy x-ray tube based XRF for metal exposure assessment. PMID:28169835

  19. Computer-based coding of free-text job descriptions to efficiently identify occupations in epidemiological studies.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C

    2016-06-01

    Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.

  20. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon sub-pixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.
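
    The invertibility-despite-truncation property of lifting, which the second approach relies on, can be seen in a one-dimensional integer Haar sketch (a simplification of the sub-pixel MCTF case):

        def haar_lift_forward(x0, x1):
            """Integer Haar transform via lifting; exactly invertible even
            though the shift (integer division) truncates."""
            d = x1 - x0            # predict step: detail coefficient
            a = x0 + (d >> 1)      # update step: truncated approximation
            return a, d

        def haar_lift_inverse(a, d):
            x0 = a - (d >> 1)      # the same truncated value is recomputed,
            x1 = d + x0            # so the rounding cancels exactly
            return x0, x1

        assert haar_lift_inverse(*haar_lift_forward(37, 90)) == (37, 90)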

  1. Finger Vein Recognition Based on Local Directional Code

    PubMed Central

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods owing to their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade recognition accuracy, binary pattern based methods have been proposed, such as the Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called the Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194

  2. Finger vein recognition based on local directional code.

    PubMed

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-11-05

    Finger vein patterns are considered one of the most promising biometric authentication methods owing to their security and convenience. Most currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade recognition accuracy, binary pattern based methods have been proposed, such as the Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called the Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP.
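
    A minimal sketch of the directional-coding step, assuming simple central differences for the gradient; the full LDC pipeline (preprocessing, matching) is not reproduced:

        import numpy as np

        def local_directional_code(img):
            """Quantize each interior pixel's gradient orientation into one
            of eight directional codes (an octal digit per pixel)."""
            img = img.astype(float)
            gx = img[1:-1, 2:] - img[1:-1, :-2]    # horizontal difference
            gy = img[2:, 1:-1] - img[:-2, 1:-1]    # vertical difference
            theta = np.arctan2(gy, gx)             # orientation in (-pi, pi]
            return ((theta + np.pi) // (np.pi / 4)).astype(int) % 8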

  3. Triboelectric-Based Transparent Secret Code.

    PubMed

    Yuan, Zuqing; Du, Xinyu; Li, Nianwu; Yin, Yingying; Cao, Ran; Zhang, Xiuling; Zhao, Shuyu; Niu, Huidan; Jiang, Tao; Xu, Weihua; Wang, Zhong Lin; Li, Congju

    2018-04-01

    Private and security information for personal identification requires an encrypted tool to extend communication channels between human and machine through a convenient and secure method. Here, a triboelectric-based transparent secret code (TSC) that enables self-powered sensing and information identification simultaneously in a rapid process is reported. The transparent and hydrophobic TSC can conform to any cambered surface due to its high flexibility, which greatly extends the application scenarios. Independent of a power source, the TSC can induce obvious electric signals by surface contact alone. The TSC is velocity-dependent and capable of achieving a peak voltage of ≈4 V at a resistance load of 10 MΩ and a sliding speed of 0.1 m s−1 for a 2 mm × 20 mm rectangular stripe. The fabricated TSC can maintain its performance after about 5000 cycles of reciprocating rolling. The applications of the TSC as a self-powered code device are demonstrated, and the ordered signals can be recognized through the heights of the electric peaks, which can be further transformed into specific information by the processing program. The designed TSC has great potential in personal identification, commodity circulation, valuables management, and security defense applications.

  4. Depletion and capture: revisiting "the source of water derived from wells".

    PubMed

    Konikow, L F; Leake, S A

    2014-09-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems.
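
    The bookkeeping behind the depletion and capture fractions is simple enough to sketch (variable names are illustrative):

        def depletion_capture_fractions(cum_pumpage, storage_change):
            """Partition cumulative pumpage into the fraction supplied by
            storage depletion and the fraction supplied by capture
            (increased recharge plus decreased discharge)."""
            depletion = -storage_change    # storage loss is a negative change
            f_dep = depletion / cum_pumpage
            return f_dep, 1.0 - f_dep

        # e.g. 100 km^3 pumped while storage fell by 15 km^3
        f_dep, f_cap = depletion_capture_fractions(100.0, -15.0)   # -> (0.15, 0.85)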

  5. Comparative evaluation of seven commercial products for human serum enrichment/depletion by shotgun proteomics.

    PubMed

    Pisanu, Salvatore; Biosa, Grazia; Carcangiu, Laura; Uzzau, Sergio; Pagnozzi, Daniela

    2018-08-01

    Seven commercial products for human serum depletion/enrichment were tested and compared by shotgun proteomics. The methods are based on four different capturing agents: antibodies (Qproteome Albumin/IgG Depletion kit, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, Top 2 Abundant Protein Depletion Spin Columns, and Top 12 Abundant Protein Depletion Spin Columns), specific ligands (Albumin/IgG Removal), a mixture of antibodies and ligands (Albumin and IgG Depletion SpinTrap), and combinatorial peptide ligand libraries (ProteoMiner beads). All procedures, to a greater or lesser extent, allowed an increase in identified proteins. ProteoMiner beads provided the highest number of proteins; Albumin and IgG Depletion SpinTrap and ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit were the most efficient at albumin removal; Top 2 and Top 12 Abundant Protein Depletion Spin Columns decreased the overall immunoglobulin levels more than the other procedures, whereas gamma immunoglobulins specifically were mostly removed by Albumin and IgG Depletion SpinTrap, ProteoPrep Immunoaffinity Albumin and IgG Depletion Kit, and Top 2 Abundant Protein Depletion Spin Columns. Albumin/IgG Removal, a resin bound to a mixture of protein A and Cibacron Blue, behaved less efficiently than the other products.

  6. Modeling and Depletion Simulations for a High Flux Isotope Reactor Cycle with a Representative Experiment Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandler, David; Betzler, Ben; Hirtz, Gregory John

    2016-09-01

    The purpose of this report is to document a high-fidelity VESTA/MCNP High Flux Isotope Reactor (HFIR) core model that features a new, representative experiment loading. This model, which represents the current, high-enriched uranium fuel core, will serve as a reference for low-enriched uranium conversion studies, safety-basis calculations, and other research activities. A new experiment loading model was developed to better represent current, typical experiment loadings, in comparison to the experiment loading included in the model for Cycle 400 (operated in 2004). The new experiment loading model for the flux trap target region includes full-length 252Cf production targets, 75Se production capsules, 63Ni production capsules, a 188W production capsule, and various materials irradiation targets. Fully loaded 238Pu production targets are modeled in eleven vertical experiment facilities located in the beryllium reflector. Other changes compared to the Cycle 400 model are the high-fidelity modeling of the fuel element side plates and the material composition of the control elements. Results obtained from the depletion simulations with the new model are presented, with a focus on the time-dependent isotopic composition of irradiated fuel and single-cycle isotope production metrics.

  7. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based, simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.
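
    The kinematic sum at the heart of such a code is compact; a monochromatic CPU sketch is shown below (GAPD's GPU decompositions and polychromatic weighting are omitted):

        import numpy as np

        def kinematic_intensity(positions, q_vectors):
            """Direct kinematic sum I(q) = |sum_j exp(i q . r_j)|^2.
            positions: (N, 3) atomic coordinates; q_vectors: (M, 3) scattering
            vectors. A polychromatic pattern would be the spectrum-weighted
            sum of such monochromatic intensities."""
            phase = positions @ q_vectors.T        # (N, M) array of q . r
            amp = np.exp(1j * phase).sum(axis=0)   # coherent sum over atoms
            return (amp * np.conj(amp)).real       # intensity at each q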

  8. Variable disparity-motion estimation based fast three-view video coding

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  9. Norm-based coding of facial identity in adults with autism spectrum disorder.

    PubMed

    Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D

    2015-03-01

    It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than the near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD.

  10. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal, J.M.; Grouiller, J.P.; Launay, A.

    2006-07-01

    CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980s, the first use of this code dealt with nuclear measurement at the laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to the characterization of all entrance materials and, via tracers, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products and 100 activation products, and it can characterize both the fuel and the structural material of the fuel. CESAR can also perform depletion calculations from 3 months to 1 million years of cooling time. Between 2003 and 2005, the fifth version of the code was developed. The modifications related to the harmonization of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains its extensive use at the La Hague reprocessing plant and in prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a graphical user interface, which is very user-friendly.
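
    The core task of any such depletion code is advancing a Bateman-type system dN/dt = M N over a cooling step; a toy three-nuclide sketch follows (illustrative decay constants, not CESAR data):

        import numpy as np
        from scipy.linalg import expm

        # decay constants (1/s) for a toy chain A -> B -> C (stable)
        lam_a, lam_b = 1.0e-6, 5.0e-7
        M = np.array([[-lam_a,    0.0, 0.0],
                      [ lam_a, -lam_b, 0.0],
                      [   0.0,  lam_b, 0.0]])

        def deplete(n0, t_seconds):
            """Advance the nuclide vector by the matrix exponential."""
            return expm(M * t_seconds) @ n0

        n0 = np.array([1.0e20, 0.0, 0.0])
        print(deplete(n0, 3.15e7))    # composition after ~1 year of cooling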

  11. Spatial coding-based approach for partitioning big spatial data in Hadoop

    NASA Astrophysics Data System (ADS)

    Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai

    2017-09-01

    Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, due to skew distribution of spatial data and varying volume of spatial vector objects, it leads to a significant challenge to ensure both optimal performance of spatial operation and data balance in the cluster. To tackle this problem, we proposed a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach, firstly, compressed the whole big spatial data based on spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count and other information. SIS was then employed to build spatial partitioning matrix, which was used to spilt all spatial objects into different partitions in the cluster finally. Based on our approach, the neighbouring spatial objects can be partitioned into the same block. At the same time, it also can minimize the data skew in Hadoop distributed file system (HDFS). The presented approach with a case study in this paper is compared against random sampling based partitioning, with three measurement standards, namely, the spatial index quality, data skew in HDFS, and range query performance. The experimental results show that our method based on spatial coding technique can improve the query performance of big spatial data, as well as the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it is also able to support efficiently any other distributed big spatial data systems.
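
    As a rough illustration of the coding-based grouping idea (not the authors' actual SIS construction, which the abstract does not detail), the Python sketch below orders objects by a Z-order (Morton) code so that spatial neighbours fall into the same partition, then cuts the ordered list into byte-balanced blocks; all names and the balancing rule are assumptions made for illustration.

      from typing import List, Tuple

      def morton_code(x: int, y: int, bits: int = 16) -> int:
          """Interleave the bits of grid coordinates (x, y) into one code."""
          code = 0
          for i in range(bits):
              code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
          return code

      def partition(objects: List[Tuple[float, float, int]], n_parts: int,
                    cell: float = 1.0) -> List[List[Tuple[float, float, int]]]:
          """objects = (x, y, size_in_bytes); returns n_parts balanced groups."""
          keyed = sorted(objects, key=lambda o: morton_code(int(o[0] / cell),
                                                            int(o[1] / cell)))
          total = sum(o[2] for o in objects)
          target = total / n_parts                    # balance by byte volume
          parts, current, acc = [], [], 0
          for obj in keyed:
              current.append(obj)
              acc += obj[2]
              if acc >= target and len(parts) < n_parts - 1:
                  parts.append(current)
                  current, acc = [], 0
          parts.append(current)
          return parts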

  12. LITHIUM DEPLETION IS A STRONG TEST OF CORE-ENVELOPE RECOUPLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somers, Garrett; Pinsonneault, Marc H., E-mail: somers@astronomy.ohio-state.edu

    2016-09-20

    Rotational mixing is a prime candidate for explaining the gradual depletion of lithium from the photospheres of cool stars during the main sequence. However, previous mixing calculations have relied primarily on treatments of angular momentum transport in stellar interiors that are incompatible with solar and stellar data, in the sense that they overestimate the internal differential rotation. Instead, recent studies suggest that stars are strongly differentially rotating at young ages but approach solid-body rotation during their lifetimes. We modify our rotating stellar evolution code to include an additional source of angular momentum transport, a necessary ingredient for explaining the open cluster rotation pattern, and examine the consequences for mixing. We confirm that core-envelope recoupling with a ∼20 Myr timescale is required to explain the evolution of the mean rotation pattern along the main sequence, and demonstrate that it also provides a more accurate description of the Li depletion pattern seen in open clusters. Recoupling produces a characteristic pattern of efficient mixing at early ages and little mixing at late ages, thus predicting a flattening of Li depletion at a few Gyr, in agreement with the observed late-time evolution. Using Li abundances, we argue that the timescale for core-envelope recoupling during the main sequence decreases sharply with increasing mass. We discuss the implications of this finding for stellar physics, including the viability of gravity waves and magnetic fields as agents of angular momentum transport. We also raise the possibility of intrinsic differences in initial conditions in star clusters, using M67 as an example.

  13. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  14. Nine-year-old children use norm-based coding to visually represent facial expression.

    PubMed

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  15. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems such as controlled-source EM (CSEM). The modular structure of the code provides a framework useful for implementing new applications and inversion algorithms. The use of MATLAB and its libraries makes the code compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.

  16. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    NASA Astrophysics Data System (ADS)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes such as MCNP or TRIPOLI are recognized as reference codes for a large range of radiation transport problems. However, the inherent drawbacks of these codes (laborious input file creation and long computation times) contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to other mature engineering disciplines such as finite element analysis (e.g., structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to massive parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  17. nRC: non-coding RNA Classifier based on structural features.

    PubMed

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms in regulative processes, together with the development of high-throughput technologies, has required the help of bioinformatics tools to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and obtained classification scores using the most common statistical measures; in particular, we reach accuracy and sensitivity scores of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.
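
    The abstract names the key ingredients (secondary-structure feature vectors fed to a convolutional network with 13 output classes) but not the architecture. A minimal PyTorch sketch, with an assumed feature length of 500 and an arbitrary two-layer architecture, could look like this:

      import torch
      import torch.nn as nn

      class NcRnaCnn(nn.Module):
          """Toy 1-D CNN over precomputed secondary-structure features."""
          def __init__(self, feature_len: int = 500, n_classes: int = 13):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                  nn.MaxPool1d(2),
                  nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.MaxPool1d(2),
              )
              self.classifier = nn.Linear(32 * (feature_len // 4), n_classes)

          def forward(self, x):                 # x: (batch, 1, feature_len)
              h = self.features(x)
              return self.classifier(h.flatten(1))

      model = NcRnaCnn()
      logits = model(torch.randn(8, 1, 500))    # 8 dummy feature vectors
      print(logits.shape)                       # torch.Size([8, 13])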

  18. DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OGDEN DM; KIRCH NW

    2007-10-31

    This document develops a supernatant hydroxide ion depletion model based on mechanistic principles; the mechanistic model for carbon dioxide absorption is derived in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor-space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.

  19. Recalculation with SEACAB of the activation by spent fuel neutrons and residual dose originated in the racks replaced at Cofrentes NPP

    NASA Astrophysics Data System (ADS)

    Ortego, Pedro; Rodriguez, Alain; Töre, Candan; Compadre, José Luis de Diego; Quesada, Baltasar Rodriguez; Moreno, Raul Orive

    2017-09-01

    In order to increase the storage capacity of the East Spent Fuel Pool at the Cofrentes NPP, located in Valencia province, Spain, the existing storage stainless steel racks were replaced by a new design of compact borated stainless steel racks allowing a 65% increase in fuel storage capacity. Calculation of the activation of the used racks was successfully performed with the MCNP4B code. Additionally, the dose rate in contact with a row of racks in standing position and behind a wall of shielding material was calculated, also with the MCNP4B code. These results allowed a preliminary definition of the bunker required for storage of the racks. Recently, the activity in the racks has been recalculated with the SEACAB system, which combines the mesh tally of the MCNP codes with the activation code ACAB, applying the rigorous two-step (R2S) method developed in-house, benchmarked against FNG irradiation experiments and usually applied in fusion calculations for the ITER project.

  20. Automated Source-Code-Based Testing of Object-Oriented Software

    NASA Astrophysics Data System (ADS)

    Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten

    2014-08-01

    With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing, and specifically automated testing, arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source-code-based testing for C++.

  1. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
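
    The allocation idea can be made concrete with a small sketch. Assuming i.i.d. symbol erasures and an RS erasure code in which a layer decodes whenever its erasures do not exceed its parity count, the hypothetical Python search below picks the parity split across progressive layers that maximizes expected quality (the gains, budget and loss rate are made up for illustration; this is not the authors' optimization):

      from itertools import product
      from math import comb

      def layer_ok_prob(n: int, parity: int, eps: float) -> float:
          """P(at most `parity` of n symbols erased, i.i.d. loss prob eps)."""
          return sum(comb(n, j) * eps**j * (1 - eps)**(n - j)
                     for j in range(parity + 1))

      def best_allocation(n: int, gains, budget: int, eps: float):
          """gains[l] = quality gained when layers 0..l all decode."""
          best, best_val = None, -1.0
          for alloc in product(range(n), repeat=len(gains)):
              if sum(alloc) != budget:
                  continue
              val, p_prefix = 0.0, 1.0
              for gain, parity in zip(gains, alloc):
                  p_prefix *= layer_ok_prob(n, parity, eps)  # progressive
                  val += gain * p_prefix
              if val > best_val:
                  best, best_val = alloc, val
          return best, best_val

      print(best_allocation(n=8, gains=[10.0, 5.0, 2.0], budget=6, eps=0.1))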

  2. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, an important issue is evaluating the photon and electron response functions of two commercially available ionization chambers, denoted TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination, and many treatment planning systems, are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among the codes and against carefully measured values, for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Energy-dependent response functions of the two chambers were also calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons, using both an optimal simple spherical IC model and a detailed one. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) BNCT clinical-trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes for photon energies below 0.1 MeV and similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams. But for the Mg(Ar) chamber, the deviations reached 7

  3. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the shape domain of panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of pixel-vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified on three types of HS dataset with different wavelength ranges. The proposed method outperforms existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
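
    As a simplified stand-in for the reflectance prediction step (the paper uses Gaussian-mixture modelling; a plain least-squares predictor is assumed here purely for illustration), one can predict a band from its predecessors and code only the residual:

      import numpy as np

      def predict_band(prev1: np.ndarray, prev2: np.ndarray,
                       actual: np.ndarray) -> np.ndarray:
          """Fit actual ~ a*prev1 + b*prev2 + c in the least-squares sense."""
          A = np.stack([prev1.ravel(), prev2.ravel(),
                        np.ones(prev1.size)], axis=1)
          coef, *_ = np.linalg.lstsq(A, actual.ravel(), rcond=None)
          return (A @ coef).reshape(actual.shape)

      bands = np.random.rand(10, 64, 64)          # dummy 10-band HS cube
      pred = predict_band(bands[1], bands[0], bands[2])
      residual = bands[2] - pred                  # residual goes to the coder
      print(float(np.abs(residual).mean()))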

  4. Depletion and capture: revisiting “The source of water derived from wells"

    USGS Publications Warehouse

    Konikow, Leonard F.; Leake, Stanley A.

    2014-01-01

    A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems.

  5. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
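
    The flavour of the allocation problem can be sketched with made-up rate-distortion tables: for each scalable layer, choose a (source rate, channel rate) pair so that the total rate fits the budget and the summed distortion is minimal. Everything below is hypothetical and uses exhaustive search rather than the authors' algorithm:

      from itertools import product

      # (source_kbps, channel_kbps): expected distortion contribution
      options = [
          {(32, 16): 60.0, (48, 16): 45.0, (48, 32): 40.0},   # base layer
          {(16, 8): 25.0, (32, 8): 18.0, (32, 16): 15.0},     # enhancement
      ]
      BUDGET = 100  # kbps, total source + channel rate

      best = min(
          (combo for combo in product(*[o.items() for o in options])
           if sum(rs + rc for (rs, rc), _ in combo) <= BUDGET),
          key=lambda combo: sum(d for _, d in combo),
      )
      for (rs, rc), d in best:
          print(f"source={rs} kbps, channel={rc} kbps, distortion={d}")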

  6. A Mechanism to Avoid Collusion Attacks Based on Code Passing in Mobile Agent Systems

    NASA Astrophysics Data System (ADS)

    Jaimez, Marc; Esparza, Oscar; Muñoz, Jose L.; Alins-Delgado, Juan J.; Mata-Díaz, Jorge

    Mobile agents are software entities consisting of code, data, state and itinerary that can migrate autonomously from host to host executing their code. Despite its benefits, security issues strongly restrict the use of code mobility. The protection of mobile agents against the attacks of malicious hosts is considered the most difficult security problem to solve in mobile agent systems. In particular, collusion attacks have been barely studied in the literature. This paper presents a mechanism that avoids collusion attacks based on code passing. Our proposal is based on a Multi-Code agent, which contains a different variant of the code for each host. A Trusted Third Party is responsible for providing the information to extract its own variant to the hosts, and for taking trusted timestamps that will be used to verify time coherence.

  7. Biosensors and Bio-Bar Code Assays Based on Biofunctionalized Magnetic Microbeads

    PubMed Central

    Jaffrezic-Renault, Nicole; Martelet, Claude; Chevolot, Yann; Cloarec, Jean-Pierre

    2007-01-01

    This review paper reports the applications of magnetic microbeads in biosensors and bio-bar code assays. Affinity biosensors are presented through different types of transducing systems: electrochemical, piezoelectric or magnetic, applied to immunodetection and genodetection. Enzymatic biosensors are based on biofunctionalization, through magnetic microbeads, of a transducer, most often amperometric, potentiometric or conductimetric. Bio-bar code assays rely on a sandwich structure based on the specific biological interaction of a magnetic microbead and a nanoparticle with a defined biological molecule. The magnetic particle allows separation of the reacted target molecules from unreacted ones, while the nanoparticles serve to amplify and detect the target molecule. Bio-bar code assays allow the detection of biological molecules at very low concentrations, with sensitivity similar to PCR.

  8. WE-DE-201-05: Evaluation of a Windowless Extrapolation Chamber Design and Monte Carlo Based Corrections for the Calibration of Ophthalmic Applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J; Culberson, W; DeWerd, L

    Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a {sup 90}Sr/{sup 90}Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experiment results suggest that an entrance window is not needed in order for an

  9. C code generation from Petri-net-based logic controller specification

    NASA Astrophysics Data System (ADS)

    Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei

    2017-08-01

    The article focuses on programming of logic controllers. It is important that a programming code of a logic controller is executed flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal rules of transformation ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
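
    A toy illustration of rule-to-code generation (not the article's actual tool or rule format, both of which are assumed here): each transition rule of a control-interpreted Petri net becomes a guarded block of C emitted by a small Python generator.

      # (transition, precondition places, input signal, postcondition places)
      rules = [
          ("t1", ["p_idle"], "START", ["p_running"]),
          ("t2", ["p_running"], "STOP", ["p_idle"]),
      ]

      def emit_c(rules) -> str:
          """Emit one C step function that fires every enabled transition."""
          lines = ["void controller_step(void) {"]
          for name, pre, inp, post in rules:
              cond = " && ".join(pre + [f"in_{inp}"])
              lines.append(f"    /* transition {name} */")
              lines.append(f"    if ({cond}) {{")
              for p in pre:
                  lines.append(f"        {p} = 0;")   # consume tokens
              for p in post:
                  lines.append(f"        {p} = 1;")   # produce tokens
              lines.append("    }")
          lines.append("}")
          return "\n".join(lines)

      print(emit_c(rules))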

  10. Standardization of formulations for the acute amino acid depletion and loading tests.

    PubMed

    Badawy, Abdulla A-B; Dougherty, Donald M

    2015-04-01

    The acute tryptophan depletion and loading tests and the acute tyrosine plus phenylalanine depletion test are powerful tools for studying the roles of cerebral monoamines in behaviour and in symptoms related to various disorders. The tests use either amino acid mixtures or proteins. Current amino acid mixtures lack specificity in humans, but not in rodents, because of the faster disposal of branched-chain amino acids (BCAAs) by the latter. The high BCAA content (30-60%) is responsible for the poor specificity in humans, and we recommend, at a 50 g dose, a control formulation with a lowered BCAA content (18%) as a common control for the above tests. With protein-based formulations, α-lactalbumin is specific for acute tryptophan loading, whereas gelatine is only partially effective for acute tryptophan depletion. We recommend the whey protein fraction glycomacropeptide as an alternative protein. Its BCAA content is ideal for specificity, and the absence of tryptophan, tyrosine and phenylalanine renders it suitable as a template for seven formulations (separate and combined depletion or loading, and a truly balanced control). We invite the research community to participate in standardization of the depletion and loading methodologies by using our recommended amino acid formulation and by developing formulations based on glycomacropeptide. © The Author(s) 2015.

  11. Phonological Coding Abilities: Identification of Impairments Related to Phonologically Based Reading Problems.

    ERIC Educational Resources Information Center

    Swank, Linda K.

    1994-01-01

    Relationships between phonological coding abilities and reading outcomes have implications for differential diagnosis of language-based reading problems. The theoretical construct of specific phonological coding ability is explained, including phonological encoding, phonological awareness and metaphonology, lexical access, working memory, and…

  12. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE PAGES

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
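
    A minimal sketch of emulator-based calibration, using a standard GP emulator (not the paper's FFGP variant) and a toy stand-in for the system code: fit the emulator on a handful of code runs, then run Metropolis-Hastings against the cheap emulator instead of the code. All numbers are illustrative.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      code = lambda theta: 0.3 * theta**2 + theta   # stand-in for the code
      X = np.linspace(0.0, 3.0, 12).reshape(-1, 1)  # design points (code runs)
      gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(X, code(X).ravel())

      obs, sigma = 2.9, 0.1                         # "measured" plant output
      def log_post(theta):
          if not 0.0 <= theta <= 3.0:               # uniform prior support
              return -np.inf
          mu = gp.predict(np.array([[theta]]))[0]   # emulator, not the code
          return -0.5 * ((obs - mu) / sigma) ** 2

      theta, chain = 1.0, []
      for _ in range(5000):                         # Metropolis-Hastings
          prop = theta + 0.2 * rng.standard_normal()
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop
          chain.append(theta)
      print(np.mean(chain[1000:]))                  # calibrated estimate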

  13. Energy information data base: report number codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1979-09-01

    Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)

  14. GRADSPMHD: A parallel MHD code based on the SPH formalism

    NASA Astrophysics Data System (ADS)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code, which we previously added to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1-, 2-, and 3-dimensional standard benchmark tests, and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing-box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture.
    Catalogue identifier: AERP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 620503
    No. of bytes in distributed program, including test data, etc.: 19837671
    Distribution format: tar.gz
    Programming language: FORTRAN 90/MPI
    Computer: HPC cluster
    Operating system: Unix
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI
    RAM: ~30 MB for a

  15. Simulations and observations of plasma depletion, ion composition, and airglow emissions in two auroral ionospheric depletion experiments

    NASA Technical Reports Server (NTRS)

    Yau, A. W.; Whalen, B. A.; Harris, F. R.; Gattinger, R. L.; Pongratz, M. B.

    1985-01-01

    Observations of plasma depletion, ion composition modification, and airglow emissions in the Waterhole experiments are presented. The detailed ion chemistry and airglow emission processes related to the ionospheric hole formation in the experiment are examined, and observations are compared with computer simulation results. The latter indicate that the overall depletion rates in different parts of the depletion region are governed by different parameters.

  16. Prompt Radiation Protection Factors

    DTIC Science & Technology

    2018-02-01

    ...radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) and the evaluation of the protection factors (ratio of dose in the open to...)...by detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences in case of such an event. The Defense Threat

  17. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software is an essential tool for designing and commissioning laser interferometers. This article introduces OSCAR, a MATLAB-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner, it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided of how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
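
    The FFT propagation at the heart of such codes can be sketched in a few lines. The following Python (not OSCAR's actual MATLAB implementation; cavity round-trip iteration and mirror maps are omitted) performs one free-space step with the angular-spectrum method:

      import numpy as np

      def propagate(field, dx, wavelength, distance):
          """Propagate a sampled complex field by `distance` in free space."""
          n = field.shape[0]
          fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies
          FX, FY = np.meshgrid(fx, fx)
          k = 2 * np.pi / wavelength
          kz = np.sqrt(np.maximum(
              k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2, 0.0))
          return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

      # Gaussian beam over a 10 mm window, propagated 100 m at 1064 nm
      n, width = 256, 10e-3
      x = np.linspace(-width/2, width/2, n)
      X, Y = np.meshgrid(x, x)
      beam = np.exp(-(X**2 + Y**2) / (2e-3)**2)
      out = propagate(beam, width/n, 1064e-9, 100.0)
      print(np.abs(out).max())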

  18. Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study

    ERIC Educational Resources Information Center

    Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio

    2014-01-01

    EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…

  19. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards, which were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, BDP can provide a better reference after subtracting the background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio of AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, at the cost of only slightly higher encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
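
    The two ingredients can be sketched as follows, with a simple running-average background model and arbitrary thresholds standing in for the paper's classifier; the labels correspond loosely to blocks that would use BRP, BDP, or ordinary prediction:

      import numpy as np

      class BackgroundModel:
          """Exponential running average of decoded/input frames."""
          def __init__(self, alpha=0.05):
              self.alpha, self.bg = alpha, None

          def update(self, frame):
              f = np.asarray(frame, dtype=np.float64)
              self.bg = f if self.bg is None else \
                  (1 - self.alpha) * self.bg + self.alpha * f
              return self.bg

      def classify_blocks(frame, bg, block=16, t_bg=4.0, t_fg=20.0):
          """Label each block 'background', 'hybrid' or 'foreground'."""
          labels = {}
          for y in range(0, frame.shape[0] - block + 1, block):
              for x in range(0, frame.shape[1] - block + 1, block):
                  diff = np.abs(
                      frame[y:y+block, x:x+block].astype(np.float64)
                      - bg[y:y+block, x:x+block]).mean()
                  labels[(y, x)] = ("background" if diff < t_bg else
                                    "foreground" if diff > t_fg else "hybrid")
          return labels

      bm = BackgroundModel()
      for f in (np.full((64, 64), 100.0) for _ in range(20)):
          bg = bm.update(f)
      frame = np.full((64, 64), 100.0)
      frame[0:16, 0:16] = 200                       # a moving object
      print(classify_blocks(frame, bg)[(0, 0)])     # 'foreground'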

  20. QR code based noise-free optical encryption and decryption of a gray scale image

    NASA Astrophysics Data System (ADS)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  1. Capacitance-Based Dosimetry of Co-60 Radiation using Fully-Depleted Silicon-on-Insulator Devices

    PubMed Central

    Li, Yulong; Porter, Warren M.; Ma, Rui; Reynolds, Margaret A.; Gerbi, Bruce J.; Koester, Steven J.

    2015-01-01

    Capacitance-based sensing with fully-depleted silicon-on-insulator (FDSOI) variable capacitors for Co-60 gamma radiation is investigated. A linear capacitance response is observed for radiation doses up to 64 Gy, with a capacitance change per unit dose as high as 0.24%/Gy. An analytical model is developed to study the operating principles of the varactors, and the maximum sensitivity as a function of frequency is determined. The results show that FDSOI varactor dosimeters have potential for extremely high sensitivity as well as for high-frequency operation in applications such as wireless radiation sensing. PMID:27840451
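
    With the reported sensitivity, and assuming the linear response holds over the range of interest, the dose readout reduces to a one-line inversion (the capacitance values below are hypothetical):

      SENSITIVITY = 0.0024     # fractional capacitance change per Gy (0.24 %/Gy)

      def dose_from_capacitance(c_before_pf: float, c_after_pf: float) -> float:
          """Absorbed dose implied by a fractional capacitance change."""
          return (c_after_pf - c_before_pf) / c_before_pf / SENSITIVITY

      print(dose_from_capacitance(10.00, 10.24))   # ~10 Gy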

  2. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
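
    The purine/pyrimidine binary view is easy to demonstrate: collapsing each base to one bit (purine = 1, pyrimidine = 0) still partitions codons into broad groups of possible amino acids. A minimal illustration:

      PURINES = {"A", "G"}      # double-ring bases; C and T are pyrimidines

      def to_binary(seq: str) -> str:
          """Map each base to 1 (purine) or 0 (pyrimidine)."""
          return "".join("1" if b in PURINES else "0" for b in seq.upper())

      print(to_binary("ATGGCA"))   # '101101'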

  3. Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.

    PubMed

    Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C

    Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance of documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary care center. To measure the effect of this program, we compared billing codes for 1 year before intervention (FY2012) to prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management codes (E&M) as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p < 0.01); for new patients the monthly average E&M level increased from 2.61 to 3.19 (p < 0.01). This study describes a series of educational and workflow interventions, which improved resident coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher complexity E&M coding in a stable patient population based on improved

  4. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
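
    The estimator family described above can be illustrated with a Nadaraya-Watson kernel regression, one standard kernel-based approximation of the MMSE estimate E[y | x]; Silverman's rule stands in here for the paper's kernel-trick bandwidth estimation, and the data are synthetic:

      import numpy as np

      def silverman_bandwidth(x: np.ndarray) -> float:
          n = len(x)
          return 1.06 * x.std() * n ** (-1 / 5)

      def kernel_mmse(x_train, y_train, x_query):
          """Kernel-weighted average approximating E[y | x = x_query]."""
          h = silverman_bandwidth(x_train)
          w = np.exp(-0.5 * ((x_query - x_train) / h) ** 2)  # Gaussian kernel
          return float(np.sum(w * y_train) / np.sum(w))

      x = np.random.rand(200)                 # stand-in for decoded context
      y = 2 * x + 0.05 * np.random.randn(200) # values to be predicted
      print(kernel_mmse(x, y, 0.5))           # close to 1.0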

  5. 48 CFR 52.223-11 - Ozone-Depleting Substances.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Ozone-Depleting Substances....223-11 Ozone-Depleting Substances. As prescribed in 23.804(a), insert the following clause: Ozone-Depleting Substances (MAY 2001) (a) Definition. Ozone-depleting substance, as used in this clause, means any...

  6. 48 CFR 52.223-11 - Ozone-Depleting Substances.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 2 2013-10-01 2013-10-01 false Ozone-Depleting Substances....223-11 Ozone-Depleting Substances. As prescribed in 23.804(a), insert the following clause: Ozone-Depleting Substances (MAY 2001) (a) Definition. Ozone-depleting substance, as used in this clause, means any...

  7. 48 CFR 52.223-11 - Ozone-Depleting Substances.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Ozone-Depleting Substances....223-11 Ozone-Depleting Substances. As prescribed in 23.804(a), insert the following clause: Ozone-Depleting Substances (MAY 2001) (a) Definition. Ozone-depleting substance, as used in this clause, means any...

  8. 48 CFR 52.223-11 - Ozone-Depleting Substances.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Ozone-Depleting Substances....223-11 Ozone-Depleting Substances. As prescribed in 23.804(a), insert the following clause: Ozone-Depleting Substances (MAY 2001) (a) Definition. Ozone-depleting substance, as used in this clause, means any...

  9. 48 CFR 52.223-11 - Ozone-Depleting Substances.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Ozone-Depleting Substances....223-11 Ozone-Depleting Substances. As prescribed in 23.804(a), insert the following clause: Ozone-Depleting Substances (MAY 2001) (a) Definition. Ozone-depleting substance, as used in this clause, means any...

  10. Time-on-task effects in children with and without ADHD: depletion of executive resources or depletion of motivation?

    PubMed

    Dekkers, Tycho J; Agelink van Rentergem, Joost A; Koole, Alette; van den Wildenberg, Wery P M; Popma, Arne; Bexkens, Anika; Stoffelsen, Reino; Diekmann, Anouk; Huizenga, Hilde M

    2017-12-01

    Children with attention-deficit/hyperactivity disorder (ADHD) are characterized by deficits in their executive functioning and motivation. In addition, these children are characterized by a decline in performance as time-on-task increases (i.e., time-on-task effects). However, it is unknown whether these time-on-task effects should be attributed to deficits in executive functioning or to deficits in motivation. Some studies in typically developing (TD) adults indicated that time-on-task effects should be interpreted as depletion of executive resources, but other studies suggested that they represent depletion of motivation. We, therefore, investigated, in children with and without ADHD, whether there were time-on-task effects on executive functions, such as inhibition and (in)attention, and whether these were best explained by depletion of executive resources or depletion of motivation. The stop-signal task (SST), which generates both indices of inhibition (stop-signal reaction time) and attention (reaction time variability and errors), was administered in 96 children (42 ADHD, 54 TD controls; aged 9-13). To differentiate between depletion of resources and depletion of motivation, the SST was administered twice. Half of the participants was reinforced during second task performance, potentially counteracting depletion of motivation. Multilevel analyses indicated that children with ADHD were more affected by time-on-task than controls on two measures of inattention, but not on inhibition. In the ADHD group, reinforcement only improved performance on one index of attention (i.e., reaction time variability). The current findings suggest that time-on-task effects in children with ADHD occur specifically in the attentional domain, and seem to originate in both depletion of executive resources and depletion of motivation. Clinical implications for diagnostics, psycho-education, and intervention are discussed.

  11. Assessing the threat that anthropogenic calcium depletion poses to forest health and productivity

    Treesearch

    Paul G. Schaberg; Eric K. Miller; Christopher Eagar

    2010-01-01

    Growing evidence from around the globe indicates that anthropogenic factors including pollution-induced acidification, associated aluminum mobility, and nitrogen saturation are disrupting natural nutrient cycles and depleting base cations from forest ecosystems. Although cation depletion can have varied and interacting influences on ecosystem function, it is the loss...

  12. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architectures, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap remains when compared with prediction-based video coding schemes such as H.264/AVC. This is mainly due to non-ideal exploitation of the temporal correlation of the video sequence during the generation of side information (SI); the decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed that exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, providing more accurate SI for the subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.

  13. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  14. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is hardly satisfied because of differences in lighting, shading, race and so on. To solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in cross-domain expression recognition tasks and is suitable for practical facial expression recognition applications.
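
    The dictionary-then-sparse-code pipeline can be sketched with scikit-learn; the random arrays below stand in for source-domain and target-domain expression features, and the hyperparameters are arbitrary:

      import numpy as np
      from sklearn.decomposition import DictionaryLearning, sparse_encode

      source = np.random.rand(100, 64)        # stand-in source-domain features
      dico = DictionaryLearning(n_components=32, alpha=1.0,
                                max_iter=200, random_state=0).fit(source)

      target = np.random.rand(10, 64)         # target-domain samples
      codes = sparse_encode(target, dico.components_, algorithm="omp",
                            n_nonzero_coefs=5)  # features for a classifier
      print(codes.shape)                        # (10, 32)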

  15. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced: a polyphase down-sampled version of the input image, in which the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: (1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; (2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit rates.
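
    The encoder side as described reduces to a few lines: a local random binary convolution followed by polyphase down-sampling. The sketch below assumes a simple normalized kernel and omits the sparsity-based soft decoder entirely:

      import numpy as np
      from scipy.signal import convolve2d

      def cs_downsample(img: np.ndarray, factor: int = 2, seed: int = 0):
          """Random binary pre-filter + polyphase down-sampling."""
          rng = np.random.default_rng(seed)
          kernel = rng.integers(0, 2, (factor, factor)).astype(float)
          kernel /= kernel.sum() or 1.0        # normalize the binary kernel
          measured = convolve2d(img, kernel, mode="same", boundary="symm")
          return measured[::factor, ::factor]  # still a conventional image

      img = np.random.rand(64, 64)
      print(cs_downsample(img).shape)          # (32, 32)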

  16. Location Based Service in Indoor Environment Using Quick Response Code Technology

    NASA Astrophysics Data System (ADS)

    Hakimpour, F.; Zare Zardiny, A.

    2014-10-01

    Today, with the extensive use of smartphones, larger screens and built-in Global Positioning System (GPS) receivers, location based services (LBS) are used by the public more than ever. Based on their position, users can receive the desired information from different LBS providers. An LBS system generally includes five main parts: mobile devices, a communication network, a positioning system, a service provider and a data provider. Many advances have been made in all of these parts; however, user positioning, especially in indoor environments, remains an essential and critical issue in LBS. It is well known that GPS performs too poorly inside buildings to provide usable indoor positioning, while current indoor positioning technologies such as RFID or WiFi networks need dedicated hardware and software infrastructure. In this paper, we propose a new method to overcome these challenges using Quick Response (QR) Code technology. A QR Code is a 2D barcode with a matrix structure consisting of black modules arranged in a square grid. Scanning and retrieving data from a QR Code is possible with any camera-enabled mobile phone simply by installing barcode reader software. This paper reviews the capabilities of QR Code technology and then discusses the advantages of using QR Codes in an indoor LBS (ILBS) system in comparison to other technologies. Finally, some prospects of using QR Codes are illustrated through the implementation of a scenario. The most important advantages of this technology in ILBS are easy implementation, low cost, quick data retrieval, the possibility of printing QR Codes on different products, and no need for complicated hardware and software infrastructure.

  17. Failure to replicate depletion of self-control.

    PubMed

    Xu, Xiaomeng; Demos, Kathryn E; Leahey, Tricia M; Hart, Chantelle N; Trautvetter, Jennifer; Coward, Pamela; Middleton, Kathryn R; Wing, Rena R

    2014-01-01

    The limited resource or strength model of self-control posits that the use of self-regulatory resources leads to depletion and poorer performance on subsequent self-control tasks. We conducted four studies (two with community samples, two with young adult samples) utilizing a frequently used depletion procedure (crossing out letters protocol) and the two most frequently used dependent measures of self-control (handgrip perseverance and modified Stroop). In each study, participants completed a baseline self-control measure, a depletion or control task (randomized), and then the same measure of self-control a second time. There was no evidence for significant depletion effects in any of these four studies. The null results obtained in four attempts to replicate using strong methodological approaches may indicate that depletion has more limited effects than implied by prior publications. We encourage further efforts to replicate depletion (particularly among community samples) with full disclosure of positive and negative results.

  18. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold; a code with too many tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
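
    The protograph construction can be sketched as follows: every edge of a small base matrix is replaced by a Z x Z circulant permutation, "lifting" the protograph to a full parity-check matrix. The NumPy sketch below uses random circulant shifts for illustration, whereas the paper designs the protograph (and its proportion of degree-2 nodes) deliberately:

```python
import numpy as np

def lift_protograph(B, Z, seed=0):
    """Expand base matrix B (0/1 entries for simplicity; protographs may
    have parallel edges) into a QC-LDPC parity-check matrix by replacing
    each 1 with a random Z x Z circulant permutation."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                shift = rng.integers(Z)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
    return H

# A tiny 2x4 protograph lifted by Z = 8 gives a (16, 32) parity-check matrix.
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
H = lift_protograph(B, Z=8)
```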

  19. 3D element imaging using NSECT for the detection of renal cancer: a simulation study in MCNP.

    PubMed

    Viana, R S; Agasthya, G A; Yoriyaz, H; Kapadia, A J

    2013-09-07

    This work describes a simulation study investigating the application of neutron stimulated emission computed tomography (NSECT) for noninvasive 3D imaging of renal cancer in vivo. Using MCNP5 simulations, we describe a method of diagnosing renal cancer in the body by mapping the 3D distribution of elements present in tumors using the NSECT technique. A human phantom containing the kidneys and other major organs was modeled in MCNP5. The element composition of each organ was based on values reported in the literature. The two kidneys were modeled to contain elements reported in renal cell carcinoma (RCC) and healthy kidney tissue. Simulated NSECT scans were executed to determine the 3D element distribution of the phantom body. Elements specific to RCC and healthy kidney tissue were then analyzed to identify the locations of the diseased and healthy kidneys and generate tomographic images of the tumor. The extent of the RCC lesion inside the kidney was determined using 3D volume rendering. A similar procedure was used to generate images of each individual organ in the body. Six isotopes were studied in this work: (32)S, (12)C, (23)Na, (14)N, (31)P and (39)K. The results demonstrated that through a single NSECT scan performed in vivo, it is possible to identify the location of the kidneys and other organs within the body, determine the extent of the tumor within the organ, and quantify the differences between cancer- and healthy-tissue-related isotopes with p ≤ 0.05. All of the images demonstrated appropriate concentration changes between the organs, with some discrepancy observed for (31)P, (39)K and (23)Na. The discrepancies were likely due to the low concentrations of these elements in tissue, which were below the current detection sensitivity of the NSECT technique.

  20. Podocyte Depletion in Thin GBM and Alport Syndrome.

    PubMed

    Wickman, Larysa; Hodgin, Jeffrey B; Wang, Su Q; Afshinnia, Farsad; Kershaw, David; Wiggins, Roger C

    2016-01-01

    The proximate genetic cause of both Thin GBM and Alport Syndrome (AS) is abnormal α3, α4 and α5 collagen IV chains, resulting in abnormal glomerular basement membrane (GBM) structure and function. We previously reported that the podocyte detachment rate measured in urine is increased in AS, suggesting that podocyte depletion could play a role in causing progressive loss of kidney function. To test this hypothesis, podometric parameters were measured in 26 kidney biopsies from 21 patients aged 2-17 years with a clinico-pathologic diagnosis of either classic Alport Syndrome with thin and thick GBM segments and lamellated lamina densa [n = 15] or Thin GBM [n = 6]. Protocol biopsies from deceased donor kidneys were used as age-matched controls. Podocyte depletion was present in AS biopsies prior to detectable histologic abnormalities. No abnormality was detected by light microscopy at <30% podocyte depletion; minor pathologic changes (mesangial expansion and adhesions to Bowman's capsule) were present at 30-50% podocyte depletion; and FSGS was progressively present above 50% podocyte depletion. eGFR did not change measurably until >70% podocyte depletion. Low-level proteinuria was an early event at about 25% podocyte depletion and increased in proportion to podocyte depletion. These quantitative data parallel those from model systems where podocyte depletion is the causative event. This result supports the hypothesis that in AS, defective podocyte adherence to the GBM accelerates podocyte detachment, causing progressive podocyte depletion that leads to FSGS-like pathologic changes and eventual End-Stage Kidney Disease. Early intervention to reduce podocyte depletion is projected to prolong kidney survival in AS.

  1. Molecularly imprinted composite cryogels for hemoglobin depletion from human blood.

    PubMed

    Baydemir, Gözde; Andaç, Müge; Perçin, Işιk; Derazshamshir, Ali; Denizli, Adil

    2014-09-01

    A molecularly imprinted composite cryogel (MICC) was prepared for the depletion of hemoglobin from human blood prior to use in proteome applications. The poly(hydroxyethyl methacrylate) based MICC was prepared with gel fraction yields up to 90%, and characterized by Fourier transform infrared spectroscopy, scanning electron microscopy, swelling studies, flow dynamics and surface area measurements. MICC exhibited a high binding capacity and selectivity for hemoglobin in the presence of immunoglobulin G, albumin and myoglobin. The MICC column was successfully applied in a fast protein liquid chromatography system for the selective depletion of hemoglobin from human blood. The depletion ratio was greatly increased by embedding microspheres into the cryogel (93.2%). Finally, MICC can be reused many times with no apparent decrease in hemoglobin adsorption capacity. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Comparison of Depletion Strategies for the Enrichment of Low-Abundance Proteins in Urine.

    PubMed

    Filip, Szymon; Vougas, Konstantinos; Zoidakis, Jerome; Latosinska, Agnieszka; Mullen, William; Spasovski, Goce; Mischak, Harald; Vlahou, Antonia; Jankowski, Joachim

    2015-01-01

    Proteome analysis of complex biological samples for biomarker identification remains challenging, among other reasons due to the extended range of protein concentrations. High-abundance proteins like albumin or IgG in plasma and urine may interfere with the detection of potential disease biomarkers. Currently, several options are available for the depletion of abundant proteins in plasma. However, the applicability of these methods to urine has not been thoroughly investigated. In this study, we compared different, commercially available immunodepletion and ion-exchange based approaches on urine samples from both healthy subjects and CKD patients, for their reproducibility and efficiency in protein depletion. A starting urine volume of 500 μL was used to simulate the conditions of a multi-institutional biomarker discovery study. All depletion approaches showed satisfactory reproducibility (n=5) in protein identification as well as protein abundance. Comparison of the depletion efficiency between the unfractionated and fractionated samples and the different depletion strategies showed efficient depletion in all cases, with the exception of the ion-exchange kit. The depletion efficiency was slightly higher in normal than in CKD samples, and normal samples yielded more protein identifications than CKD samples for both the initial and the corresponding depleted fractions. Along these lines, a decrease in the amount of albumin and other targets, as applicable, was observed following depletion. Nevertheless, these depletion strategies did not yield a higher number of identifications in urine from either normal subjects or CKD patients. Collectively, when analyzing urine in the context of CKD biomarker identification, no added value of depletion strategies can be observed, and analysis of unfractionated starting urine appears preferable.

  4. Video coding for 3D-HEVC based on saliency information

    NASA Astrophysics Data System (ADS)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding (HEVC), 3D-HEVC has been widely researched under the impetus of the new-generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on saliency information, an important part of the Human Visual System (HVS). First, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency information of each frame, the texture and its corresponding depth map are divided into three regions: the salient area, the middle area and the non-salient area. Afterwards, different quantization parameters are assigned to the different regions to conduct low-complexity coding. Finally, the compressed video generates new viewpoint videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to 38% encoding time reduction without subjective quality loss in compression or rendering.
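
    The region-wise quantization step can be sketched as below, assuming the frame is split into salient, middle and non-salient regions by saliency terciles and each region receives a QP offset (the thresholds and offsets are placeholders, not the paper's values):

```python
import numpy as np

def qp_map_from_saliency(saliency, base_qp=32, offsets=(-4, 0, 4)):
    """Assign per-pixel QPs from a saliency map: salient regions get a
    lower QP (finer quantization), non-salient regions a higher one."""
    lo, hi = np.quantile(saliency, [1/3, 2/3])
    qp = np.full(saliency.shape, base_qp + offsets[1])
    qp[saliency >= hi] = base_qp + offsets[0]   # salient area
    qp[saliency < lo] = base_qp + offsets[2]    # non-salient area
    return qp
```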

  5. Alignment of gold nanorods by angular photothermal depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Adam B.; Chow, Timothy T. Y.; Chon, James W. M., E-mail: jchon@swin.edu.au

    2014-02-24

    In this paper, we demonstrate that a high degree of alignment can be imposed upon randomly oriented gold nanorod films by angular photothermal depletion with linearly polarized laser irradiation. The photothermal reshaping of gold nanorods is observed to follow a quadratic melting model rather than a threshold melting model, which distorts the angular and spectral hole created in the 2D distribution map of the nanorods into an open crater shape. We account for these observations in the alignment procedure and demonstrate good agreement between experiment and simulations. The use of multiple laser depletion wavelengths allowed alignment criteria over a large range of aspect ratios, placing 80% of the rods in the target angular range. We extend the technique to demonstrate post-alignment in a multilayer of randomly oriented gold nanorod films, with arbitrary control of alignment shown across the layers. Photothermal angular depletion alignment of gold nanorods is a simple, promising post-alignment method for creating future 3D or multilayer plasmonic nanorod-based devices and structures.

  6. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for more problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, we developed in this paper a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement without additional complex work. The double camera on the laser scanner photographs artificial target points, designed by random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to realize the orientation of the scanner by least-squares common-point transformation. After that, the double camera can directly measure the laser point cloud on the surface of the object and obtain point cloud data in a unified coordinate system. There are three major contributions in the paper. First, a laser scanner based on binocular vision is designed with a double camera and one laser head; by these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, coded markers are introduced to solve data matching, and a random coding method is proposed; compared with other coding methods, markers coded this way are simple to match and avoid shading the object. Finally, a recognition method for the coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method can realize dynamic measurement with a handheld laser scanner.

  7. Erythrocyte depletion from bone marrow: performance evaluation after 50 clinical-scale depletions with Spectra Optia BMC.

    PubMed

    Kim-Wanner, Soo-Zin; Bug, Gesine; Steinmann, Juliane; Ajib, Salem; Sorg, Nadine; Poppe, Carolin; Bunos, Milica; Wingenfeld, Eva; Hümmer, Christiane; Luxembourg, Beate; Seifried, Erhard; Bonig, Halvard

    2017-08-11

    Red blood cell (RBC) depletion is a standard graft manipulation technique for ABO-incompatible bone marrow (BM) transplants. The BM processing module for Spectra Optia, "BMC", was previously introduced. Here we report the largest series to date of routine quality data, from 50 clinical-scale RBC depletions. Fifty successive RBC depletions from autologous (n = 5) and allogeneic (n = 45) BM transplants were performed with the Spectra Optia BMC apheresis suite. Product quality was assessed before and after processing for volume, RBC and leukocyte content; RBC depletion and stem cell (CD34+ cell) recovery were calculated therefrom. Clinical engraftment data were collected from 26/45 allogeneic recipients. Median RBC removal was 98.2% (range 90.8-99.1%), median CD34+ cell recovery was 93.6% (minimum 72%), and total product volume was reduced to 7.5% (range 4.7-23.0%). Products engrafted with the expected probability and kinetics. Performance indicators were stable over time. Spectra Optia BMC is a robust and efficient technology for RBC depletion and volume reduction of BM, providing near-complete RBC removal and excellent CD34+ cell recovery.

  8. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides coding gains ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared with conventional LDPC-coded modulation at the same code rate.

  9. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power-density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  10. Revisiting Antarctic Ozone Depletion

    NASA Astrophysics Data System (ADS)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, some details still need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion, especially the sensitivity with respect to particle type. If temperatures are artificially forced to allow only cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box-model simulations, which indicates the importance of mixing processes.

  11. Methylphenidate blocks effort-induced depletion of regulatory control in healthy volunteers.

    PubMed

    Sripada, Chandra; Kessler, Daniel; Jonides, John

    2014-06-01

    A recent wave of studies--more than 100 conducted over the last decade--has shown that exerting effort at controlling impulses or behavioral tendencies leaves a person depleted and less able to engage in subsequent rounds of regulation. Regulatory depletion is thought to play an important role in everyday problems (e.g., excessive spending, overeating) as well as psychiatric conditions, but its neurophysiological basis is poorly understood. Using a placebo-controlled, double-blind design, we demonstrated that the psychostimulant methylphenidate (commonly known as Ritalin), a catecholamine reuptake blocker that increases dopamine and norepinephrine at the synaptic cleft, fully blocks effort-induced depletion of regulatory control. Spectral analysis of trial-by-trial reaction times revealed specificity of methylphenidate effects on regulatory depletion in the slow-4 frequency band. This band is associated with the operation of resting-state brain networks that produce mind wandering, which raises potential connections between our results and recent brain-network-based models of control over attention. © The Author(s) 2014.

  12. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

    The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a way to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach and uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
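
    As a generic sketch of what a UML-to-code mapping does (GenERTiCA's actual mapping-script and template format are not described in this abstract), a model element can be rendered through a text template:

```python
# Hypothetical "model": one class element with typed attributes.
CLASS_TEMPLATE = """class {name} {{
{fields}
}};
"""

model = {"name": "SensorTask",
         "attributes": [("int", "period_ms"), ("bool", "active")]}

fields = "\n".join("    %s %s;" % (t, n) for t, n in model["attributes"])
print(CLASS_TEMPLATE.format(name=model["name"], fields=fields))
```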

  13. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

    This research is based on the concept that the diversion of nuclear fuel pins from Light Water Reactor (LWR) spent fuel assemblies can be detected by a careful comparison of spontaneous-fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to determine whether some of the assembly fuel pins are missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term once the spent fuel assemblies are more than two years old. Initially, this research focused on developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼-model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and a more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered with all spent fuel pins present and with replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins present and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the
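
    The final detection step can be sketched as follows, assuming guide-tube fluxes from an intact reference assembly are compared with a measured or simulated assembly using a k-sigma criterion on the relative difference (the function and the 3-sigma default are illustrative, not the exact criteria established in this work):

```python
import numpy as np

def flag_diversion(flux_ref, flux_test, rel_err, k=3.0):
    """Flag guide tubes whose relative flux change exceeds k times the
    combined relative (statistical) error of the two calculations."""
    rel_diff = np.abs(flux_test - flux_ref) / flux_ref
    return rel_diff > k * rel_err   # boolean mask, one entry per guide tube
```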

  14. An interactive toolbox for atlas-based segmentation and coding of volumetric images

    NASA Astrophysics Data System (ADS)

    Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.

    2007-03-01

    Medical imaging poses the great challenge of requiring compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation surrounded by a "background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both segmentation and compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information-theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing medical doctors to supervise the segmentation process and, if necessary, reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next, where it is used as the initial border estimate. In this way, the entire volume is segmented from a single 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG3D standard and 3D-SPIHT. The evaluation of performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PACS systems.

  15. A calibration method for realistic neutron dosimetry in radiobiological experiments assisted by MCNP simulation

    PubMed Central

    Shahmohammadi Beni, Mehrdad; Krstic, Dragana; Nikezic, Dragoslav; Yu, Kwan Ngok

    2016-01-01

    Many studies on biological effects of neutrons involve dose responses of neutrons, which rely on accurately determined absorbed doses in the irradiated cells or living organisms. Absorbed doses are difficult to measure, and are commonly surrogated with doses measured using separate detectors. The present work describes the determination of doses absorbed in the cell layer underneath a medium column (D_A) and the doses absorbed in an ionization chamber (D_E) from neutrons through computer simulations using the MCNP-5 code, and the subsequent determination of the conversion coefficients R (= D_A/D_E). It was found that R in general decreased with increase in the medium thickness, which was due to elastic and inelastic scattering. For 2-MeV neutrons, conspicuous bulges in R values were observed at medium thicknesses of about 500, 1500, 2500 and 4000 μm; these were attributed to carbon, oxygen and nitrogen nuclei, and were reflections of spikes in neutron interaction cross sections with these nuclei. For 0.1-MeV neutrons, no conspicuous bulges in R were observed (except one at ~2000 μm that was due to photon interactions), which was explained by the absence of prominent spikes in the interaction cross sections with these nuclei for neutron energies <0.1 MeV. The ratio R could be increased by ~50% for small medium thicknesses if the incident neutron energy was reduced from 2 MeV to 0.1 MeV. As such, the absorbed doses in cells (D_A) would vary with the incident neutron energies, even when the absorbed doses shown on the detector were the same. PMID:27380801
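
    In use, the coefficient converts detector readings into cell-layer doses, D_A = R x D_E. A small sketch with an invented R-versus-thickness table standing in for the MCNP-5 results (actual values must come from the paper):

```python
import numpy as np

# Assumed thickness grid and R values, for illustration only.
thickness_um = np.array([0, 500, 1000, 2000, 4000])
R_table = np.array([1.30, 1.18, 1.10, 0.98, 0.85])

def dose_in_cells(D_E, t_um):
    """D_A = R(t) * D_E, interpolating R at the medium thickness used."""
    return np.interp(t_um, thickness_um, R_table) * D_E
```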

  16. Physiological implications of anthropogenic environmental calcium depletion

    Treesearch

    Catherine H. Borer; Paul G. Schaberg; Donald H. DeHayes; Gary J. Hawley

    2001-01-01

    Recent evidence indicates that numerous anthropogenic factors can deplete calcium (Ca) from forested ecosystems. Although it is difficult to quantify the extent of this depletion, some reports indicate that the magnitude of Ca losses may be substantial. The potential for Ca depletion raises important questions about tree health. Only a fraction of foliar Ca is...

  17. ANATOMY OF DEPLETED INTERPLANETARY CORONAL MASS EJECTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocher, M.; Lepri, S. T.; Landi, E.

    We report a subset of interplanetary coronal mass ejections (ICMEs) containing distinct periods of anomalous heavy-ion charge state composition and peculiar ion thermal properties measured by ACE/SWICS from 1998 to 2011. We label them “depleted ICMEs,” identified by the presence of intervals where C6+/C5+ and O7+/O6+ depart from the direct correlation expected after their freeze-in heights. These anomalous intervals within the depleted ICMEs are referred to as “Depletion Regions.” We find that a depleted ICME would be indistinguishable from all other ICMEs in the absence of the Depletion Region, which has the defining property of significantly low abundances of fully charged species of helium, carbon, oxygen, and nitrogen. Similar anomalies in the slow solar wind were discussed by Zhao et al. We explore two possibilities for the source of the Depletion Region associated with magnetic reconnection in the tail of a CME, using CME simulations of the evolution of two Earth-bound CMEs described by Manchester et al.

  18. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
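
    A minimal sketch of how adjoint-computed sensitivities feed a first-order ("sandwich rule") uncertainty estimate; the sensitivity and covariance numbers below are invented placeholders, and the constrained-sensitivity combination of Perkó et al. is not reproduced:

```python
import numpy as np

# Adjoint-computed relative sensitivities of k-eff to three isotopic
# densities (dk/k per dN/N) and an assumed relative covariance matrix.
s = np.array([0.35, 0.10, 0.02])
V = np.diag([0.01, 0.02, 0.05]) ** 2     # 1%, 2%, 5% standard deviations

var_k = s @ V @ s                        # first-order relative variance
print("relative std dev of k-eff: %.4e" % np.sqrt(var_k))
```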

  19. The association between controlled interpersonal affect regulation and resource depletion.

    PubMed

    Martínez-Íñigo, David; Poerio, Giulia Lara; Totterdell, Peter

    2013-07-01

    This investigation focuses on what occurs to individuals' self-regulatory resource during controlled Interpersonal Affect Regulation (IAR) which is the process of deliberately influencing the internal feeling states of others. Combining the strength model of self-regulation and the resources conservation model, the investigation tested whether: (1) IAR behaviors are positively related to ego-depletion because goal-directed behaviors demand self-regulatory processes, and (2) the use of affect-improving strategies benefits from a source of resource-recovery because it initiates positive feedback from targets, as proposed from a resource-conservation perspective. To test this, a lab study based on an experimental dual-task paradigm using a sample of pairs of friends in the UK and a longitudinal field study of a sample of healthcare workers in Spain were conducted. The experimental study showed a depleting effect of interpersonal affect-improving IAR on a subsequent self-regulation task. The field study showed that while interpersonal affect-worsening was positively associated with depletion, as indicated by the level of emotional exhaustion, interpersonal affect-improving was only associated with depletion after controlling for the effect of positive feedback from clients. The findings indicate that IAR does have implications for resource depletion, but that social reactions play a role in the outcome. © 2013 The Authors. Applied Psychology: Health and Well-Being © 2013 The International Association of Applied Psychology.

  20. Development of a Grid-Based Gyro-Kinetic Simulation Code

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan

    2006-10-01

    A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dimensional code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of gyro-kinetic equations. Towards their non-linear phase, simulations with this code are subject to significant overshoot problems, reflected in the development of regions where the distribution function takes negative values, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretization (method of lines), are investigated. In particular, the Essentially Non-Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods were first tested in the simpler case of the 2-dimensional guiding-center model for the Kelvin-Helmholtz instability, which makes it possible to address the specific issue of the E×B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.
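
    A minimal 1-D analogue of the semi-Lagrangian scheme with cubic-spline interpolation used in CYGNE (constant advection on a periodic domain; the real solver is 4-dimensional and drift-kinetic):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def semi_lagrangian_step(f, x, u, dt, L):
    """One step for df/dt + u df/dx = 0: build a periodic cubic spline of
    f and evaluate it at the feet of the characteristics."""
    spline = CubicSpline(np.append(x, L), np.append(f, f[0]),
                         bc_type='periodic')
    feet = (x - u * dt) % L          # trace characteristics backwards
    return spline(feet)

# Advect a Gaussian once around a periodic box of length L.
L, n_steps, u = 1.0, 100, 0.5
x = np.linspace(0.0, L, 128, endpoint=False)
f = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(n_steps):
    f = semi_lagrangian_step(f, x, u, dt=L / (u * n_steps), L=L)
```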

  1. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of high-speed optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycle of length 4, which ensures good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
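
    The "no cycle of length 4" property can be checked directly on the exponent matrix of a QC-LDPC code: a 4-cycle exists exactly when the alternating sum of circulant shifts around a 2x2 block pattern vanishes modulo the circulant size. A sketch (the construction of the exponent matrix from the multiplicative group itself is not reproduced here):

```python
from itertools import combinations

def has_girth_four(E, Z):
    """E[i][j] is the circulant shift of block (i, j), or None for an
    all-zero block; Z is the circulant size."""
    m, n = len(E), len(E[0])
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            blocks = (E[i1][j1], E[i1][j2], E[i2][j2], E[i2][j1])
            if None in blocks:
                continue
            if (blocks[0] - blocks[1] + blocks[2] - blocks[3]) % Z == 0:
                return True          # a length-4 cycle exists
    return False

E = [[0, 1, None], [2, None, 4], [None, 3, 5]]   # hypothetical shifts
print(has_girth_four(E, Z=7))                    # False for this choice
```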

  2. Human oocyte cryopreservation: 5-year experience with a sodium-depleted slow freezing method.

    PubMed

    Boldt, Jeffrey; Tidswell, Non; Sayers, Amy; Kilani, Rami; Cline, Donald

    2006-07-01

    A slow freezing/rapid thawing method for the cryopreservation of human oocytes has been employed using sodium-depleted culture media. In 53 frozen egg-embryo transfer (FEET) cycles, a 60.4% post-thaw survival rate was obtained, with a 62.0% fertilization rate following intracytoplasmic sperm injection. Overall pregnancy rates were 26.4% per thaw attempt, 30.4% per patient, and 32.6% per embryo transfer. Pregnancy rates using sodium-depleted phosphate-buffered saline (PBS) as the base medium were 20.0% per thaw, 21.7% per patient, and 26.3% per transfer. With sodium-depleted modified human tubal fluid (mHTF) as the base for the cryopreservation medium, rates were 32.1% per thaw attempt, 39.1% per patient, and 37.5% per transfer. The overall implantation rates were 4.2% per thawed oocyte and 13.6% per embryo (PBS: 3.0% per oocyte, 10.6% per embryo; mHTF: 5.3% per oocyte, 15.9% per embryo). These data indicate that the use of sodium-depleted media with slow freezing and rapid thawing can yield acceptable pregnancy rates after FEET.

  3. Overview of the Martian nightside suprathermal electron depletions

    NASA Astrophysics Data System (ADS)

    Steckiewicz, Morgane; Garnier, Philippe; André, Nicolas; Mitchell, David; Andersson, Laila; Penou, Emmanuel; Beth, Arnaud; Fedorov, Andrei; Sauvaud, Jean-André; Mazelle, Christian; Lillis, Robert; Brain, David; Espley, Jared; McFadden, James; Halekas, Jasper; Luhmann, Janet; Soobiah, Yasir; Jakosky, Bruce

    2017-04-01

    Nightside suprathermal electron depletions have been observed at Mars by three spacecraft to date: Mars Global Surveyor (MGS), Mars Express (MEX) and the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission. The global coverage of Mars by MEX and MGS at high altitudes (above approximately 250 km) revealed that these structures were mostly observed above strong crustal magnetic field sources, which exclude electrons coming from the dayside or from the tail. The MAVEN orbit now offers the possibility to observe this phenomenon at low altitudes, down to 125 km. A transition region near 170 km has been detected, separating the collisional region where electron depletions are mainly due to electron absorption by atmospheric CO2 from the collisionless region where they are mainly due to closed crustal magnetic field loops. MAVEN is now in its third year of data recording and has covered a large range of latitudes, local times and solar zenith angles at low altitudes (<900 km) on the nightside. These observations enable us to estimate where the EUV terminator is located, based on the observation that no electron depletions are expected above it. Through this study the EUV terminator appears to be raised on average 125 km above the geometrical terminator. However, its location is likely to differ between the dawn and dusk terminators and to vary throughout the Martian seasons. This coverage has also allowed the observation of regions with a recurrent absence of electron depletions even below the transition region near 170 km altitude. These 'no-depletion' areas are localized above the least magnetized areas of Mars, in both the Northern and Southern hemispheres. A modification of the CO2 density, gravity waves, or the presence of current sheets are potential drivers for this phenomenon.

  4. Depletion of CD20 B cells fails to inhibit relapsing mouse experimental autoimmune encephalomyelitis.

    PubMed

    Sefia, Eseberuo; Pryce, Gareth; Meier, Ute-Christiane; Giovannoni, Gavin; Baker, David

    2017-05-01

    Multiple sclerosis (MS) is often considered to be a CD4 T cell-mediated disease, largely based on the capacity of CD4 T cells to induce relapsing experimental autoimmune encephalomyelitis (EAE) in rodents. However, CD4 depletion using a monoclonal antibody was considered unsuccessful, whereas relapsing MS responds well to B cell depletion via CD20-specific B cell-depleting antibodies. The influence of CD20 B cell depletion on relapsing EAE was therefore assessed. Relapsing EAE was induced in Biozzi ABH mice, which were treated with a CD20-specific antibody (18B12), and the influence on CD45RA-B220 B cell depletion and clinical course was analysed. Relapsing EAE in Biozzi ABH mice failed to respond to the marked B cell depletion induced by the CD20-specific antibody. In contrast to CD20- and CD8-specific antibodies, CD4 T cell depletion inhibited EAE. Spinal cord antigen-induced disease in ABH mice is CD4 T cell-dependent. The lack of influence of CD20 B cell depletion in relapsing EAE, coupled with the relatively marginal and inconsistent results obtained in other mouse studies, suggests that rodents may have limited value in understanding the mechanism at work following CD20 B cell depletion in humans. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the watermark information to be hidden, is introduced. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G and B components of the color image watermark are embedded into the Y, Cr and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.
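
    The flavor of such embedding can be illustrated with an integer-to-integer Haar (S-)transform and a parity rule standing in for the paper's state-coding rule (this simplified 1-D grayscale sketch is not the authors' algorithm):

```python
import numpy as np

def int_haar_fwd(a, b):
    """Integer Haar (S-transform) via lifting: average s, detail d."""
    d = a - b
    s = b + (d >> 1)
    return s, d

def int_haar_inv(s, d):
    b = s - (d >> 1)
    return b + d, b                  # (a, b)

def embed_bits(pixels, bits):
    """Hide one bit per pixel pair in the parity of the detail coefficient."""
    out = pixels.astype(np.int64).copy()
    for k, bit in enumerate(bits):
        s, d = int_haar_fwd(out[2 * k], out[2 * k + 1])
        if (d & 1) != bit:
            d += 1                   # minimal change to flip the parity
        out[2 * k], out[2 * k + 1] = int_haar_inv(s, d)
    return out

def extract_bits(pixels, n):
    return [int(int_haar_fwd(pixels[2 * k], pixels[2 * k + 1])[1] & 1)
            for k in range(n)]

px = np.array([52, 55, 61, 59, 79, 61, 76, 62])
wm = [1, 0, 1, 1]
assert extract_bits(embed_bits(px, wm), len(wm)) == wm
```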

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinilla, Maria Isabel

    This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.

  7. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    NASA Astrophysics Data System (ADS)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over an almost 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size, from 16 to 32 in the 1980s and from 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, OS X on different processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in the source code and associated comments. A major concern while improving and maintaining a routinely used legacy code is the fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. Nevertheless, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including the use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years

  8. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

    Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when used independently. To improve performance, this paper proposes a nonlocal supervised coding strategy-based NLT for image restoration. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the resulting underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
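
    The numerical scheme named at the end, iterative shrinkage thresholding, can be sketched for the generic sparse-coding problem min_x 0.5||Ax - y||^2 + lam*||x||_1 (the paper's nonlocal supervised weights are omitted):

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Plain ISTA: gradient step on the data term, then soft threshold."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```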

  9. Depletion sensitivity predicts unhealthy snack purchases.

    PubMed

    Salmon, Stefanie J; Adriaanse, Marieke A; Fennis, Bob M; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose weight on snack purchase behavior were explored. Participants included in the study were instructed to report every snack they bought over the course of one week. The dependent variables were the number of healthy and unhealthy snacks purchased. The results of the present study demonstrate that depletion sensitivity predicts the amount of unhealthy (but not healthy) snacks bought. The more sensitive people are to depletion, the more unhealthy snacks they buy. Moreover, there was some tentative evidence that this relation is more pronounced for people with a weak as opposed to a strong goal to lose weight, suggesting that a strong goal to lose weight may function as a motivational buffer against self-control failures. All in all, these findings provide evidence for the external validity of depletion sensitivity and the relevance of this construct in the domain of eating behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A Silent Revolution: From Sketching to Coding--A Case Study on Code-Based Design Tool Learning

    ERIC Educational Resources Information Center

    Xu, Song; Fan, Kuo-Kuang

    2017-01-01

    With the rise of information technology, Computer Aided Design activities are becoming more modern and more complex, but learning how to operate these new design tools has become the main problem facing each designer. This study aimed to find the problems encountered during the code-based design tool learning period of…

  11. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method can seem less attractive than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that it yields the flux throughout the problem domain, particularly when particles can barely penetrate, as in a deep penetration problem with small detection volumes. Recently, a state-of-the-art discrete ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  12. The primitive code and repeats of base oligomers as the primordial protein-encoding sequence.

    PubMed Central

    Ohno, S; Epplen, J T

    1983-01-01

    Even if the prebiotic self-replication of nucleic acids and the subsequent emergence of primitive, enzyme-independent tRNAs are accepted as plausible, the origin of life by spontaneous generation still appears improbable. This is because the just-emerged primitive translational machinery had to cope with base sequences that were not preselected for their coding potentials. Particularly if the primitive mitochondria-like code with four chain-terminating base triplets preceded the universal code, the translation of long, randomly generated, base sequences at this critical stage would have merely resulted in the production of short oligopeptides instead of long polypeptide chains. We present the base sequence of a mouse transcript containing tetranucleotide repeats conserved during evolution. Even if translated in accordance with the primitive mitochondria-like code, this transcript in its three reading frames can yield 245-, 246-, and 251-residue-long tetrapeptidic periodical polypeptides that are already acquiring longer periodicities. We contend that the first set of base sequences translated at the beginning of life were such oligonucleotide repeats. By quickly acquiring longer periodicities, their products must have soon gained characteristic secondary structures--alpha-helical or beta-sheet or both. PMID:6574491

  13. Is Ego Depletion Real? An Analysis of Arguments.

    PubMed

    Friese, Malte; Loschelder, David D; Gieseler, Karolin; Frankenbach, Julius; Inzlicht, Michael

    2018-03-01

    An influential line of research suggests that initial bouts of self-control increase the susceptibility to self-control failure (ego depletion effect). Despite seemingly abundant evidence, some researchers have suggested that evidence for ego depletion was the sole result of publication bias and p-hacking, with the true effect being indistinguishable from zero. Here, we examine (a) whether the evidence brought forward against ego depletion will convince a proponent that ego depletion does not exist and (b) whether arguments that could be brought forward in defense of ego depletion will convince a skeptic that ego depletion does exist. We conclude that despite several hundred published studies, the available evidence is inconclusive. Both additional empirical and theoretical works are needed to make a compelling case for either side of the debate. We discuss necessary steps for future work toward this aim.

  14. Quartz crystal microbalance detection of DNA single-base mutation based on monobase-coded cadmium tellurium nanoprobe.

    PubMed

    Zhang, Yuqin; Lin, Fanbo; Zhang, Youyu; Li, Haitao; Zeng, Yue; Tang, Hao; Yao, Shouzhuo

    2011-01-01

    A new method for the detection of point mutations in DNA, based on monobase-coded cadmium tellurium nanoprobes and the quartz crystal microbalance (QCM) technique, is reported. A QCM sensor for single-base mutations in a DNA strand (adenine, thymine, cytosine or guanine, namely A, T, C and G) was fabricated by immobilizing single-base-mutation DNA-modified magnetic beads onto the electrode surface with an external magnetic field near the electrode. The DNA-modified magnetic beads were obtained from the biotin-avidin affinity reaction of biotinylated DNA and streptavidin-functionalized core/shell Fe(3)O(4)/Au magnetic nanoparticles, followed by a DNA hybridization reaction. Single-base-coded CdTe nanoprobes (A-CdTe, T-CdTe, C-CdTe and G-CdTe, respectively) were used as the detection probes. The mutation site in DNA was distinguished by detecting the decrease in the resonance frequency of the piezoelectric quartz crystal when the coded nanoprobe was added to the test system. This proposed detection strategy for point mutations in DNA proves to be sensitive, simple, repeatable and low-cost; consequently, it has great potential for single nucleotide polymorphism (SNP) detection. 2011 © The Japan Society for Analytical Chemistry

  15. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    ERIC Educational Resources Information Center

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)
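
    The abstract does not spell out the encoding, but the arithmetic behind the name is that base 32 = 2^5, so five binary attributes pack into a single character. A speculative sketch of that packing follows; the five sibling flags chosen here are illustrative assumptions, not the paper's actual scheme.

        # Hypothetical packing of five binary sibling attributes into one
        # base-32 digit (2^5 = 32), one digit per sibling.
        DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUV"

        def encode_sibling(male, living, biological, twin, older):
            """Pack five 0/1 flags (assumed attributes) into a base-32 digit."""
            bits = (male << 4) | (living << 3) | (biological << 2) | (twin << 1) | older
            return DIGITS[bits]

        def encode_constellation(siblings):
            return "".join(encode_sibling(*s) for s in siblings)

        # Two siblings, five flags each -> a 2-character code
        code = encode_constellation([(1, 1, 1, 0, 1), (0, 1, 1, 0, 0)])  # "TC"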

  16. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running Particle-In-Cell (PIC) plasma simulation codes, in both serial and parallel modes, with automatic post-processing and generation of visual diagnostics.

  17. Observations and Simulations of Formation of Broad Plasma Depletions Through Merging Process

    NASA Technical Reports Server (NTRS)

    Huang, Chao-Song; Retterer, J. M.; Beaujardiere, O. De La; Roddy, P. A.; Hunton, D.E.; Ballenthin, J. O.; Pfaff, Robert F.

    2012-01-01

    Broad plasma depletions in the equatorial ionosphere near dawn are regions in which the plasma density is reduced by 1-3 orders of magnitude over thousands of kilometers in longitude. This phenomenon was observed repeatedly by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite during deep solar minimum. The plasma flow inside the depletion region can be strongly upward. The proposed causal mechanism is that broad depletions result from the merging of multiple equatorial plasma bubbles. The purpose of this study is to demonstrate the feasibility of the merging mechanism with new observations and simulations. We present C/NOFS observations for two cases. A series of plasma bubbles is first detected by C/NOFS over a longitudinal range of 3300-3800 km around midnight. Each individual bubble has a typical width of approx. 100 km in longitude, and the upward ion drift velocity inside the bubbles is 200-400 m/s. The plasma bubbles rotate with the Earth into the dawn sector and become broad plasma depletions. The observations clearly show the evolution from multiple plasma bubbles to broad depletions. Large upward plasma flow occurs inside the depletion region over 3800 km in longitude and persists for approx. 5 h. We also present numerical simulations of bubble merging with the physics-based low-latitude ionospheric model. It is found that two separate plasma bubbles join together and form a single, wider bubble. The simulations show that the merging process of plasma bubbles can indeed occur in incompressible ionospheric plasma. The simulation results support the merging mechanism for the formation of broad plasma depletions.

  18. Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes

    NASA Astrophysics Data System (ADS)

    Tavallaei, Saeed Ebadi; Falahati, Abolfazl

    Due to the low level of security in public key cryptosystems based on number theory, and due to fundamental difficulties such as key escrow in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through a modification of the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability to generate public keys of variable length, suitable for different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of code-based cryptosystems appears unavoidable in the future.
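
    The abstract modifies McEliece without giving details, so the sketch below shows only the classic McEliece construction it builds on, with a toy [7,4] Hamming code standing in for the large Goppa codes a real deployment requires; at this size the scheme has no security and is purely illustrative.

        # Toy McEliece: the public key G' = S*G*P hides the structured code G.
        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])              # [7,4] Hamming generator
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])              # parity-check matrix

        S = np.array([[1,1,0,0],
                      [0,1,1,0],
                      [0,0,1,1],
                      [0,0,0,1]])                    # secret scrambler
        S_INV = np.array([[1,1,1,1],
                          [0,1,1,1],
                          [0,0,1,1],
                          [0,0,0,1]])                # its inverse over GF(2)
        PERM = np.array([2,4,6,0,1,3,5])             # secret column permutation
        P = np.eye(7, dtype=int)[:, PERM]            # permutation matrix
        G_PUB = S @ G @ P % 2                        # public key G' = S G P

        def encrypt(m):
            e = np.zeros(7, dtype=int)
            e[np.random.randint(7)] = 1              # random weight-1 error
            return (m @ G_PUB + e) % 2

        def decrypt(c):
            c = c @ P.T % 2                          # undo permutation (P^-1 = P^T)
            s = H @ c % 2                            # syndrome of the noisy codeword
            if s.any():                              # syndrome matches a column of H
                pos = next(i for i in range(7) if np.array_equal(H[:, i], s))
                c[pos] ^= 1                          # correct the single error
            return c[:4] @ S_INV % 2                 # systematic code: c[:4] = m S

        m = np.array([1, 0, 1, 1])
        assert np.array_equal(decrypt(encrypt(m)), m)

    The legitimate receiver undoes the permutation and decodes the easy structured code, while an attacker who sees only G' faces decoding of an apparently random linear code, the NP-hard problem on which the abstract's security claim rests.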

  19. Unsupervised Extraction of Diagnosis Codes from EMRs Using Knowledge-Based and Extractive Text Summarization Techniques

    PubMed Central

    Kavuluru, Ramakanth; Han, Sifei; Harris, Daniel

    2017-01-01

    Diagnosis codes are extracted from medical records for billing and reimbursement and for secondary uses such as quality control and cohort identification. In the US, these codes come from the standard terminology ICD-9-CM, derived from the international classification of diseases (ICD). ICD-9 codes are generally extracted by trained human coders who read all artifacts available in a patient's medical record, following specific coding guidelines. To assist coders in this manual process, this paper proposes an unsupervised ensemble approach to automatically extract ICD-9 diagnosis codes from textual narratives included in electronic medical records (EMRs). Earlier attempts at automatic extraction focused on individual documents such as radiology reports and discharge summaries. Here we use a more realistic dataset and extract ICD-9 codes from EMRs of 1000 inpatient visits at the University of Kentucky Medical Center. Using named entity recognition (NER), graph-based concept mapping of medical concepts, and extractive text summarization techniques, we achieve an example-based average recall of 0.42 with an average precision of 0.47; compared with a baseline using only NER, we observe a 12% improvement in recall with the graph-based approach and a 7% improvement in precision with the extractive text summarization approach. Although diagnosis codes are complex concepts often expressed in text with significant long-range, non-local dependencies, the present work shows the potential of unsupervised methods for extracting a portion of the codes. As such, our findings are especially relevant for code extraction tasks where obtaining large amounts of training data is difficult. PMID:28748227
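
    Example-based averaging scores each record against its own gold code set and then averages across records. A small sketch, with hypothetical ICD-9 codes and two toy records, of how such recall and precision figures are computed:

        # Example-based average recall/precision for multi-label code
        # assignment (the codes and records below are hypothetical).
        def example_based_scores(gold, pred):
            """gold, pred: lists of sets of ICD-9 codes, one set per EMR."""
            recalls, precisions = [], []
            for g, p in zip(gold, pred):
                hits = len(g & p)
                recalls.append(hits / len(g) if g else 1.0)
                precisions.append(hits / len(p) if p else 0.0)
            n = len(gold)
            return sum(recalls) / n, sum(precisions) / n

        gold = [{"428.0", "401.9"}, {"250.00"}]
        pred = [{"428.0", "414.01"}, {"250.00", "401.9"}]
        r, p = example_based_scores(gold, pred)   # r = 0.75, p = 0.5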

  20. Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.

    2003-01-01

    The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two. The structural models are well developed; hence, most of the development effort for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to the form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and
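
    Under the harmonic motion assumption, one standard way to turn such unsteady pressures into a flutter verdict is the aerodynamic work done on the blade per vibration cycle. The sketch below shows that generic single-mode check; it is textbook frequency-domain aeroelasticity, not LINFLUX's actual postprocessing, and the modal numbers are hypothetical.

        # Work per cycle from complex modal force F_hat and displacement x_hat,
        # assuming harmonic motion x(t) = Re[x_hat * exp(i*w*t)]:
        #   W = pi * Im(F_hat * conj(x_hat))
        # W > 0 feeds energy into the mode (flutter risk); W < 0 means the
        # mode is aerodynamically damped.
        import numpy as np

        def work_per_cycle(F_hat, x_hat):
            return np.pi * np.imag(F_hat * np.conj(x_hat))

        # Hypothetical mode: force lags displacement by 30 degrees
        F_hat = 2.0 * np.exp(1j * np.deg2rad(-30.0))
        x_hat = 1.0 + 0.0j
        W = work_per_cycle(F_hat, x_hat)   # negative -> stable (damped) mode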