MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly III, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method. (authors)
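For readers unfamiliar with CMFD acceleration, the closure typically used (the MC21-specific formulation may differ in detail) defines a nonlinear correction factor on each coarse-mesh surface so that the low-order diffusion system reproduces the Monte Carlo surface currents:

```latex
% Standard one-group CMFD closure, as commonly formulated (an assumption here,
% not taken from the MC21 paper). \tilde{D} is the finite-difference coupling
% coefficient and \hat{D} is chosen so the coarse-mesh current matches the
% Monte Carlo tally J^{MC}:
\[
  J_{i+1/2} \;=\; -\tilde{D}_{i+1/2}\,\bigl(\bar{\phi}_{i+1}-\bar{\phi}_{i}\bigr)
                  + \hat{D}_{i+1/2}\,\bigl(\bar{\phi}_{i+1}+\bar{\phi}_{i}\bigr),
  \qquad
  \hat{D}_{i+1/2} \;=\;
  \frac{J^{\mathrm{MC}}_{i+1/2} + \tilde{D}_{i+1/2}\,\bigl(\bar{\phi}_{i+1}-\bar{\phi}_{i}\bigr)}
       {\bar{\phi}_{i+1}+\bar{\phi}_{i}} .
\]
```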
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongbin; Szilard, Ronaldo; Epiney, Aaron
Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both the South Texas Project (STP) and Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model, including the reactor core, clad/fuel design, and systems thermal-hydraulics, based on the STP nuclear power plant, a 4-loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use with this generic PWR plant model to assess safety margins for the proposed NRC 10 CFR 50.46c rule on Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal-hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.
VERA Core Simulator Methodology for PWR Cycle Depletion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin S; Jabaay, Daniel
2015-01-01
This paper describes the methodology developed and implemented in MPACT for performing high-fidelity pressurized water reactor (PWR) multi-cycle core physics calculations. MPACT is being developed primarily for application within the Consortium for the Advanced Simulation of Light Water Reactors (CASL) as one of the main components of the VERA Core Simulator, the others being COBRA-TF and ORIGEN. The methods summarized in this paper include a methodology for performing resonance self-shielding and computing macroscopic cross sections, 2-D/1-D transport, nuclide depletion, thermal-hydraulic feedback, and other supporting methods. These methods represent a minimal set needed to simulate high-fidelity models of a realistic nuclear reactor. Results demonstrating this are presented from the simulation of a realistic model of the first cycle of Watts Bar Unit 1. The simulation, which approximates the cycle operation, is observed to be within 50 ppm boron (ppmB) reactivity for all simulated points in the cycle and approximately 15 ppmB for a consistent statepoint. The verification and validation of the PWR cycle depletion capability in MPACT is the focus of two companion papers.
Optimization of small long-life PWR based on thorium fuel
NASA Astrophysics Data System (ADS)
Subkhi, Moh Nurul; Suud, Zaki; Waris, Abdul; Permana, Sidik
2015-09-01
A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronic aspect. The cell burn-up calculations were performed with the PIJ SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were carried out in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride with ZIRLO cladding is considered during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa gives low excess reactivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downar, Thomas
This report summarizes the current status of VERA-CS Verification and Validation for PWR Core Follow operation and proposes a multi-phase plan for continuing VERA-CS V&V in FY17 and FY18. The proposed plan recognizes the hierarchical nature of a multi-physics code system such as VERA-CS and the importance of first achieving an acceptable level of V&V on each of the single-physics codes before focusing on the V&V of the coupled physics solution. The report summarizes the V&V of each of the single-physics code systems currently used for core follow analysis (i.e., MPACT, CTF, Multigroup Cross Section Generation, and BISON / Fuel Temperature Tables) and proposes specific actions to achieve a uniformly acceptable level of V&V in FY17. The report also recognizes the ongoing development of other codes important for PWR Core Follow (e.g., TIAMAT, MAMBA3D) and proposes Phase II (FY18) VERA-CS V&V activities in which those codes will also reach an acceptable level of V&V. The report then summarizes the current status of VERA-CS multi-physics V&V for PWR Core Follow and the ongoing PWR Core Follow V&V activities for FY17. An automated procedure and output data format are proposed for standardizing the output for core follow calculations and automatically generating tables and figures for the VERA-CS LaTeX file. A set of acceptance metrics is also proposed for the evaluation and assessment of core follow results that would be used within the script to automatically flag any results which require further analysis or more detailed explanation prior to being added to the VERA-CS validation base. After the Automation Scripts have been completed and tested using BEAVRS, the VERA-CS plan proposes that the Watts Bar cycle depletion cases be performed with the new cross section library and be included in the first draft of the new VERA-CS manual for release at the end of PoR15. Also, within the constraints imposed by the proprietary nature of plant data, as many as possible of the FY17 AMA Plant Core Follow cases should also be included in the VERA-CS manual at the end of PoR15. After completion of the ongoing development of TIAMAT for fully coupled, full core calculations with VERA-CS / BISON 1.5D, and after the completion of the refactoring of MAMBA3D for CIPS analysis in FY17, selected cases from the VERA-CS validation base should be performed, beginning with the legacy cases of Watts Bar and BEAVRS in PoR16. Finally, as potential Phase III future work, some additional considerations are identified for extending the VERA-CS V&V to other reactor types such as the BWR.
Efficient provisioning for multi-core applications with LSF
NASA Astrophysics Data System (ADS)
Dal Pra, Stefano
2015-12-01
Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high-throughput performance. This is pursued by reducing the variety of supported use cases and tuning for the most important ones, which historically have been single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use and there are queued jobs waiting for their turn to run. Enabling multi-core jobs thus requires dedicating a number of hosts to run them and waiting for them to free the needed number of cores. This drain time introduces a loss of computing power driven by the number of unusable empty cores. As an increasing demand for multi-core capable resources has emerged, a Task Force has been constituted within WLCG with the goal of defining a simple and efficient multi-core resource provisioning model. This paper details the work done at the INFN Tier-1 to enable multi-core support for the LSF batch system, with the intent of reducing the average number of unused cores to a minimum. The adopted strategy has been to dedicate to multi-core a dynamic set of nodes, whose size is mainly driven by the number of pending multi-core requests and the fair-share priority of the submitting user. The node status transition, from single-core to multi-core and vice versa, is driven by a finite state machine implemented in a custom multi-core director script running in the cluster. After describing and motivating both the implementation and the details specific to the LSF batch system, results about performance are reported. Factors having positive and negative impact on the overall efficiency are discussed, and solutions to minimize the negative ones are proposed.
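A minimal sketch of the kind of node-state machine such a director script could implement is shown below; the state names, core threshold, and transition rules are illustrative assumptions, not the actual INFN Tier-1 implementation.

```python
# Minimal sketch of a node-state machine for dynamic multi-core provisioning.
# States and transition rules are illustrative assumptions, not the actual
# INFN Tier-1 multi-core director logic described in the paper.
from enum import Enum

class NodeState(Enum):
    SINGLE_CORE = 1   # node accepts single-core jobs
    DRAINING = 2      # running single-core jobs finish, no new dispatch
    MULTI_CORE = 3    # node reserved for multi-core jobs

def next_state(state, pending_mcore, free_cores, cores_needed=8):
    """Advance one node's state based on the current demand for multi-core slots."""
    if state is NodeState.SINGLE_CORE and pending_mcore > 0:
        return NodeState.DRAINING            # start freeing the node
    if state is NodeState.DRAINING and free_cores >= cores_needed:
        return NodeState.MULTI_CORE          # enough cores drained to host a multi-core job
    if state is NodeState.MULTI_CORE and pending_mcore == 0:
        return NodeState.SINGLE_CORE         # no more demand: give the node back
    return state

# Example: a busy node is asked to host 8-core jobs, then demand disappears.
state = NodeState.SINGLE_CORE
for pending, free in [(3, 2), (3, 5), (3, 8), (0, 8)]:
    state = next_state(state, pending, free)
    print(state)
```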
Optimization of small long-life PWR based on thorium fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh Nurul, E-mail: nsubkhi@students.itb.ac.id; Physics Dept., Faculty of Science and Technology, State Islamic University of Sunan Gunung Djati Bandung Jalan A.H Nasution 105 Bandung; Suud, Zaki, E-mail: szaki@fi.itb.ac.id
2015-09-30
A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronic aspect. The cell burn-up calculations were performed with the PIJ SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were carried out in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride with ZIRLO cladding is considered during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa gives low excess reactivity.
40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?
Code of Federal Regulations, 2011 CFR
2011-07-01
... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + … + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...
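A small worked example of the weighted-sum calculation in paragraph (b), using entirely hypothetical component values and weight fractions, illustrates the compliance test:

```python
# Illustrative (hypothetical) check of the 40 CFR 59.506 kit calculation:
# Kit PWR and the Total Reactivity Limit are weight-fraction-weighted sums over
# the kit components; compliance requires Kit PWR <= Total Reactivity Limit.
pwr = [0.8, 1.2, 0.5]   # per-component PWR values (hypothetical)
rl  = [1.0, 1.5, 0.9]   # per-component reactivity limits (hypothetical)
w   = [0.5, 0.3, 0.2]   # weight fraction of each component in the kit (hypothetical)

kit_pwr = sum(p * wi for p, wi in zip(pwr, w))
total_reactivity_limit = sum(r * wi for r, wi in zip(rl, w))

print(kit_pwr, total_reactivity_limit, kit_pwr <= total_reactivity_limit)
# -> 0.86, ~1.13, True: this hypothetical kit would comply
```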
40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?
Code of Federal Regulations, 2014 CFR
2014-07-01
... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + … + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...
40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?
Code of Federal Regulations, 2012 CFR
2012-07-01
... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + … + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...
40 CFR 59.506 - How do I demonstrate compliance if I manufacture multi-component kits?
Code of Federal Regulations, 2013 CFR
2013-07-01
... multi-component kits as defined in § 59.503, then the Kit PWR must not exceed the Total Reactivity Limit. (b) You must calculate the Kit PWR and the Total Reactivity Limit as follows: (1) Kit PWR = (PWR(1) × W1) + (PWR(2) × W2) + … + (PWR(n) × Wn) (2) Total Reactivity Limit = (RL1 × W1) + (RL2 × W2...
Integral Full Core Multi-Physics PWR Benchmark with Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forget, Benoit; Smith, Kord; Kumar, Shikhar
In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering in the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation is essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g., critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements that are necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.
Active Job Monitoring in Pilots
NASA Astrophysics Data System (ADS)
Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas
2015-12-01
Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. Especially the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, but also the need for identification of misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at batch job level. This complicates network-aware scheduling and optimisations. In addition, pilots add another layer of abstraction. They behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured. Hence, identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at batch job level. This contribution presents the current monitoring approach and discusses recent efforts to identify pilots and their substructures inside the batch system, and the importance of doing so. It also shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.
Conceptual design study of small long-life PWR based on thorium cycle fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, M. Nurul; Su'ud, Zaki; Waris, Abdul
2014-09-30
The neutronic performance of a small long-life Pressurized Water Reactor (PWR) using thorium-cycle-based fuel has been investigated. The thorium cycle, which has a higher conversion ratio in the thermal region than the uranium cycle, produces a significant amount of 233U during burnup. The cell burn-up calculations were performed with the PIJ SRAC code using a nuclear data library based on JENDL 3.3, while the multi-energy-group diffusion calculations were optimized for the whole core in cylindrical two-dimensional R-Z geometry by SRAC-CITATION. This study introduces a thorium nitride fuel system with ZIRLO as the cladding material. The optimization of the 350 MWt small long-life PWR results in small excess reactivity and reduced power peaking during its operation.
CMS Readiness for Multi-Core Workload Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
CMS readiness for multi-core workload scheduling
NASA Astrophysics Data System (ADS)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
2017-10-01
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
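The following Python sketch mimics, in very simplified form, the accounting behind a multi-slot partitionable pilot: one pilot advertises all of its cores and carves per-job "dynamic slots" on demand, so single-core and multi-core payloads can share the same pilot. The class and job names are illustrative assumptions; HTCondor and GlideinWMS provide this behavior natively.

```python
# Simplified illustration of partitionable-slot accounting inside one pilot.
# Not CMS production code; HTCondor/GlideinWMS implement this natively.
class PartitionablePilot:
    def __init__(self, total_cores):
        self.free_cores = total_cores
        self.dynamic_slots = []          # list of (job_id, cores) pairs

    def claim(self, job_id, cores):
        """Carve a dynamic slot if enough cores remain; return True on success."""
        if cores <= self.free_cores:
            self.free_cores -= cores
            self.dynamic_slots.append((job_id, cores))
            return True
        return False

pilot = PartitionablePilot(total_cores=8)
print(pilot.claim("reco-multicore", 4))   # True  -> 4 cores left
print(pilot.claim("sim-singlecore", 1))   # True  -> 3 cores left
print(pilot.claim("reco-multicore-2", 4)) # False -> does not fit in the remaining cores
```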
Coupled Neutronics Thermal-Hydraulic Solution of a Full-Core PWR Using VERA-CS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarno, Kevin T; Palmtag, Scott; Davidson, Gregory G
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a core simulator called VERA-CS to model operating PWR reactors with high resolution. This paper describes how the development of VERA-CS is being driven by a set of progression benchmark problems that specify the delivery of useful capability in discrete steps. As part of this development, this paper will describe the current capability of VERA-CS to perform a multiphysics simulation of an operating PWR at Hot Full Power (HFP) conditions using a set of existing computer codes coupled together in a novel method. Results for several single-assembly cases are shown that demonstrate coupling for different boron concentrations and power levels. Finally, high-resolution results are shown for a full-core PWR reactor modeled in quarter-symmetry.
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore+SIMD) on shared memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters using a single computing node. For example, DOMINO solves a 3D full core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.
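Two back-of-envelope checks on the quoted figures (illustrative arithmetic only; the paper's exact degree-of-freedom accounting may differ):

```python
# Consistency checks on the quoted DOMINO problem size and performance.
groups, directions, cells = 26, 288, 46e6
angular_unknowns = groups * directions * cells
print(f"{angular_unknowns:.2e}")          # ~3.4e11 group-angle-cell unknowns, i.e. roughly
                                          # 3 spatial DoFs each to reach the quoted ~1e12 DoFs

sustained_gflops, peak_fraction = 235.0, 0.4074
print(sustained_gflops / peak_fraction)   # ~577 GFlops implied node peak performance
```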
Optimization of burnable poison design for Pu incineration in fully fertile free PWR core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fridman, E.; Shwageraus, E.; Galperin, A.
2006-07-01
The design challenges of fertile-free fuel (FFF) can be addressed by careful and elaborate use of burnable poisons (BP). A practical fully FFF core design for a PWR has been reported in the past [1]. However, the burnable poison option used in that design resulted in a significant end-of-cycle reactivity penalty due to incomplete BP depletion. Consequently, excessive Pu loadings were required to maintain the target fuel cycle length, which in turn decreased the Pu burning efficiency. A systematic evaluation of commercially available BP materials in all configurations currently used in PWRs is the main objective of this work. The BP materials considered are boron, Gd, Er, and Hf. The BP geometries were based on the Wet Annular Burnable Absorber (WABA), the Integral Fuel Burnable Absorber (IFBA), and homogeneous poison/fuel mixtures. Several of the most promising combinations of BP designs were selected for full core 3D simulation. All major core performance parameters for the analyzed cases are very close to those of a standard PWR with conventional UO2 fuel, including the possibility of reactivity control, power peaking factors, and cycle length. The MTC of all FFF cores was found to be negative at full power conditions at all times and very close to that of the UO2 core. The Doppler coefficient of the FFF cores is also negative but somewhat lower in magnitude compared to the UO2 core. The soluble boron worth of the FFF cores was calculated to be lower than that of the UO2 core by about a factor of two, which still allows core reactivity control with acceptable soluble boron concentrations. The main conclusion of this work is that judicious application of burnable poisons for fertile-free fuel has the potential to produce a core design with performance characteristics close to those of the reference PWR core with conventional UO2 fuel. (authors)
Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...
2016-09-07
VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full core response with 17x17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at hot zero power (HZP) at the end of the first fuel cycle, with return-to-power state points determined by a system analysis code; the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks’ nonparametric statistical approach. In this way, 59 full core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
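The 59-run sample size is consistent with the first-order, one-sided Wilks formula for a 95/95 tolerance limit, as the short check below shows (assuming that formulation is the one used):

```python
# Why 59 runs suffice for a one-sided 95/95 tolerance limit under Wilks'
# first-order nonparametric formula: find the smallest n with 1 - gamma**n >= beta,
# where gamma = 0.95 (probability content) and beta = 0.95 (confidence level).
gamma, beta = 0.95, 0.95
n = 1
while 1.0 - gamma**n < beta:
    n += 1
print(n)  # 59 -> taking the smallest MDNBR of 59 runs gives a 95/95 lower bound
```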
TREAT Neutronics Analysis and Design Support, Part I: Multi-SERTTA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Woolstenhulme, Nicolas E.; Hill, Connie M.
2016-08-01
Experiment vehicle design is necessary in preparation for Transient Reactor Test (TREAT) facility restart and the resumption of transient testing to support Accident Tolerant Fuel (ATF) characterization and other future fuels testing requirements. Currently the most mature vehicle design is the Multi-SERTTA (Static Environments Rodlet Transient Test Apparatuses), which can accommodate up to four concurrent rodlet-sized specimens under separate environmental conditions. Robust test vehicle design requires neutronics analyses to support design development, optimization of the power coupling factor (PCF) to efficiently maximize energy generation in the test fuel rodlets, and experiment safety analyses. Calculations were performed to support analysis of a near-final design of the Multi-SERTTA vehicle, to support the design process for future TREAT test vehicles, and to establish analytical practices for upcoming transient test experiments. Models of the Multi-SERTTA vehicle containing typical PWR-fuel rodlets were prepared and neutronics calculations were performed using MCNP6.1 with ENDF/B-VII.1 nuclear data libraries. Calculation of the PCF for reference conditions of a PWR fuel rodlet in clean water at operational temperature and pressure provided results between 1.10 and 1.74 W/g-MW depending on the location of the four Multi-SERTTA units within the stack. Basic changes to the Multi-SERTTA secondary vessel containment and support have minimal impact on PCF; using materials with less neutron absorption can improve expected PCF values, especially in the primary containment. An optimized balance is needed between structural integrity, experiment safety, and energy deposition in the experiment. The type of medium and the environmental conditions within the primary vessel surrounding the fuel rodlet can also have a significant impact on resultant PCF values. The estimated reactivity insertion worth into the TREAT core is impacted more by the primary and secondary Multi-SERTTA vehicle structure, with the experiment content and contained environment having a near negligible impact on overall system reactivity. Additional calculations were performed to evaluate the peak-to-average assembly powers throughout the TREAT core, as well as the nuclear heat generation for the various structural components of the Multi-SERTTA assembly. Future efforts include the evaluation of flux collars to shape the PCF for individual Multi-SERTTA units during an experiment so as to achieve uniformity in test unit environmental conditions impacted by the non-uniform axial flux/power profile of TREAT. Upon resumption of transient testing, experimental results from both the Multi-SERTTA and Multi-SERTTA-CAL will be compared against calculational results and methods for further optimization and design strategies.
Xenon-induced power oscillations in a generic small modular reactor
NASA Astrophysics Data System (ADS)
Kitcher, Evans Damenortey
As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, the SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. In order to do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal-hydraulic assessments were performed using a single channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature-dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against literature results of startup tests performed at a four-loop PWR in Korea. The developed benchmark model replicated the pertinent features of the reactor within ten percent of the literature values. The results of the benchmark demonstrated that the developed methodology captured the desired phenomena accurately. Subsequently, a high-fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effect of axial discretization, stochastic noise and convergence of the Monte Carlo tallies in the calculations of the PAO and XAO parameters was investigated. All were found to be quite small and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. With the desire to operate the SMR under boron-free coolant conditions, erbium oxide integral burnable absorber fuel rods were identified as a possible replacement for soluble boron, retaining its dispersed-absorber effect in the reactor coolant.
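The axial offset quantities referred to above are commonly defined as the normalized top-bottom power (or xenon) imbalance; the dissertation's exact PAO/XAO definitions may differ slightly:

```latex
% Commonly used axial-offset definition (an assumption here, not quoted from the
% dissertation). P_T and P_B are the power (or xenon inventory) integrated over
% the top and bottom halves of the core:
\[
  \mathrm{AO} \;=\; \frac{P_{\mathrm{T}} - P_{\mathrm{B}}}{P_{\mathrm{T}} + P_{\mathrm{B}}},
  \qquad -1 \le \mathrm{AO} \le 1 .
\]
```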
Design study of long-life PWR using thorium cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul
2012-06-06
A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burn-up, and multigroup diffusion calculations were performed with the PIJ-CITATION-SRAC code using libraries based on JENDL 3.2. The neutronic analysis of the infinite cell calculation shows that 231Pa performs better than 237Np as a burnable poison in the thorium fuel system. A thorium oxide system with 8% 233U enrichment and 7.6-8% 231Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1% Δk/k and a longer burnup period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with excess reactivity reduced to as low as 0.53% Δk/k and reduced power peaking during its operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McConnell, Paul E.; Koenig, Greg John; Uncapher, William Leonard
2016-05-01
This report describes the third set of tests (the “DCLa shaker tests”) of an instrumented surrogate PWR fuel assembly. The purpose of this set of tests was to measure strains and accelerations on Zircaloy-4 fuel rods when the PWR assembly was subjected to rail and truck loadings simulating normal conditions of transport when affixed to a multi-axis shaker. This is the first set of tests of the assembly simulating rail normal conditions of transport.
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWRs and Boiling Water Reactors (BWR). The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolution operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation is changed but also the algorithm is changed to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.
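A very small, generic GA skeleton of the kind described is sketched below; it is not GARCO itself, and the fitness function, operators, and problem size are placeholder assumptions.

```python
# Illustrative GA skeleton for loading-pattern search (not GARCO): an individual
# is a permutation of fuel-assembly IDs over core positions; heuristic rules could
# be applied as a repair/filter step after mutation or crossover.
import random

def evaluate(pattern):
    # Placeholder fitness: a real evaluation would call a core simulator
    # (e.g. cycle length minus penalties for peaking-factor violations).
    return -sum(abs(pos - fa) for pos, fa in enumerate(pattern))

def mutate(pattern):
    a, b = random.sample(range(len(pattern)), 2)
    child = pattern[:]
    child[a], child[b] = child[b], child[a]   # swap two assemblies
    return child

population = [random.sample(range(20), 20) for _ in range(30)]
for generation in range(50):
    population.sort(key=evaluate, reverse=True)
    parents = population[:10]                                   # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

print(evaluate(max(population, key=evaluate)))                  # best fitness found
```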
Reactor Physics Assessment of Thick Silicon Carbide Clad PWR Fuels
2013-06-01
[Table-of-contents fragment only: fuel densities; fuel mass (core total); geometry, material density, and mass summary for all cores; fuel rod masses for different clads.]
Analysis of the return to power scenario following a LBLOCA in a PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macian, R.; Tyler, T.N.; Mahaffy, J.H.
1995-09-01
The risk of reactivity accidents has been considered an important safety issue since the beginning of the nuclear power industry. In particular, several events leading to such scenarios for PWRs have been recognized and studied to assess the potential risk of fuel damage. The present paper analyzes one such event: the possible return to power during the reflooding phase following a LBLOCA. TRAC-PF1/MOD2, coupled with a three-dimensional neutronic model of the core based on the Nodal Expansion Method (NEM), was used to perform the analysis. The system computer model contains a detailed representation of a complete typical 4-loop PWR. Thus, the simulation can follow complex system interactions during reflooding, which may influence the neutronics feedback in the core. Analyses were made with core models based on cross sections generated by LEOPARD. A standard case and a potentially more limiting case, with increased pressurizer and accumulator inventories, were run. In both simulations, the reactor reaches a stable state after the reflooding is completed. The lower core region, filled with cold water, generates enough power to boil part of the incoming liquid, thus preventing the core average liquid fraction from reaching a value high enough to cause a return to power. At the same time, the mass flow rate through the core is adequate to maintain the rod temperature well below the fuel damage limit.
ACHILLES: Heat Transfer in PWR Core During LOCA Reflood Phase
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-11-01
1. NAME AND TITLE OF DATA LIBRARY ACHILLES - Heat Transfer in PWR Core During LOCA Reflood Phase. 2. NAME AND TITLE OF DATA RETRIEVAL PROGRAMS N/A 3. CONTRIBUTOR AEA Technology, Winfrith Technology Centre, Dorchester DT2 8DH United Kingdom through the OECD Nuclear Energy Agency Data Bank, Issy-les-Moulineaux, France. 4. DESCRIPTION OF TEST FACILITY The most important features of the Achilles rig were the shroud vessel, which contained the test section, and the downcomer. These may be thought of as representing the core barrel and the annular downcomer in the reactor pressure vessel. The test section comprises a cluster of 69 rods in a square array within a circular shroud vessel. The rod diameter and pitch (9.5 mm and 12.6 mm) were typical of PWR dimensions. The internal diameter of the shroud vessel was 128 mm. Each rod was electrically heated over a length of 3.66 m, which is typical of the nuclear heated length in a PWR fuel rod, and each contained 6 internal thermocouples. These were arranged in one of 8 groupings which concentrated the thermocouples in different axial zones. The spacer grids were at prototypic PWR locations. Each grid had two thermocouples attached to its trailing edge at radial locations. The axial power profile along the rods was an 11-step approximation to a "chopped cosine". The shroud vessel had 5 heating zones whose power could be independently controlled. 5. DESCRIPTION OF TESTS The Achilles experiments investigated the heat transfer in the core of a Pressurized Water Reactor during the re-flood phase of a postulated large break loss of coolant accident. The results provided data to validate codes and to improve modeling. Different types of experiments were carried out which included single phase cooling, re-flood under low flow conditions, level swell and re-flood under high flow conditions. Three series of experiments were performed. The first and the third used the same test section but the second used another test section, similar in all respects except that it contained a partial blockage formed by attaching sleeves (or "balloons") to some of the rods. 6. SOURCE AND SCOPE OF DATA Phenomena Tested - Heat transfer in the core of a PWR during the re-flood phase of a postulated large break LOCA. Test Designation - Achilles Rig. The programme includes the following types of experiments: - on an unballooned cluster: -- single phase air flow -- low pressure level swell -- low flooding rate re-flood -- high flooding rate re-flood - on a ballooned cluster containing 80% blockage formed by 16 balloon sleeves -- single phase air flow -- low flooding rate re-flood 7. DISCUSSION OF THE DATA RETRIEVAL PROGRAM N/A 8. DATA FORMAT AND COMPUTER Many Computers (M00019MNYCP00). 9. TYPICAL RUNNING TIME N/A 11. CONTENTS OF LIBRARY The ACHILLES package contains test data and associated data processing software as well as the documentation listed above. 12. DATE OF ABSTRACT November 2013. KEYWORDS: DATABASES, BENCHMARKS, HEAT TRANSFER, LOSS-OF-COOLANT ACCIDENT, PWR REACTORS, REFLOODING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, H.; Fukui, S.; Iwahashi, Y.
1994-12-31
The development of an inspection technique and tool for the Bottom Mounted Instrument (BMI) nozzle of PWR plants was performed as a countermeasure to the leakage accident at an in-core instrument nozzle of Hamaoka-1 (BWR). MHI carried out the following development, the object of which was the PWR plant reactor vessel (R/V): (1) development of an ECT/UT multi-sensored probe; (2) development of the Inspection System; (3) development of the Data Processing System. The Inspection System was functionally tested using a full-scale mock-up. As a result of the functional test, this system was confirmed to be very effective and is considered promising for actual application on site.
Multi-pack Disposal Concepts for Spent Fuel (Rev. 0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Hardin, Ernest; Matteo, Edward N.
2015-12-01
At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media (Hardin et al., 2012). Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design (CRWMS M&O, 1999). Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9-BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems (EnergySolution, 2015). This report identifies disposal concepts for the small canisters (4-PWR size), drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).
Multi-Pack Disposal Concepts for Spent Fuel (Revision 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest; Matteo, Edward N.; Hadgu, Teklu
2016-01-01
At the initiation of the Used Fuel Disposition (UFD) R&D campaign, international geologic disposal programs and past work in the U.S. were surveyed to identify viable disposal concepts for crystalline, clay/shale, and salt host media. Concepts for disposal of commercial spent nuclear fuel (SNF) and high-level waste (HLW) from reprocessing are relatively advanced in countries such as Finland, France, and Sweden. The UFD work quickly showed that these international concepts are all “enclosed,” whereby waste packages are emplaced in direct or close contact with natural or engineered materials. Alternative “open” modes (emplacement tunnels are kept open after emplacement for extended ventilation) have been limited to the Yucca Mountain License Application Design. Thermal analysis showed that, if “enclosed” concepts are constrained by peak package/buffer temperature, waste package capacity is limited to 4 PWR assemblies (or 9 BWR) in all media except salt. This information motivated separate studies: 1) extend the peak temperature tolerance of backfill materials, which is ongoing; and 2) develop small canisters (up to 4-PWR size) that can be grouped in larger multi-pack units for convenience of storage, transportation, and possibly disposal (should the disposal concept permit larger packages). A recent result from the second line of investigation is the Task Order 18 report: Generic Design for Small Standardized Transportation, Aging and Disposal Canister Systems. This report identifies disposal concepts for the small canisters (4-PWR size), drawing heavily on previous work, and for the multi-pack (16-PWR or 36-BWR).
Liquid level, void fraction, and superheated steam sensor for nuclear-reactor cores [PWR; BWR]
Tokarz, R.D.
1981-10-27
This disclosure relates to an apparatus for monitoring the presence of coolant in liquid or mixed liquid and vapor, and superheated gaseous phases at one or more locations within an operating nuclear reactor core, such as pressurized water reactor or a boiling water reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1979-10-01
The primary objective of this program is to develop and demonstrate an improved PWR fuel assembly design capable of batch average burnups of 45,000-50,000 MWd/mtU. To accomplish this, a number of technical areas must be investigated to verify acceptable extended-burnup fuel performance. This report is the first semi-annual progress report for the program, and it describes work performed during the July-December 1978 time period. Efforts during this period included the definition of a preliminary design for a high-burnup fuel rod, physics analyses of extended-burnup fuel cycles, studies of the physics characteristics of changes in fuel assembly metal-to-water ratios, and development of a design concept for post-irradiation examination equipment to be utilized in examining high-burnup lead-test assemblies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastenberg, W.E.; Apostolakis, G.; Dhir, V.K.
Severe accident management can be defined as the use of existing and/or alternative resources, systems and actors to prevent or mitigate a core-melt accident. For each accident sequence and each combination of severe accident management strategies, there may be several options available to the operator, and each involves phenomenological and operational considerations regarding uncertainty. Operational uncertainties include operator, system and instrumentation behavior during an accident. A framework based on decision trees and influence diagrams has been developed which incorporates such criteria as feasibility, effectiveness, and adverse effects for evaluating potential severe accident management strategies. The framework is also capable of propagating both data and model uncertainty. It is applied to several potential strategies including PWR cavity flooding, BWR drywell flooding, PWR depressurization, and PWR feed and bleed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
...The U.S. Nuclear Regulatory Commission (NRC) is issuing a revision to regulatory guide (RG), 1.79, ``Preoperational Testing of Emergency Core Cooling Systems for Pressurized-Water Reactors.'' This RG is being revised to incorporate guidance for preoperational testing of new pressurized water reactor (PWR) designs.
NASA Astrophysics Data System (ADS)
Ansari, Saleem A.; Haroon, Muhammad; Rashid, Atif; Kazmi, Zafar
2017-02-01
Extensive calculations and measurements of flow-induced vibrations (FIV) of reactor internals were made in a PWR plant to assess the structural integrity of the reactor core support structure against coolant flow. The work was done to meet the requirements of the Fukushima Response Action Plan (FRAP) for enhancement of reactor safety and the regulatory guide RG-1.20. For the core surveillance measurements, the Reactor Internals Vibration Monitoring System (IVMS) has been developed based on detailed neutron noise analysis of the flux signals from the four ex-core neutron detectors. The natural frequencies, displacements and mode shapes of the reactor core barrel (CB) motion were determined with the help of the IVMS. Random pressure fluctuations in the reactor coolant flow due to turbulence forces have been identified as the predominant cause of beam-mode deflection of the CB. Dynamic FIV calculations were also made to supplement the core surveillance measurements. The calculational package employed computational fluid dynamics, mode-shape analysis, calculation of the power spectral densities of the flow and pressure fields, and the structural response to random flow excitation forces. The dynamic loads and stiffness of the Hold-Down Spring that keeps the core structure in position against the upward coolant thrust were also determined by noise measurements. Also, the boron concentration in the primary coolant at any time of the core cycle has been determined with the IVMS.
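A minimal sketch of the neutron-noise step such an IVMS relies on is shown below: estimating the power spectral density of an ex-core detector signal and locating the core-barrel beam-mode peak. Synthetic data are used, and the 8 Hz beam-mode frequency, sampling rate, and noise levels are all illustrative assumptions.

```python
# Sketch of PSD estimation for an ex-core detector signal (synthetic data only).
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # assumed sampling frequency [Hz]
t = np.arange(0, 600, 1 / fs)                # 10 minutes of signal
beam_mode = 0.002 * np.sin(2 * np.pi * 8.0 * t)              # assumed 8 Hz barrel motion
signal = 1.0 + beam_mode + 0.01 * np.random.randn(t.size)    # mean flux + background noise

f, psd = welch(signal, fs=fs, nperseg=4096)  # Welch PSD estimate
mask = f > 1.0                               # ignore the low-frequency background
print(f[mask][np.argmax(psd[mask])])         # dominant frequency, ~8 Hz here
```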
Vectorized and multitasked solution of the few-group neutron diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zee, S.K.; Turinsky, P.J.; Shayer, Z.
1989-03-01
A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of ~61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.
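As a quick illustrative check of the quoted multitasking figure, using the usual definition efficiency = speedup / number of CPUs (an assumption about the convention used):

```python
# 81% multitasking efficiency on 2 CPUs corresponds to this parallel speedup:
cpus, efficiency = 2, 0.81
print(cpus * efficiency)   # ~1.62x over a single CPU
```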
Conceptual Core Analysis of Long Life PWR Utilizing Thorium-Uranium Fuel Cycle
NASA Astrophysics Data System (ADS)
Rouf; Su'ud, Zaki
2016-08-01
A conceptual core analysis of a long-life PWR utilizing thorium-uranium based fuel has been conducted. The purpose of this study is to evaluate the neutronic behavior of a reactor core using combined thorium and enriched uranium fuel. With this fuel composition, the reactor core has a higher conversion ratio than with conventional fuel, which allows a longer operation length. The simulation was performed using the SRAC Code System based on the SRACLIB-JDL32 library. The calculations were carried out for (Th-U)O2 and (Th-U)C fuel with a uranium fraction of 30-40% and gadolinium (Gd2O3) as burnable poison at 0.0125%. The fuel composition was adjusted to obtain a burnup length of 10-15 years at a thermal power of 600-1000 MWt. Key properties such as uranium enrichment, fuel volume fraction, and percentage of uranium are evaluated. The core calculation in this study adopted R-Z geometry divided into 3 regions, each with a different uranium enrichment. The results show the multiplication factor at every burnup step over the 15-year operation length, the power distribution behavior, the power peaking factor, and the conversion ratio. The optimum core design is achieved at a thermal power of 600 MWt, a uranium percentage of 35%, and a U-235 enrichment of 11-13%, with a 14-year operation length and axial and radial power peaking factors of about 1.5 and 1.2, respectively.
Test prediction for the German PKL Test K5A using RELAP4/MOD6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.S.; Haigh, W.S.; Sullivan, L.H.
RELAP4/MOD6 is the most recent modification in the series of RELAP4 computer programs developed to describe the thermal-hydraulic conditions attendant to postulated transients in light water reactor systems. The major new features in RELAP4/MOD6 include best-estimate pressurized water reactor (PWR) reflood transient analytical models for core heat transfer, local entrainment, and core vapor superheat, and a new set of heat transfer correlations for PWR blowdown and reflood. These new features were used for a test prediction of the Kraftwerk Union three-loop Primärkreislauf (PKL) Reflood Test K5A. The results of the prediction were in good agreement with the experimental thermal and hydraulic system data. Comparisons include heater rod surface temperature, system pressure, mass flow rates, and core mixture level. It is concluded that RELAP4/MOD6 is capable of accurately predicting transient reflood phenomena in the 200% cold-leg break test configuration of the PKL reflood facility.
An Overview of Reactor Concepts, a Survey of Reactor Designs.
1985-02-01
may be very different. HTGRs may use highly enriched uranium, thereby yielding better fuel economy and a reduction of the actual core size for a...specific power level. The HTGR core may have fuel and control rods placed in graphite arrays similar to a PWR core configuration, or they may have fuel ...rods are pulled out. A Peach Bottom core design is another HTGR design. This design features the fuel pin’s ability to purge itself of fission
Peregrine Job Queues and Scheduling Policies | High-Performance Computing |
Queues: batch | batch-h | long | bigmem | data-transfer | feature. Max wall time: 1 hour | 4 hours | 2 days | 2 days | 10 days | 10. Nodes per job: 2 | 8 | 288 | 576 | 120 | 46 | 1. 24-core 64 GB Haswell nodes (feature "haswell"): 2 | 8 | 0 | 1228 | 0 | 0 | 0. 24-core 32 GB nodes (feature "24core"): 2 | 16 | 576 | 0 | 126 | 0 | 0. 16-core 32 GB nodes (feature "16core"): 2 | 8 | 195 | 0 | 162 | 0 | 5.
Shah, Neha; Mehta, Tejal; Aware, Rahul; Shetty, Vasant
2017-12-01
The present work aims at studying the process parameters affecting the coating of minitablets (3 mm in diameter) through the Wurster coating process. Minitablets of naproxen with high drug loading were manufactured using 3 mm multi-tip punches. The release profiles of the core pellets (published) and the minitablets were compared with that of the marketed formulation. The core formulation of the minitablets was found to show a dissolution profile similar to the marketed formulation and hence was carried forward for functional coating. Wurster processing was used to apply the functional coating over the core formulation. Different process parameters were screened and a control strategy was applied for the factors significantly affecting the process. A modified Plackett-Burman design was applied to study the important factors. Based on the significant factors and the minimum level of coating required for functionalization, the optimized process was executed. The final coated batch was evaluated for coating thickness, surface morphology, and drug release.
Demonstration of optimum fuel-to-moderator ratio in a PWR unit fuel cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Pozsgai, C.
1992-01-01
Nuclear engineering students at The Pennsylvania State University develop scaled-down [approximately 350 MW(thermal)] pressurized water reactors (PWRs) using actual plants as references. The design criteria include maintaining the clad temperature below 2200°F, fuel temperature below melting point, sufficient departure from nucleate boiling ratio (DNBR) margin, a beginning-of-life boron concentration that yields a negative moderator temperature coefficient, an adequate cycle power production (330 effective full-power days), and a batch loading scheme that is economical. The design project allows for many degrees of freedom (e.g., assembly number, pitch and height and batch enrichments) so that each student's result is unique. The iterative nature of the design process is stressed in the course. The LEOPARD code is used for the unit cell depletion, critical boron, and equilibrium xenon calculations. Radial two-group diffusion equations are solved with the TWIDDLE-DEE code. The steady-state ZEBRA thermal-hydraulics program is used for calculating DNBR. The unit fuel cell pin radius and pitch (fuel-to-moderator ratio) for the scaled-down design, however, were set equal to the already optimized ratio for the reference PWR. This paper describes an honors project that shows how the optimum fuel-to-moderator ratio is found for a unit fuel cell in terms of neutron economics. This exercise illustrates the impact of fuel-to-moderator variations on the fuel utilization factor and the effect of assuming space and energy separability.
NASA Astrophysics Data System (ADS)
Ganda, Francesco
The first part of the work presents the neutronic results of a detailed and comprehensive study of the feasibility of using hydride fuel in pressurized water reactors (PWRs). The primary hydride fuel examined is U-ZrH1.6 having 45 w/o uranium; two acceptable design approaches were identified: (1) use of erbium as a burnable poison; (2) replacement of a fraction of the ZrH1.6 by thorium hydride along with addition of some IFBA. The replacement of 25 v/o of ZrH1.6 by ThH2 along with use of IFBA was identified as the preferred design approach as it gives a slight cycle length gain, whereas use of erbium burnable poison results in a cycle length penalty. The feasibility of a single recycling of plutonium in PWRs in the form of U-PuH2-ZrH1.6 has also been assessed. This fuel was found superior to MOX in terms of the TRU fractional transmutation---53% for U-PuH2-ZrH1.6 versus 29% for MOX---and proliferation resistance. A thorough investigation of the physics characteristics of hydride fuels has been performed to understand the reasons for the trends in the reactivity coefficients. The second part of this work assessed the feasibility of multi-recycling plutonium in PWRs using hydride fuel. It was found that the fertile-free hydride fuel PuH2-ZrH1.6 enables multi-recycling of Pu in PWRs an unlimited number of times. This unique feature of hydride fuels is due to the incorporation of a significant fraction of the hydrogen moderator in the fuel, thereby mitigating the effect of spectrum hardening due to coolant voiding accidents. An equivalent oxide fuel PuO2-ZrO2 was investigated as well and found to enable up to 10 recycles. The feasibility of recycling Pu and all the TRU using hydride fuels was investigated as well. It was found that hydride fuels allow recycling of Pu+Np at least 6 times. If it were desired to recycle all the TRU in PWRs using hydrides, the number of possible recycles is limited to 3; the limit is imposed by the large positive void reactivity feedback.
Neutron-gamma flux and dose calculations in a Pressurized Water Reactor (PWR)
NASA Astrophysics Data System (ADS)
Brovchenko, Mariya; Dechenaux, Benjamin; Burn, Kenneth W.; Console Camprini, Patrizio; Duhamel, Isabelle; Peron, Arthur
2017-09-01
The present work deals with Monte Carlo simulations, aiming to determine the neutron and gamma responses outside the vessel and in the basemat of a Pressurized Water Reactor (PWR). The model is based on the Tihange-I Belgian nuclear reactor. With a large set of information and measurements available, this reactor has the advantage to be easily modelled and allows validation based on the experimental measurements. Power distribution calculations were therefore performed with the MCNP code at IRSN and compared to the available in-core measurements. Results showed a good agreement between calculated and measured values over the whole core. In this paper, the methods and hypotheses used for the particle transport simulation from the fission distribution in the core to the detectors outside the vessel of the reactor are also summarized. The results of the simulations are presented including the neutron and gamma doses and flux energy spectra. MCNP6 computational results comparing JEFF3.1 and ENDF-B/VII.1 nuclear data evaluations and sensitivity of the results to some model parameters are presented.
Astronaut Robinson presents 2010 Silver Snoopy awards
2010-06-23
NASA's John C. Stennis Space Center Director Patrick Scheuermann and astronaut Steve Robinson stand with recipients of the 2010 Silver Snoopy awards following a June 23 ceremony. Sixteen Stennis employees received the astronauts' personal award, which is presented by a member of the astronaut corps representing its core principles for outstanding flight safety and mission success. This year's recipients and ceremony participants were: (front row, l to r): Cliff Arnold (NASA), Wendy Holladay (NASA), Kendra Moran (Pratt & Whitney Rocketdyne), Mary Johnson (Jacobs Technology Facility Operating Services Contract group), Cory Beckemeyer (PWR), Dean Bourlet (PWR), Cecile Saltzman (NASA), Marla Carpenter (Jacobs FOSC), David Alston (Jacobs FOSC); (back row, l to r) Scheuermann, Don Wilson (A2 Research), Tim White (NASA), Ira Lossett (Jacobs Technology NASA Test Operations Group), Kerry Gallagher (Jacobs NTOG); Rene LeFrere (PWR), Todd Ladner (ASRC Research and Technology Solutions) and Thomas Jacks (NASA).
BNL severe-accident sequence experiments and analysis program. [PWR; BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, G.A.; Ginsberg, T.; Tutu, N.K.
1983-01-01
In the analysis of degraded core accidents, the two major sources of pressure loading on light water reactor containments are: steam generation from core debris-water thermal interactions; and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described.
Multi-stage high cell continuous fermentation for high productivity and titer.
Chang, Ho Nam; Kim, Nag-Jong; Kang, Jongwon; Jeong, Chang Moon; Choi, Jin-dal-rae; Fei, Qiang; Kim, Byoung Jin; Kwon, Sunhoon; Lee, Sang Yup; Kim, Jungbae
2011-05-01
We carried out the first simulation of multi-stage continuous high cell density culture (MSC-HCDC) to show that MSC-HCDC can achieve batch/fed-batch product titers with much higher productivity than fed-batch operation, using published fermentation kinetics of lactic acid, penicillin and ethanol. The system under consideration consists of n serially connected continuous stirred-tank reactors (CSTRs) with either hollow-fiber cell recycling or cell immobilization for high cell-density culture. In each CSTR, substrate supply and product removal are possible. Penicillin production is severely limited by glucose metabolite repression, which requires multi-CSTR glucose feeding. An 8-stage C-HCDC lactic acid fermentation resulted in a titer of 212.9 g/L and a productivity of 10.6 g/L/h, corresponding to 101 and 429% of the comparable lactic acid fed-batch, respectively. The penicillin production model predicted 149% (0.085 g/L/h) of productivity in 8-stage C-HCDC with 40 g/L of cell density and 289% of productivity (0.165 g/L/h) in 7-stage C-HCDC with 60 g/L of cell density, compared with the reference batch cultivations. A 2-stage C-HCDC ethanol experimental run showed 107% of the titer and 257% of the productivity of the batch system, which had a titer of 88.8 g/L and a productivity of 3.7 g/L/h. MSC-HCDC can give much higher productivity than a batch/fed-batch system, and a several percent higher titer as well. The productivity ratio of MSC-HCDC over a batch/fed-batch system is given by the product of the system dilution rate of the MSC-HCDC and the cycle time of the batch/fed-batch system. We suggest MSC-HCDC as a new production platform for various fermentation products including monoclonal antibodies.
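The closing relation above, productivity ratio equals the MSC-HCDC dilution rate times the batch/fed-batch cycle time, can be written out directly. A minimal sketch with illustrative numbers (not taken from the paper):

```python
def productivity_ratio(dilution_rate_per_h: float, batch_cycle_time_h: float) -> float:
    """Ratio of continuous (MSC-HCDC) to batch/fed-batch volumetric productivity.

    Follows the relation quoted in the abstract:
        ratio = D (system dilution rate) * t_cycle (batch/fed-batch cycle time).
    """
    return dilution_rate_per_h * batch_cycle_time_h


# Illustrative values only: a dilution rate of 0.1 1/h and a 40 h batch cycle
# (including turnaround) would imply a four-fold productivity gain.
print(productivity_ratio(0.1, 40.0))  # -> 4.0
```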
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesonske, A.
1980-08-01
Detailed core management arrangements are developed requiring four operating cycles for the transition from present three-batch loading to an extended burnup four-batch plan for Zion-1. The ARMP code EPRI-NODE-P was used for core modeling. Although this work is preliminary, uranium and economic savings during the transition cycles appear of the order of 6 percent.
NASA Astrophysics Data System (ADS)
Lanas, Vanessa; Ahn, Yongtae; Logan, Bruce E.
2014-02-01
Larger scale microbial fuel cells (MFCs) require compact architectures to efficiently treat wastewater. We examined how anode-brush diameter, number of anodes, and electrode spacing affected the performance of the MFCs operated in fed-batch and continuous flow mode. All anodes were initially tested with the brush core set at the same distance from the cathode. In fed-batch mode, the configuration with three larger brushes (25 mm diameter) produced 80% more power (1240 mW m-2) than reactors with eight smaller brushes (8 mm) (690 mW m-2). The higher power production by the larger brushes was due to more negative and stable anode potentials than the smaller brushes. The same general result was obtained in continuous flow operation, although power densities were reduced. However, by moving the center of the smaller brushes closer to the cathode (from 16.5 to 8 mm), power substantially increased from 690 to 1030 mW m-2 in fed batch mode. In continuous flow mode, power increased from 280 to 1020 mW m-2, resulting in more power production from the smaller brushes than the larger brushes (540 mW m-2). These results show that multi-electrode MFCs can be optimized by selecting smaller anodes, placed as close as possible to the cathode.
TREAT Neutronics Analysis and Design Support, Part II: Multi-SERTTA-CAL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bess, John D.; Woolstenhulme, Nicolas E.; Hill, Connie M.
2016-08-01
Experiment vehicle design is necessary in preparation for Transient Reactor Test (TREAT) facility restart and the resumption of transient testing to support Accident Tolerant Fuel (ATF) characterization and other future fuels testing requirements. Currently the most mature vehicle design is the Multi-SERTTA (Static Environments Rodlet Transient Test Apparatuses), which can accommodate up to four concurrent rodlet-sized specimens under separate environmental conditions. Robust test vehicle design requires neutronics analyses to support design development, optimization of the power coupling factor (PCF) to efficiently maximize energy generation in the test fuel rodlets, and experiment safety analyses. An integral aspect of prior TREAT transient testing was the incorporation of calibration experiments to experimentally evaluate and validate test conditions in preparation for the actual fuel testing. The calibration experiment package established the test parameter conditions to support fine-tuning of the computational models to deliver the required energy deposition to the fuel samples. The calibration vehicle was designed to be as near neutronically equivalent to the experiment vehicle as possible to minimize errors between the calibration and final tests. The Multi-SERTTA-CAL vehicle was designed to serve as the calibration vehicle supporting Multi-SERTTA experimentation. Models of the Multi-SERTTA-CAL vehicle containing typical PWR-fuel rodlets were prepared and neutronics calculations were performed using MCNP6.1 with ENDF/B-VII.1 nuclear data libraries; these results were then compared against those performed for Multi-SERTTA to determine the similarity and possible design modification necessary prior to construction of these experiment vehicles. The estimated reactivity insertion worth into the TREAT core is very similar between the two vehicle designs, with the primary physical difference being a hollow Inconel tube running down the length of the calibration vehicle. Calculations of PCF indicate that on average there is a reduction of approximately 6.3 and 12.6%, respectively, for PWR fuel rodlets irradiated under wet and dry conditions. Changes to the primary or secondary vessel structure in the calibration vehicle can be performed to offset this discrepancy and maintain neutronic equivalency. Current possible modifications to the calibration vehicle include reduction of the primary vessel wall thickness, swapping Zircaloy-4 for stainless steel 316 in the secondary containment, or slight modification to the temperature and pressure of the water environment within the primary vessel. Removal of some of the instrumentation within the calibration vehicle can also serve to slightly increase the PCF. Future efforts include further modification and optimization of the Multi-SERTTA and Multi-SERTTA-CAL designs in preparation for actual TREAT transient testing. Experimental results from both test vehicles will be compared against calculational results and methods to provide validation and support additional neutronics analyses.
Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azarm, M.A.; Hall, R.E.
1982-01-01
A plausible approach is proposed for translating a single level criterion to a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for various core melt sequences, systems and component unavailabilities. These guidelines can be used as a means for making decisions regarding the necessity for replacing a component or improving part of a safety system. This approach is applied to estimate a set of numerical guidelines for various sequences of core melts that are analyzed in Reactor Safety Study for the Peach Bottom Nuclear Power Plant.
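One simple way to picture the proposed translation is to apportion a top-level core-melt frequency criterion among the dominant sequences in proportion to their baseline contributions, which then yields per-sequence numerical guidelines. The sketch below is a generic illustration with made-up numbers; it is not the allocation actually used in the paper.

```python
# Top-level criterion on total core-melt frequency (per reactor-year); illustrative value.
CORE_MELT_CRITERION = 1.0e-4

# Baseline per-sequence frequencies (Reactor Safety Study style); hypothetical numbers.
baseline = {
    "transient with loss of decay heat removal": 4.0e-5,
    "small LOCA with ECCS failure": 2.5e-5,
    "station blackout": 1.5e-5,
    "ATWS": 5.0e-6,
}

total = sum(baseline.values())

# Allocate the criterion to each sequence in proportion to its baseline share.
guidelines = {seq: CORE_MELT_CRITERION * f / total for seq, f in baseline.items()}

for seq, g in guidelines.items():
    print(f"{seq:45s} guideline = {g:.2e} /yr")

# A component or system change is then judged against the guideline for the
# sequences it affects, rather than against the single top-level criterion.
```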
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Matthew; Yin, Shengjun; Stevens, Gary
2012-01-01
In past years, the authors have undertaken various studies of nozzles in both boiling water reactors (BWRs) and pressurized water reactors (PWRs) located in the reactor pressure vessel (RPV) adjacent to the core beltline region. Those studies described stress and fracture mechanics analyses performed to assess various RPV nozzle geometries, which were selected based on their proximity to the core beltline region, i.e., those nozzle configurations that are located close enough to the core region such that they may receive sufficient fluence prior to end-of-life (EOL) to require evaluation of embrittlement as part of the RPV analyses associated with pressure-temperature (P-T) limits. In this paper, additional stress and fracture analyses are summarized that were performed for additional PWR nozzles with the following objectives: To expand the population of PWR nozzle configurations evaluated, which was limited in the previous work to just two nozzles (one inlet and one outlet nozzle). To model and understand differences in stress results obtained for an internal pressure load case using a two-dimensional (2-D) axi-symmetric finite element model (FEM) vs. a three-dimensional (3-D) FEM for these PWR nozzles. In particular, the ovalization (stress concentration) effect of two intersecting cylinders, which is typical of RPV nozzle configurations, was investigated. To investigate the applicability of previously recommended linear elastic fracture mechanics (LEFM) hand solutions for calculating the Mode I stress intensity factor for a postulated nozzle corner crack for pressure loading for these PWR nozzles. These analyses were performed to further expand earlier work completed to support potential revision and refinement of Title 10 to the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G, Fracture Toughness Requirements, and are intended to supplement similar evaluation of nozzles presented at the 2008, 2009, and 2011 Pressure Vessels and Piping (PVP) Conferences. This work is also relevant to the ongoing efforts of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code, Section XI, Working Group on Operating Plant Criteria (WGOPC) to incorporate nozzle fracture mechanics solutions into a revision to ASME B&PV Code, Section XI, Nonmandatory Appendix G.
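LEFM hand solutions of the kind referenced above are typically written in the generic form K_I = F · σ · sqrt(π·a), with F a geometry/correction factor. The sketch below shows only that generic form with placeholder numbers; the specific correction factors recommended for RPV nozzle corner cracks are not reproduced here.

```python
import math

def mode_i_sif(stress_mpa: float, crack_depth_m: float, geometry_factor: float) -> float:
    """Generic LEFM hand estimate of the Mode I stress intensity factor.

    K_I = F * sigma * sqrt(pi * a), returned in MPa*sqrt(m).
    The geometry factor F for a nozzle corner crack must come from the applicable
    hand solution or finite element fit; the value 2.0 used below is a placeholder.
    """
    return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_depth_m)


# Placeholder inputs: 200 MPa membrane (pressure) stress, 25 mm postulated crack depth.
print(f"K_I ~ {mode_i_sif(200.0, 0.025, 2.0):.1f} MPa*sqrt(m)")
```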
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loftus, M J; Hochreiter, L E; McGuire, M F
This report presents data from the 163-Rod Bundle Flow Blockage Task of the Full-Length Emergency Cooling Heat Transfer Systems Effects and Separate Effects Test Program (FLECHT SEASET). The task consisted of forced and gravity reflooding tests utilizing electrical heater rods with a cosine axial power profile to simulate PWR nuclear core fuel rod arrays. These tests were designed to determine effects of flow blockage and flow bypass on reflooding behavior and to aid in the assessment of computational models in predicting the reflooding behavior of flow blockage in rod bundle arrays.
New core-reflector boundary conditions for transient nodal reactor calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, E.K.; Kim, C.H.; Joo, H.K.
1995-09-01
New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between two NEM computations is demonstrated in all the important transient parameters of two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by approximately 4% in transient peak power density while the BCMTL results in >40% of CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time >20% in all six transient cases of the NEACRP PWR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szilard, Ronaldo Henriques
A Risk Informed Safety Margin Characterization (RISMC) toolkit and methodology are proposed for investigating nuclear power plant core, fuels design and safety analysis, including postulated Loss-of-Coolant Accident (LOCA) analysis. This toolkit, under an integrated evaluation model framework, is named LOCA Toolkit for the U.S. (LOTUS). This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results.
Zhao, Xuebing; Dong, Lei; Chen, Liang; Liu, Dehua
2013-05-01
Formiline pretreatment pertains to a biomass fractionation process. In the present work, Formiline-pretreated sugarcane bagasse was hydrolyzed with cellulases by batch and multi-step fed-batch processes at 20% solid loading. For wet pulp, after 144 h incubation with cellulase loading of 10 FPU/g dry solid, the fed-batch process obtained ~150 g/L glucose and ~80% glucan conversion, while the batch process obtained ~130 g/L glucose with corresponding ~70% glucan conversion. Solid loading could be further increased to 30% for the acetone-dried pulp. By fed-batch hydrolysis of the dried pulp in pH 4.8 buffer solution, glucose concentration could be 247.3±1.6 g/L with corresponding 86.1±0.6% glucan conversion. The enzymatic hydrolyzates could be well converted to ethanol by a subsequent fermentation using Saccharomyces cerevisiae with ethanol titer of 60-70 g/L. Batch and fed-batch SSF indicated that Formiline-pretreated substrate showed excellent fermentability. The final ethanol concentration was 80 g/L with corresponding 82.7% of theoretical yield. Copyright © 2012 Elsevier Ltd. All rights reserved.
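Glucan conversion figures like those above are commonly back-calculated from the measured glucose concentration, correcting for the anhydro factor (162/180 ≈ 0.90) and the glucan initially charged. A rough sketch of that arithmetic, using assumed slurry parameters rather than the paper's actual ones:

```python
def glucan_conversion(glucose_g_per_l: float,
                      solid_loading_g_per_l: float,
                      glucan_fraction: float) -> float:
    """Fraction of glucan hydrolyzed, inferred from the glucose concentration.

    glucose * (162/180) converts glucose mass back to anhydroglucose (glucan) mass;
    solid_loading_g_per_l * glucan_fraction is the glucan initially charged.
    """
    anhydro = 162.0 / 180.0
    return glucose_g_per_l * anhydro / (solid_loading_g_per_l * glucan_fraction)


# Assumed values for illustration: 150 g/L glucose, 200 g/L dry solids (20% w/v),
# pulp containing 85% glucan. These are not the measured numbers from the study.
print(f"glucan conversion ~ {glucan_conversion(150.0, 200.0, 0.85):.0%}")
```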
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
This document outlines the development of a high fidelity, best estimate nuclear power plant severe transient simulation capability that will complement or enhance the integral system codes historically used for licensing and analysis of severe accidents. As with other tools in the Risk Informed Safety Margin Characterization (RISMC) Toolkit, the ultimate user of Enhanced Severe Transient Analysis and Prevention (ESTAP) capability is the plant decision-maker; the deliverable to that customer is a modern, simulation-based safety analysis capability, applicable to a much broader class of safety issues than is traditional Light Water Reactor (LWR) licensing analysis. Currently, the RISMC pathway's major emphasis is placed on developing RELAP-7, a next-generation safety analysis code, and on showing how to use RELAP-7 to analyze margin from a modern point of view: that is, by characterizing margin in terms of the probabilistic spectra of the "loads" applied to systems, structures, and components (SSCs), and the "capacity" of those SSCs to resist those loads without failing. The first objective of the ESTAP task, and the focus of one task of this effort, is to augment RELAP-7 analyses with user-selected multi-dimensional, multi-phase models of specific plant components to simulate complex phenomena that may lead to, or exacerbate, severe transients and core damage. Such phenomena include: coolant crossflow between PWR assemblies during a severe reactivity transient, stratified single or two-phase coolant flow in primary coolant piping, inhomogeneous mixing of emergency coolant water or boric acid with hot primary coolant, and water hammer. These are well-documented phenomena associated with plant transients that are generally not captured in system codes. They are, however, generally limited to specific components, structures, and operating conditions. The second ESTAP task is to similarly augment a severe (post-core damage) accident integral analysis code with high fidelity simulations that would allow investigation of multi-dimensional, multi-phase containment phenomena that are only treated approximately in established codes.
Analytical methods in the high conversion reactor core design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeggel, W.; Oldekop, W.; Axmann, J.K.
High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs have been described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid, scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.
Posttest TRAC-PD2/MOD1 predictions for FLECHT SEASET test 31504. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booker, C.P.
TRAC-PD2/MOD1 is a publicly released version of TRAC that is used primarily to analyze large-break loss-of-coolant accidents in pressurized-water reactors (PWRs). TRAC-PD2 can calculate, among other things, reflood phenomena. TRAC posttest predictions are compared with test 31504 reflood data from the Full-Length Emergency Core Heat Transfer (FLECHT) System Effects and Separate Effects Tests (SEASET) facility. A false top-down quench is predicted near the top of the core and the subcooling is underpredicted at the bottom of the core. However, the overall TRAC predictions are good, especially near the center of the core.
Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole
2017-03-01
The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of the MPACT core simulation capabilities; this library is the most influential component for simulation accuracy.
Estimating probable flaw distributions in PWR steam generator tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, J.A.; Turner, A.P.L.
1997-02-01
This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.
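A common way to implement estimates of this kind is a simple Monte Carlo model: sample the number of flawed tubes from a Poisson distribution and the flaw sizes from a fitted size distribution, then tally how many flaws exceed a size of interest. The sketch below is generic, with assumed parameters; it is not the specific statistical model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed parameters for illustration only.
EXPECTED_FLAWED_TUBES = 120   # expected number of tubes with detectable flaws
SIZE_MEDIAN_MM = 3.0          # median flaw (crack) length, mm
SIZE_GSD = 1.6                # geometric standard deviation of the lognormal size law
CRITICAL_LENGTH_MM = 8.0      # length above which leakage under accident loads is assumed

n_trials = 10_000
exceed_counts = np.empty(n_trials)

for i in range(n_trials):
    n_flaws = rng.poisson(EXPECTED_FLAWED_TUBES)
    sizes = rng.lognormal(mean=np.log(SIZE_MEDIAN_MM), sigma=np.log(SIZE_GSD), size=n_flaws)
    exceed_counts[i] = np.count_nonzero(sizes > CRITICAL_LENGTH_MM)

print(f"mean number of flaws above {CRITICAL_LENGTH_MM} mm: {exceed_counts.mean():.1f}")
print(f"95th percentile: {np.percentile(exceed_counts, 95):.0f}")
```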
Sun, Guoxiang; Zhang, Jingxian
2009-05-01
The three wavelength fusion high performance liquid chromatographic fingerprint (TWFFP) of Longdanxiegan pill (LDXGP) was established to identify the quality of LDXGP by the systematic quantified fingerprint method. The chromatographic fingerprints (CFPs) of the 12 batches of LDXGP were determined by reversed-phase high performance liquid chromatography. The technique of multi-wavelength fusion fingerprint was applied in processing the fingerprints. The TWFFPs containing 63 co-possessing peaks were obtained when choosing the baicalin peak as the referential peak. The 12 batches of LDXGP were identified with hierarchical clustering analysis by using macro qualitative similarity (S(m)) as the variable. According to the results of classification, the referential fingerprint (RFP) was synthesized from 10 batches of LDXGP. Taking the RFP for the qualified model, all the 12 batches of LDXGP were evaluated by the systematic quantified fingerprint method. Among the 12 batches of LDXGP, 9 batches were completely qualified, the contents of 1 batch were obviously higher, while the chemical constituents quantity and distributed proportion in 2 batches were not qualified. The systematic quantified fingerprint method based on the technique of multi-wavelength fusion fingerprint can effectively identify the authentic quality of traditional Chinese medicine.
NASA Astrophysics Data System (ADS)
Sawicki, Jerzy A.
2011-08-01
The hydrothermal synthesis of a nickel-iron oxyborate, Ni2FeBO5, known as bonaccordite, was investigated at pressures and temperatures that might occur at the surface of high-power fuel rods in PWR cores and in supercritical water reactors, especially during localized departures from nucleate boiling and dry-outs. The tests were performed using aqueous mixtures of nickel and iron oxides with boric acid or boron oxide, and as a function of lithium hydroxide addition, temperature and time of heating. At subcritical temperatures nickel ferrite NiFe2O4 was always the primary reaction product. High yield of Ni2FeBO5 synthesis started near critical water temperature and was strongly promoted by additions of LiOH up to Li/Fe and Li/B molar ratios in a range 0.1-1. The synthesis of bonaccordite was also promoted by other alkalis such as NaOH and KOH. The bonaccordite particles were likely formed by dissolution and re-crystallization by means of an intermediate nickel ferrite phase. It is postulated that the formation of Ni2FeBO5 in deposits of borated nickel and iron oxides on PWR fuel cladding can be accelerated by lithium produced in thermal neutron capture 10B(n,α)7Li reactions. The process may also be aided in the reactor core by kinetic energy of α-particles and 7Li ions dissipated in the crud layer.
TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38). [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motley, F.
The results of a Transient Reactor Analysis Code posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with poorer results obtained from an earlier pretest prediction that used a coarsely noded model).
Design and analysis of a nuclear reactor core for innovative small light water reactors
NASA Astrophysics Data System (ADS)
Soldatov, Alexey I.
In order to address the energy needs of developing countries and remote communities, Oregon State University has proposed the Multi-Application Small Light Water Reactor (MASLWR) design. In order to achieve five years of operation without refueling, use of 8% enriched fuel is necessary. This dissertation is focused on core design issues related with increased fuel enrichment (8.0%) and specific MASLWR operational conditions (such as lower operational pressure and temperature, and increased leakage due to small core). Neutron physics calculations are performed with the commercial nuclear industry tools CASMO-4 and SIMULATE-3, developed by Studsvik Scandpower Inc. The first set of results are generated from infinite lattice level calculations with CASMO-4, and focus on evaluation of the principal differences between standard PWR fuel and MASLWR fuel. Chapter 4-1 covers aspects of fuel isotopic composition changes with burnup, evaluation of kinetic parameters and reactivity coefficients. Chapter 4-2 discusses gadolinium self-shielding and shadowing effects, and subsequent impacts on power generation peaking and Reactor Control System shadowing. The second aspect of the research is dedicated to core design issues, such as reflector design (chapter 4-3), burnable absorber distribution and programmed fuel burnup and fuel use strategy (chapter 4-4). This section also includes discussion of the parameters important for safety and evaluation of Reactor Control System options for the proposed core design. An evaluation of the sensitivity of the proposed design to uncertainty in calculated parameters is presented in chapter 4-5. The results presented in this dissertation cover a new area of reactor design and operational parameters, and may be applicable to other small and large pressurized water reactor designs.
Multi level optimization of burnable poison utilization for advanced PWR fuel management
NASA Astrophysics Data System (ADS)
Yilmaz, Serkan
The objective of this study was to develop a unique methodology and a practical tool for designing burnable poison (BP) patterns for a given PWR core. Two techniques were studied in developing this tool. First, a deterministic technique, the Modified Power Shape Forced Diffusion (MPSFD) method, followed by a fine tuning algorithm based on some heuristic rules, was developed to achieve this goal. Second, an efficient and practical genetic algorithm (GA) tool was developed and applied successfully to the BP placement optimization problem for a reference Three Mile Island-1 (TMI-1) core. This thesis presents the step by step progress in developing such a tool. The developed deterministic method appeared to perform as expected. The GA technique produced excellent BP designs. It was discovered that the Beginning of Cycle (BOC) Kinf of a BP fuel assembly (FA) design is a good filter to eliminate invalid BP designs created during the optimization process. By eliminating all BP designs having BOC Kinf above a set limit, the computational time was greatly reduced since the evaluation process with reactor physics calculations for an invalid solution is canceled. Moreover, the GA was applied to develop the BP loading pattern to minimize the total gadolinium (Gd) amount in the core together with the residual binding at End-of-Cycle (EOC), and to keep the maximum peak pin power during core depletion and the soluble boron concentration at BOC both less than their limit values. The number of UO2/Gd2O3 pins and the Gd2O3 concentrations for each fresh fuel location in the core are the decision variables, and the total amount of Gd in the core and the maximum peak pin power during core depletion are in the fitness functions. The use of different fitness function definitions and forcing the solution movement toward the desired region of the solution space accelerated the GA runs. Special emphasis is given to minimizing the residual binding to increase core lifetime as well as minimizing the total Gd amount in the core. The GA code developed many good solutions that satisfy all of the design constraints. For these solutions, the EOC soluble boron concentration changes from 68.9 to 97.2 ppm. It is important to note that the difference of 28.3 ppm between the best and the worst solution in the good solutions region represents the potential of 12.5 Effective-Full-Power-Day (EFPD) savings in cycle length. As a comparison, the best BP loading design has 97.2 ppm soluble boron concentration at EOC while the BP loading with available vendors' U/Gd FA designs has 94.4 ppm soluble boron at EOC. It was estimated that the difference of 2.8 ppm reflected the potential savings of 1.25 EFPD in cycle length. Moreover, the total Gd amount was reduced by 6.89% in mass, which provided extra savings in fuel cost compared to the BP loading pattern with available vendors' U/Gd FA designs. (Abstract shortened by UMI.)
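The BOC Kinf filter described above can be expressed compactly: candidate BP patterns whose beginning-of-cycle assembly Kinf exceeds a set limit are discarded before any expensive depletion calculation is run. The sketch below shows that screening step in a generic genetic-algorithm-style loop; the Kinf surrogate, the limit value, and the dummy fitness are placeholders, not the thesis's actual models.

```python
import random

KINF_LIMIT = 1.10          # placeholder BOC Kinf screening limit
N_PINS = 16                # hypothetical number of candidate Gd pin locations in the assembly

def random_bp_pattern():
    """A candidate design: for each location, Gd2O3 weight percent (0 = plain UO2 pin)."""
    return [random.choice([0.0, 2.0, 4.0, 6.0, 8.0]) for _ in range(N_PINS)]

def boc_kinf_estimate(pattern):
    """Cheap surrogate for assembly BOC Kinf: more/heavier Gd pins -> lower Kinf.

    Placeholder correlation for illustration; a real filter would use a lattice code result.
    """
    worth = sum(0.004 + 0.0015 * w for w in pattern if w > 0.0)
    return 1.18 - worth

def expensive_depletion_evaluation(pattern):
    """Stand-in for the full depletion run that scores a valid design."""
    return -abs(sum(pattern) - 40.0)   # dummy fitness: prefer ~40 total wt% Gd loaded

population = [random_bp_pattern() for _ in range(200)]

# Screening step: discard candidates whose BOC Kinf is above the limit
# *before* paying for the reactor-physics evaluation.
valid = [p for p in population if boc_kinf_estimate(p) <= KINF_LIMIT]
scored = sorted(valid, key=expensive_depletion_evaluation, reverse=True)

print(f"{len(valid)}/{len(population)} candidates passed the BOC Kinf filter")
print("best surviving pattern:", scored[0] if scored else None)
```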
Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun
2018-06-07
An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining Korean low Earth orbit (LEO) satellites' orbital ephemeris. The OWL-Net consists of five optical tracking stations. Brightness signals of reflected sunlight of the targets were detected by a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, at a maximum of 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated with precise orbital ephemeris such as Consolidated Prediction File (CPF) data and precise orbit determination results with onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, an average observation span for a single arc of 11 LEO observation targets was about 5 min, while an average optical observation separation time was 5 h. We estimated the position and velocity with an atmospheric drag coefficient of LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch orbit estimation and sequential-batch orbit estimation were analyzed for the optical measurements and reference orbit (CPF and GPS data). The post-fit residuals against the reference show errors of a few tens of meters in the in-track direction for the multi-arc batch and sequential-batch orbit estimation results.
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the keff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
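Reactivity discrepancies "in pcm" between two eigenvalue solutions, such as the Sn and Monte Carlo results compared here, are conventionally computed as the difference of the reactivities (rho = 1 - 1/k) times 10^5. A minimal sketch with illustrative keff values, not the benchmark's actual numbers:

```python
def reactivity_difference_pcm(keff_test: float, keff_ref: float) -> float:
    """rho_test - rho_ref in pcm, where rho = 1 - 1/k."""
    return ((1.0 - 1.0 / keff_test) - (1.0 - 1.0 / keff_ref)) * 1.0e5


# Illustrative values only (e.g., a deterministic Sn result vs. a Monte Carlo reference).
k_sn = 1.00215
k_mc = 1.00240
print(f"delta-rho = {reactivity_difference_pcm(k_sn, k_mc):.1f} pcm")  # ~ -25 pcm
```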
Thorium Fuel Options for Sustained Transuranic Burning in Pressurized Water Reactors - 12381
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, Fariz Abdul; Lee, John C.; Franceschini, Fausto
2012-07-01
As described in companion papers, Westinghouse is proposing the adoption of a thorium-based fuel cycle to burn the transuranics (TRU) contained in the current Used Nuclear Fuel (UNF) and transition towards a less radio-toxic high level waste. A combination of both light water reactors (LWR) and fast reactors (FR) is envisaged for the task, with the emphasis initially placed on their TRU burning capability and eventually on their self-sufficiency. Given the many technical challenges and development times related to the deployment of TRU-burner fast reactors, an interim solution making best use of the current resources to initiate burning the legacy TRU inventory while developing and testing some technologies of later use is desirable. In this perspective, a portion of the LWR fleet can be used to start burning the legacy TRUs using Th-based fuels compatible with the current plants and operational features. This analysis focuses on a typical 4-loop PWR, with 17x17 fuel assembly design and TRUs (or Pu) admixed with Th (similar to U-MOX fuel, but with Th instead of U). Global calculations of the core were represented with unit assembly simulations using the Linear Reactivity Model (LRM). Several assembly configurations have been developed to offer two options that can be attractive during the TRU transmutation campaign: maximization of the TRU transmutation rate and capability for TRU multi-recycling, to extend the option of TRU recycling in LWR until the FR is available. Homogeneous as well as heterogeneous assembly configurations have been developed with various recycling schemes (Pu recycle, TRU recycle, TRU and in-bred U recycle etc.). Oxide as well as nitride fuels have been examined. This enabled an assessment of the potential for burning and multi-recycling TRU in a Th-based fuel PWR to compare against other more typical alternatives (U-MOX and variations thereof). Results will be shown indicating that Th-based PWR fuel is a promising option to multi-recycle and burn TRU in a thermal spectrum, while satisfying top-level operational and safety constraints. Various assembly designs have been proposed to assess the TRU burning potential of Th-based fuel in PWRs. In addition to typical homogeneous loading patterns, heterogeneous configurations exploiting the breeding potential of thorium to enable multiple cycles of TRU irradiation and burning have been devised. The homogeneous assembly design, with all pins featuring TRU in Th, has the benefit of a simple loading pattern and the highest rate of TRU transmutation, but it can be used only for a few cycles due to the rapid rise in the TRU content of the recycled fuel, which challenges reactivity control, safety coefficients and fuel handling. Due to its simple loading pattern, such an assembly design can be used as the first step of Th implementation, achieving an up to 3 times larger TRU transmutation rate than conventional U-MOX, assuming the same fraction of MOX assemblies in the core. As the next step in thorium implementation, heterogeneous assemblies featuring a mixed array of Th-U and Th-U-TRU pins, where the U is in-bred from Th, have been proposed. These designs have the potential to enable burning an external supply of TRU through multiple cycles of irradiation, recovery (via reprocessing) and recycling of the residual actinides at the end of each irradiation cycle. This is achieved thanks to a larger breeding of U from Th in the heterogeneous assemblies, which reduces the TRU supply and thus mitigates the increase in the TRU core inventory for the multi-recycled fuel. While on an individual cycle basis the amount of TRU burned in the heterogeneous assembly is reduced with respect to the homogeneous design, TRU burning rates higher than single-pass U-MOX fuel can still be achieved, with the additional benefits of a multi-cycle transmutation campaign recycling all TRU isotopes. Nitride fuel, due to its higher density and U breeding potential, together with its better thermal properties, ideally suits the objectives and constraints of the heterogeneous assemblies. However, significant technological advancements must be made before nitride fuels can be employed in an LWR: its water resistance needs to be improved and a viable technology to enrich N in N-15 must be devised. Moreover, for the nitride heterogeneous configurations examined in this study, the enhancement in TRU burning performance is achieved not only by replacing oxide with nitride fuel, but also by increasing the fuel rod size. This latter modification, allowed by the high thermal conductivity of nitride fuel, leads however to a very tight lattice, which may challenge reactor coolant pumps and assembly hold-down mechanisms, the former through an increase in core pressure drop and the latter through an increase in assembly lift-off forces. To alleviate these issues, while still achieving the large fuel-to-moderator ratios resulting from using tight lattices, wire wraps could be used in place of grid spacers. For tight lattices, typical grid spacers are hard to manufacture and their replacement with wire wraps is known to allow for a pressure drop reduction by at least 2 times. The studies, while certainly very preliminary, provide a starting point to devise an optimum strategy for TRU transmutation in Th-based PWR fuel. The viability of the scheme proposed depends on the timely phasing in of the associated technologies, with proper lead time to solve the many challenges. These challenges are certainly substantial, and make the current once-through U-based scheme pursued in the US by far a more practical (and cheaper) option. However, when compared to other transmutation schemes, the proposed one has arguably similar challenges and unknowns with potentially bigger rewards. (authors)
Performance of U3Si2 Fuel in a Reactivity Insertion Accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Lap Y.; Cuadra, Arantxa; Todosow, Michael
In this study we examined the performance of the U3Si2 fuel cladded with Zircaloy (Zr) in a reactivity insertion accident (RIA) in a PWR core. The power excursion as a result of a $1 reactivity insertion was calculated by a TRACE PWR plant model using point-kinetics, for alternative cores with UO2 and U3Si2 fuel assemblies. The point-kinetics parameters (feedback coefficients, prompt-neutron lifetime and group constants for six delayed-neutron groups) were obtained from beginning-of-cycle equilibrium full core calculations with PARCS. In the PARCS core calculations, the few-group parameters were developed utilizing the TRITON/NEWT tools in the SCALE package. In order to assess the fuel response in finer detail (e.g. the maximum fuel temperature) the power shape and thermal boundary conditions from the TRACE/PARCS calculations were used to drive a BISON model of a fuel pin with U3Si2 and UO2 respectively. For a $1 reactivity transient both TRACE and BISON predicted a higher maximum fuel temperature for the UO2 fuel than the U3Si2 fuel. Furthermore, BISON is noted to calculate a narrower gap and a higher gap heat transfer coefficient than TRACE. This resulted in BISON predicting consistently lower fuel temperatures than TRACE. This study also provides a systematic comparison between TRACE and BISON using consistent transient boundary conditions. The TRACE analysis of the RIA only reflects the core-wide response in power. A refinement to the analysis would be to predict the local peaking in a three-dimensional core as a result of control rod ejection.
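The core-wide power excursion for the $1 insertion was obtained from point kinetics. As a generic illustration of that model (not the TRACE implementation or the PARCS-derived constants), the sketch below integrates the standard six-delayed-group point-kinetics equations for a step insertion of one dollar; all kinetics parameters are placeholder values typical of a PWR, and no reactivity feedback is included, so the excursion is unterminated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder PWR-like kinetics data (6 delayed groups); not the PARCS-derived values.
beta_i = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
lam_i = np.array([0.0125, 0.0318, 0.109, 0.317, 1.35, 8.64])   # decay constants, 1/s
beta = beta_i.sum()
LAMBDA = 2.0e-5                     # prompt neutron generation time, s
RHO = beta                          # step insertion of exactly one dollar

def pk_rhs(t, y):
    """Point-kinetics right-hand side: y = [relative power, C_1..C_6]."""
    n, c = y[0], y[1:]
    dn = (RHO - beta) / LAMBDA * n + np.dot(lam_i, c)
    dc = beta_i / LAMBDA * n - lam_i * c
    return np.concatenate(([dn], dc))

# Steady-state initial condition at unit power.
n0 = 1.0
c0 = beta_i * n0 / (LAMBDA * lam_i)
sol = solve_ivp(pk_rhs, (0.0, 0.5), np.concatenate(([n0], c0)),
                method="Radau", max_step=1e-3)

print(f"relative power after 0.5 s of a $1 step: {sol.y[0, -1]:.1f}")
```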
Zebra: An advanced PWR lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, L.; Wu, H.; Zheng, Y.
2012-07-01
This paper presents an overview of an advanced PWR lattice code, ZEBRA, developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY and the 361-group SHEM structure is employed. The resonance calculation module is developed based on the sub-group method. The transport solver is the Auto-MOC code, which is a self-developed code based on the Method of Characteristics and the customization of AutoCAD software. The whole code is well organized in a modular software structure. Some numerical results during the validation of the code demonstrate that this code has a good precision and a high efficiency. (authors)
Wang, Lu; Zeng, Shanshan; Chen, Teng; Qu, Haibin
2014-03-01
A promising process analytical technology (PAT) tool has been introduced for batch processes monitoring. Direct analysis in real time mass spectrometry (DART-MS), a means of rapid fingerprint analysis, was applied to a percolation process with multi-constituent substances for an anti-cancer botanical preparation. Fifteen batches were carried out, including ten normal operations and five abnormal batches with artificial variations. The obtained multivariate data were analyzed by a multi-way partial least squares (MPLS) model. Control trajectories were derived from eight normal batches, and the qualification was tested by R(2) and Q(2). Accuracy and diagnosis capability of the batch model were then validated by the remaining batches. Assisted with high performance liquid chromatography (HPLC) determination, process faults were explained by corresponding variable contributions. Furthermore, a batch level model was developed to compare and assess the model performance. The present study has demonstrated that DART-MS is very promising in process monitoring in botanical manufacturing. Compared with general PAT tools, DART-MS offers a particular account on effective compositions and can be potentially used to improve batch quality and process consistency of samples in complex matrices. Copyright © 2014 Elsevier B.V. All rights reserved.
Design and implementation of a simple nuclear power plant simulator
NASA Astrophysics Data System (ADS)
Miller, William H.
1983-02-01
A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for students use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time, uses a graphics terminal to display the results and a keyboard and joystick for control functions. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.
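A core reactivity model of the kind described, summing control rod worth, soluble boron worth, and temperature feedback terms, can be sketched in a few lines. The worth coefficients below are placeholders chosen only to illustrate the bookkeeping; they are not the values used in the PDP-11 simulator.

```python
# Placeholder reactivity coefficients (pcm units), illustrative only.
ROD_WORTH_PCM = 6000.0          # total worth of the controlling rod bank (fully inserted), pcm
BORON_WORTH_PCM_PER_PPM = -8.0  # differential boron worth, pcm/ppm
MTC_PCM_PER_C = -30.0           # moderator temperature coefficient, pcm/degC
DOPPLER_PCM_PER_C = -2.5        # fuel temperature (Doppler) coefficient, pcm/degC

def core_reactivity_pcm(rod_insertion_fraction: float,
                        boron_ppm: float,
                        t_mod_c: float,
                        t_fuel_c: float,
                        excess_reactivity_pcm: float = 12000.0,
                        t_mod_ref_c: float = 300.0,
                        t_fuel_ref_c: float = 600.0) -> float:
    """Net core reactivity as a simple sum of rod, boron and feedback contributions."""
    rho = excess_reactivity_pcm
    rho -= ROD_WORTH_PCM * rod_insertion_fraction
    rho += BORON_WORTH_PCM_PER_PPM * boron_ppm
    rho += MTC_PCM_PER_C * (t_mod_c - t_mod_ref_c)
    rho += DOPPLER_PCM_PER_C * (t_fuel_c - t_fuel_ref_c)
    return rho


# Example state: rods 30% inserted, 1140 ppm boron, hot full power temperatures (near critical).
print(f"{core_reactivity_pcm(0.30, 1140.0, 310.0, 900.0):.0f} pcm")
```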
Horikoshi, Renato J.; Bernardi, Daniel; Bernardi, Oderlei; Malaquias, José B.; Okuma, Daniela M.; Miraldo, Leonardo L.; Amaral, Fernando S. de A. e; Omoto, Celso
2016-01-01
The resistance of fall armyworm (FAW), Spodoptera frugiperda, has been characterized to some Cry and Vip3A proteins of Bacillus thuringiensis (Bt) expressed in transgenic maize in Brazil. Here we evaluated the effective dominance of resistance based on the survival of neonates from selected Bt-resistant, heterozygous, and susceptible (Sus) strains of FAW on different Bt maize and cotton varieties. High survival of strains resistant to the Cry1F (HX-R), Cry1A.105/Cry2Ab (VT-R) and Cry1A.105/Cry2Ab/Cry1F (PW-R) proteins was detected on Herculex, YieldGard VT PRO and PowerCore maize. Our Vip3A-resistant strain (Vip-R) exhibited high survival on Herculex, Agrisure Viptera and Agrisure Viptera 3 maize. However, the heterozygous from HX-R × Sus, VT-R × Sus, PW-R × Sus and Vip-R × Sus had complete mortality on YieldGard VT PRO, PowerCore, Agrisure Viptera, and Agrisure Viptera 3, whereas the HX-R × Sus and Vip-R × Sus strains survived on Herculex maize. On Bt cotton, the HX-R, VT-R and PW-R strains exhibited high survival on Bollgard II. All resistant strains survived on WideStrike, but only PW-R and Vip-R × Sus survived on TwinLink. Our study provides useful data to aid in the understanding of the effectiveness of the refuge strategy for Insect Resistance Management of Bt plants. PMID:27721425
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where sequencing/timing of events have been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how a power uprate affects the system recovery measures needed to avoid core damage after the PWR loses all available AC power due to tsunami-induced flooding. The simulation of the actual flooding is performed by using a smooth particle hydrodynamics code: NEUTRINO.
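The probabilistic side of this coupling can be illustrated with a toy stand-in for the RAVEN-driven sampling: draw the AC power recovery time and the time available before core damage from assumed distributions, run the "physics" (here a trivial comparison rather than a RELAP-7 simulation), and estimate the conditional core damage probability for two power levels. All distributions and numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

def core_damage_probability(time_to_damage_mean_h: float) -> float:
    """Toy margin calculation: P(recovery time > time available before core damage)."""
    # Assumed lognormal AC power recovery time after the flooding-induced blackout, hours.
    recovery_h = rng.lognormal(mean=np.log(6.0), sigma=0.6, size=N)
    # Assumed normally distributed time to core damage without cooling, hours.
    damage_h = rng.normal(loc=time_to_damage_mean_h, scale=1.0, size=N)
    return float(np.mean(recovery_h > damage_h))

# A power uprate shortens the time available before core damage (higher decay heat),
# e.g. from 10 h to 8 h in this invented example, eroding the safety margin.
print(f"nominal power : CDP ~ {core_damage_probability(10.0):.3f}")
print(f"uprated power : CDP ~ {core_damage_probability(8.0):.3f}")
```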
Brown, Nicholas R.; Worrall, Andrew; Todosow, Michael
2016-11-18
Small modular reactors (SMRs) offer potential benefits, such as enhanced operational flexibility. However, it is vital to understand the holistic impact of SMRs on nuclear fuel cycle performance. The focus of this paper is the fuel cycle impacts of light water SMRs in a once-through fuel cycle with low-enriched uranium fuel. A key objective of this paper is to describe preliminary example reactor core physics and fuel cycle analyses conducted in support of the U.S. Department of Energy, Office of Nuclear Energy, Fuel Cycle Options Campaign. The hypothetical light water SMR example case considered in these preliminary scoping studies is a cartridge type one-batch core with slightly less than 5.0% enrichment. Challenges associated with SMRs include increased neutron leakage, fewer assemblies in the core (and therefore fewer degrees of freedom in the core design), complex enrichment and burnable absorber loadings, full power operation with inserted control rods, the potential for frequent load-following operation, and shortened core height. Each of these will impact the achievable discharge burnup in the reactor and the fuel cycle performance. This paper summarizes a list of the factors relevant to SMR fuel, core, and operation that will impact fuel cycle performance. The high-level issues identified and preliminary scoping calculations in this paper are intended to inform on potential fuel cycle impacts of one-batch thermal spectrum SMRs. In particular, this paper highlights the impact of increased neutron leakage and reduced number of batches on the achievable burnup of the reactor. Fuel cycle performance metrics for a hypothetical example SMR are compared with those for a conventional three-batch light water reactor in the following areas: nuclear waste management, environmental impact, and resource utilization. The metrics performance for such an SMR is degraded for the mass of spent nuclear fuel and high-level waste disposed of, mass of depleted uranium disposed of, land use per energy generated, and carbon emissions per energy generated. Finally, it is noted that the features of some SMR designs impact three main aspects of fuel cycle performance: (1) small cores which means high leakage (there is a radial and axial component), (2) no boron which means heterogeneous core and extensive use of control rods and BPs, and (3) single batch cores. But not all of the SMR designs have all of these traits. The approach used in this study is therefore a bounding case and not all SMRs may be affected to the same extent.
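The connection drawn above between the number of batches and achievable burnup, and between discharge burnup and the mass of spent fuel per unit energy, can be sketched with the usual linear-reactivity rule of thumb: an N-batch core reaches roughly 2N/(N+1) times the single-batch discharge burnup, and spent fuel mass per unit electricity scales inversely with burnup. The numbers below are illustrative, not the paper's results.

```python
def n_batch_burnup(single_batch_burnup_gwd_t: float, n_batches: int) -> float:
    """Linear-reactivity estimate of discharge burnup for an n-batch fuel management scheme."""
    return single_batch_burnup_gwd_t * 2.0 * n_batches / (n_batches + 1.0)

def snf_mass_per_gwe_yr(discharge_burnup_gwd_t: float, thermal_efficiency: float) -> float:
    """Tonnes of heavy metal discharged per GWe-year of electricity generated."""
    gwd_thermal_per_gwe_yr = 365.25 / thermal_efficiency
    return gwd_thermal_per_gwe_yr / discharge_burnup_gwd_t

B1 = 30.0   # assumed single-batch discharge burnup for ~5% enriched fuel, GWd/tHM

for n in (1, 3):
    bu = n_batch_burnup(B1, n)
    snf = snf_mass_per_gwe_yr(bu, 0.33)
    print(f"{n}-batch core: burnup ~ {bu:.0f} GWd/t, SNF ~ {snf:.1f} t/GWe-yr")
```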
Interrelating the breakage and composition of mined and drill core coal
NASA Astrophysics Data System (ADS)
Wilson, Terril Edward
Particle size distribution of coal is important if the coal is to be beneficiated, or if a coal sales contract includes particle size specifications. An exploration bore core sample of coal ought to be reduced from its original cylindrical form to a particle size distribution and particle composition that reflects, insofar as possible, the process stream of raw coal it represents. Often, coal cores are reduced with a laboratory crushing machine, the product of which does not match the raw coal size distribution. This study proceeds from work in coal bore core reduction by Australian investigators. In this study, as differentiated from the Australian work, drop-shatter impact breakage followed by dry batch tumbling in a steel cylinder rotated about its transverse axis is employed to characterize the core material in terms of first-order and zeroth-order breakage rate constants, which are indices of the propensity of the coal to degrade during excavation and handling. Initial drop-shatter and dry tumbling calibrations were done with synthetic cores composed of controlled low-strength concrete incorporating fly ash (as a partial substitute for Portland cement) in order to reduce material variables and conserve difficult-to-obtain coal cores. Cores of three different coalbeds--Illinois No. 6, Upper Freeport, and Pocahontas No. 5--were subjected to drop-shatter and dry batch tumbling tests to determine breakage response. First-order breakage, characterized by a first-order breakage index for each coal, occurred in the drop-shatter tests. First- and zeroth-order breakage occurred in dry batch tumbling; disappearance of coarse particles and creation of fine particles occurred in a systematic way that could be represented mathematically. Some of the coal cores available for testing were dry and friable. Comparison of coal preparation plant feed with a crushed bore core and a bore core prepared by drop-shatter and tumbling (all from the same Illinois No. 6 coal mining property) indicated that the size distribution and size fraction composition of the drop-shattered/tumbled core more closely resembled the plant feed than the crushed core. An attempt to determine breakage parameters (to allow use of selection and breakage functions and population balance models in the description of bore core size reduction) was initiated. Rank determination of the three coal types was done, indicating that higher rank is associated with higher breakage propensity. The two-step procedure of drop-shatter and dry batch tumbling simulates the first-order (volume) breakage and zeroth-order (surface abrasion) breakage that occur in excavation and handling operations, and is appropriate for drill core reduction prior to laboratory analysis.
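A minimal sketch of the two breakage modes described above, assuming simple rate laws (exponential first-order disappearance of coarse material, linear zeroth-order creation of fines). The rate constants and masses are invented for illustration and are not the indices measured in the study.

```python
# Minimal sketch, assuming ideal first-order and zeroth-order kinetics for
# dry batch tumbling; m0, f0, k1 and k0 are illustrative values only.
import math

def coarse_mass(m0: float, k1: float, t: float) -> float:
    """First-order (volume) breakage: coarse mass decays exponentially with tumbling time."""
    return m0 * math.exp(-k1 * t)

def fines_mass(f0: float, k0: float, t: float) -> float:
    """Zeroth-order (surface abrasion) breakage: fines accumulate linearly with tumbling time."""
    return f0 + k0 * t

m0, f0 = 1000.0, 0.0       # grams of coarse and fine material at t = 0
k1, k0 = 0.02, 1.5         # assumed rate constants (1/min and g/min)
for t in (0, 10, 30, 60):  # tumbling time in minutes
    print(f"t = {t:2d} min: coarse {coarse_mass(m0, k1, t):7.1f} g, fines {fines_mass(f0, k0, t):5.1f} g")
```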
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
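For readers unfamiliar with the Horvitz–Thompson-type step mentioned above, the sketch below illustrates the basic idea: dividing the number of animals captured at an occasion by an estimated capture probability. The numbers are invented, and this is not the authors' full batch-marking likelihood.

```python
# Illustrative Horvitz-Thompson-type abundance estimate (not the authors'
# full likelihood): N_hat = n_captured / p_hat at a given sampling occasion.

def ht_abundance(n_captured: int, p_hat: float) -> float:
    """Horvitz-Thompson-type estimator of abundance."""
    if not 0.0 < p_hat <= 1.0:
        raise ValueError("capture probability must be in (0, 1]")
    return n_captured / p_hat

# Hypothetical numbers: 120 animals captured when the capture probability
# estimated from the batch-marking model is 0.25.
print(ht_abundance(120, 0.25))  # -> 480.0
```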
Efficiency of static core turn-off in a system-on-a-chip with variation
Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong
2013-10-29
A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.
Transient Simulation of the Multi-SERTTA Experiment with MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; Baker, Benjamin; Wang, Yaqi
This work details the MAMMOTH reactor physics simulations of the Static Environment Rodlet Transient Test Apparatus (SERTTA) conducted at Idaho National Laboratory in FY-2017. TREAT static-environment experiment vehicles are being developed to enable transient testing of Pressurized Water Reactor (PWR) type fuel specimens, including fuel concepts with enhanced accident tolerance (Accident Tolerant Fuels, ATF). The MAMMOTH simulations include point reactor kinetics as well as spatial dynamics for a temperature-limited transient. The strongly coupled multi-physics solutions of the neutron flux and temperature fields are second order accurate in both the spatial and temporal domains. MAMMOTH produces pellet stack powers that are within 1.5% of the Monte Carlo reference solutions. Some discrepancies between the MCNP model used in the design of the flux collars and the Serpent/MAMMOTH models lead to higher power and energy deposition values in Multi-SERTTA unit 1. The TREAT core results compare well with the safety case computed with point reactor kinetics in RELAP5-3D. The reactor period is 44 msec, which corresponds to a reactivity insertion of 2.685% delta k/k. The peak core power in the spatial dynamics simulation is 431 MW, which the point kinetics model over-predicts by 12%. The pulse width at half the maximum power is 0.177 sec. Subtle transient effects are apparent at the beginning of the insertion in the experimental samples due to the control rod removal. Additional differences due to transient effects are observed in the sample powers and enthalpy. The time dependence of the power coupling factor (PCF) is calculated for the various fuel stacks of the Multi-SERTTA vehicle. Sample temperatures in excess of 3100 K, the melting point of UO2, are computed with the adiabatic heat transfer model. The planned shaped-transient might introduce additional effects that cannot be predicted with PRK models. Future modeling will be focused on the shaped-transient by improving the control rod models in MAMMOTH and adding the BISON thermo-elastic models and thermal-fluids heat transfer.
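The pulse quantities quoted above (peak power, pulse width) come from the point-kinetics and spatial-dynamics models of the paper. The sketch below only illustrates the generic behavior of a point-kinetics model with one delayed-neutron group and an adiabatic negative temperature feedback; every parameter is invented and none corresponds to the TREAT/Multi-SERTTA values.

```python
# Minimal point-kinetics sketch with one delayed-neutron group and adiabatic
# temperature feedback, integrating a step reactivity insertion with explicit
# Euler. All parameters are illustrative, not the TREAT/Multi-SERTTA values.

beta, lam, Lam = 0.0070, 0.08, 5.0e-4   # delayed fraction, decay constant (1/s), generation time (s)
alpha = 1.0e-4                          # feedback coefficient (dk/k per K), assumed
rho0 = 0.02                             # step reactivity insertion (dk/k), assumed
C_heat = 1.0e6                          # lumped core heat capacity (J/K), assumed

P = 1.0e3                               # initial power (W)
C = P * beta / (lam * Lam)              # equilibrium delayed-neutron precursor level
T = 0.0                                 # temperature rise above initial state (K)
dt, t, peak = 1.0e-5, 0.0, 0.0
while t < 1.0:
    rho = rho0 - alpha * T              # net reactivity shrinks as the core heats up
    dP = ((rho - beta) / Lam * P + lam * C) * dt
    dC = (beta / Lam * P - lam * C) * dt
    P, C, T = P + dP, C + dC, T + P / C_heat * dt
    peak = max(peak, P)
    t += dt
print(f"peak power ~ {peak:.2e} W, temperature rise ~ {T:.0f} K")
```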
Design of penicillin fermentation process simulation system
NASA Astrophysics Data System (ADS)
Qi, Xiaoyu; Yuan, Zhonghu; Qi, Xiaoxuan; Zhang, Wenqi
2011-10-01
Real-time monitoring of batch processes is attracting increasing attention. It can ensure safety and provide products of consistent quality. The design of a simulation system for batch process fault diagnosis is therefore of great significance. In this paper, penicillin fermentation, a typical non-linear, dynamic, multi-stage batch production process, is taken as the research object. A visual human-machine interactive simulation software system based on the Windows operating system is developed. The simulation system provides an effective platform for research on batch process fault diagnosis.
Mahmud, Mufti; Pulizzi, Rocco; Vasilaki, Eleni; Giugliano, Michele
2014-01-01
Micro-Electrode Arrays (MEAs) have emerged as a mature technique to investigate brain (dys)functions in vivo and in in vitro animal models. Often referred to as "smart" Petri dishes, MEAs have demonstrated a great potential particularly for medium-throughput studies in vitro, both in academic and pharmaceutical industrial contexts. Enabling rapid comparison of ionic/pharmacological/genetic manipulations with control conditions, MEAs are employed to screen compounds by monitoring non-invasively the spontaneous and evoked neuronal electrical activity in longitudinal studies, with relatively inexpensive equipment. However, in order to acquire sufficient statistical significance, recordings last up to tens of minutes and generate large amounts of raw data (e.g., 60 channels/MEA, 16 bits A/D conversion, 20 kHz sampling rate: approximately 8 GB per MEA per hour, uncompressed). Thus, when the experimental conditions to be tested are numerous, the availability of fast, standardized, and automated signal preprocessing becomes pivotal for any subsequent analysis and data archiving. To this aim, we developed an in-house cloud-computing system, named QSpike Tools, where CPU-intensive operations, required for preprocessing of each recorded channel (e.g., filtering, multi-unit activity detection, spike-sorting, etc.), are decomposed and batch-queued to a multi-core architecture or to a computer cluster. With the commercial availability of new and inexpensive high-density MEAs, we believe that disseminating QSpike Tools might facilitate its wide adoption and customization, and inspire the creation of community-supported cloud-computing facilities for MEAs users.
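A minimal sketch of the batch-queuing idea described above: per-channel preprocessing distributed across the cores of one machine with Python's multiprocessing. The filtering and threshold-crossing steps are crude placeholders, not QSpike Tools' actual pipeline or interfaces.

```python
# Minimal sketch of batch-queuing per-channel preprocessing to multiple cores;
# the filter and "spike detection" below are simplified placeholders, not the
# QSpike Tools implementation.
from multiprocessing import Pool
import numpy as np

def preprocess_channel(args):
    channel_id, raw = args
    baseline = np.convolve(raw, np.ones(101) / 101, mode="same")   # crude moving-average baseline
    filtered = raw - baseline                                      # rough high-pass
    threshold = -4.5 * np.median(np.abs(filtered)) / 0.6745        # robust noise-based threshold
    n_events = int(np.sum(filtered < threshold))                   # count suprathreshold samples
    return channel_id, n_events

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recordings = [(ch, rng.normal(0.0, 10.0, 20_000)) for ch in range(60)]  # 60 MEA channels
    with Pool() as pool:                                                    # one worker per core
        for channel_id, n_events in pool.map(preprocess_channel, recordings):
            print(f"channel {channel_id:02d}: {n_events} suprathreshold samples")
```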
The origin of high sulfate concentrations in a coastal plain aquifer, Long Island, New York
Brown, C.J.; Schoonen, M.A.A.
2004-01-01
Ion-exchange batch experiments were run on Cretaceous (Magothy aquifer) clay cores from a nearshore borehole and an inland borehole on Long Island, NY, to determine the origin of high SO42- concentrations in ground water. Desorption batch tests indicate that the amounts of SO42- released from the core samples are much greater (980-4700 µg/g of sediment) than the concentrations in ground-water samples. The locally high SO42- concentrations in pore water extracted from cores are consistent with the overall increase in SO42- concentrations in ground water along Magothy flow paths. Results of the sorption batch tests indicate that SO42- sorption onto clay is small but significant (40-120 µg/g of sediment) in the low-pH (<5) pore water of clays, and a significant part of the SO42- in Magothy pore water may result from the oxidation of FeS2 by dissolved Fe(III). The acidic conditions that result from FeS2 oxidation in acidic pore water should result in greater sorption of SO42- and other anions onto protonated surfaces than in neutral-pH pore water. Comparison of the amounts of Cl- released from a clay core sample in desorption batch tests (4 µg/g of sediment) with the amounts of Cl- sorbed to the same clay in sorption tests (3.7-5 µg/g) indicates that the high concentrations of Cl- in pore water did not originate from connate seawater but were desorbed from sediment that was previously in contact with seawater. Furthermore, a hypothetical seawater transgression in the past is consistent with the observed pattern of sorbed cation complexes in the Magothy cores and could be a significant source of high SO42- concentrations in Magothy ground water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The current PWR plant and core parameters are listed. Design requirements are briefly summarized for a radiation monitoring system, a fuel handling water system, a coolant purification system, an electrical power distribution system, and component shielding. Results of studies on thermal bowing and stressing of UO/sub 2/ are reported. A graph is presented of reactor power vs. reactor flow for various hot channel conditions. Development of U--Mo and U--Nb alloys has been stopped because of the recent selection of UO/sub 2/ fuel material for the PWR core and blanket. The fabrication characteristics of UO/sub 2/ powders are being studied. Seamless Zircaloy-2 tubing has been tested to determine elastic limits, bursting pressures, and corrosion resistance. Fabrication techniques and tests for corrosion and defects in Zircaloy-clad U--Mo and UO/sub 2/ fuel rods are described. The preparation of UO/sub 2/ by various methods is being studied to determine which method produces a material most suitable for PWR fuel elements. The stability of UO/sub 2/ compacts in high temperature water and steam is being determined. Surface area and density measurements have been performed on samples of UO/sub 2/ powder prepared by various methods. Development work on U--Mo and U--Nb alloys has included studies of the effect on corrosion behavior of additions to the test water, additions to the alloys, homogenization of the alloys, annealing times, cladding, and fabrication techniques. Data are presented on relaxation in spring materials after exposure to a corrosive environment. Results are reported from loop and autoclave tests on fission product and crud deposition. Results of irradiation and corrosion testing of clad and unclad U--Mo and U--Nb alloys are described. The UO/sub 2/ irradiation program has included studies of dimensional changes, release of fission gases, and activity in the water surrounding the samples. A review of the methods of calculating reactor physics parameters has been completed, and the established procedures have been applied to determination of PWR reference design parameters. Critical experiments and primary loop shielding analyses are described. (D.E.B.)
Severe accident modeling of a PWR core with different cladding materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. C.; Henry, R. E.; Paik, C. Y.
2012-07-01
The MAAP v.4 software has been used to model two severe accident scenarios in nuclear power reactors with three different materials as fuel cladding. The TMI-2 severe accident was modeled with Zircaloy-2 and SiC as clad material, and a SBO accident in a Zion-like, 4-loop, Westinghouse PWR was modeled with Zircaloy-2, SiC, and 304 stainless steel as clad material. TMI-2 modeling results indicate that lower peak core temperatures, less H{sub 2} (g) produced, and a smaller mass of molten material would result if SiC was substituted for Zircaloy-2 as cladding. SBO modeling results indicate that the calculated time to RCS rupture would increase by approximately 20 minutes if SiC was substituted for Zircaloy-2. Additionally, when an extended SBO accident (RCS creep rupture failure disabled) was modeled, significantly lower peak core temperatures, less H{sub 2} (g) produced, and a smaller mass of molten material would be generated by substituting SiC for Zircaloy-2 or stainless steel cladding. Because the rate of the SiC oxidation reaction with elevated temperature H{sub 2}O (g) was set to 0 for this work, these results should be considered preliminary. However, the benefits of SiC as a more accident tolerant clad material have been shown, and additional investigations of SiC as an LWR core material are warranted, specifically investigations of the oxidation kinetics of SiC in H{sub 2}O (g) over the range of temperatures and pressures relevant to severe accidents in LWRs. (authors)
Validation of the new code package APOLLO2.8 for accurate PWR neutronics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamarina, A.; Bernard, D.; Blaise, P.
2013-07-01
This paper summarizes the qualification work performed to demonstrate the accuracy of the new APOLLO2.8/SHEM-MOC package based on the JEFF3.1.1 nuclear data file for the prediction of PWR neutronics parameters. This experimental validation is based on PWR mock-up critical experiments performed in the EOLE/MINERVE zero-power reactors and on P.I.E.s (post-irradiation examinations) of spent fuel assemblies from the French PWRs. The Calculation-Experiment comparison for the main design parameters is presented: reactivity of UOX and MOX lattices, depletion calculation and fuel inventory, reactivity loss with burnup, pin-by-pin power maps, Doppler coefficient, Moderator Temperature Coefficient, Void coefficient, UO{sub 2}-Gd{sub 2}O{sub 3} poisoning worth, efficiency of Ag-In-Cd and B4C control rods, and Reflector Saving for both the standard 2-cm baffle and the GEN3 advanced thick SS reflector. From this qualification process, calculation biases and associated uncertainties are derived. This code package APOLLO2.8 is already implemented in the ARCADIA new AREVA calculation chain for core physics and is currently under implementation in the future neutronics package of the French utility Electricite de France. (authors)
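Calculation-to-experiment reactivity comparisons of the kind summarized above are commonly expressed as a bias in pcm. The sketch below shows that conversion for illustrative k values; the numbers are invented and not taken from the EOLE/MINERVE validation.

```python
# Hedged sketch: reactivity bias (calculation minus experiment) in pcm from
# calculated and measured multiplication factors; the k values are invented.

def reactivity_bias_pcm(k_calc: float, k_exp: float) -> float:
    """(rho_calc - rho_exp) * 1e5, with rho = (k - 1) / k."""
    return ((k_calc - 1.0) / k_calc - (k_exp - 1.0) / k_exp) * 1.0e5

print(round(reactivity_bias_pcm(k_calc=1.00250, k_exp=1.00000), 1))  # ~ +249.4 pcm
```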
Kirwan, J A; Broadhurst, D I; Davidson, R L; Viant, M R
2013-06-01
Direct infusion mass spectrometry (DIMS)-based untargeted metabolomics measures many hundreds of metabolites in a single experiment. While every effort is made to reduce within-experiment analytical variation in untargeted metabolomics, unavoidable sources of measurement error are introduced. This is particularly true for large-scale multi-batch experiments, necessitating the development of robust workflows that minimise batch-to-batch variation. Here, we conducted a purpose-designed, eight-batch DIMS metabolomics study using nanoelectrospray (nESI) Fourier transform ion cyclotron resonance mass spectrometric analyses of mammalian heart extracts. First, we characterised the intrinsic analytical variation of this approach to determine whether our existing workflows are fit for purpose when applied to a multi-batch investigation. Batch-to-batch variation was readily observed across the 7-day experiment, both in terms of its absolute measurement using quality control (QC) and biological replicate samples, as well as its adverse impact on our ability to discover significant metabolic information within the data. Subsequently, we developed and implemented a computational workflow that includes total-ion-current filtering, QC-robust spline batch correction and spectral cleaning, and provide conclusive evidence that this workflow reduces analytical variation and increases the proportion of significant peaks. We report an overall analytical precision of 15.9%, measured as the median relative standard deviation (RSD) for the technical replicates of the biological samples, across eight batches and 7 days of measurements. When compared against the FDA guidelines for biomarker studies, which specify an RSD of <20% as an acceptable level of precision, we conclude that our new workflows are fit for purpose for large-scale, high-throughput nESI DIMS metabolomics studies.
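The headline precision figure above (a median RSD of 15.9% versus the 20% guideline) is straightforward to reproduce from a peak table. The sketch below computes a median RSD over technical replicates for a few invented peak intensities; it is not the authors' full spectral-cleaning and batch-correction workflow.

```python
# Sketch of the precision metric quoted above: median relative standard
# deviation (RSD) over technical replicates. Peak intensities are invented;
# in practice they come from the batch-corrected DIMS peak matrix.
import numpy as np

def median_rsd(replicate_sets):
    """Median of per-set RSDs (std/mean, in percent) over technical replicates."""
    rsds = [100.0 * reps.std(ddof=1) / reps.mean() for reps in replicate_sets]
    return float(np.median(rsds))

peaks = [np.array([1.00e6, 1.12e6, 0.95e6]),
         np.array([4.1e5, 4.6e5, 4.4e5]),
         np.array([8.9e4, 7.8e4, 8.2e4])]
rsd = median_rsd(peaks)
print(f"median RSD = {rsd:.1f}% ({'within' if rsd < 20 else 'outside'} the 20% guideline)")
```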
Implementing a Nuclear Power Plant Model for Evaluating Load-Following Capability on a Small Grid
NASA Astrophysics Data System (ADS)
Arda, Samet Egemen
A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into the Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. The physical systems listed above are represented by mathematical models utilizing a state variable lumped parameter approach. A steady-state control program for the reactor, and simple turbine and governor models, are also developed. The adequacy of the isolated reactor core, the isolated steam generator, and the complete PWR models is tested in Matlab/Simulink, and dynamic responses are compared with test results obtained from the H. B. Robinson NPP. Test results illustrate that the developed models represent the dynamic features of the real physical systems and are capable of predicting responses due to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run in which a sudden loss of generation occurs in a small power system which includes hydroelectric and natural gas power plants in addition to the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design and safety limits if appropriate operational conditions, such as achieving NPP turbine control by adjusting the speed of the steam valve, are met. In other words, the NPP can participate in the control of system frequency and improve the overall power system performance.
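To make the state-variable, lumped-parameter idea concrete, the sketch below steps a two-node fuel/coolant energy balance through a small power perturbation. The structure (a few coupled first-order ODEs per component) is the point; every parameter value is assumed and none comes from the H. B. Robinson data or the PSLF implementation.

```python
# Hedged sketch of a lumped-parameter core thermal model in state-variable
# form: one fuel node and one coolant node, stepped with explicit Euler for a
# 2% power step. All parameters are illustrative.
mf_cf, mc_cc = 2.0e7, 1.0e7     # fuel and coolant heat capacities (J/K), assumed
UA, W_cp = 6.0e6, 2.0e7         # fuel-to-coolant conductance (W/K) and coolant flow heat capacity rate (W/K), assumed
T_in = 290.0                    # core inlet temperature (degC), assumed

def derivs(Tf, Tc, P):
    dTf = (P - UA * (Tf - Tc)) / mf_cf                          # fuel node energy balance
    dTc = (UA * (Tf - Tc) - 2.0 * W_cp * (Tc - T_in)) / mc_cc   # coolant node energy balance
    return dTf, dTc

P0, dt = 3.0e9, 0.05            # nominal core power (W) and time step (s)
Tc = T_in + P0 / (2.0 * W_cp)   # steady-state average coolant temperature
Tf = Tc + P0 / UA               # steady-state average fuel temperature
for _ in range(int(600 / dt)):  # 10-minute response to a 2% power step
    dTf, dTc = derivs(Tf, Tc, 1.02 * P0)
    Tf, Tc = Tf + dTf * dt, Tc + dTc * dt
print(f"fuel {Tf:.1f} degC, coolant {Tc:.1f} degC after the step")
```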
Recommendations of the VAC2VAC workshop on the design of multi-centre validation studies.
Halder, Marlies; Depraetere, Hilde; Delannois, Frédérique; Akkermans, Arnoud; Behr-Gross, Marie-Emmanuelle; Bruysters, Martijn; Dierick, Jean-François; Jungbäck, Carmen; Kross, Imke; Metz, Bernard; Pennings, Jeroen; Rigsby, Peter; Riou, Patrice; Balks, Elisabeth; Dobly, Alexandre; Leroy, Odile; Stirling, Catrina
2018-03-01
Within the Innovative Medicines Initiative 2 (IMI 2) project VAC2VAC (Vaccine batch to vaccine batch comparison by consistency testing), a workshop has been organised to discuss ways of improving the design of multi-centre validation studies and use the data generated for product-specific validation purposes. Moreover, aspects of validation within the consistency approach context were addressed. This report summarises the discussions and outlines the conclusions and recommendations agreed on by the workshop participants. Copyright © 2018.
Experiment data report for Semiscale Mod-1 Test S-05-1 (alternate ECC injection test)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, E. M.; Patton, Jr., M. L.; Sackett, K. E.
Recorded test data are presented for Test S-05-1 of the Semiscale Mod-1 alternate ECC injection test series. These tests are among several Semiscale Mod-1 experiments conducted to investigate the thermal and hydraulic phenomena accompanying a hypothesized loss-of-coolant accident in a pressurized water reactor (PWR) system. Test S-05-1 was conducted from initial conditions of 2263 psia and 544°F to investigate the response of the Semiscale Mod-1 system to a depressurization and reflood transient following a simulated double-ended offset shear of the cold leg broken loop piping. During the test, cooling water was injected into the vessel lower plenum to simulate emergency core coolant injection in a PWR, with the flow rate based on system volume scaling.
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or to translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in-core data storage for an interactive run.
Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He
2017-07-01
This study attempts to evaluate the quality of Chinese formula granules by the combined use of multi-component simultaneous quantitative analysis and bioassay. The rhubarb dispensing granules were used as the model drug for this demonstrative study. The ultra-high performance liquid chromatography (UPLC) method was adopted for simultaneous quantitative determination of the 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablets-induced mouse constipation model; the blood activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood activating biopotencies. The results of multi-component simultaneous quantitative analysis showed that there was a great difference in chemical characterization and certain differences in purgative biopotency and blood activating biopotency among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among different batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.
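The correlation step described above (content of a constituent class versus measured biopotency across batches) can be illustrated in a few lines. All values below are invented; the study itself analysed 10 batches with SPSS 22.0.

```python
# Hedged sketch of a content-versus-biopotency correlation across batches;
# the batch values are invented for illustration.
from scipy.stats import pearsonr

conjugated_anthraquinones_mg_g = [12.1, 10.4, 14.8, 9.7, 13.5, 11.2, 15.1, 8.9, 12.9, 10.8]
purgative_biopotency_units     = [103,   88,  131,  79,  118,   95,  135,  72,  112,   90]

r, p = pearsonr(conjugated_anthraquinones_mg_g, purgative_biopotency_units)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # strongly correlated by construction
```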
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, J.; Kucukboyaci, V. N.; Nguyen, L.
2012-07-01
The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000{sup R} reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss of coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest break size (DEG cold leg). A significant number of fluid dynamics models and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate effects and integral effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat up, achieving long term core cooling. (authors)
Fuel Cycle Performance of Thermal Spectrum Small Modular Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Worrall, Andrew; Todosow, Michael
2016-01-01
Small modular reactors may offer potential benefits, such as enhanced operational flexibility. However, it is vital to understand the holistic impact of small modular reactors on the nuclear fuel cycle and fuel cycle performance. The focus of this paper is on the fuel cycle impacts of light water small modular reactors in a once-through fuel cycle with low-enriched uranium fuel. A key objective of this paper is to describe preliminary reactor core physics and fuel cycle analyses conducted in support of the U.S. Department of Energy Office of Nuclear Energy Fuel Cycle Options Campaign. Challenges with small modular reactors include increased neutron leakage, fewer assemblies in the core (and therefore fewer degrees of freedom in the core design), complex enrichment and burnable absorber loadings, full power operation with inserted control rods, the potential for frequent load-following operation, and shortened core height. Each of these will impact the achievable discharge burn-up in the reactor and the fuel cycle performance. This paper summarizes the results of an expert elicitation focused on developing a list of the factors relevant to small modular reactor fuel, core, and operation that will impact fuel cycle performance. Preliminary scoping analyses were performed using a regulatory-grade reactor core simulator. The hypothetical light water small modular reactor considered in these preliminary scoping studies is a cartridge type one-batch core with 4.9% enrichment. Some core parameters, such as the size of the reactor and general assembly layout, are similar to an example small modular reactor concept from industry. The high-level issues identified and preliminary scoping calculations in this paper are intended to inform on potential fuel cycle impacts of one-batch thermal spectrum SMRs. In particular, this paper highlights the impact of increased neutron leakage and reduced number of batches on the achievable burn-up of the reactor. Fuel cycle performance metrics for a small modular reactor are compared with those of a conventional three-batch light water reactor in the following areas: nuclear waste management, environmental impact, and resource utilization. Metrics performance for a small modular reactor is degraded for mass of spent nuclear fuel and high level waste disposed, mass of depleted uranium disposed, land use per energy generated, and carbon emission per energy generated.
Naghibi Beidokhti, Hamid Reza; Ghaffarzadegan, Reza; Mirzakhanlouei, Sasan; Ghazizadeh, Leila; Dorkoosh, Farid Abedin
2017-01-01
The objective of this study was to investigate the combined influence of independent variables in the preparation of folic acid-chitosan-methotrexate nanoparticles (FA-Chi-MTX NPs). These NPs were designed and prepared for targeted drug delivery in tumors. The NPs of each batch were prepared by a coaxial electrospray atomization method and evaluated for particle size (PS) and particle size distribution (PSD). The independent variables were selected to be the concentration of FA-chitosan, the ratio of shell solution flow rate to core solution flow rate, and the applied voltage. The design of experiments (DOE) was constructed with three factors at three levels using Design-Expert software. A Box-Behnken design was used to select 15 batches of experiments randomly. The chemical structure of FA-chitosan was examined by FTIR. The NPs of each batch were collected separately, and the morphologies of the NPs were investigated by field emission scanning electron microscope (FE-SEM). The captured images of all batches were analyzed with ImageJ software. Mean PS and PSD were calculated for each batch. A polynomial equation was produced for each response. The FE-SEM results showed that the mean diameter of the core-shell NPs was around 304 nm, and nearly 30% of the produced NPs were in the desirable range. Optimum formulations were selected. The validation of the DOE optimization results showed errors of around 2.5 and 2.3% for PS and PSD, respectively. Moreover, the feasibility of using the prepared NPs to target the tumor extracellular pH was shown, as drug release was greater at the endosomal pH (acidic medium). Finally, our results proved that FA-Chi-MTX NPs were active against human epithelial cervical cancer (HeLa) cells.
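The 15-run plan mentioned above is the standard three-factor Box-Behnken design (12 edge points plus center replicates). The sketch below reconstructs that design in coded units; the factor names follow the abstract, but the mapping to physical levels is an assumption and the study itself used Design-Expert.

```python
# Hedged sketch of a 3-factor Box-Behnken design (12 edge runs + 3 center
# replicates = 15 runs) in coded levels -1/0/+1; physical factor levels are
# not specified here.
from itertools import combinations

def box_behnken_3factor(center_runs: int = 3):
    runs = []
    for i, j in combinations(range(3), 2):      # vary each pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0, 0, 0]] * center_runs           # center-point replicates
    return runs

factors = ["FA-chitosan concentration", "shell/core flow-rate ratio", "applied voltage"]
for k, run in enumerate(box_behnken_3factor(), start=1):
    print(k, dict(zip(factors, run)))
```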
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smoothed particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of the power uprate is determined in terms of both core damage probability and safety margins.
Pretest analysis of natural circulation on the PWR model PACTEL with horizontal steam generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kervinen, T.; Riikonen, V.; Ritonummi, T.
A new test facility - the parallel channel test loop (PACTEL) - has been designed and built to simulate the major components and system behavior of pressurized water reactors (PWRs) during postulated small- and medium-break loss-of-coolant accidents. Pretest calculations have been performed for the first test series, and the results of these calculations are being used for planning experiments, for adjusting the data acquisition system, and for choosing the optimal position and type of instrumentation. PACTEL is a volumetrically scaled (1:305) model of the VVER-440 PWR. In all the calculated cases, the natural circulation was found to be effective in removing the heat from the core to the steam generator. The loop mass flow rate peaked at 60% mass inventory. The straightening of the loop seals increased the mass flow rate significantly.
Unraveling the Fate and Transport of SrEDTA-2 and Sr+2 in Hanford Sediments
NASA Astrophysics Data System (ADS)
Pace, M. N.; Mayes, M. A.; Jardine, P. M.; Mehlhorn, T. L.; Liu, Q. G.; Yin, X. L.
2004-12-01
Accelerated migration of strontium-90 has been observed in the vadose zone beneath the Hanford tank farm. The goal of this paper is to provide an improved understanding of the hydrogeochemical processes that contribute to strontium transport in the far-field Hanford vadose zone. Laboratory scale batch, saturated packed column experiments, and an unsaturated transport experiment in an undisturbed core were conducted to quantify geochemical and hydrological processes controlling Sr+2 and SrEDTA-2 sorption to Hanford flood deposits. After experimentation, the undisturbed core was disassembled and samples were collected from different bedding units as a function of depth. Sequential extractions were then performed on the samples. It has been suggested that organic chelates such as EDTA may be responsible for the accelerated transport of strontium due to the formation of stable anionic complexes. Duplicate batch and column experiments performed with Sr+2 and SrEDTA-2 suggested that the SrEDTA-2 complex was not stable in the presence of soil and rapid dissociation allowed strontium to be transported as a divalent cation. Batch experiments indicated a decrease in sorption with increasing rock:water ratios, whereas saturated packed column experiments indicated equal retardation in columns of different lengths. This difference between the batch and column experiments is primarily due to the difference between equilibrium conditions where dissolution of cations may compete for sorption sites versus flowing conditions where any dissolved cations are flushed through the system minimizing competition for sorption sites. Unsaturated transport in the undisturbed core resulted in significant Sr+2 retardation despite the presence of physical nonequilibrium. Core disassembly and sequential extractions revealed the mass wetness distribution and reactive mineral phases associated with strontium in the core. Overall, results indicated that strontium will most likely be transported through the Hanford far-field vadose zone as a divalent cation.
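The batch and column observations above are commonly tied together through a linear-isotherm distribution coefficient and the corresponding retardation factor. The sketch below shows that link with assumed values for Kd, bulk density and water content; none of the numbers are the Hanford sediment data.

```python
# Hedged sketch linking batch sorption data to transport retardation under a
# linear isotherm: R = 1 + (rho_b / theta) * Kd. All input values are assumed.

def kd_ml_per_g(sorbed_ug_per_g: float, dissolved_ug_per_ml: float) -> float:
    """Distribution coefficient from a batch test, assuming a linear isotherm."""
    return sorbed_ug_per_g / dissolved_ug_per_ml

def retardation_factor(bulk_density_g_cm3: float, water_content: float, kd: float) -> float:
    """Retardation factor for solute transport."""
    return 1.0 + (bulk_density_g_cm3 / water_content) * kd

kd = kd_ml_per_g(sorbed_ug_per_g=20.0, dissolved_ug_per_ml=2.0)            # 10 mL/g (assumed)
print(f"Kd = {kd:.1f} mL/g, R = {retardation_factor(1.6, 0.30, kd):.1f}")  # R ~ 54
```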
Northwest Africa 5790: Revisiting nakhlite petrogenesis
NASA Astrophysics Data System (ADS)
Jambon, A.; Sautter, V.; Barrat, J.-A.; Gattacceca, J.; Rochette, P.; Boudouma, O.; Badia, D.; Devouard, B.
2016-10-01
Northwest Africa 5790, the latest nakhlite find, is composed of 58 vol.% augite, 6% olivine and 36% vitrophyric intercumulus material. Its petrology is comparable to previously discovered nakhlites but with key differences: (1) Augite cores display an unusual zoning between Mg# 54 and 60; (2) Olivine macrocrysts have a primary Fe-rich core composition (Mg# = 35); (3) The modal proportion of mesostasis is the highest ever described in a nakhlite; (4) It is the most magnetite-rich nakhlite, together with MIL 03346, and exhibits the least anisotropic fabric. Complex primary zoning in cumulus augite indicates resorption due to complex processes such as remobilization of former cumulates in a new magma batch. Textural relationships indicate unambiguously that olivine was growing around resorbed augite, and that olivine growth was continuous while pyroxene growth resumed at a final stage. Olivine core compositions (Mg# = 35) are out of equilibrium with the augite core compositions (Mg# 60-63) and with the previously inferred nakhlite parental magma (Mg# = 29). The presence of oscillatory zoning in olivine and augite precludes subsolidus diffusion that could have modified olivine compositions. NWA 5790 evidences at least two magma batches before eruption, with the implication that melt in equilibrium with augite cores was never in contact with olivine. Iddingsite is absent. Accordingly, the previous scenarios for nakhlite petrogenesis must be revised. The first primary parent magmas of nakhlites generated varied augite cumulates at depth (Mg# 66-60) as they differentiated to different extents. A subsequent more evolved magma batch entrained accumulated augite crystals to the surface where they were partly resorbed while olivine crystallized. Trace element variations indicate unambiguously that they represent consanguineous but different magma batches. The compositional differences among the various nakhlites suggest a number of successive lava flows. To account for all observations we propose a petrogenetic model for nakhlites based on several (at least three) thick flows. Although NWA 5790 belongs to the very top of one flow, it should come from the lowest flow sampled, based on the lack of iddingsite.
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. Application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
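The sketch below illustrates only the underlying idea of combining iterative learning control (a batch-to-batch feedforward update) with within-batch feedback, on a toy first-order plant. The plant and gains are invented; the paper's actual controller gains come from LMI conditions for the 2D-FM switched system, which this sketch does not reproduce.

```python
# Hedged sketch: iterative learning control (batch-to-batch feedforward update)
# combined with within-batch feedback on a toy first-order plant. Gains are
# hand-picked, not the LMI-based design of the paper.
import numpy as np

a, b, N, n_batches = 0.9, 0.5, 50, 30
r = np.ones(N)                               # setpoint trajectory for every batch
u_ff = np.zeros(N)                           # feedforward input, refined batch to batch
L_gain, K_fb = 0.8, 0.6                      # learning gain and feedback gain (assumed)

for k in range(n_batches):
    y = np.zeros(N)
    x = 0.0
    for t in range(N):
        e_fb = r[t] - x                      # feedback acts on the measured state
        u = u_ff[t] + K_fb * e_fb            # ILC feedforward plus feedback correction
        x = a * x + b * u                    # toy first-order plant
        y[t] = x
    e = r - y                                # tracking error of the finished batch
    u_ff = u_ff + L_gain * e                 # learning update for the next batch
    if k in (0, 9, 29):
        print(f"batch {k + 1:2d}: max |error| = {np.abs(e).max():.4f}")
```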
Khateeb, Siddique; Su, Dong; Guerreo, Sandra; ...
2016-05-03
This article presents the performance of palladium-platinum core-shell catalysts (Pt/Pd/C) for oxygen reduction synthesized in gram-scale batches in both liquid cells and polymer-electrolyte membrane fuel cells. Core-shell catalyst synthesis and characterization, ink fabrication, and cell assembly details are discussed. The Pt mass activity of the Pt/Pd core-shell catalyst was 0.95 A mg-1 at 0.9 V measured in liquid cells (0.1 M HClO4), which was 4.8 times higher than a commercial Pt/C catalyst. The performances of Pt/Pd/C and Pt/C in large single cells (315 cm2) were assessed under various operating conditions. The core-shell catalyst showed consistently higher performance than commercial Pt/C in fuel cell testing. A 20-60 mV improvement across the whole current density range was observed on air. Sensitivities to temperature, humidity, and gas composition were also investigated and the core-shell catalyst showed a consistent benefit over Pt under all conditions. However, the 4.8 times activity enhancement predicted by liquid cell measurements was not fully realized in fuel cells.
Promises and Challenges of Thorium Implementation for Transuranic Transmutation - 13550
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franceschini, F.; Lahoda, E.; Wenner, M.
2013-07-01
This paper focuses on the challenges of implementing a thorium fuel cycle for recycle and transmutation of long-lived actinide components from used nuclear fuel. A multi-stage reactor system is proposed; the first stage consists of current UO{sub 2} once-through LWRs supplying transuranic isotopes that are continuously recycled and burned in second stage reactors in either a uranium (U) or thorium (Th) carrier. The second stage reactors considered for the analysis are Reduced Moderation Pressurized Water Reactors (RMPWRs), reconfigured from current PWR core designs, and Fast Reactors (FRs) with a burner core design. While both RMPWRs and FRs can in principle be employed, each reactor and associated technology has pros and cons. FRs have unmatched flexibility and transmutation efficiency. RMPWRs have higher fuel manufacturing and reprocessing requirements, but may represent a cheaper solution and the opportunity for a shorter time to licensing and deployment. All options require substantial developments in manufacturing, due to the high radiation field, and reprocessing, due to the very high actinide recovery ratio to elicit the claimed radiotoxicity reduction. Th reduces the number of transmutation reactors, and is required to enable a viable RMPWR design, but presents additional challenges on manufacturing and reprocessing. The tradeoff between the various options does not make the choice obvious. Moreover, without an overarching supporting policy in place, the costly and challenging technologies required inherently discourage industrialization of any transmutation scheme, regardless of the adoption of U or Th. (authors)
Simultaneous optimization of loading pattern and burnable poison placement for PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alim, F.; Ivanov, K.; Yilmaz, S.
2006-07-01
To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this reason an innovative genetic algorithm is developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it. However, the result of this method does not reflect the real optimal solution. GARCO-PSU solves the LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)
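As a structural illustration of the approach named above, the sketch below shows a generic genetic algorithm over fuel-assembly permutations with order-preserving crossover and swap mutation. The fitness function is a stand-in; it is not GARCO-PSU's neutronics evaluation, its modified genotype representation, or its heuristic rules.

```python
# Generic genetic-algorithm skeleton for a loading-pattern-style permutation
# search; the fitness below is a stand-in for a real core physics evaluation.
import random

random.seed(1)
N_POS = 20                                   # core positions to fill
INVENTORY = list(range(N_POS))               # fuel assembly IDs (one per position here)

def fitness(pattern):
    # Stand-in objective: prefer a flat distribution of a fake per-position score.
    scores = [(fa + 1) / (pos + 1) for pos, fa in enumerate(pattern)]
    return -max(scores)                      # higher fitness = lower "peaking"

def crossover(p1, p2):                       # order crossover keeps a valid permutation
    cut = random.randrange(1, N_POS)
    return p1[:cut] + [fa for fa in p2 if fa not in p1[:cut]]

def mutate(pattern, rate=0.2):               # swap two positions with some probability
    if random.random() < rate:
        i, j = random.sample(range(N_POS), 2)
        pattern[i], pattern[j] = pattern[j], pattern[i]
    return pattern

pop = [random.sample(INVENTORY, N_POS) for _ in range(40)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)      # elitist selection of the best patterns
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(30)]
print("best fitness:", fitness(max(pop, key=fitness)))
```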
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bi, G.; Liu, C.; Si, S.
This paper was focused on core design, neutronics evaluation and fuel cycle analysis for Thorium-Uranium Breeding Recycle in current PWRs, without any major change to the fuel lattice and the core internals, but substituting the UOX pellets with Thorium-based pellets. The fuel cycle analysis indicates that Thorium-Uranium Breeding Recycle is technically feasible in current PWRs. A 4-loop, 193-assembly PWR core utilizing 17 x 17 fuel assemblies (FAs) was taken as the model core. Two mixed cores were investigated, loaded respectively with mixed reactor grade Plutonium-Thorium (PuThOX) FAs and mixed reactor grade {sup 233}U-Thorium (U{sub 3}ThOX) FAs, on the basis of a reference full Uranium oxide (UOX) equilibrium-cycle core. The UOX/PuThOX mixed core consists of 121 UOX FAs and 72 PuThOX FAs. The reactor grade {sup 233}U extracted from burnt PuThOX fuel was used for fabrication of U{sub 3}ThOX to start the Thorium-Uranium breeding recycle. In the UOX/U{sub 3}ThOX mixed core, the well designed U{sub 3}ThOX FAs with 1.94 w/o fissile uranium (mainly {sup 233}U) were located on the periphery of the core as a blanket region. U{sub 3}ThOX FAs remained in-core for 6 cycles, with the discharge burnup reaching 28 GWD/tHM. Compared with the initial loading, the fissile material inventory in U{sub 3}ThOX fuel increased by 7% after 1 year of cooling following discharge. 157 UOX fuel assemblies were located in the inner region of the UOX/U{sub 3}ThOX mixed core, with 64 FAs refueled at each cycle. The designed UOX/PuThOX and UOX/U{sub 3}ThOX mixed cores satisfied the related nuclear design criteria. The full core performance analyses have shown that the mixed core with PuThOX loading has impacts similar to those of MOX on several neutronic characteristic parameters, such as reduced differential boron worth, higher critical boron concentration, more negative moderator temperature coefficient, reduced control rod worth, reduced shutdown margin, etc.; while the mixed core with U{sub 3}ThOX loading on the periphery of the core has no visible impact on neutronic characteristics compared with the reference full UOX core. The fuel cycle analysis has shown that {sup 233}U mono-recycling with U{sub 3}ThOX fuel could save 13% of natural uranium resources compared with the UOX once-through fuel cycle, slightly more than Plutonium single-recycling with MOX fuel. If {sup 233}U multi-recycling with U{sub 3}ThOX fuel is implemented, even more natural uranium resources would be saved. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franceschini, F.; Lahoda, E. J.; Kucukboyaci, V. N.
2012-07-01
The efforts to reduce fuel cycle cost have driven LWR fuel close to the licensed limit in fuel fissile content, 5.0 wt% U-235 enrichment, and to the acceptable duty on current Zr-based cladding. An increase in the fuel enrichment beyond the 5 wt% limit, while certainly possible, entails costly investment in infrastructure and licensing. As a possible way to offset some of these costs, the addition of small amounts of Erbia to UO{sub 2} powder with >5 wt% U-235 has been proposed, so that its initial reactivity is reduced to that of licensed fuel and most modifications to the existing facilities and equipment could be avoided. This paper discusses the potential of such a fuel on the US market from a vendor's perspective. An analysis of the in-core behavior and fuel cycle performance of a typical 4-loop PWR with 18 and 24-month operating cycles has been conducted, with the aim of quantifying the potential economic advantage and other operational benefits of this concept. Subsequently, the implications for fuel manufacturing and storage are discussed. While this concept certainly has good potential, a compelling case for its short-term introduction as PWR fuel for the US market could not be determined. (authors)
Multi-Core Processor Memory Contention Benchmark Analysis Case Study
NASA Technical Reports Server (NTRS)
Simon, Tyler; McGalliard, James
2009-01-01
Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resources, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit of allocation in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging) but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as those from the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to yield identical results across the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and gives some perspective on the range of validity of these uncertainties. Two: it allows us to draw a picture of the state of knowledge as of today, using existing nuclear data library covariances and current methods.
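As a rough illustration of the random-sampling propagation described above (not the actual toolchain used in that work), the following Python sketch draws correlated cross-section perturbations from an assumed covariance matrix, evaluates a placeholder response function for each sample, and reports the resulting spread. The function compute_kinf and all numerical values are hypothetical stand-ins for a real lattice or core solver.

```python
import numpy as np

def compute_kinf(xs):
    """Hypothetical placeholder for a lattice/core solver call (e.g. CASMO, DRAGON, MCNP).
    A made-up linear response is used here so the sketch runs end to end."""
    return 1.30 + 0.05 * (xs[0] - 1.0) - 0.03 * (xs[1] - 1.0)

rng = np.random.default_rng(42)

# Nominal multigroup cross sections (relative units) and an assumed relative covariance matrix.
nominal = np.array([1.0, 1.0, 1.0])
rel_cov = np.array([[4e-4, 1e-4, 0.0],
                    [1e-4, 9e-4, 0.0],
                    [0.0,  0.0,  1e-4]])

n_samples = 500
samples = rng.multivariate_normal(nominal, rel_cov, size=n_samples)
kinf = np.array([compute_kinf(s) for s in samples])

print(f"k-inf mean = {kinf.mean():.5f}, std (1 sigma) = {kinf.std(ddof=1):.5f}")
```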
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research concerns the development of a code for uncertainty analysis based on a statistical approach for assessing uncertain input parameters. In the burn-up calculation of fuel, the uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The code was developed as a Python script coupled to MCNPX for the criticality and burn-up calculations. The simulation models the geometry of the PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id
2014-09-30
This research concerns the development of a code for uncertainty analysis based on a statistical approach for assessing uncertain input parameters. In the burn-up calculation of fuel, the uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The code was developed as a Python script coupled to MCNPX for the criticality and burn-up calculations. The simulation models the geometry of the PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.
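A minimal sketch of this kind of Python/MCNPX coupling is shown below, assuming a templated input deck with placeholders for the three sampled parameters. The file names, the distribution parameters, the MCNPX command-line form and the commented-out k-eff parser are all assumptions for illustration, not the authors' actual script.

```python
import subprocess
import numpy as np
from pathlib import Path

# Hypothetical templated MCNPX deck; {fuel_dens}, {cool_dens}, {fuel_temp} would be
# expanded into material cards and temperature-dependent cross-section identifiers.
TEMPLATE = Path("pwr_template.inp").read_text()

rng = np.random.default_rng(0)
for i in range(100):
    # Sample the three uncertain inputs from assumed normal distributions (illustrative values).
    fuel_dens = rng.normal(10.4, 0.05)   # g/cm3
    cool_dens = rng.normal(0.71, 0.01)   # g/cm3
    fuel_temp = rng.normal(900.0, 15.0)  # K

    deck = TEMPLATE.format(fuel_dens=fuel_dens, cool_dens=cool_dens, fuel_temp=fuel_temp)
    case_in, case_out = f"case_{i}.inp", f"case_{i}.out"
    Path(case_in).write_text(deck)

    # Run MCNPX on the generated deck (command-line form is an assumption).
    subprocess.run(["mcnpx", f"i={case_in}", f"o={case_out}"], check=True)

    # A hypothetical helper would scan the output for the final k-eff estimate, e.g.:
    # keff_samples.append(parse_keff(case_out))
```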
VERA Core Simulator methodology for pressurized water reactor cycle depletion
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane; ...
2017-01-12
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS's capability to perform high-fidelity calculations for practical PWR reactor problems.
Programing techniques for CDC equipment
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Tiffany, S. H.
1979-01-01
Five techniques reduce core requirements for fast batch turnaround time and interactive-terminal capability. The same techniques increase program versatility, decrease problem-configuration dependence, and facilitate interprogram communication.
Multi-core processing and scheduling performance in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, J. M.; Evans, D.; Foulkes, S.
2012-01-01
Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs under the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly degrade processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs control over a larger quantum of resources, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
Bustos-Fierro, C; Olivera, M E; Manzo, P G; Jiménez-Kairuz, Álvaro F
2013-01-01
To evaluate the stability of an extemporaneously prepared 7% chloral hydrate syrup under different conditions of storage and dispensing. Three batches of 7% chloral hydrate syrup were prepared. Each batch was stored in 50 light-resistant 60 mL glass containers with child-resistant caps and in two 1000 mL bottles to simulate two forms of dispensing, mono-dose and multi-dose, respectively. Twenty-five mono-dose bottles and one multi-dose bottle of each batch were stored under room conditions (20 ± 1 °C) and the rest of the samples were stored under refrigeration (5 ± 2 °C). The physical, chemical and microbiological stability was evaluated for 180 days. Stability was defined as retention of at least 95% of the initial concentration of chloral hydrate, the absence of both visible particulate matter and color and/or odor changes, and compliance with the microbiological attributes of non-sterile pharmaceutical products. At least 98% of the initial chloral hydrate concentration remained throughout the 180-day study period. There were no detectable changes in color, odor, specific gravity and pH, and no visible microbial growth. These results were not affected by storage under room or refrigerated conditions or by the frequent opening and closing of the multi-dose containers. Extemporaneously compounded 7% chloral hydrate syrup was stable for at least 180 days when stored in mono- or multi-dose light-resistant glass containers at room temperature and under refrigeration. Copyright © 2013 SEFH. Published by AULA MEDICA. All rights reserved.
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that were not covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed so that the system applies to a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu
2016-12-01
A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT
Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...
2016-08-25
We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
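A generic way to see the 2D/1D splitting described in these two records (a textbook-style form, not necessarily the exact MPACT formulation) is to integrate the 3D transport equation over each axial plane k of thickness Δz_k: the radial streaming terms are handled by 2D MOC within the plane, while the axial derivative becomes a leakage source supplied by a lower-order 1D axial solve,

$$
\Omega_x \frac{\partial \bar{\psi}_k}{\partial x} + \Omega_y \frac{\partial \bar{\psi}_k}{\partial y} + \Sigma_t\,\bar{\psi}_k(x,y,\boldsymbol{\Omega})
= \bar{q}_k(x,y,\boldsymbol{\Omega})
- \frac{\Omega_z}{\Delta z_k}\left[\psi_{k+1/2}(x,y,\boldsymbol{\Omega}) - \psi_{k-1/2}(x,y,\boldsymbol{\Omega})\right],
$$

where $\bar{\psi}_k$ is the axially averaged angular flux in plane $k$, $\bar{q}_k$ collects the scattering and fission sources, and $\psi_{k\pm 1/2}$ are the plane-interface fluxes provided by the 1D axial (e.g. nodal or low-order SN) solution.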
Recent improvements of reactor physics codes in MHI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki
2015-12-31
This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that were not covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed so that the system applies to a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat
2017-09-01
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once per epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, produces different training dynamics across batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort with respect to the loss. ISGD models the training as a stochastic process that gradually reduces the mean of the batch loss, and it utilizes a dynamic upper control limit to identify a large-loss batch on the fly. ISGD stays on the identified batch to accelerate the training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient and requires no auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrates the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
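To make the control-limit idea concrete, here is a small self-contained NumPy sketch of inconsistent training on a toy logistic-regression problem. The mean-plus-three-sigma limit, the cap of five extra updates and the omission of the paper's parameter-change penalty are illustrative simplifications, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logistic-regression problem.
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.1 * rng.normal(size=1000) > 0).astype(float)

def batch_loss_grad(w, xb, yb):
    """Cross-entropy loss and gradient on one mini-batch."""
    p = 1.0 / (1.0 + np.exp(-xb @ w))
    loss = -np.mean(yb * np.log(p + 1e-12) + (1 - yb) * np.log(1 - p + 1e-12))
    grad = xb.T @ (p - yb) / len(yb)
    return loss, grad

w = np.zeros(10)
lr, batch_size = 0.1, 50
losses = []
for epoch in range(5):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        loss, grad = batch_loss_grad(w, X[b], y[b])
        w -= lr * grad
        losses.append(loss)
        # Dynamic upper control limit: mean + 3 sigma of recent batch losses (illustrative choice).
        recent = losses[-50:]
        ucl = np.mean(recent) + 3 * np.std(recent)
        extra = 0
        # "Inconsistent" training: stay on a large-loss batch with a few extra updates.
        while loss > ucl and extra < 5:
            loss, grad = batch_loss_grad(w, X[b], y[b])
            w -= lr * grad
            extra += 1

print("final mean batch loss:", np.mean(losses[-20:]))
```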
NASA Astrophysics Data System (ADS)
Hakim Halim, Abdul; Ernawati; Hidayat, Nita P. A.
2018-03-01
This paper deals with a model of batch scheduling for a single batch processor on which a number of parts of a single item are to be processed. The process needs two kinds of setups: main setups required before processing any batches, and additional setups required repeatedly after the batch processor completes a certain number of batches. The parts to be processed arrive at the shop floor at times coinciding with their respective processing start times, and the completed parts are to be delivered at multiple due dates. The objective adopted for the model is to minimize the total inventory holding cost, consisting of the holding cost per unit time for a part in completed batches and that in in-process batches. The formulation of the total inventory holding cost is derived from the so-called actual flow time, defined as the interval between the arrival times of parts at the production line and the delivery times of the completed parts. The actual flow time captures not only minimum inventory but also just-in-time arrival and delivery. An algorithm to solve the model is proposed and a numerical example is shown.
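One compact way to write the objective sketched above, under the assumption that each part arrives exactly when its batch starts processing and is held until its assigned due date (the notation below is ours, not the paper's), is

$$
F_b = d_b - t_b^{\mathrm{arr}}, \qquad
\min \; Z = \sum_{b} Q_b \left[ h_{1}\,\big(C_b - t_b^{\mathrm{arr}}\big) + h_{2}\,\big(d_b - C_b\big) \right],
$$

where, for batch $b$, $Q_b$ is the number of parts, $t_b^{\mathrm{arr}}$ the arrival (and processing start) time, $C_b$ the completion time, $d_b$ the delivery due date, $F_b$ the actual flow time, and $h_1$, $h_2$ the holding costs per part per unit time for in-process and completed batches.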
Hot zero power reactor calculations using the Insilico code
Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...
2016-03-18
In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross section processing, and neutron transport options. Numerical results demonstrate that the Insilico SPN solver with pin-homogenized cross section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.
Support arrangements for core modules of nuclear reactors. [PWR
Bollinger, L.R.
1983-11-03
A support arrangement is provided for the core modules of a nuclear reactor which provides support access through the control drive mechanisms of the reactor. This arrangement provides axial support of individual reactor core modules from the pressure vessel head in a manner which permits attachment and detachment of the modules from the head to be accomplished through the control drive mechanisms after their leadscrews have been removed. The arrangement includes a module support nut which is suspended from the pressure vessel head and screw threaded to the shroud housing for the module. A spline lock prevents loosening of the screw connection. An installation tool assembly, including a cell lifting and preloading tool and a torquing tool, fits through the control drive mechanism and provides lifting of the shroud housing while disconnecting the spline lock, as well as application of torque to the module support nut.
NASA Technical Reports Server (NTRS)
Kadambi, J. R.; Schneider, S. J.; Stewart, W. A.
1986-01-01
The natural circulation of a single-phase fluid in a scale model of a pressurized water reactor system during a postulated degraded-core accident is analyzed. The fluids utilized were water and SF6. The design of the reactor model and the similitude requirements are described. Four LDA tests were conducted: water with 28 kW of heat in the simulated core, with and without the participation of simulated steam generators; water with 28 kW of heat in the simulated core, with the participation of simulated steam generators and with cold upflow of 12 lbm/min from the lower plenum; and SF6 with 0.9 kW of heat in the simulated core and without the participation of the simulated steam generators. For the water tests, the velocity of the water in the center of the core increases with vertical height and continues to increase in the upper plenum. For SF6, the velocities are observed to be an order of magnitude higher than those of water; however, the velocity patterns are similar.
77 FR 37795 - Airworthiness Directives; Dassault Aviation Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-25
... display of ELEC:LH ESS PWR LO or ELEC:LH ESS NO PWR (Abnormal procedure 3-190-40), land at nearest suitable airport Upon display of ELEC:RH ESS PWR LO and ELEC:RH ESS NO PWR (Abnormal procedure 3-190-45...
Pretest analysis of Semiscale Mod-3 baseline test S-07-8 and S-07-9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fineman, C.P.; Steiner, J.L.; Snider, D.M.
This document contains a pretest analysis of the Semiscale Mod-3 system thermal-hydraulic response for the second and third integral tests in Test Series 7 (Tests S-07-8 and S-07-9). Test Series 7 is the first test series to be conducted with the Semiscale Mod-3 system. The design of the Mod-3 system includes an improved representation of certain portions of a pressurized water reactor (PWR) when compared to the previously operated Semiscale Mod-1 system. The improvements include a new vessel which contains a full length (3.66 m) core, a full length upper plenum and upper head, and an external downcomer. An active pump and active steam generator scaled to their pressurized water reactor (PWR) counterparts have been added to the broken loop. The upper head design includes the capability to simulate emergency core coolant (ECC) injection into this region. Test Series 7 is divided into three groups of tests that emphasize the evaluation of the Mod-3 system performance during different phases of the loss-of-coolant experiment (LOCE) transient. The last test group, which includes Tests S-07-8 and S-07-9, will be used to evaluate the integral behavior of the system. The previous two test groups were used to evaluate the blowdown behavior and the reflood behavior of the system. 3 refs., 35 figs., 12 tabs.
Implementation of the SPH Procedure Within the MOOSE Finite Element Framework
NASA Astrophysics Data System (ADS)
Laurier, Alexandre
The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON for its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true Selengut normalizations. Another aspect of this research was to derive the SPH-corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantages over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton Krylov (PJFNK) method to allow for a direct solution to the non-linear problem. This novel implementation showed a decrease in calculation time by a factor of up to 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion: epsilon < 10^-8. The use of the PJFNK SPH procedure also allows convergence to be reached in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve. At times when the PJFNK method cannot reach convergence of the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method. This new method was tested with great success on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark that included 15 assemblies and the water reflector, with very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
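For readers unfamiliar with the traditional Picard scheme that the PJFNK approach replaces, the following Python sketch iterates SPH factors mu_i = phi_ref_i / phi_hom_i against a toy three-region "homogeneous solver". The solver, the coupling matrix and all numerical values are illustrative stand-ins, not MOOSE or DRAGON.

```python
import numpy as np

XS_HOM   = np.array([0.90, 1.10, 2.00])        # homogenized removal cross sections (illustrative)
COUPLING = np.array([[ 0.30, -0.15,  0.00],
                     [-0.15,  0.30, -0.15],
                     [ 0.00, -0.15,  0.30]])    # toy leakage coupling between the three regions
SOURCE   = np.array([1.0, 1.0, 0.2])

def solve_homogeneous(xs_corrected):
    """Toy stand-in for the homogenized diffusion/SPN solve with SPH-corrected cross sections."""
    return np.linalg.solve(np.diag(xs_corrected) + COUPLING, SOURCE)

def sph_picard(phi_ref, xs_hom, tol=1e-8, max_iter=500):
    """Traditional fixed-point SPH iteration: mu_i = phi_ref_i / phi_hom_i(mu)."""
    mu = np.ones_like(phi_ref)
    for it in range(1, max_iter + 1):
        phi_hom = solve_homogeneous(xs_hom * mu)
        mu_new = phi_ref / phi_hom
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new, it
        mu = mu_new
    return mu, max_iter

# Manufacture a consistent "heterogeneous reference" flux from known SPH factors,
# then recover those factors with the Picard iteration.
mu_true = np.array([1.05, 0.97, 0.90])
phi_ref = mu_true * solve_homogeneous(XS_HOM * mu_true)
mu, iterations = sph_picard(phi_ref, XS_HOM)
print("recovered SPH factors:", np.round(mu, 6), "in", iterations, "iterations")
```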
The influence of psychological factors on post-partum weight retention at 9 months.
Phillips, Joanne; King, Ross; Skouteris, Helen
2014-11-01
Post-partum weight retention (PWR) has been identified as a critical pathway for long-term overweight and obesity. In recent years, psychological factors have been demonstrated to play a key role in contributing to and maintaining PWR. Therefore, the aim of this study was to explore the relationship between post-partum psychological distress and PWR at 9 months, after controlling for maternal weight factors, sleep quality, sociocontextual influences, and maternal behaviours. Pregnant women (N = 126) completed a series of questionnaires at multiple time points from early pregnancy until 9 months post-partum. Hierarchical regression indicated that gestational weight gain, shorter duration (6 months or less) of breastfeeding, and post-partum body dissatisfaction at 3 and 6 months are associated with higher PWR at 9 months; stress, depression, and anxiety had minimal influence. Interventions aimed at preventing excessive PWR should specifically target the prevention of body dissatisfaction and excessive weight gain during pregnancy. What is already known on this subject? Post-partum weight retention (PWR) is a critical pathway for long-term overweight and obesity. Causes of PWR are complex and multifactorial. There is increasing evidence that psychological factors play a key role in predicting high PWR. What does this study add? Post-partum body dissatisfaction at 3 and 6 months is associated with PWR at 9 months post-birth. Post-partum depression, stress and anxiety have less influence on PWR at 9 months. Interventions aimed at preventing excessive PWR should target body dissatisfaction. © 2013 The British Psychological Society.
A multi-run chemistry module for the production of [18F]FDG
NASA Astrophysics Data System (ADS)
Sipe, B.; Murphy, M.; Best, B.; Zigler, S.; Lim, J.; Dorman, E.; Mangner, T.; Weichelt, M.
2001-07-01
We have developed a new chemistry module for the production of up to four batches of [18F]FDG. Prior to starting a batch sequence, the module automatically performs a series of self-diagnostic tests, including a reagent detection sequence. The module then executes a user-defined production sequence followed by an automated process to rinse tubing, valves, and the reaction vessel prior to the next production sequence. Process feedback from the module is provided to a graphical user interface by mass flow controllers, radiation detectors, a pressure switch, a pressure transducer, and an IR temperature sensor. This paper will describe the module, the operating system, and the results of multi-site trials, including production data and quality control results.
Development and Application of Laser Peening System for PWR Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masaki Yoda; Itaru Chida; Satoshi Okada
2006-07-01
Laser peening is a process to improve residual stress from tensile to compressive in surface layer of materials by irradiating high-power laser pulses on the material in water. Toshiba has developed a laser peening system composed of Q-switched Nd:YAG laser oscillators, laser delivery equipment and underwater remote handling equipment. We have applied the system for Japanese operating BWR power plants as a preventive maintenance measure for stress corrosion cracking (SCC) on reactor internals like core shrouds or control rod drive (CRD) penetrations since 1999. As for PWRs, alloy 600 or 182 can be susceptible to primary water stress corrosion cracking (PWSCC), and some cracks or leakages caused by the PWSCC have been discovered on penetrations of reactor vessel heads (RVHs), reactor bottom-mounted instrumentation (BMI) nozzles, and others. Taking measures to meet the unconformity of the RVH penetrations, RVHs themselves have been replaced in many PWRs. On the other hand, it's too time-consuming and expensive to replace BMI nozzles, therefore, any other convenient and less expensive measures are required instead of the replacement. In Toshiba, we carried out various tests for laser-peened nickel base alloys and confirmed the effectiveness of laser peening as a preventive maintenance measure for PWSCC. We have developed a laser peening system for PWRs as well after the one for BWRs, and applied it for BMI nozzles, core deluge line nozzles and primary water inlet nozzles of Ikata Unit 1 and 2 of Shikoku Electric Power Company since 2004, which are Japanese operating PWR power plants. In this system, laser oscillators and control devices were packed into two containers placed on the operating floor inside the reactor containment vessel. Laser pulses were delivered through twin optical fibers and irradiated on two portions in parallel to reduce operation time. For BMI nozzles, we developed a tiny irradiation head for small tubes and we peened the inner surface around J-groove welds after laser ultrasonic testing (LUT) as the remote inspection, and we peened the outer surface and the weld for Ikata Unit 2 supplementary. For core deluge line nozzles and primary water inlet nozzles, we peened the inner surface of the dissimilar metal welding, which is of nickel base alloy, joining a safe end and a low alloy metal nozzle. In this paper, the development and the actual application of the laser peening system for PWR power plants will be described. (authors)
Code of Federal Regulations, 2012 CFR
2012-07-01
... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...
Code of Federal Regulations, 2014 CFR
2014-07-01
... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...
Code of Federal Regulations, 2013 CFR
2013-07-01
... MONTROSE 2 KANSAS CITY PWR & LT. MISSOURI MONTROSE 3 KANSAS CITY PWR & LT. NEW YORK DUNKIRK 3 NIAGARA MOHAWK PWR. NEW YORK DUNKIRK 4 NIAGARA MOHAWK PWR. NEW YORK GREENIDGE 6 NY STATE ELEC & GAS. NEW YORK...
USDA-ARS?s Scientific Manuscript database
Batch and saturated soil column experiments were conducted to investigate sorption and mobility of two 14C-labeled contaminants, the hydrophobic chlordecone (CLD) and the readily water-soluble sulfadiazine (SDZ), in the absence or presence of functionalized multi-walled carbon nanotubes (MWCNTs). Th...
Determination of tube-to-tube support interaction characteristics. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haslinger, K.H.
Tube-to-tube support interaction characteristics were determined on a multi-span tube geometry representative of the hot-leg side of the C-E, System 80 steam generator design. Results will become input for an autoclave type wear test program on steam generator tubes, performed by Kraftwerk Union (KWU). Correlation of test data reported here with similar data obtained from the wear tests will be performed in an attempt to make predictions about the long-term fretting behavior of steam generator tubes.
The microstructure and magnetic properties of Cu/CuO/Ni core/multi-shell nanowire arrays
NASA Astrophysics Data System (ADS)
Yang, Feng; Shi, Jie; Zhang, Xiaofeng; Hao, Shijie; Liu, Yinong; Feng, Chun; Cui, Lishan
2018-04-01
Multifunctional metal/oxide/metal core/multi-shell nanowire arrays have mostly been prepared by physical or chemical vapor deposition. In our study, the Cu/CuO/Ni core/multi-shell nanowire arrays were prepared by AAO template electrodeposition and oxidation processes. The Cu/Ni core/shell nanowire arrays were prepared by the AAO template electrodeposition method. The microstructure and chemical compositions of the core/multi-shell nanowires and core/shell nanowires have been characterized using transmission electron microscopy with HAADF-STEM and X-ray diffraction. Magnetization measurements revealed that the Cu/CuO/Ni and Cu/Ni nanowire arrays have high coercivity and remanence ratio.
2013-01-01
Background Poverty is multidimensional. Beyond the quantitative and tangible issues related to inadequate income, it also has equally important social, more intangible and difficult if not impossible to quantify dimensions. In 2009, we explored these social and relativist dimensions of poverty in five communities in the South of Ghana with differing socioeconomic characteristics to inform the development and implementation of policies and programs to identify and target the poor for premium exemptions under Ghana’s National Health Insurance Scheme. Methods We employed participatory wealth ranking (PWR), a qualitative tool for the exploration of community concepts, identification and ranking of households into socioeconomic groups. Key informants within the community ranked households into wealth categories after discussing in detail concepts and indicators of poverty. Results Community-defined indicators of poverty covered themes related to type of employment, educational attainment of children, food availability, physical appearance, housing conditions, asset ownership, health seeking behavior, social exclusion and marginalization. The poverty indicators discussed shared commonalities but contrasted in the patterns of ranking per community. Conclusion The in-depth nature of the PWR process precludes it from being used for identification of the poor on a large national scale in a program such as the NHIS. However, PWR can provide valuable qualitative input to enrich discussions, development and implementation of policies, programs and tools for large scale interventions and targeting of the poor for social welfare programs such as premium exemption for health care. PMID:23497484
Aryeetey, Genevieve C; Jehu-Appiah, Caroline; Kotoh, Agnes M; Spaan, Ernst; Arhinful, Daniel K; Baltussen, Rob; van der Geest, Sjaak; Agyepong, Irene A
2013-03-14
Poverty is multidimensional. Beyond the quantitative and tangible issues related to inadequate income, it also has equally important social, more intangible and difficult if not impossible to quantify dimensions. In 2009, we explored these social and relativist dimensions of poverty in five communities in the South of Ghana with differing socioeconomic characteristics to inform the development and implementation of policies and programs to identify and target the poor for premium exemptions under Ghana's National Health Insurance Scheme. We employed participatory wealth ranking (PWR), a qualitative tool for the exploration of community concepts, identification and ranking of households into socioeconomic groups. Key informants within the community ranked households into wealth categories after discussing in detail concepts and indicators of poverty. Community-defined indicators of poverty covered themes related to type of employment, educational attainment of children, food availability, physical appearance, housing conditions, asset ownership, health seeking behavior, social exclusion and marginalization. The poverty indicators discussed shared commonalities but contrasted in the patterns of ranking per community. The in-depth nature of the PWR process precludes it from being used for identification of the poor on a large national scale in a program such as the NHIS. However, PWR can provide valuable qualitative input to enrich discussions, development and implementation of policies, programs and tools for large scale interventions and targeting of the poor for social welfare programs such as premium exemption for health care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Visosky, M.; Hejzlar, P.; Kazimi, M.
2006-07-01
CONFU-B assemblies are PWR assemblies containing standard Uranium fuel rods and TRU bearing inert material fuel rods and are designed to achieve net TRU destruction over a 4.5-year irradiation. These highly heterogeneous assemblies tend to exhibit large intra-assembly power peaking factors (IAPPF). Neutronic strategies to reduce IAPPF are developed. The IAPPF are calculated at the assembly level using CASMO4, and these are used to calculate the most restrictive thermal margin (the Minimum Departure from Nucleate Boiling Ratio, MDNBR) using a whole-core VIPRE-01 model. This paper examines two strategies to manage the thermal margin of a CONFU-B assembly while retaining the TRU destruction performance: use of neutron poisons and tailored enrichment schemes. Burnable poisons can be used to suppress BOL reactivity of fresh CONFU-B assemblies with only minor impact on MDNBR and TRU destruction performance. Tailored enrichment, along with the use of soluble boron, can achieve significant improvements in MDNBR, but at some cost to TRU destruction performance. (authors)
Probability of in-vessel steam explosion-induced containment failure for a KWU PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esmaili, H.; Khatib-Rahbar, M.; Zuchuat, O.
During postulated core meltdown accidents in light water reactors, there is a likelihood of an in-vessel steam explosion when the melt contacts the coolant in the lower plenum. The objective of the work described in this paper is to determine the conditional probability of in-vessel steam explosion-induced containment failure for a Kraftwerk Union (KWU) pressurized water reactor (PWR). The energetics of the explosion depend on the mass of the molten fuel that mixes with the coolant and participates in the explosion and on the conversion of fuel thermal energy into mechanical work. The work can result in the generation of dynamic pressures that affect the lower head (and possibly lead to its failure), and it can accelerate a slug (fuel and coolant material) upward that can affect the upper internal structures and vessel head and ultimately cause the failure of the upper head. If the upper head missile has sufficient energy, it can reach the containment shell and penetrate it. The analysis must, therefore, take into account all possible dissipation mechanisms.
Wang, Ruifei; Koppram, Rakesh; Olsson, Lisbeth; Franzén, Carl Johan
2014-11-01
Fed-batch simultaneous saccharification and fermentation (SSF) is a feasible option for bioethanol production from lignocellulosic raw materials at high substrate concentrations. In this work, a segregated kinetic model was developed for simulation of fed-batch simultaneous saccharification and co-fermentation (SSCF) of steam-pretreated birch, using substrate, enzymes and cell feeds. The model takes into account the dynamics of the cellulase-cellulose system and the cell population during SSCF, and the effects of pre-cultivation of yeast cells on fermentation performance. The model was cross-validated against experiments using different feed schemes. It could predict fermentation performance and explain observed differences between measured total yeast cells and dividing cells very well. The reproducibility of the experiments and the cell viability were significantly better in fed-batch than in batch SSCF at 15% and 20% total WIS contents. The model can be used for simulation of fed-batch SSCF and optimization of feed profiles. Copyright © 2014 Elsevier Ltd. All rights reserved.
1992-11-01
Incorporated. Each design is characterized by a moderated core, a NaK pumped loop primary coolant system, and a potassium heat pipe radiator as the... [Reliability block diagram labels (RelHX, nRel HX, RelSS, nRel Pwr, nRel NaK, nRel RC) garbled in extraction; omitted.]
Conceptual Designing of a Reduced Moderation Pressurized Water Reactor by Use of MVP and MVP-BURN
NASA Astrophysics Data System (ADS)
Kugo, T.
A conceptual design of a seed-blanket assembly PWR core with a complicated geometry and strong heterogeneity has been carried forward by use of the continuous-energy Monte Carlo method. Through parametric survey calculations by repeated use of MVP and a lattice burn-up calculation by MVP-BURN, a seed-blanket assembly configuration suitable for the RMWR concept has been established, by precisely evaluating the reactivity, conversion ratio and coolant void reactivity coefficient in a realistic computation time on a supercomputer.
A multi-core fiber based interferometer for high temperature sensing
NASA Astrophysics Data System (ADS)
Zhou, Song; Huang, Bo; Shu, Xuewen
2017-04-01
In this paper, we have verified and implemented a Mach-Zehnder interferometer based on a seven-core fiber for high-temperature sensing applications. The proposed structure is a multi-mode-multi-core-multi-mode fiber structure sandwiched between single-mode fibers. Between the single-mode and multi-core fiber, a 3 mm long multi-mode fiber section is used to lead light in and out. The basic operating principle of this device is the use of multi-core modes; single-mode and multi-mode interference coupling is also utilized. Experimental results indicate that this interferometric sensor is capable of accurate measurements of temperatures up to 800 °C, and the temperature sensitivity of the proposed sensor is as high as 170.2 pm/°C, which is much higher than that of existing MZI-based temperature sensors (109 pm/°C). This type of sensor is promising for practical high-temperature applications due to its advantages including high sensitivity, simple fabrication process, low cost and compactness.
Methodes d'optimisation des parametres 2D du reflecteur dans un reacteur a eau pressurisee
NASA Astrophysics Data System (ADS)
Clerc, Thomas
With a third of the reactors in operation worldwide, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fall into the category of thermal reactors, because it is mainly thermal neutrons that contribute to the fission reaction. Pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down the neutrons and reflect them back into the core. Given that the neutrons drive the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is the reason why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core within an acceptable calculation time. In particular, we focus our study on the diffusion equation and on approximate transport equations, such as the SPN and SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes an important tilt in the neutron flux at the core/reflector interface. This is why it is very important to design the reflector accurately, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized into two energy groups and if the diffusion equation is used, and it leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P1-corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to the physical structure of the reflector, so there are six control variables for the optimization algorithms. Our computational schemes are then able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators.
The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow their physical structure near the core/reflector interface to be properly described; second, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)
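A schematic of the optimization loop described in this abstract is sketched below, assuming a core-solver callable that returns an assembly power map for a given set of reflector diffusion coefficients; SciPy's least-squares driver stands in for the data-assimilation machinery actually used, and the solver, dimensions and numbers are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

N_ZONES = 6  # one fast-group diffusion coefficient per reflector zone (control variables)

def core_power_map(d_reflector):
    """Hypothetical placeholder for the core code (diffusion or SPN/SN solve):
    returns a normalized assembly power distribution for the given
    reflector diffusion coefficients."""
    base = np.linspace(1.2, 0.6, 12)                 # illustrative radial power shape
    tilt = 0.02 * np.repeat(d_reflector - 1.3, 2)    # each zone perturbs two peripheral assemblies
    return base + tilt

# Stand-in for the APOLLO2/MOC reference power distribution.
p_reference = core_power_map(np.full(N_ZONES, 1.25))

def residuals(d_reflector):
    # Discrepancy between the core-code power map and the reference map.
    return core_power_map(d_reflector) - p_reference

d0 = np.full(N_ZONES, 1.40)  # initial guess for the six diffusion coefficients (cm)
sol = least_squares(residuals, d0, bounds=(0.5, 3.0))
print("optimized reflector diffusion coefficients:", np.round(sol.x, 4))
```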
Core-to-core uniformity improvement in multi-core fiber Bragg gratings
NASA Astrophysics Data System (ADS)
Lindley, Emma; Min, Seong-Sik; Leon-Saval, Sergio; Cvetojevic, Nick; Jovanovic, Nemanja; Bland-Hawthorn, Joss; Lawrence, Jon; Gris-Sanchez, Itandehui; Birks, Tim; Haynes, Roger; Haynes, Dionne
2014-07-01
Multi-core fiber Bragg gratings (MCFBGs) will be a valuable tool not only in communications but also in various astronomical, sensing and industrial applications. In this paper we address some of the technical challenges of fabricating effective multi-core gratings by simulating improvements to the writing method. These methods allow a system designed for inscribing single-core fibers to cope with MCFBG fabrication with only minor, passive changes to the writing process. Using a capillary tube that was polished on one side, the field entering the fiber was flattened, which improved the coverage and uniformity of all cores.
[Study on HPLC fingerprint of Oldenlandia diffusa].
Chen, Yan; Yao, Zhi-Hong; Dai, Yi; Cheng, Hong; Wen, Li-Rong; Zhou, Guang-Xiong; Yao, Xin-Sheng
2012-06-01
To establish the HPLC fingerprint chromatogram of Oldenlandia diffusa, coupled with chemometric methods, for the quality control of multiple batches of the medicinal material. The separation was developed on a C18 column (4.6 mm × 250 mm, 5 μm) by gradient elution with acetonitrile-water (both containing 0.1‰ (V/V) acetic acid) as the mobile phase at a flow rate of 0.8 mL/min, with the detection wavelength at 238 nm and the column temperature at 30 °C. The HPLC fingerprint chromatogram of Oldenlandia diffusa was set up and the main characteristic peaks were identified by comparison with chemical reference substances. The quality of 22 batches of medicinal material was evaluated by similarity assay as well as principal component analysis (PCA) and cluster analysis. The established HPLC fingerprint chromatogram of Oldenlandia diffusa was specific, precise, reproducible and stable. Eleven peaks were chemically identified. The similarity of 17 batches of Oldenlandia diffusa was obviously higher than that of 5 batches of adulterants. PCA showed that the 17 batches of Oldenlandia diffusa fell within one domain and the 5 batches of adulterants were far apart from that domain. Cluster analysis of the 22 batches of medicinal material showed that the 17 batches of Oldenlandia diffusa formed one cluster while the 5 batches of adulterants were excluded. Further cluster analysis was carried out on the quality consistency of the 17 batches of Oldenlandia diffusa, and accordingly they were divided into 4 clusters. Combined with chemometric methods, the HPLC fingerprint chromatogram provides a method for the evaluation of authenticity and the quality control of Oldenlandia diffusa, which is favorable for improving the overall quality control of Oldenlandia diffusa.
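The chemometric part of such a fingerprint workflow (similarity assay, PCA and hierarchical clustering on a batch-by-peak area matrix) can be sketched in a few lines of Python. The data below are synthetic and the workflow is generic, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

# Illustrative peak-area matrix: 22 batches x 11 characteristic peaks
# (17 genuine batches plus 5 adulterants with a shifted profile).
genuine = rng.normal(1.0, 0.05, size=(17, 11))
adulterant = rng.normal(0.6, 0.05, size=(5, 11))
X = np.vstack([genuine, adulterant])

# Similarity to the mean genuine fingerprint (cosine similarity, a common choice).
ref = genuine.mean(axis=0)
cos_sim = X @ ref / (np.linalg.norm(X, axis=1) * np.linalg.norm(ref))

# Principal component analysis and Ward hierarchical clustering of the batches.
scores = PCA(n_components=2).fit_transform(X)
clusters = fcluster(linkage(pdist(X), method="ward"), t=2, criterion="maxclust")

print("cosine similarity per batch:", np.round(cos_sim, 3))
print("first principal component per batch:", np.round(scores[:, 0], 2))
print("cluster assignment per batch:", clusters)
```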
Low Platelet to White Blood Cell Ratio Indicates Poor Prognosis for Acute-On-Chronic Liver Failure.
Jie, Yusheng; Gong, Jiao; Xiao, Cuicui; Zhu, Shuguang; Zhou, Wenying; Luo, Juan; Chong, Yutian; Hu, Bo
2018-01-01
Background. Platelet to white blood cell ratio (PWR) is an independent prognostic predictor of outcomes in some diseases. However, the prognostic role of PWR is still unclear in patients with hepatitis B related acute-on-chronic liver failure (ACLF). In this study, we evaluated the clinical performance of PWR in predicting prognosis in HBV-related ACLF. Methods. A total of 530 subjects were recruited, including 97 healthy controls and 433 with HBV-related ACLF. Liver function, prothrombin time activity (PTA), international normalized ratio (INR), HBV DNA measurement, and routine hematological testing were performed at admission. Results. At baseline, PWR in patients with HBV-related ACLF (14.03 ± 7.17) was significantly decreased compared to that in healthy controls (39.16 ± 9.80). Reduced PWR values were clinically associated with the severity of liver disease and an increased mortality rate. Furthermore, PWR may be an inexpensive, easily accessible, and significant independent prognostic index for mortality on multivariate analysis (HR = 0.660, 95% CI: 0.438-0.996, p = 0.048), as is the model for end-stage liver disease (MELD) score. Conclusions. The PWR values were markedly decreased in ACLF patients compared with healthy controls and were associated with severe liver disease. Moreover, PWR was an independent prognostic indicator of the mortality rate in patients with ACLF. This investigation highlights that PWR is a useful biomarker for prediction of liver disease severity.
Modulation of venlafaxine hydrochloride release from press coated matrix tablet.
Gohel, M C; Soni, C D; Nagori, S A; Sarvaiya, K G
2008-01-01
The aim of the present study was to prepare novel modified-release press-coated tablets of venlafaxine hydrochloride. Hydroxypropylmethylcellulose K4M and hydroxypropylmethylcellulose K100M were used as release modifiers in the core and coat, respectively. A 3² full factorial design was adopted in the optimization study. The drug to polymer ratios in the core and coat were chosen as independent variables. The drug release in the first hour and the drug release rate between 1 and 12 h were chosen as dependent variables. The tablets were characterized by dimension analysis, crushing strength, friability and in vitro drug release. A check point batch, containing 1:2.6 and 1:5.4 drug to polymer in the core and coat, respectively, was prepared. The tablets of the check point batch were subjected to in vitro drug release in dissolution media of pH 5 and 7.2 and in distilled water. The kinetics of drug release was best explained by the Korsmeyer-Peppas model (anomalous non-Fickian diffusion). The systematic formulation approach enabled us to develop modified-release venlafaxine hydrochloride tablets.
Application of Advanced Multi-Core Processor Technologies to Oceanographic Research
2013-09-30
[Candidate processor comparison table omitted; entries include STM32, NXP LPC series, Microchip PIC32/DSPIC, ARM Cortex, TI OMAP, TI Sitara, Broadcom BCM2835, and FPGA options.] ...state-of-the-art information processing architectures. OBJECTIVES: Next-generation processor architectures (multi-core, multi-threaded) hold the
Thonusin, Chanisa; IglayReger, Heidi B; Soni, Tanu; Rothberg, Amy E; Burant, Charles F; Evans, Charles R
2017-11-10
In recent years, mass spectrometry-based metabolomics has increasingly been applied to large-scale epidemiological studies of human subjects. However, the successful use of metabolomics in this context is subject to the challenge of detecting biologically significant effects despite substantial intensity drift that often occurs when data are acquired over a long period or in multiple batches. Numerous computational strategies and software tools have been developed to aid in correcting for intensity drift in metabolomics data, but most of these techniques are implemented using command-line driven software and custom scripts which are not accessible to all end users of metabolomics data. Further, it has not yet become routine practice to assess the quantitative accuracy of drift correction against techniques which enable true absolute quantitation such as isotope dilution mass spectrometry. We developed an Excel-based tool, MetaboDrift, to visually evaluate and correct for intensity drift in a multi-batch liquid chromatography - mass spectrometry (LC-MS) metabolomics dataset. The tool enables drift correction based on either quality control (QC) samples analyzed throughout the batches or using QC-sample independent methods. We applied MetaboDrift to an original set of clinical metabolomics data from a mixed-meal tolerance test (MMTT). The performance of the method was evaluated for multiple classes of metabolites by comparison with normalization using isotope-labeled internal standards. QC sample-based intensity drift correction significantly improved correlation with IS-normalized data, and resulted in detection of additional metabolites with significant physiological response to the MMTT. The relative merits of different QC-sample curve fitting strategies are discussed in the context of batch size and drift pattern complexity. Our drift correction tool offers a practical, simplified approach to drift correction and batch combination in large metabolomics studies. Copyright © 2017 Elsevier B.V. All rights reserved.
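A generic QC-curve drift correction of the kind this abstract describes can be sketched in a few lines of Python: fit a smooth curve to the pooled-QC intensities as a function of injection order and divide every injection by it. The simulated data, the quadratic fit and the QC spacing below are illustrative assumptions, not MetaboDrift itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated single-metabolite intensities over 120 injections with slow instrument drift.
order = np.arange(120)
qc_mask = (order % 10 == 0)                 # every 10th injection is a pooled QC sample
drift = 1.0 + 0.004 * order - 1.5e-5 * order**2
true_signal = rng.normal(1.0, 0.05, size=120)
true_signal[qc_mask] = 1.0                  # pooled QC content is constant by construction
intensity = true_signal * drift

# Fit a smooth curve (here a quadratic; QC-based tools typically offer several choices)
# to the QC intensities as a function of injection order.
coef = np.polyfit(order[qc_mask], intensity[qc_mask], deg=2)
fitted_drift = np.polyval(coef, order)

# Divide every injection by the fitted drift, rescaled to the median QC intensity.
corrected = intensity / fitted_drift * np.median(intensity[qc_mask])

print("CV before correction: %.1f%%" % (100 * intensity.std() / intensity.mean()))
print("CV after  correction: %.1f%%" % (100 * corrected.std() / corrected.mean()))
```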
Cooling of core debris and the impact on containment pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, J.W.
1981-07-01
An evaluation of the core debris/water interactions associated with a postulated meltdown of a PWR and its impact on the containment pressure is presented. In the event of a complete core meltdown in a PWR, the interaction of molten debris with water in the bottom head of the reactor vessel could result in complete evaporation of the water and breach of the vessel wall. In the reactor cavity, the debris-water interaction may lead to a rapid generation of steam, which could lead to pressures beyond the containment building limit. Previous analysis of the debris-water interactions with the MARCH code was based on the single-sphere model, in which the internal and surface heat transfer are the controlling mechanisms. In this study, the potential in-vessel and ex-vessel debris-water interactions are analyzed in terms of porous debris bed models. The debris cooling and steam generation are controlled by the hydrodynamics of the two-phase flow. The porous models developed by Dhir-Catton and by Lipinski were examined and used to test their impact on containment dynamics. The tests include several particle sizes from 1 mm to 50 mm. Detailed transient data on the pressure, temperature, and mass of steam in the containment building were obtained for all cases. Bands of pressure variation, which represent the possible pressure rise under accident conditions, were obtained for the Dhir-Catton model and for the Lipinski model. The results show that, for the case of a wet cavity, the magnitude of the predicted pressure rises is not strongly affected by the different models. The occurrence of the peak pressure, however, is considerably delayed by using the debris bed model. For the case of a dry cavity, a large reduction of the peak pressure is obtained by using the debris bed model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kochunas, Brendan; Collins, Benjamin; Stimpson, Shane
This paper describes the methodology developed and implemented in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS) to perform high-fidelity, pressurized water reactor (PWR), multicycle, core physics calculations. Depletion of the core with pin-resolved power and nuclide detail is a significant advance in the state of the art for reactor analysis, providing the level of detail necessary to address the problems of the U.S. Department of Energy Nuclear Reactor Simulation Hub, the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS has three main components: the neutronics solver MPACT, the thermal-hydraulic (T-H) solver COBRA-TF (CTF), and the nuclide transmutation solver ORIGEN. This paper focuses on MPACT and provides an overview of the resonance self-shielding methods, macroscopic-cross-section calculation, two-dimensional/one-dimensional (2-D/1-D) transport, nuclide depletion, T-H feedback, and other supporting methods representing a minimal set of the capabilities needed to simulate high-fidelity models of a commercial nuclear reactor. Results are presented from the simulation of a model of the first cycle of Watts Bar Unit 1. The simulation is within 16 parts per million boron (ppmB) reactivity for all state points compared to cycle measurements, with an average reactivity bias of <5 ppmB for the entire cycle. Comparisons to cycle 1 flux map data are also provided, and the average 2-D root-mean-square (rms) error during cycle 1 is 1.07%. To demonstrate the multicycle capability, a state point at beginning of cycle (BOC) 2 was also simulated and compared to plant data. The comparison of the cycle 2 BOC state has a reactivity difference of +3 ppmB from measurement, and the 2-D rms of the comparison in the flux maps is 1.77%. Lastly, these results provide confidence in VERA-CS’s capability to perform high-fidelity calculations for practical PWR reactor problems.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while the sharing approach, regardless of algorithmic and scheduling optimizations, is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all the multi-socket, multi-core systems used.
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis considers the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used, and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross sections and thermal-hydraulic parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT, COBRA-TF, ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources.
In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE 6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) cycle 1 depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
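The subspace idea underpinning this work can be illustrated generically: build a low-dimensional basis from a modest number of model evaluations (snapshots), then perform the expensive UQ sampling only in the reduced coordinates. The Python sketch below is a generic illustration under that assumption, not the ROMUSE algorithms themselves; run_model, the snapshot count, and the tolerance are placeholders.

# Generic sketch of subspace-based reduced order UQ (illustrative only).
import numpy as np

def build_subspace(run_model, n_params, n_snapshots=50, tol=1e-6, rng=None):
    """Build an orthonormal basis spanning the dominant response directions."""
    rng = np.random.default_rng(rng)
    # Snapshot matrix: each column is the model response to a random perturbation
    snapshots = np.column_stack(
        [run_model(rng.standard_normal(n_params)) for _ in range(n_snapshots)]
    )
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    rank = int(np.sum(s / s[0] > tol))        # effective dimension of the response space
    return u[:, :rank]                        # basis of the "active" subspace

def reduced_uq(run_model, basis, n_params, n_samples=1000, rng=None):
    """Propagate input uncertainty but store only the reduced coordinates."""
    rng = np.random.default_rng(rng)
    reduced = [basis.T @ run_model(rng.standard_normal(n_params)) for _ in range(n_samples)]
    return np.mean(reduced, axis=0), np.cov(np.array(reduced).T)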
Sanchez-Vazquez, Manuel J; Nielen, Mirjam; Edwards, Sandra A; Gunn, George J; Lewis, Fraser I
2012-08-31
Abattoir-detected pathologies are of crucial importance to both pig production and food safety. Usually, more than one pathology coexists in a pig herd, although it often remains unknown how these different pathologies interrelate. Identification of the associations between different pathologies may facilitate an improved understanding of their underlying biological linkage, and support veterinarians in encouraging control strategies aimed at reducing the prevalence of not just one, but two or more conditions simultaneously. Multi-dimensional machine learning methodology was used to identify associations between ten typical pathologies in 6485 batches of slaughtered finishing pigs, aiding understanding of their biological associations. Pathologies potentially associated with septicaemia (e.g. pericarditis, peritonitis) appear interrelated, suggesting on-going bacterial challenges by pathogens such as Haemophilus parasuis and Streptococcus suis. Furthermore, hepatic scarring appears interrelated with both milk spot livers (Ascaris suum) and bacteria-related pathologies, suggesting a potential multi-pathogen nature for this pathology. The application of novel multi-dimensional machine learning methodology provided new insights into how typical pig pathologies are potentially interrelated at batch level. The methodology presented is a powerful exploratory tool to generate hypotheses, applicable to a wide range of studies in veterinary research.
Richards, Selena; Miller, Robert; Gemperline, Paul
2008-02-01
An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through the application of a row-wise penalty least squares function. NWAY P-ALS was applied to multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification reaction of acetic anhydride in order to resolve the concentration and spectral profiles of 1-butanol and the reaction constituents. Application of the NWAY P-ALS approach resulted in a reduction of the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank deficiency problems of the measurement matrix. The results were compared with multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through using the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
Multi-Threaded DNA Tag/Anti-Tag Library Generator for Multi-Core Platforms
2009-05-01
... base pair) Watson–Crick strand pairs that bind perfectly within pairs, but poorly across pairs. A variety of DNA strand hybridization metrics... (AFRL-RI-RS-TR-2009-131, Final Technical Report, May 2009; dates covered June 2008 - February 2009)
EPRI/DOE High-Burnup Fuel Sister Rod Test Plan Simplification and Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saltzstein, Sylvia J.; Sorenson, Ken B.; Hanson, B. D.
The EPRI/DOE High-Burnup Confirmatory Data Project (herein called the “Demo”) is a multi-year, multi-entity test with the purpose of providing quantitative and qualitative data to show if high-burnup fuel mechanical properties change in dry storage over a ten-year period. The Demo involves obtaining 32 assemblies of high-burnup PWR fuel of common cladding alloys from the North Anna Nuclear Power Plant, loading them in an NRC-licensed TN-32B cask, drying them according to standard plant procedures, and then storing them on the North Anna dry storage pad for ten years. After the ten-year storage time, the cask will be opened and the mechanical properties of the rods will be tested and analyzed.
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053
Mesomorphic properties of multi-arm chenodeoxycholic acid-derived liquid crystals
NASA Astrophysics Data System (ADS)
Dong, Liang; Yao, Miao; Wu, Shuang-jie; Yao, Dan-Shu; Hu, Jian-She; He, Xiao-zhi; Tian, Mei
2017-12-01
Four multi-arm liquid crystals (LCs) based on chenodeoxycholic acid, termed 2G-PD, 2G-IB, 2G-BD and 5G-GC, respectively, have been synthesised by a convergent method, in which the nematic LC 6-(4-((4-ethoxybenzoyl)oxy)phenoxy)-6-oxohexanoic acid was used as the side arm, chenodeoxycholic acid (CDCA) was used as the first core, and 1,2-propanediol (PD), isosorbide (IB), 4,4′-biphenyldiol (BD) and glucose (GC) were used as the second cores, respectively. The first generation product, CDCA2EA, displayed a cholesteric phase. The second generation products 2G-BD and 5G-GC displayed a cholesteric phase, while 2G-PD and 2G-IB exhibited a nematic phase. The multi-arm LC 2G-IB did not display a cholesteric phase although both cores were chiral. The result indicated that the chirality of the second core sometimes makes multi-arm LCs display a nematic phase even when a cholesteric CDCA derivative is introduced into the second core. Attention should be paid to molecular conformation, in addition to the introduction of chiral cores, for multi-chiral-core LCs to obtain a cholesteric phase.
Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components
NASA Technical Reports Server (NTRS)
Reck, Theodore (Inventor); Perez, Jose Vicente Siles (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Jung-Kubiak, Cecile (Inventor); Mehdi, Imran (Inventor); Chattopadhyay, Goutam (Inventor); Lin, Robert H. (Inventor); Peralta, Alejandro (Inventor)
2016-01-01
A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixer and multiplier waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.
Plasmid partition system of the P1par family from the pWR100 virulence plasmid of Shigella flexneri.
Sergueev, Kirill; Dabrazhynetskaya, Alena; Austin, Stuart
2005-05-01
P1par family members promote the active segregation of a variety of plasmids and plasmid prophages in gram-negative bacteria. Each has genes for ParA and ParB proteins, followed by a parS partition site. The large virulence plasmid pWR100 of Shigella flexneri contains a new P1par family member: pWR100par. Although typical parA and parB genes are present, the putative pWR100parS site is atypical in sequence and organization. However, pWR100parS promoted accurate plasmid partition in Escherichia coli when the pWR100 Par proteins were supplied. Unique BoxB hexamer motifs within parS define species specificities among previously described family members. Although substantially different from P1parS from the P1 plasmid prophage of E. coli, pWR100parS has the same BoxB sequence. As predicted, the species specificity of the two types proved identical. They also shared partition-mediated incompatibility, consistent with the proposed mechanistic link between incompatibility and species specificity. Among several informative sequence differences between pWR100parS and P1parS is the presence of a 21-bp insert at the center of the pWR100parS site. Deletion of this insert left much of the parS activity intact. Tolerance of central inserts with integral numbers of helical DNA turns reflects the critical topology of these sites, which are bent by binding the host IHF protein.
Enhancing enterovirus A71 vaccine production yield by microcarrier perfusion bioreactor culture.
Liu, Chia-Chyi; Wu, Suh-Chin; Wu, Shang-Rung; Lin, Hsiao-Yu; Guo, Meng-Shin; Yung-Chih Hu, Alan; Chow, Yen-Hung; Chiang, Jen-Ron; Shieh, Dar-Bin; Chong, Pele
2018-05-24
Hand, foot and mouth diseases (HFMD) are mainly caused by Enterovirus A71 (EV-A71) infections. Clinical trials in Asia conducted with formalin-inactivated EV-A71 vaccine candidates produced from serum-free Vero cell culture, using either roller bottle or cell factory technology, have found them to be safe and highly efficacious. To increase vaccine yields and reduce production costs, bioprocess improvements for EV-A71 vaccine manufacturing are currently being investigated. The parameters that could affect and enhance the production yield of EV-A71 virus grown in a microcarrier bioreactor were investigated. The medium replacement culture strategy, which included a multi-harvested semi-batch process and perfusion technology, was found to increase production yields by 7- to 14-fold. Western blot and cryo-EM analyses showed that the EV-A71 virus particles produced from either the multi-harvested semi-batch (MHSBC) or perfusion cultures were similar to those obtained from the single-batch culture. Mouse immunogenicity studies indicate that the EV-A71 vaccine candidates produced from the perfusion culture have similar potency to those obtained from the single-batch bioprocess. The physical structures of the EV-A71 particles revealed by the cryo-EM analysis were spherical capsid particles. These results provide feasible technical bioprocesses for increasing virus yields and scaling up EV-A71 vaccine manufacturing using bioreactor cell culture methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data, and this controller can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input, multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
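As background to the Batch Fitted Q-learning component, the following Python sketch shows a plain batch fitted Q-iteration critic update on a fixed set of logged transitions. It is only a generic illustration: the paper's actor network, the VRFT initialization, and the continuous-action handling are not reproduced, and the regressor choice and discrete action grid are assumptions.

# Generic batch fitted Q-iteration on logged transitions (illustrative sketch only).
import numpy as np
from sklearn.neural_network import MLPRegressor

def fitted_q_iteration(transitions, actions, n_iters=20, gamma=0.95):
    """transitions = (s, a, r, s_next) as 2-D arrays (N x dim), r as 1-D array;
    actions is a small grid of candidate action values used for the max."""
    s, a, r, s_next = transitions
    q = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    targets = r.copy()                                 # first pass: Q approximates immediate reward
    for _ in range(n_iters):
        q.fit(np.hstack([s, a]), targets)
        # Bellman targets: r + gamma * max over the action grid of Q(s', a')
        q_next = np.max(
            np.stack([q.predict(np.hstack([s_next, np.full_like(a, u)])) for u in actions]),
            axis=0,
        )
        targets = r + gamma * q_next
    return q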
De San Luis, Alicia; Paulis, Maria; Leiza, Jose Ramon
2017-11-15
Hybrid core/shell polymer particles with co-encapsulated quantum dots (QDs) (CdSe/ZnS) and CeO2 nanoparticles have been synthesized in a two stage semi-batch emulsion polymerization process. In the first stage, both inorganic nanoparticles are incorporated into cross-linked polystyrene (PS) particles by miniemulsion polymerization. This hybrid dispersion is then used as the seed to produce the core/shell particles by starved feeding of methyl methacrylate and divinylbenzene (MMA/DVB) monomers. The core/shell hybrid dispersions maintained in the dark exhibit stable fluorescence emission over time, and notably their fluorescence intensity increases under sunlight, likely due to the effect of the co-encapsulated CeO2 nanoparticles that change the optical properties of the environment of the quantum dot particles. The fluorescence increase depends on the QD:CeO2 ratio, with the 1:2 ratio resulting in the highest increase (280%). Furthermore, a film forming hybrid latex has been synthesized using the former core/shell PS/QD/CeO2/PMMA particles as seeds and feeding under semi-batch conditions methyl methacrylate, butyl acrylate and acrylic acid. Films cast from this core/shell/shell hybrid dispersion also exhibit fluorescence, and as for the core/shell latex the fluorescence increases under sunlight exposure. Interestingly, the increase in the film is at least two times higher than that in the latex, which is attributed to the additional effect of the neighboring coalesced particles containing CeO2 affecting the environment of the QDs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martino, C
The Department of Energy (DOE) recognizes the need for the characterization of High-Level Waste (HLW) saltcake in the Savannah River Site (SRS) F- and H-area tank farms to support upcoming salt processing activities. As part of the enhanced characterization efforts, Tank 25F will be sampled and the samples analyzed at the Savannah River National Laboratory (SRNL). This Task Technical and Quality Assurance Plan documents the planned activities for the physical, chemical, and radiological analysis of the Tank 25F saltcake core samples. This plan does not cover other characterization activities that do not involve core sample analysis and it does not address issues regarding sampling or sample transportation. The objectives of this report are: (1) Provide information useful in projecting the composition of dissolved salt batches by quantifying important components (such as actinides, 137Cs, and 90Sr) on a per batch basis. This will assist in process selection for the treatment of salt batches and provide data for the validation of dissolution modeling. (2) Determine the properties of the heel resulting from dissolution of the bulk saltcake. Also note tendencies toward post-mixing precipitation. (3) Provide a basis for determining the number of samples needed for the characterization of future saltcake tanks. Gather information useful towards performing characterization in a manner that is more cost and time effective.
Asif, Rameez
2016-01-01
Space division multiplexing (SDM), incorporating multi-core fibers (MCFs), has been demonstrated for effectively maximizing the data capacity in an impending capacity crunch. To achieve high spectral density through multi-carrier encoding while simultaneously maintaining transmission reach, benefits from inter-core crosstalk (XT) and non-linear compensation must be utilized. In this report, we propose a proof-of-concept unified receiver architecture that jointly compensates optical Kerr effects and intra- and inter-core XT in MCFs. The architecture is analysed in a multi-channel 512 Gbit/s dual-carrier DP-16QAM system over 800 km of 19-core MCF to validate the digital compensation of inter-core XT. Through this architecture: (a) we efficiently compensate the inter-core XT, improving the Q-factor by 4.82 dB, and (b) we achieve a substantial gain in transmission reach, increasing the maximum achievable distance from 480 km to 1208 km, via analytical analysis. Simulation results confirm that inter-core XT distortions are more severe for cores fabricated around the central axis of the cladding. Notably, the XT-induced Q-penalty can be suppressed to less than 1 dB for up to −11.56 dB of inter-core XT over 800 km of MCF, offering the flexibility to fabricate dense core structures with the same cladding diameter. Moreover, this report outlines the relationship between core pitch and forward-error correction (FEC). PMID:27270381
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
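A toy Python sketch of the central idea, simulation-time-ordered execution, is given below: always dispatch the virtual core with the smallest virtual clock for one quantum. The callable interface and the quantum handling are simplifications assumed only for illustration; none of the actual hypervisor mechanics from the work above are represented.

# Toy simulation-time-ordered scheduler for virtual cores (illustrative sketch).
import heapq

def run_time_ordered(vcores, quantum, end_time):
    """vcores: dict mapping a vcore id to a callable(quantum) that executes the
    vcore for one quantum and returns the amount of virtual time actually
    consumed (which may exceed `quantum` when the guest leaps ahead while idle)."""
    heap = [(0.0, name) for name in vcores]           # (virtual clock, vcore id)
    heapq.heapify(heap)
    while heap:
        vclock, name = heapq.heappop(heap)            # least-advanced vcore runs first
        if vclock >= end_time:
            continue                                  # this vcore has reached the end time
        advanced = vcores[name](quantum)              # execute one scheduling quantum
        heapq.heappush(heap, (vclock + advanced, name))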
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
... equalizing vent valves on air locks 2, 4, and 5 was completed. An evaluation of the failed main coolant pump No. 1-80-F-737 was completed. The design for installing combination ball check and manual stop valves on the boiler water level sight glasses, to prevent the escape of steam should a defective sight glass develop, was completed. The main coolant pumps No. 80 and No. 79 were modified by increasing the radial clearance of the impeller wear ring and by removing the upper labyrinth ring. A design for relocating the cooling water flow orifice 17-J4-17 was completed. Metallurgy: Preliminary data from the Bett 69-1 in-pile thermal conductivity capsules indicate that the thermal conductivity of as-sintered ZrO2-34 wt.% UO2 appears to decrease from an initial value of about 1.6 Btu/hr-ft-°F to about 0.7 Btu/hr-ft-°F after 17 days of irradiation in an estimated perturbed flux of 4 x 10^13. The thermal conductivities of UO2 and BeO-51 wt.% UO2 fuel remained unchanged during this time. Examination of the two failed X-3-1 fuel plates and the two failed CR-V-m fuel plates showed that a definite burnup limitation exists for bulk UO2 of about 16 x 10^20 to 21.5 x 10^20 fissions/cc, at which point the fuel increases in volume by about 4-5%. Irradiation of both fine and coarse dispersions of 28 wt.% UO2 in BeO to exposures of about 11 x 10^20 fissions/cc shows this material has very poor dimensional stability and poor fission gas retention ability. The fine-particle dispersion showed approximately 4.8 times the thickness increase of the coarse-particle dispersion. Interim examination of a bulk B4C burnable poison plate irradiated in the HB-1 loop to about 60 at.% B-10 burnup showed a 17% increase in plate thickness. The technical feasibility of fabricating blanket receptacles with full-length fuel channels and an integral cover plate by form rolling was established. Hack-pressure-bonding appears to be a suitable means of incorporating void volume in fuel compartments of oxide plates. High density (99% T.D.) and improved microstructure of B4C-SiC burnable poisons are achieved when small (2 micron) B4C particle size powder is used in hot-pressing compacts. Measurements of the self-diffusion coefficients of uranium in UO2 by the method of surface activity decrease were completed. Experiments on the diffusion of Xe-133 in Core 2-type UO2 fuel platelets were completed. Diffusion anneals carried out at 1000 deg C on samples from the X-3-1 and the 14-28 irradiation tests show that the apparent diffusion coefficient for Kr-85 increases considerably with burnup. An average activation energy for thoron emanation in UO2 was estimated to be 44 kcal/mole. An initial experiment on the release of helium from slightly irradiated B4C at 900 deg C resulted in a diffusion coefficient for helium of 3.5 x 10^-8. Physics: Calculated values for seed-blanket power sharing as a function of PWR-1 Seed 1 life were compared with measured data obtained from thermal instrumentation at Shippingport. Two-dimensional depletion studies in the PWR-2 "composite cell" geometry were completed for seed assembly configurations having different radial fuel zoning. An eighth-core representation is being employed for a two-dimensional depletion calculation of PWR-2.
An analysis of the effect on the axial power distribution of the nonuniform temperature distribution in an 8 ft PWR-2 core loaded with 295 kg of U-235 indicated that local variations in power density of as much as 15% may occur, relative to the distribution that would exist if the axial temperature distribution were uniform. A technique was developed which makes possible an approximately correct description of the neutron capture rate within small rectangular boron wafers in diffusion theory calculations. Seed peaking factors in a five-cluster slab of PWR-2 mock-up materials were measured and compared with calculated peaking factors obtained using the nuclear ...
77 FR 15293 - Airworthiness Directives; Dassault Aviation Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-15
...-190-20), land at nearest suitable airport Upon display of ELEC:LH ESS PWR LO or ELEC:LH ESS NO PWR (Abnormal procedure 3-190-40), land at nearest suitable airport Upon display of ELEC:RH ESS PWR LO and ELEC...
NASA Technical Reports Server (NTRS)
Drummond, Mark; Hine, Butler; Genet, Russell; Genet, David; Talent, David; Boyd, Louis; Trueblood, Mark; Filippenko, Alexei V. (Editor)
1991-01-01
The objective of multi-use telescopes is to reduce the initial and operational costs of space telescopes to the point where a fair number of telescopes, a dozen or so, would be affordable. The basic approach is to develop a common telescope, control system, and power and communications subsystem that can be used with a wide variety of instrument payloads, i.e., imaging CCD cameras, photometers, spectrographs, etc. By having such a multi-use and multi-user telescope, a common practice for earth-based telescopes, development cost can be shared across many telescopes, and the telescopes can be produced in economical batches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas
2007-07-01
Anticipating future needs of our customers and willing to concentrate synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulic (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA®, and concludes on customer benefits. ARCADIA® is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA® system features state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g. improved accuracy for heterogeneous cores (MOX/UOX, Gd...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)
The effects of stainless steel radial reflector on core reactivity for small modular reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Jung Kil, E-mail: jkkang@email.kings.ac.kr; Hah, Chang Joo, E-mail: changhah@kings.ac.kr; Cho, Sung Ju, E-mail: sungju@knfc.co.kr
A commercial PWR core is surrounded by a radial reflector, which consists of a baffle and water. The radial reflector is designed to reflect neutrons back into the core region to improve the neutron efficiency of the reactor and to protect the reactor vessel from the embrittling effects caused by irradiation during power operation. The reflector also helps to flatten the neutron flux and power distributions in the reactor core. The conceptual nuclear design for a boron-free small modular reactor (SMR) under development in Korea requires a cycle length of 4-5 years, a rated power of 180 MWth, and an enrichment of less than 5 w/o. The aim of this paper is to analyze the effects of a stainless steel radial reflector on the performance of the SMR using UO2 fuels. Three types of reflectors, water, a water/stainless steel 304 mixture, and stainless steel 304, are selected to investigate the effect on core reactivity. Additionally, the thickness of the stainless steel and a double-layer reflector type are also investigated. The CASMO-4/SIMULATE-3 code system is used for this analysis. The results of the analysis show that a single-layer stainless steel reflector is the most efficient reflector.
Free Factories: Unified Infrastructure for Data Intensive Web Services
Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.
2010-01-01
We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic
2017-02-01
Multivariate statistical process control (MSPC) is increasingly popular, given the challenge posed by the large multivariate datasets generated by analytical instruments such as Raman spectroscopy for the monitoring of complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Moreover, unsynchronized data sets are common for cell culture monitoring because spectroscopic measurements are generally recorded in an alternating way, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analysis such as multi-way principal component analysis (MPCA) for MSPC monitoring. Correlation optimized warping (COW) is a popular method for data alignment with satisfactory performance; however, it had never before been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, to use COW to synchronize batches of varying duration analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operation condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T² and Q. As illustrated by the results, the MSPC model was able to identify abnormal operating conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We proved that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition, with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
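For readers unfamiliar with the monitoring statistics, the Python sketch below shows the batch-wise unfolding and the computation of Hotelling's T² and Q (squared prediction error) against a PCA model of normal-operation batches. It is a simplified illustration of the MPCA step only; the control limits, the time-interval models, and the COW synchronization itself are omitted, and the array shapes and component count are assumptions.

# Sketch of the MPCA monitoring step: unfold each synchronized batch into one row,
# fit PCA on normal-operation (NOC) batches, then score a new batch with T^2 and Q.
import numpy as np
from sklearn.decomposition import PCA

def fit_noc_model(noc_batches, n_components=3):
    """noc_batches: list of equally shaped 2-D arrays (time x variable)."""
    X = np.array([b.ravel() for b in noc_batches])     # batch-wise unfolding
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    pca = PCA(n_components=n_components).fit((X - mean) / std)
    return pca, mean, std

def t2_and_q(batch, pca, mean, std):
    x = (batch.ravel() - mean) / std
    scores = pca.transform(x[None, :])[0]
    t2 = float(np.sum(scores**2 / pca.explained_variance_))   # Hotelling's T^2
    residual = x - pca.inverse_transform(scores[None, :])[0]
    q = float(residual @ residual)                             # Q statistic (SPE)
    return t2, q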
Neutronics Analysis of SMART Small Modular Reactor using SRAC 2006 Code
NASA Astrophysics Data System (ADS)
Ramdhani, Rahmi N.; Prastyo, Puguh A.; Waris, Abdul; Widayani; Kurniadi, Rizal
2017-07-01
Small modular reactors (SMRs) are part of a new generation of nuclear reactors being developed worldwide. One of the advantages of SMRs is the flexibility to adopt advanced design concepts and technology. SMART (System-integrated Modular Advanced ReacTor) is a small integral-type PWR with a thermal power of 330 MW that has been developed by KAERI (Korea Atomic Energy Research Institute). The SMART core consists of 57 fuel assemblies based on the well-proven 17×17 array that has been used in Korean commercial PWRs. SMART is soluble-boron free, and the high initial reactivity is mainly controlled by burnable absorbers. The goal of this study is to perform a neutronics evaluation of the SMART core with UO2 as the main fuel. Neutronics calculations were performed using the PIJ and CITATION modules of the SRAC 2006 code with JENDL-3.3 as the nuclear data library.
Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang
2013-05-01
To establish a new method for quality evaluation and validate its feasibility by simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparison of the quantitative results between the external standard method and QAMS. No significant differences were found in the quantitative results of the five alkaloids in the 21 batches of S. flavescens determined by the external standard method and QAMS. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
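One common formulation of the QAMS relation is sketched below in LaTeX; conventions for the relative correction factor vary between publications, so this should be read as an illustration rather than the exact formula used in this study, and the choice of oxymatrine as the single marker is an assumption.

% Illustrative QAMS relation (one common convention; symbols explained below).
\[
  f_{i} \;=\; \frac{A_{s}/C_{s}}{A_{i}/C_{i}}
  \qquad\Longrightarrow\qquad
  C_{i} \;=\; f_{i}\,\frac{C_{s}}{A_{s}}\,A_{i}
\]
% A_s, C_s: peak area and concentration of the single marker (assumed here to be oxymatrine);
% A_i, C_i: peak area and concentration of analyte i (e.g. sophocarpine). The factor f_i is
% determined once from reference standards and then reused for routine samples, so only the
% single-marker standard is needed in everyday quality control.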
MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system
NASA Astrophysics Data System (ADS)
Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
When signals are transmitted over a multi-mode multi-core fiber, mode coupling occurs between modes, and mode dispersion also arises because each mode propagates at a different speed in the link. Mode coupling and mode dispersion degrade the useful signal in the transmission link, so the receiver needs to process the received signal with digital signal processing and compensate for the impairment accumulated in the link. We first analyze the influence of mode coupling and mode dispersion during transmission over multi-mode multi-core fiber, and then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm adapts the equalizer taps according to the degree of crosstalk between cores or modes, which eliminates the interference among different modes and cores in a space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
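The structure of such a MIMO butterfly equalizer can be illustrated with a much simpler relative of RLSCMA, the stochastic-gradient constant modulus update. The Python sketch below shows a 2x2 case only; it is not the RLS-based algorithm, tap count, or mode/core dimensionality used in the work above, and the parameters are assumptions.

# Toy 2x2 MIMO butterfly equalizer driven by the constant modulus criterion
# (plain stochastic-gradient CMA, for illustration of the structure only).
import numpy as np

def cma_mimo_equalize(x1, x2, n_taps=7, mu=1e-3, radius=1.0):
    """x1, x2: complex sample streams from two received tributaries."""
    w = np.zeros((2, 2, n_taps), dtype=complex)
    w[0, 0, n_taps // 2] = w[1, 1, n_taps // 2] = 1.0    # center-spike initialization
    y1_out, y2_out = [], []
    for k in range(n_taps, len(x1)):
        u1 = x1[k - n_taps:k][::-1]                      # tap-delay-line inputs
        u2 = x2[k - n_taps:k][::-1]
        y1 = w[0, 0] @ u1 + w[0, 1] @ u2                 # butterfly outputs
        y2 = w[1, 0] @ u1 + w[1, 1] @ u2
        e1 = y1 * (radius - abs(y1) ** 2)                # constant-modulus error terms
        e2 = y2 * (radius - abs(y2) ** 2)
        w[0, 0] += mu * e1 * np.conj(u1); w[0, 1] += mu * e1 * np.conj(u2)
        w[1, 0] += mu * e2 * np.conj(u1); w[1, 1] += mu * e2 * np.conj(u2)
        y1_out.append(y1); y2_out.append(y2)
    return np.array(y1_out), np.array(y2_out)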
Thorium-based mixed oxide fuel in a pressurized water reactor: A feasibility analysis with MCNP
NASA Astrophysics Data System (ADS)
Tucker, Lucas Powelson
This dissertation investigates techniques for spent fuel monitoring, and assesses the feasibility of using a thorium-based mixed oxide fuel in a conventional pressurized water reactor for plutonium disposition. Both non-paralyzing and paralyzing dead-time calculations were performed for the Portable Spectroscopic Fast Neutron Probe (N-Probe), which can be used for spent fuel interrogation. Also, a Canberra 3He neutron detector's dead-time was estimated using a combination of subcritical assembly measurements and MCNP simulations. Next, a multitude of fission products were identified as candidates for burnup and spent fuel analysis of irradiated mixed oxide fuel. The best isotopes for these applications were identified by investigating half-life, photon energy, fission yield, branching ratios, production modes, thermal neutron absorption cross section and fuel matrix diffusivity. 132I and 97Nb were identified as good candidates for MOX fuel on-line burnup analysis. In the second, and most important, part of this work, the feasibility of utilizing ThMOX fuel in a pressurized water reactor (PWR) was first examined under steady-state, beginning-of-life conditions. Using a three-dimensional MCNP model of a Westinghouse-type 17×17 PWR, several fuel compositions and configurations of a one-third ThMOX core were compared to a 100% UO2 core. A blanket-type arrangement of 5.5 wt% PuO2 was determined to be the best candidate for further analysis. Next, the safety of the ThMOX configuration was evaluated through three cycles of burnup using the following metrics: axial and radial nuclear hot channel factors, moderator and fuel temperature coefficients, delayed neutron fraction, and shutdown margin. Additionally, the performance of the ThMOX configuration was assessed by tracking cycle length, plutonium destroyed, and fission product poison concentration.
Tapia, Felipe; Vázquez-Ramírez, Daniel; Genzel, Yvonne; Reichl, Udo
2016-03-01
With an increasing demand for efficacious, safe, and affordable vaccines for human and animal use, process intensification in cell culture-based viral vaccine production demands advanced process strategies to overcome the limitations of conventional batch cultivations. However, the use of fed-batch, perfusion, or continuous modes to drive processes at high cell density (HCD) and overextended operating times has so far been little explored in large-scale viral vaccine manufacturing. Also, possible reductions in cell-specific virus yields for HCD cultivations have been reported frequently. Taking into account that vaccine production is one of the most heavily regulated industries in the pharmaceutical sector with tough margins to meet, it is understandable that process intensification is being considered by both academia and industry as a next step toward more efficient viral vaccine production processes only recently. Compared to conventional batch processes, fed-batch and perfusion strategies could result in ten to a hundred times higher product yields. Both cultivation strategies can be implemented to achieve cell concentrations exceeding 10^7 cells/mL or even 10^8 cells/mL, while keeping low levels of metabolites that potentially inhibit cell growth and virus replication. The trend towards HCD processes is supported by development of GMP-compliant cultivation platforms, i.e., acoustic settlers, hollow fiber bioreactors, and hollow fiber-based perfusion systems including tangential flow filtration (TFF) or alternating tangential flow (ATF) technologies. In this review, these process modes are discussed in detail and compared with conventional batch processes based on productivity indicators such as space-time yield, cell concentration, and product titers. In addition, options for the production of viral vaccines in continuous multi-stage bioreactors such as two- and three-stage systems are addressed. While such systems have shown similar virus titers compared to batch cultivations, keeping high yields for extended production times is still a challenge. Overall, we demonstrate that process intensification of cell culture-based viral vaccine production can be realized by the consequent application of fed-batch, perfusion, and continuous systems with a significant increase in productivity. The potential for even further improvements is high, considering recent developments in establishment of new (designer) cell lines, better characterization of host cell metabolism, advances in media design, and the use of mathematical models as a tool for process optimization and control.
Batching System for Superior Service
NASA Technical Reports Server (NTRS)
2001-01-01
Veridian's Portable Batch System (PBS) was the recipient of the 1997 NASA Space Act Award for outstanding software. A batch system is a set of processes for managing queues and jobs. Without a batch system, it is difficult to manage the workload of a computer system. By bundling the enterprise's computing resources, the PBS technology offers users a single coherent interface, resulting in efficient management of the batch services. Users choose which information to package into "containers" for system-wide use. PBS also provides detailed system usage data, a procedure not easily executed without this software. PBS operates on networked, multi-platform UNIX environments. Veridian's new version, PBS Pro™, has additional features and enhancements, including support for additional operating systems. Veridian distributes the original version of PBS as Open Source software via the PBS website. Customers can register and download the software at no cost. PBS Pro is also available via the web and offers additional features such as increased stability, reliability, and fault tolerance. A company using PBS can expect a significant increase in the effective management of its computing resources. Tangible benefits include increased utilization of costly resources and enhanced understanding of computational requirements and user needs.
An interactive multi-block grid generation system
NASA Technical Reports Server (NTRS)
Kao, T. J.; Su, T. Y.; Appleby, Ruth
1992-01-01
A grid generation procedure combining interactive and batch grid generation programs was put together to generate multi-block grids for complex aircraft configurations. The interactive section provides the tools for 3D geometry manipulation, surface grid extraction, boundary domain construction for 3D volume grid generation, and block-block relationships and boundary conditions for flow solvers. The procedure improves the flexibility and quality of grid generation to meet the design/analysis requirements.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... Manufacturing, Multi-Plastics, Inc., Division, Sipco, Inc., Division, Including Leased Workers of M-Ploy... Manufacturing, Multi-Plastics, Inc., Division and Sipco, Inc., Division, including leased workers of M-Ploy... applicable to TA-W-70,457 is hereby issued as follows: ``All workers of Core Manufacturing, Multi-Plastics...
Stack-and-Draw Manufacture Process of a Seven-Core Optical Fiber for Fluorescence Measurements
NASA Astrophysics Data System (ADS)
Samir, Ahmed; Batagelj, Bostjan
2018-01-01
Multi-core optical-fiber technology is expected to be used in telecommunications and sensory systems in a relatively short amount of time. However, a successful transition from research laboratories to industry applications will only be possible with an optimized design and manufacturing process. The fabrication process is an important aspect in designing and developing new multi-applicable, multi-core fibers, where the best candidate is a seven-core fiber. Here, the basics of designing and manufacturing a single-mode, seven-core fiber using the stack-and-draw process are described for the example of a fluorescence sensory system.
NASA Astrophysics Data System (ADS)
Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo
Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs) with a 17% break at the hot leg or cold leg were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side before loop seal clearing (LSC), which was induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) because of significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to a rapid liquid level drop in the core before LSC. Liquid accumulated in the upper plenum, the steam generator (SG) U-tube upflow-side and the SG inlet plenum before the LSC due to CCFL caused by high-velocity vapor flow, enhancing the decrease in the core liquid level. RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the critical flow model in the code with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of the simulated fuel rods was underpredicted due to overprediction of the core liquid level after core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. These results suggest that the code has remaining problems in properly predicting the primary coolant distribution.
Gutiérrez, Miguel Morales; Caruso, Stefano; Diomidis, Nikitas
2018-05-19
According to the Swiss disposal concept, the safety of a deep geological repository for spent nuclear fuel (SNF) is based on a multi-barrier system. The disposal canister is an important component of the engineered barrier system, aiming to provide containment of the SNF for thousands of years. This study evaluates the criticality safety and shielding of candidate disposal canister concepts, focusing on the fulfilment of the sub-criticality criterion and on limiting radiolysis processes at the outer surface of the canister, which can enhance corrosion mechanisms. The effective neutron multiplication factor (k-eff) and the surface dose rates are calculated for three different canister designs and material combinations for boiling water reactor (BWR) canisters, containing 12 spent fuel assemblies (SFAs), and pressurized water reactor (PWR) canisters, with 4 SFAs. For each configuration, individual criticality and shielding calculations were carried out. The results show that k-eff falls below the defined upper safety limit (USL) of 0.95 for all BWR configurations, while staying above the USL for the PWR ones. Therefore, the application of a burnup credit methodology for the PWR case is required; this methodology is currently under development. The influence of canister material and internal geometry on criticality is also relevant, enabling the identification of safer fuel arrangements. For a final burnup of 55 MWd/kgHM and a 30-year cooling time, the combined photon-neutron surface dose rate is well below the threshold of 1 Gy/h defined to limit radiation-induced corrosion of the canister in all cases. Copyright © 2018 Elsevier Ltd. All rights reserved.
Joining dissimilar stainless steels for pressure vessel components
NASA Astrophysics Data System (ADS)
Sun, Zheng; Han, Huai-Yue
1994-03-01
A series of studies was carried out to examine the weldability and properties of dissimilar steel joints between martensitic and austenitic stainless steels, F6NM (OCr13Ni4Mo) and AISI 347, respectively. Such joints are important parts in, e.g., the primary circuit of a pressurized water reactor (PWR). This kind of joint requires good mechanical properties, corrosion resistance and stable magnetic permeability, as well as good weldability. The weldability tests included weld thermal simulation of the martensitic steel to investigate the influence of weld thermal cycles and post-weld heat treatment (PWHT) on the mechanical properties of the heat-affected zone (HAZ); implant testing to examine the tendency for cold cracking of the martensitic steel; and rigid restraint testing to determine the hot cracking susceptibility of the multi-pass dissimilar steel joints. The joints were subjected to various mechanical tests, including tensile, bending and impact tests at various temperatures, as well as slow strain-rate tests to examine the stress corrosion cracking tendency in a simulated primary-circuit environment of a PWR. The results of the various tests indicated that the quality of the tube/tube joints is satisfactory and meets all the design requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa
The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at the hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady state reactor core response under the steamline break (SLB) accident condition, the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and the high-flow case is more DNB limiting than the low-flow case.
NASA Astrophysics Data System (ADS)
Ratnesh, R. K.; Mehata, Mohan Singh
2017-02-01
We report two port synthesis of CdSe/CdS/ZnS core-multi-shell quantum dots (Q-dots) and their structural properties. The multi-shell structures of the Q-dots were developed using the successive ionic layer adsorption and reaction (SILAR) technique. The obtained Q-dots show high crystallinity, with step-wise adjustment of the lattice parameters in the radial direction. The sizes of the core and core-shell Q-dots estimated from transmission electron microscopy images and absorption spectra are about 3.4 and 5.3 nm, respectively. The water-soluble Q-dots (scheme-1) were prepared using a ligand exchange method, and the effect of pH was discussed regarding the variation of quantum yield (QY). The decrease in the lifetime of the core-multi-shell Q-dots with respect to the CdSe core indicates that the shell growth may be tuned by the lifetimes. Thus, the study clearly demonstrates that the core-shell approach can be used to substantially improve the optical properties of Q-dots desired for various applications.
Monitoring system for a liquid-cooled nuclear fission reactor. [PWR]
DeVolpi, A.
1984-07-20
The invention provides improved means for detecting the water levels in various regions of a water-cooled nuclear power reactor, viz., in the downcomer, in the core, in the inlet and outlet plenums, at the head, and elsewhere, and also for detecting the density of the water in these regions. The invention utilizes a plurality of exterior gamma radiation detectors and a collimator technique operable to sense separate regions of the reactor vessel and give, respectively, unique signals for these regions, whereby comparative analysis of these signals can be used to advise of the presence and density of cooling water in the vessel.
Corletti, Michael M.; Lau, Louis K.; Schulz, Terry L.
1993-01-01
The spent fuel pit of a pressurized water reactor (PWR) nuclear power plant has sufficient coolant capacity that a safety-rated cooling system is not required. A non-safety-rated combined cooling and purification system with redundant branches selectively provides simultaneous cooling and purification for the spent fuel pit, the refueling cavity, and the refueling water storage tank, and transfers coolant from the refueling water storage tank to the refueling cavity without it passing through the reactor core. Skimmers on the suction piping of the combined cooling and purification system eliminate the need for separate skimmer circuits with dedicated pumps.
Reactive Transport Models with Geomechanics to Mitigate Risks of CO2 Utilization and Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deo, Milind; Huang, Hai; Kweon, Hyukmin
2016-03-28
Reactivity of carbon dioxide (CO2), rocks and brine is important in a number of practical situations in carbon dioxide sequestration. Injectivity of CO2 will be affected by near-wellbore dissolution or precipitation. Natural fractures or faults containing specific minerals may reactivate, leading to induced seismicity. In this project, we first examined whether the reactions between CO2, brine and rocks affect the nature of the porous medium and its properties, including petrophysical properties, in the timeframe of the injection operations. This was done by carrying out experiments at sequestration conditions (2000 psi for corefloods and 2400 psi for batch experiments, and 600°C) with three different types of rocks – sandstone, limestone and dolomite. Experiments were performed in batch mode, and corefloods were conducted over a two-week period. Batch experiments were performed with samples of differing surface area to understand the impact of surface area on overall reaction rates. Toughreact, a reactive transport model, was used to interpret and understand the experimental results. The role of iron in dissolution and precipitation reactions was observed to be significant. Iron-containing minerals – siderite and ankerite – dissolved, resulting in changes in porosity and permeability. Corefloods and batch experiments revealed similar patterns. With the right cationic balance, there is a possibility of precipitation of iron-bearing carbonates. The results indicate that during injection operations mineralogical changes may lead to injectivity enhancements near the wellbore and petrophysical changes elsewhere in the system. Limestone and dolomite cores showed consistent dissolution at the entrance of the core. The dissolution led to the formation of wormholes and interconnected dissolution zones. Results indicate that near-wellbore dissolution in these rock types may lead to rock failure. Micro-CT images of the cores before and after the experiments revealed that an initial high-permeability pathway facilitated the formation of wormholes. The peak cation concentrations and general trends were matched using Toughreact. Batch reactor modeling showed that the geometric factors obtained using powder data, which relate effective surface area to the BET surface area, had to be reduced for fractured samples and cores. This indicates that the available surface area in consolidated samples is lower than that deduced from powder experiments. Field-scale modeling of reactive transport and geomechanics was developed in parallel at Idaho National Laboratory. The model is able to take into account complex chemistry and consider interactions of natural fractures and faults. Poroelastic geomechanical considerations are also included in the model.
TORC3: Token-ring clearing heuristic for currency circulation
NASA Astrophysics Data System (ADS)
Humes, Carlos, Jr.; Lauretto, Marcelo S.; Nakano, Fábio; Pereira, Carlos A. B.; Rafare, Guilherme F. G.; Stern, Julio Michael
2012-10-01
Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a real time heuristic procedure, demanding modest computational resources, and able to completely shield the clearing operation against the participating agents' risk of default.
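As a rough illustration of what clearing buys (not the TORC3 heuristic itself, which is described in the paper), the short Python sketch below computes net positions from a toy set of pairwise credit messages under an assumed (payer, payee, amount) format; any clearing procedure ultimately only has to settle these much smaller net amounts rather than the gross flows.

from collections import defaultdict

def net_positions(messages):
    """Compute each agent's net position from pairwise credit messages.

    `messages` uses an assumed toy format of (payer, payee, amount) tuples.
    A negative net position means the agent owes currency overall.
    """
    net = defaultdict(float)
    for payer, payee, amount in messages:
        net[payer] -= amount
        net[payee] += amount
    return dict(net)

# Toy example: gross obligations total 300, yet net transfers are far smaller.
msgs = [("A", "B", 100), ("B", "C", 120), ("C", "A", 80)]
print(net_positions(msgs))  # {'A': -20.0, 'B': -20.0, 'C': 40.0}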
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost. The use of limited-energy batteries is closely related to the lifetime of the sensor nodes in a wireless sensor network. Efficient energy management is therefore important for extending the lifetime of the sensor nodes. Most effort for improving power efficiency in tiny sensor nodes has focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multi-cores strongly requires attention to be given to the problem of reducing power consumption in the multi-cores themselves. In this paper, we propose an energy-efficient scheduling method for sensor nodes with uniform multi-cores. We extend T-Ler plane based scheduling, a global optimal scheduling approach for uniform multi-cores and multi-processors, to enable power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize DPM. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
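A central issue in DPM-based scheduling of this kind is the mode-transition overhead: putting a core to sleep only saves energy when the idle interval exceeds a break-even time. The sketch below shows that generic break-even test only; it is not the scheduling algorithm of the paper, and the power and transition numbers are made-up placeholders.

def should_sleep(idle_time_s, p_active_w, p_sleep_w, e_transition_j, t_transition_s):
    """Return True if sleeping during the idle interval saves energy.

    Classic break-even test used in dynamic power management (DPM):
    sleep only if the idle interval is long enough to amortize the
    energy and time cost of entering and leaving the low-power state.
    """
    if idle_time_s <= t_transition_s:
        return False
    e_stay_awake = p_active_w * idle_time_s
    e_sleep = e_transition_j + p_sleep_w * (idle_time_s - t_transition_s)
    return e_sleep < e_stay_awake

# Placeholder numbers for a hypothetical sensor-node core.
print(should_sleep(idle_time_s=0.050, p_active_w=0.200,
                   p_sleep_w=0.005, e_transition_j=0.002, t_transition_s=0.010))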
The increase in fatigue crack growth rates observed for Zircaloy-4 in a PWR environment
NASA Astrophysics Data System (ADS)
Cockeram, B. V.; Kammenzind, B. F.
2018-02-01
Cyclic stresses produced during the operation of nuclear reactors can result in the extension of cracks by fatigue processes. Although fatigue crack growth rate (FCGR) data for Zircaloy-4 in air are available, little testing has been performed in a PWR primary water environment. Test programs performed by Gee et al. in 1989 and by Picker and Pickles in 1984 at the UK Atomic Energy Authority, and by Wisner et al. in 1994, have shown an enhancement in FCGR for Zircaloy-2 and Zircaloy-4 in high-temperature water. In this work, FCGR testing is performed on Zircaloy-4 in a PWR environment in the hydrided and non-hydrided conditions over a range of stress intensities. Measurements of crack extension are performed using a direct current potential drop (DCPD) method. The cyclic rate in the PWR primary water environment is varied between 1 cycle per minute and 0.1 cycle per minute. Faster crack growth is observed in water in comparison to FCGR testing performed in air for the hydrided material. Hydrided and non-hydrided materials had similar FCGR values in air, but the non-hydrided material exhibited much lower FCGR in a PWR primary water environment than the hydrided material. Hydrides are shown to exhibit an increased tendency for cracking or decohesion in a PWR primary water environment, which results in an enhancement in FCGR values. The FCGR in PWR primary water increased only slightly with decreasing cycle frequency in the range of 1 cycle per minute to 0.1 cycle per minute. Comparisons between the FCGR in water and in air show that the enhancement from the PWR environment is affected by the applied stress intensity.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement, rather than the speed of floating-point operations per processor, is the overwhelming bottleneck to scalable performance. All multi-core hardware paradigms, whether heterogeneous or homogeneous - be it the Cell processor, GPGPU, or multi-core x86 - share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, nuclear data), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.
NASA Astrophysics Data System (ADS)
Gupta, J.; Hure, J.; Tanguy, B.; Laffont, L.; Lafont, M.-C.; Andrieu, E.
2018-04-01
Irradiation Assisted Stress Corrosion Cracking (IASCC) is a complex degradation phenomenon which can have a significant influence on the maintenance time and cost of core internals of a Pressurized Water Reactor (PWR). Hence, it is an issue of concern, especially in the context of lifetime extension of PWRs. Proton irradiation is generally used as a representative alternative to neutron irradiation to improve the current understanding of the mechanisms involved in IASCC. This study assesses the possibility of using heavy-ion irradiation to evaluate IASCC mechanisms by comparing the irradiation-induced modifications (in microstructure and mechanical properties) and cracking susceptibility of SA 304L after both types of irradiation: Fe irradiation at 450 °C and proton irradiation at 350 °C. Irradiation-induced defects are characterized and quantified along with nano-hardness measurements, showing a correlation between irradiation hardening and the density of Frank loops that is well captured by Orowan's formula. Both irradiations (iron and proton) increase the susceptibility of SA 304L to intergranular cracking when subjected to Constant Extension Rate Tensile (CERT) tests in a simulated nominal PWR primary water environment at 340 °C. For these conditions, cracking susceptibility is found to be quantitatively similar for both irradiations, despite significant differences in hardening and degree of localization.
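For reference, the correlation between hardening and Frank loop density mentioned above is conventionally written as a dispersed-barrier (Orowan-type) relation; the symbols below are the standard ones, and the barrier-strength coefficient actually fitted by the authors is not given in the abstract.

\[
\Delta\sigma_y \;=\; M\,\alpha\,\mu\,b\,\sqrt{N\,d}
\]

where \(\Delta\sigma_y\) is the irradiation-induced increase in yield stress, \(M\) the Taylor factor, \(\alpha\) the barrier strength of the Frank loops, \(\mu\) the shear modulus, \(b\) the Burgers vector magnitude, \(N\) the loop number density and \(d\) the mean loop diameter.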
Extension of the Bgl Broad Group Cross Section Library
NASA Astrophysics Data System (ADS)
Kirilova, Desislava; Belousov, Sergey; Ilieva, Krassimira
2009-08-01
The broad-group cross-section libraries BUGLE and BGL are applied for reactor shielding calculations using the DOORS package, based on the discrete ordinates method and a multigroup approximation of the neutron cross-sections. The BUGLE and BGL libraries are problem-oriented for PWR and VVER type reactors, respectively. They had been generated by collapsing the problem-independent fine-group library VITAMIN-B6, applying PWR and VVER one-dimensional radial models of the reactor middle plane, using the SCALE software package. The surveillance assemblies (SA) of the VVER-1000/320 are located on the baffle above the reactor core upper edge, in a region where the geometry and materials differ from those of the middle plane and the neutron field gradient is very high, which results in a different neutron spectrum. That is why the application of the aforementioned libraries for the neutron fluence calculation in the region of the SA could lead to additional inaccuracy. This was the main reason to study the necessity of extending the BGL library with cross-sections appropriate for the SA region. A comparative analysis of the neutron spectra of the SA region calculated with the VITAMIN-B6 and BGL libraries using the two-dimensional code DORT has been done in order to evaluate the BGL applicability for SA calculations.
The stability and transport of radiolabeled Fe2O3 particles were studied using laboratory batch and column techniques. Core material collected from a shallow sand and gravel aquifer was used as the immobile column matrix material. Variables in the study incl...
Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R
2014-01-01
Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770
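As a rough illustration of the kind of intra- and inter-batch correction this dataset is intended to benchmark, the Python sketch below rescales each feature within a batch by the ratio of the global QC median to that batch's QC median. This is a deliberately simple QC-ratio correction under assumed array shapes, not the specific algorithm the dataset was originally designed to test.

import numpy as np

def qc_median_correction(intensities, batch_ids, is_qc):
    """Per-batch, per-feature correction using QC samples.

    intensities : (n_samples, n_features) array of peak intensities
    batch_ids   : (n_samples,) integer batch label per sample
    is_qc       : (n_samples,) boolean mask marking QC injections
    Assumes every batch contains at least one QC injection.  Each feature
    in each batch is rescaled so its QC median matches the global QC median.
    """
    corrected = intensities.astype(float).copy()
    global_qc_median = np.median(intensities[is_qc], axis=0)
    for b in np.unique(batch_ids):
        in_batch = batch_ids == b
        batch_qc_median = np.median(intensities[in_batch & is_qc], axis=0)
        factor = np.divide(global_qc_median, batch_qc_median,
                           out=np.ones_like(global_qc_median, dtype=float),
                           where=batch_qc_median > 0)
        corrected[in_batch] *= factor
    return corrected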
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS. These calculations are conducted in 2D at the fuel assembly level. It is also possible to calculate these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare the results of full-core calculations based on two sets of diffusion data obtained from Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based both on the fuel-assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR reactor core. The level of difference that results exclusively from the nuclear data selection can help in understanding the level of inherent uncertainty of such full-core calculations.
NASA Astrophysics Data System (ADS)
Ebrahimi, P.; Vilcaez, J.
2017-12-01
Hydraulic fracturing wastewater (HFW) containing high concentrations of Ba is commonly disposed of into deep saline aquifers. We investigate the effect of brine salinity, competing cations (Ca and Mg), and guar gum (the most common fracturing viscosifier) on the sorption and transport of Ba through dolomite rocks. To this aim, we have conducted batch sorption and core-flooding experiments at both ambient (22°C) and deep subsurface (60°C) temperature conditions. The effect of mineral composition is assessed by comparing batch and core-flooding experimental results obtained with sandstone and dolomite rocks. Batch sorption experiments conducted using powdered dolomite rocks (500-600 µm particle size) revealed that Ba sorption on dolomite greatly decreases with increasing brine salinity (0 - 180,000 mg-NaCl/L), and that at the brine salinities of HFW, chloro-complexation reactions between Ba and Cl ions and changes in pH (resulting from dolomite dissolution) are the controlling factors of Ba sorption on dolomite. Organo-complexation reactions between Ba and guar gum, and competition of Ba with common cations (Ca and Mg) for hydration sites of dolomite, play a secondary role. This finding is in accordance with core-flooding experimental results, which show that the transport of Ba through synthetic dolomite rocks with high flow properties (25-29.6% porosity, 9.6-13.7 mD permeability) increases with increasing brine salinity (0-180,000 mg-NaCl/L), while the presence of guar gum (50-500 mg/L) does not affect the transport of Ba. On the other hand, core-flooding experiments conducted using natural dolomite core plugs (6.5-8.6% porosity, 0.06-0.3 mD permeability) indicate that guar gum can clog the pore throats of tight dolomite rocks, retarding the transport of Ba. Results of our numerical simulation studies indicate that the mechanism of Ba sorption on dolomite can be represented by a sorption model that accounts for both surface complexation reactions on three distinct hydration sites (>CaOHo, >MgOHo, and >CO3Ho) and the kinetic dissolution of dolomite. The presented results are important for understanding the fate of heavy metals present in HFW disposed of into deep saline aquifers.
Air ingression calculations for selected plant transients using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kmetyk, L.N.
1994-01-01
Two sets of MELCOR calculations have been completed studying the effects of air ingression on the consequences of various severe accident scenarios. One set of calculations analyzed a station blackout with surge line failure prior to vessel breach, starting from nominal operating conditions; the other set of calculations analyzed a station blackout occurring during shutdown (refueling) conditions. Both sets of analyses were for the Surry plant, a three-loop Westinghouse PWR. For both accident scenarios, a base-case calculation was done and then repeated with air ingression from containment into the core region following core degradation and vessel failure. In addition to the two sets of analyses done for this program, a similar air-ingression sensitivity study was done as part of a low-power/shutdown PRA, with results summarized here; that PRA study also analyzed a station blackout occurring during shutdown (refueling) conditions, but for the Grand Gulf plant, a BWR/6 with Mark III containment. These studies help quantify the amount of air that would have to enter the core region to have a significant impact on the severe accident scenario, and demonstrate that one effect of air ingression is a substantial enhancement of ruthenium release. These calculations also show that, while the core clad temperatures rise more quickly due to oxidation with air rather than steam, the core also degrades and relocates more quickly, so that no sustained, enhanced core heatup is predicted to occur with air ingression.
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Guan, Fa-chun; Sha, Zhi-peng; Zhang, Yu-yang; Wang, Jun-feng; Wang, Chao
2016-01-01
Home courtyard agriculture is an important model of agricultural production on the Tibetan plateau. Because of the sensitive and fragile plateau environment, it needs to have optimal performance characteristics, including high sustainability, low environmental pressure, and high economic benefit. Emergy analysis is a promising tool for evaluation of the environmental-economic performance of these production systems. In this study, emergy analysis was used to evaluate three courtyard agricultural production models: Raising Geese in Corn Fields (RGICF), Conventional Corn Planting (CCP), and Pea-Wheat Rotation (PWR). The results showed that the RGICF model produced greater economic benefits, and had higher sustainability, lower environmental pressure, and higher product safety than the CCP and PWR models. The emergy yield ratio (EYR) and emergy self-support ratio (ESR) of RGICF were 0.66 and 0.11, respectively, lower than those of the CCP production model, and 0.99 and 0.08, respectively, lower than those of the PWR production model. The impact of RGICF (1.45) on the environment was lower than that of CCP (2.26) and PWR (2.46). The emergy sustainable indices (ESIs) of RGICF were 1.07 and 1.02 times higher than those of CCP and PWR, respectively. With regard to the emergy index of product safety (EIPS), RGICF had a higher safety index than those of CCP and PWR. Overall, our results suggest that the RGICF model is advantageous and provides higher environmental benefits than the CCP and PWR systems. PMID:27487808
Development of 3D pseudo pin-by-pin calculation methodology in ANC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, B.; Mayhue, L.; Huria, H.
2012-07-01
Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loadings, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000® plant, the Westinghouse next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between different fuel assembly types that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, will be discussed in the paper. (authors)
"Photonic lantern" spectral filters in multi-core Fiber.
Birks, T A; Mangan, B J; Díez, A; Cruz, J L; Murphy, D F
2012-06-18
Fiber Bragg gratings are written across all 120 single-mode cores of a multi-core optical fiber. The fiber is interfaced to multimode ports by tapering it within a depressed-index glass jacket. The result is a compact multimode "photonic lantern" filter with astrophotonic applications. The tapered structure is also an effective mode scrambler.
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.; Hoffler, Keith D.; Proffitt, Melissa S.; Brown, Philip W.; Phillips, Michael R.; Rivers, Robert A.; Messina, Michael D.; Carzoo, Susan W.; Bacon, Barton J.; Foster, John F.
1994-01-01
This paper describes the design, analysis, and nonlinear simulation results (batch and piloted) for a longitudinal controller which is scheduled to be flight-tested on the High-Alpha Research Vehicle (HARV). The HARV is an F-18 airplane modified for and equipped with multi-axis thrust vectoring. The paper includes a description of the facilities, a detailed review of the feedback controller design, linear analysis results of the feedback controller, a description of the feed-forward controller design, nonlinear batch simulation results, and piloted simulation results. Batch simulation results include maximum pitch stick agility responses, angle-of-attack (alpha) captures, and alpha regulation for full lateral stick rolls at several alphas. Piloted simulation results include task descriptions for several types of maneuvers, task guidelines, the corresponding Cooper-Harper ratings from three test pilots, and some pilot comments. The ratings show that desirable criteria are achieved for almost all of the piloted simulation tasks.
Zhang, Li; Wylie, Bruce K.; Loveland, Thomas R.; Fosnight, Eugene A.; Tieszen, Larry L.; Ji, Lei; Gilmanov, Tagir
2007-01-01
Two spatially explicit estimates of gross primary production (GPP) are available for the Northern Great Plains. An empirical piecewise regression (PWR) GPP model was developed from flux tower measurements to map carbon flux across the region. The Moderate Resolution Imaging Spectroradiometer (MODIS) GPP model is a process-based model that uses flux tower data to calibrate its parameters. Verification and comparison of the regional PWR GPP and the global MODIS GPP are important for the modeling of grassland carbon flux. This study compared GPP estimates from the PWR and MODIS models with five towers in the grasslands. PWR GPP and MODIS GPP showed good agreement with tower-based GPP at three towers. The global MODIS GPP, however, did not agree well with tower-based GPP at the two other towers, probably because of the insensitivity of the MODIS model to regional ecosystem and climate change and extreme soil moisture conditions. Cross-validation indicated that the PWR model is relatively robust for predicting regional grassland GPP. However, the PWR model should include a wide variety of flux tower data as the training data sets to obtain more accurate results. In addition, GPP maps based on the PWR and MODIS models were compared for the entire region. In the northwest and south, PWR GPP was much higher than MODIS GPP. These areas were characterized by higher water-holding capacity, with a lower proportion of C4 grasses in the northwest and a higher proportion of C4 grasses in the south. In the central and southeastern regions, PWR GPP was much lower than MODIS GPP under complicated conditions with generally mixed C3/C4 grasses. The analysis indicated that the global MODIS GPP model has some limitations in detecting moisture stress, which may be caused by the facts that C3 and C4 grasses are not distinguished, that water stress is driven by vapor pressure deficit (VPD) from coarse meteorological data, and that MODIS land cover data are unable to differentiate sub-pixel cropland components.
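The piecewise regression idea behind the PWR model can be pictured with a small sketch: fit separate linear models on sub-ranges of a predictor split at a breakpoint, and keep the breakpoint that minimizes the combined squared error. This is only a generic two-segment illustration on synthetic data, not the tower-trained PWR GPP model itself.

import numpy as np

def fit_two_segment(x, y, candidate_breaks):
    """Fit y = a + b*x separately below/above a breakpoint; keep the best split."""
    best = None
    for brk in candidate_breaks:
        lo, hi = x <= brk, x > brk
        if lo.sum() < 2 or hi.sum() < 2:
            continue
        sse, fits = 0.0, []
        for mask in (lo, hi):
            b, a = np.polyfit(x[mask], y[mask], 1)   # slope, intercept
            sse += np.sum((y[mask] - (a + b * x[mask])) ** 2)
            fits.append((a, b))
        if best is None or sse < best[0]:
            best = (sse, brk, fits)
    return best  # (sse, breakpoint, [(intercept, slope) per segment])

# Synthetic example with a change in slope near x = 5.
x = np.linspace(0, 10, 50)
y = np.where(x <= 5, 2 * x, 10 + 0.5 * (x - 5)) + np.random.normal(0, 0.2, x.size)
print(fit_two_segment(x, y, candidate_breaks=np.linspace(1, 9, 17))[1])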
Batch fabrication of polymer microfluidic cartridges for QCM sensor packaging by direct bonding
NASA Astrophysics Data System (ADS)
Sandström, Niklas; Zandi Shafagh, Reza; Gylfason, Kristinn B.; Haraldsson, Tommy; van der Wijngaart, Wouter
2017-12-01
Quartz crystal microbalance (QCM) sensing is an established technique commonly used in laboratory-based life-science applications. However, the relatively complex, multi-part design and multi-step fabrication and assembly of state-of-the-art QCM cartridges make them unsuited for disposable applications such as point-of-care (PoC) diagnostics. In this work, we present the uncomplicated manufacturing of QCMs in polymer microfluidic cartridges. Our novel approach comprises two key innovations: the batch reaction injection molding of microfluidic parts, and the integration of the cartridge components by direct, unassisted bonding. We demonstrate the molding of batches of 12 off-stoichiometry thiol-ene epoxy polymer (OSTE+) parts in a single molding cycle using an adapted reaction injection molding process, and the direct bonding of the OSTE+ parts to other OSTE+ substrates, to printed circuit boards, and to QCMs. The microfluidic QCM OSTE+ cartridges were successfully evaluated in terms of liquid sealing as well as electrical properties, and the sensor performance characteristics are on par with those of a commercially available QCM biosensor cartridge. The simplified manufacturing of QCM sensors with maintained performance enables novel application areas, e.g. as disposable devices in a point-of-care setting. Moreover, our results can be extended to simplify the fabrication of other microfluidic devices with multiple heterogeneously integrated components.
Current and anticipated uses of thermal-hydraulic codes in Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Sommer, F.; Depisch, F.
1997-07-01
In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.
Quality control and batch testing of MRPC modules for BESIII ETOF upgrade
NASA Astrophysics Data System (ADS)
Liu, Z.; Li, X.; Sun, Y. J.; Li, C.; Heng, Y. K.; Chen, T. X.; Dai, H. L.; Shao, M.; Sun, S. S.; Tang, Z. B.; Yang, R. X.; Wu, Z.; Wang, X. Z.
2017-12-01
The end-cap time-of-flight (ETOF) system for the Beijing Spectrometer III (BESIII) has been upgraded using the Multi-gap Resistive Plate Chamber (MRPC) technology (Williams et al., 1999; Li et al., 2001; Blanco et al., 2003; Fonte et al., 2013, [1-4]). A set of quality-assurance procedures has been developed to guarantee the performance of the 72 mass-produced MRPC modules installed. The cosmic ray batch testing shows that the average detection efficiency of the MRPC modules is about 95%. Two different calibration methods indicate that the MRPCs' time resolution can reach 60 ps in the cosmic ray test.
The stability and transport of radio-labeled Fe2O3 particles were studied using laboratory batch and column techniques. Core material collected from shallow sand and gravel aquifer was used as the immobile column matrix material. Variables in the study included flow rate, pH, i...
Reactor Dosimetry Applications Using RAPTOR-M3G: A New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the discrete ordinates (SN) method requires extensive computational resources for large 3-D neutron and gamma transport applications, due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes the paper with final remarks and future work.
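The flavor of the domain decomposition can be conveyed with a small sketch that hands contiguous slabs of mesh cells to processors so that each rank owns a near-equal share of the spatial domain (and hence of the memory). This is a generic 1-D slab partitioning under assumed inputs, not the actual RAPTOR-M3G decomposition, which also splits the angular domain.

def slab_partition(n_cells, n_ranks):
    """Split n_cells mesh cells into n_ranks contiguous slabs of near-equal size.

    Returns a list of (start, stop) index ranges, one per rank, so that each
    processor owns its own spatial sub-domain and its own share of memory.
    """
    base, extra = divmod(n_cells, n_ranks)
    ranges, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

print(slab_partition(n_cells=1000, n_ranks=6))
# [(0, 167), (167, 334), (334, 501), (501, 668), (668, 834), (834, 1000)]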
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, P.; Umminger, K.J.; Schoen, B.
1995-09-01
The thermal-hydraulic behavior of a PWR during beyond-design-basis accident scenarios is of vital interest for the verification and optimization of accident management procedures. Within the scope of the German reactor safety research program, experiments were performed in the volumetrically scaled PKL III test facility by Siemens/KWU. This highly instrumented test rig simulates a KWU-design PWR (1300 MWe). In particular, the latest tests performed related to an SBLOCA with additional system failures, e.g. nitrogen entering the primary system. In the case of an SBLOCA, the goal of the operator is to put the plant in a condition where the decay heat can be removed, first using the low pressure emergency core cooling system and then the residual heat removal system. The experimental investigation presented assumed the following beyond-design-basis accident conditions: a 0.5% break in a cold leg; 2 of 4 steam generators (SGs) isolated on the secondary side (feedwater and steam line valves closed) and filled with steam on the primary side; cooldown of the primary system using the remaining two steam generators; high pressure injection only in the two loops with intact steam generators; and, if possible, no operator actions to reach the conditions for residual heat removal system activation. Furthermore, it was postulated that 2 of the 4 hot leg accumulators had a reduced initial water inventory (increased nitrogen inventory), allowing nitrogen to enter the primary system at a pressure of 15 bar and nearly preventing the heat transfer in the SGs ("passivating" the U-tubes). Due to this, the heat transfer regime in the intact steam generators changed remarkably. The primary system showed self-regulating effects and heat transfer improved again (reflux-condenser mode in the U-tube inlet region).
Gjoka, Xhorxhi; Gantier, Rene; Schofield, Mark
2017-01-20
The goal of this study was to adapt a batch mAb purification chromatography platform for continuous operation. The experiments and rationale used to convert from batch to continuous operation are described. Experimental data were used to design chromatography methods for continuous operation that would exceed the threshold for critical quality attributes and minimize the consumables required as compared to the batch mode of operation. Four unit operations comprising Protein A capture, viral inactivation, flow-through anion exchange (AEX), and mixed-mode cation exchange chromatography (MMCEX) were integrated across two Cadence BioSMB PD multi-column chromatography systems in order to process a 25 L volume of harvested cell culture fluid (HCCF) in less than 12 h. Transfer from batch to continuous operation resulted in an increase in productivity of the Protein A step from 13 to 50 g/L/h and of the MMCEX step from 10 to 60 g/L/h, with no impact on purification process performance in terms of contaminant removal (4.5 log reduction of host cell proteins, 50% reduction in soluble product aggregates) and overall chromatography process yield of recovery (75%). The increase in productivity, combined with continuous operation, reduced the resin volume required for Protein A and MMCEX chromatography by more than 95% compared to batch. The volume of AEX membrane required for flow-through operation was reduced by 74%. Moreover, the continuous process required 44% less buffer than an equivalent batch process. This significant reduction in consumables enables cost-effective, disposable, single-use manufacturing.
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
Using Multi-Core Systems for Rover Autonomy
NASA Technical Reports Server (NTRS)
Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.
2010-01-01
Task Objectives are: (1) Develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a SOA multi-core processor, (b) illustrate the performance improvements achieved, (c) demonstrate the adapted capabilities with rover hardware; (2) Target three high-level autonomy technologies: (a) two for onboard data analysis, (b) one for onboard command sequencing/planning; (3) Technologies identified as enabling for future missions; (4) Benefits will be measured along several metrics: (a) execution time / power requirements, (b) number of data products processed per unit time, (c) solution quality.
Validation Data and Model Development for Fuel Assembly Response to Seismic Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardet, Philippe; Ricciardi, Guillaume
2016-01-31
Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and wear and tear in the reactor and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.
Pretest prediction of Semiscale Test S-07-10B. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobbe, C A
A best-estimate prediction of Semiscale Test S-07-10B was performed at INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604°F, respectively.
Corletti, M.M.; Lau, L.K.; Schulz, T.L.
1993-12-14
The spent fuel pit of a pressurized water reactor (PWR) nuclear power plant has sufficient coolant capacity that a safety-rated cooling system is not required. A non-safety-rated combined cooling and purification system with redundant branches selectively provides simultaneous cooling and purification for the spent fuel pit, the refueling cavity, and the refueling water storage tank, and transfers coolant from the refueling water storage tank to the refueling cavity without it passing through the reactor core. Skimmers on the suction piping of the combined cooling and purification system eliminate the need for separate skimmer circuits with dedicated pumps. 1 figure.
Development and preliminary verification of the 3D core neutronic code: COCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, H.; Mo, K.; Li, W.
With its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear-related technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development at CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis of pressurized water reactors (PWR). (authors)
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-07-08
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
Optical properties of core-shell and multi-shell nanorods
NASA Astrophysics Data System (ADS)
Mokkath, Junais Habeeb; Shehata, Nader
2018-05-01
We report a first-principles time-dependent density functional theory study of the optical response modulations in bimetallic core-shell (Na@Al and Al@Na) and multi-shell (Al@Na@Al@Na and Na@Al@Na@Al: concentric shells of Al and Na alternate) nanorods. All of the core-shell and multi-shell configurations display highly enhanced absorption intensity with respect to the pure Al and Na nanorods, showing sensitivity to both composition and chemical ordering. Remarkably large spectral intensity enhancements were found in a couple of core-shell configurations, indicating that the optical response of bimetallic core-shell nanorods cannot always be treated as a simple average of the responses of the individual components. We believe that our theoretical results will be useful for promising applications that rely on aluminum-based plasmonic materials, such as solar cells and sensors.
Persistent aerial video registration and fast multi-view mosaicing.
Molina, Edgardo; Zhu, Zhigang
2014-05-01
Capturing aerial imagery at high resolutions often leads to very low frame rate video streams, well under full-motion video standards, due to bandwidth, storage, and cost constraints. Low frame rates make registration difficult when an aircraft is moving at high speed or when the global positioning system (GPS) contains large errors or fails. We present a method that takes advantage of persistent cyclic video data collections to perform online registration with drift correction. We split the persistent aerial imagery collection into individual cycles of the scene, identify and correct the registration errors on the first cycle in a batch operation, and then use the corrected base cycle as a reference pass to register and correct subsequent passes online. A set of multi-view panoramic mosaics is then constructed for each aerial pass for representation, presentation and exploitation of the 3D dynamic scene. These sets of mosaics are all in alignment with the reference cycle, allowing their direct use in change detection, tracking, and 3D reconstruction/visualization algorithms. Stereo viewing with adaptive baselines and varying view angles is realized by choosing a pair of mosaics from a set of multi-view mosaics. Further, the mosaics for the second and later passes can be generated and visualized online, as there is no further batch error correction.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms that are composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), so that it achieves better performance. This paper uses a three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing it with the Master-Slave and the Data Flow models.
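One minimal way to picture the combination of message passing with a multi-level pipeline is a chain of worker processes connected by queues, each level running one module of the algorithm and forwarding its result downstream. The Python sketch below uses multiprocessing queues as a stand-in for the DSP cores' message channels; it illustrates the structure only, not the authors' model or a DSP implementation.

from multiprocessing import Process, Queue

def double(x):
    return x * 2

def add_one(x):
    return x + 1

def stage(func, q_in, q_out):
    """One pipeline level: receive a message, process it, forward it downstream."""
    while True:
        item = q_in.get()
        if item is None:          # sentinel: propagate shutdown and exit
            q_out.put(None)
            break
        q_out.put(func(item))

if __name__ == "__main__":
    q01, q12, q_out = Queue(), Queue(), Queue()
    stages = [Process(target=stage, args=(double, q01, q12)),
              Process(target=stage, args=(add_one, q12, q_out))]
    for p in stages:
        p.start()
    for frame in [1, 2, 3]:       # stream of input "frames"
        q01.put(frame)
    q01.put(None)
    results = []
    while (item := q_out.get()) is not None:
        results.append(item)
    print(results)                # [3, 5, 7]
    for p in stages:
        p.join()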
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from the input to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first-principles model, and comparable to or better than the results of optimization based on a parameterized data-driven artificial neural network model.
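The trajectory parameterization used as the model input and output can be illustrated with a plain truncated Fourier projection: a sampled operating profile is compressed into a handful of coefficients. The basis choice, normalization and truncation length below are arbitrary illustrations, not those of the PDDF model.

import numpy as np

def fourier_coefficients(trajectory, n_terms):
    """Project a sampled trajectory onto a truncated Fourier basis on [0, 1).

    Returns (a0, a, b) such that the trajectory is approximated by
    a0 + sum_k a[k-1]*cos(2*pi*k*t) + b[k-1]*sin(2*pi*k*t).
    """
    n = len(trajectory)
    t = np.arange(n) / n
    a0 = trajectory.mean()
    a = np.array([2 * np.mean(trajectory * np.cos(2 * np.pi * k * t))
                  for k in range(1, n_terms + 1)])
    b = np.array([2 * np.mean(trajectory * np.sin(2 * np.pi * k * t))
                  for k in range(1, n_terms + 1)])
    return a0, a, b

# Example: a ramp-and-hold feed profile compressed into three harmonics.
profile = np.concatenate([np.linspace(0.0, 1.0, 50), np.ones(50)])
print(fourier_coefficients(profile, n_terms=3))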
Assessment for advanced fuel cycle options in CANDU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morreale, A.C.; Luxat, J.C.; Friedlander, Y.
2013-07-01
The possible options for advanced fuel cycles in CANDU reactors, including actinide burning options and thorium cycles, were explored and are feasible options to increase the efficiency of uranium utilization and help close the fuel cycle. The actinide burning TRUMOX approach uses a mixed oxide fuel of reprocessed transuranic actinides from PWR spent fuel blended with natural uranium in the CANDU-900 reactor. This system reduced actinide content by 35% and decreased natural uranium consumption by 24% over a PWR once-through cycle. The thorium cycles evaluated used two CANDU-900 units, a generator and a burner unit, along with a driver fuel feedstock. The driver fuels included plutonium reprocessed from PWR, plutonium reprocessed from CANDU, and low enriched uranium (LEU). All three cycles were effective options and reduced natural uranium consumption over a PWR once-through cycle. The LEU-driven system saw the largest reduction with a 94% savings, while the plutonium-driven cycles achieved 75% savings for PWR and 87% for CANDU. The high neutron economy, online fuelling and flexible compact fuel make the CANDU system an ideal reactor platform for many advanced fuel cycles.
Coupled Monte Carlo neutronics and thermal hydraulics for power reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernnat, W.; Buck, M.; Mattes, M.
The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full-core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. A second problem arises with the preparation of the corresponding temperature-dependent cross sections and thermal scattering laws. Only if these problems are solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper, a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark, using ATHLET for thermal-hydraulics, and for a generic Modular High Temperature Reactor, using THERMIX for thermal-hydraulics. (authors)
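The interpolation step can be sketched very simply: tabulate each cross section at a few library temperatures and interpolate to the local temperature supplied by the thermal-hydraulics code. The example below interpolates linearly in sqrt(T), a common choice for Doppler-broadened data; the abstract does not state which interpolation rule the authors actually use, and the numbers are hypothetical.

import numpy as np

def interpolate_xs(temp_k, lib_temps_k, lib_xs):
    """Interpolate a cross section to temp_k from libraries at a few temperatures.

    lib_temps_k : sorted 1-D array of library temperatures (K)
    lib_xs      : cross sections tabulated at those temperatures
    Interpolation is linear in sqrt(T); only a small set of library
    temperatures has to be generated in advance.
    """
    return np.interp(np.sqrt(temp_k), np.sqrt(lib_temps_k), lib_xs)

# Hypothetical numbers: libraries at 300 K, 600 K, 900 K for one nuclide and group.
print(interpolate_xs(565.0, np.array([300.0, 600.0, 900.0]),
                     np.array([21.4, 22.1, 22.6])))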
Neural simulations on multi-core architectures.
Eichner, Hubert; Klug, Tobias; Borst, Alexander
2009-01-01
Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high performance as well as standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e. user-transparent load balancing.
Batch Computed Tomography Analysis of Projectiles
2016-05-01
Projectiles are grouped according to the similarity of their components, and a graphical cluster analysis of the jacket/core radius profiles is discussed (e.g., 15 clusters of profiles, each plotted with the profiles it contains). Keywords: ballistic, armor, grouping, clustering.
Neural networks within multi-core optic fibers
Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael
2016-01-01
Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911
Noninvasive and Real-Time Plasmon Waveguide Resonance Thermometry
Zhang, Pengfei; Liu, Le; He, Yonghong; Zhou, Yanfei; Ji, Yanhong; Ma, Hui
2015-01-01
In this paper, the noninvasive and real-time plasmon waveguide resonance (PWR) thermometry is reported theoretically and demonstrated experimentally. Owing to the enhanced evanescent field and thermal shield effect of its dielectric layer, a PWR thermometer permits accurate temperature sensing and has a wide dynamic range. A temperature measurement sensitivity of 9.4 × 10−3 °C is achieved and the thermo optic coefficient nonlinearity is measured in the experiment. The measurement of water cooling processes distributed in one dimension reveals that a PWR thermometer allows real-time temperature sensing and has potential to be applied for thermal gradient analysis. Apart from this, the PWR thermometer has the advantages of low cost and simple structure, since our transduction scheme can be constructed with conventional optical components and commercial coating techniques. PMID:25871718
Isotopic Details of the Spent Catawba-1 MOX Fuel Rods at ORNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Ronald James
The United States Department of Energy funded Shaw/AREVA MOX Services LLC to fabricate four MOX Lead Test Assemblies (LTA) from weapons-grade plutonium. A total of four MOX LTAs (including MX03) were irradiated in the Catawba Nuclear Station (Unit 1) Catawba-1 PWR, which operated at a total thermal power of 3411 MWt and had a core with 193 total fuel assemblies. The MOX LTAs were irradiated along with Duke Energy's irradiation of eight Westinghouse Next Generation Fuel (NGF) LEU LTAs (ref. 1) and the remaining 181 LEU fuel assemblies. The MX03 LTA was irradiated in the Catawba-1 PWR core (refs. 2,3) during cycles C-16 and C-17. C-16 began on June 5, 2005, and ended on November 11, 2006, after 499 effective full power days (EFPDs). C-17 started on December 29, 2006 (after a shutdown of 48 days), and continued for 485 EFPDs. The MX03 and three other MOX LTAs (and other fuel assemblies) were discharged at the end of C-17 on May 3, 2008. The design of the MOX LTAs was based on the (Framatome ANP, Inc.) Mark-BW/MOX1 17 × 17 fuel assembly design (refs. 4,5,6) for use in Westinghouse PWRs, but with MOX fuel rods with three Pu loading ranges: the nominal Pu loadings are 4.94 wt%, 3.30 wt%, and 2.40 wt%, respectively, for high, medium, and low Pu content. The Mark-BW/MOX1 (MOX LTA) fuel assembly design is the same as the Advanced Mark-BW fuel assembly design but with the LEU fuel rods replaced by MOX fuel rods (ref. 5). The fabrication of the fuel pellets and fuel rods for the MOX LTAs was performed at the Cadarache facility in France, with the fabrication of the LTAs performed at the MELOX facility, also in France.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzgrewe, F.; Hegedues, F.; Paratte, J.M.
1995-03-01
The light water reactor BOXER code was used to determine the fast azimuthal neutron fluence distribution at the inner surface of the reactor pressure vessel after the tenth cycle of a pressurized water reactor (PWR). Using a cross-section library in 45 groups, fixed-source calculations in transport theory and x-y geometry were carried out to determine the fast azimuthal neutron flux distribution at the inner surface of the pressure vessel for four different cycles. From these results, the fast azimuthal neutron fluence after the tenth cycle was estimated and compared with the results obtained from scraping test experiments. In these experiments, small samples of material were taken from the inner surface of the pressure vessel. The fast neutron fluence was then determined from the measured activity of the samples. The BOXER and scraping test results show maximal differences of 15%, which is very good, considering the factor of 10³ neutron attenuation between the reactor core and the pressure vessel. To compare the BOXER results with an independent code, the 21st cycle of the PWR was also calculated with the TWODANT two-dimensional transport code, using the same group structure and cross-section library. Deviations in the fast azimuthal flux distribution were found to be <3%, which verifies the accuracy of the BOXER results.
Korean standard nuclear plant ex-vessel neutron dosimetry program Ulchin 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duo, J.I.; Chen, J.; Kulesza, J.A.
2011-07-01
A comprehensive ex-vessel neutron dosimetry (EVND) surveillance program has been deployed in 16 pressurized water reactors (PWR) in South Korea, and EVND dosimetry sets have already been installed and analyzed in Westinghouse reactor designs. In this paper, the unique features of the design, training, and installation in the Korean standard nuclear plant (KSNP) Ulchin Unit 4 are presented. Ulchin Unit 4 Cycle 9 represents the first dosimetry analyzed from the EVND design deployed in KSNP plants: Yonggwang Units 3 through 6 and Ulchin Units 3 through 6. KSNP's cavity configuration precludes a conventional installation from the cavity floor. The solution, requiring the installation crew to access the cavity at an elevation of the active core, places a premium on rapid installation due to high area dose rates. Numerous geometrical features warranted the use of a detailed design in true 3D mechanical design software to control interferences. A full-size training mockup maximized the crew's ability to correctly install the instrument in minimum time. The analysis of the first dosimetry set shows good agreement between measurement and calculation within the associated uncertainties. A complete EVND system has been successfully designed, installed, and analyzed for a KSNP plant. Current and future EVND analyses will continue supporting the successful operation of PWR units in South Korea. (authors)
Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Chun-Yi
By 2004, microprocessor design focused on multicore scaling—increasing the number of cores per die in each generation—as the primary strategy for improving performance. These multicore processors typically equip multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories like non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multicore, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources in automation is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrent throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrent throttling and a novel thread mapping algorithm to manage thread resources and improve energy efficient execution in multi-core, NUMA systems.
Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao
2009-05-20
Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like General Atomic and Molecular Electronic Structure (GAMESS), used for ab-initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and would be a guideline for both hardware designers and application programmers. In this paper we benchmark the GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware based adaptation algorithm on GAMESS in such a multi-core environment.
Scheduling multicore workload on shared multipurpose clusters
NASA Astrophysics Data System (ADS)
Templon, J. A.; Acosta-Silva, C.; Flix Molina, J.; Forti, A. C.; Pérez-Calero Yzquierdo, A.; Starink, R.
2015-12-01
With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent, by implementing dynamically-sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper further includes example results from use of the system in production, as well as measurements on the dependence of performance (especially the ramp-up in throughput for multicore jobs) on node size and job size.
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the trend of UWB Radar-based moving target detection. Robust PCA models are criticized for being batch-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods demonstrate the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
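A minimal sketch of the overlapping-window idea is given below in Python/NumPy. The rpca step here is a crude truncated-SVD plus soft-thresholding split, not the IALM solver used in the paper, and the window and step sizes are placeholder values.

    import numpy as np

    def simple_rpca(D, rank=1, lam=None, n_iter=25):
        # Crude low-rank + sparse split: alternate a truncated-SVD background fit
        # with soft-thresholding of the residual (a stand-in for IALM RPCA).
        lam = lam if lam is not None else 1.0 / np.sqrt(max(D.shape))
        S = np.zeros_like(D)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # low-rank background
            R = D - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse foreground
        return L, S

    def overlapping_window_separation(frames, window=32, step=8):
        # Keep a fixed-size buffer of the most recent radar frames (columns),
        # rerun the decomposition as new frames arrive, and emit the sparse
        # (moving-target) part of only the newest frames.
        buffer, foreground = [], []
        for i, frame in enumerate(frames):
            buffer.append(np.asarray(frame, dtype=float))
            if len(buffer) > window:
                buffer.pop(0)
            if len(buffer) == window and (i + 1) % step == 0:
                _, S = simple_rpca(np.column_stack(buffer))
                foreground.append(S[:, -step:])
        return foreground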
A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.
Peng, Chao; Sahani, Sandip; Rushing, John
2017-10-01
We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
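The batch-and-merge idea can be illustrated with a small CPU-side sketch (Python with NumPy/SciPy); it is a serial stand-in for the GPU kernels, with a union-find replacing the paper's memory-efficient connection structure.

    import numpy as np
    from scipy import ndimage

    def label_in_batches(volume, batch_size):
        # volume: binary array of shape (time, y, x).  Label connected feature
        # regions batch by batch, then merge labels that touch across the
        # boundary between consecutive batches.
        parent = {}

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        batch_labels = []
        for start in range(0, volume.shape[0], batch_size):
            labels, n = ndimage.label(volume[start:start + batch_size])
            batch_labels.append(labels)
            for k in range(1, n + 1):
                parent[(len(batch_labels) - 1, k)] = (len(batch_labels) - 1, k)
        for i in range(len(batch_labels) - 1):
            last, first = batch_labels[i][-1], batch_labels[i + 1][0]
            touching = (last > 0) & (first > 0)
            for a, b in set(zip(last[touching], first[touching])):
                union((i, int(a)), (i + 1, int(b)))
        return batch_labels, parent  # regions sharing a union-find root are one feature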
Structural Integrity of Water Reactor Pressure Boundary Components.
1981-02-20
Fatigue crack growth was studied as a function of environment and load waveform parameters. A theory of the influence of dissolved oxygen content on fatigue crack growth in simulated PWR coolant is presented. One of the variables examined did not seem to influence the data, which were produced at a load ratio of 0.2 in a simulated PWR coolant environment. Test results for A106 Grade C piping are also reported.
Research on JD e-commerce's delivery model
NASA Astrophysics Data System (ADS)
Fan, Zhiguo; Ma, Mengkun; Feng, Chaoying
2017-03-01
E-commerce enterprises represented by JD have made a great contribution to the economic growth and economic development of our country. Delivery, as an important part of logistics, is of self-evident importance. By establishing efficient and complete self-built logistics systems and building good cooperation models with third-party logistics enterprises, e-commerce enterprises have created their own logistics advantages. Characterized by many small batches, e-commerce logistics is much more complicated than that of traditional transactions, so it is not easy to decide which delivery model e-commerce enterprises should adopt. Taking e-commerce logistics delivery as the main research object, this essay aims to find a delivery model more suitable for JD's development.
Bonal, Niteesh Singh; Paramkusam, Bala Ramudu; Basudhar, Prabir Kumar
2018-06-05
The study aims to enhance the efficacy of surfactants using salt and multi-walled carbon nanotubes (MWCNT) for washing used engine oil (UEO) contaminated soil and to compare the geotechnical properties of the contaminated soil before and after washing (batch washing and soil washing). Batch washing of the contaminated soil established the efficacy of the cleaning process. Contamination of soil with the hydrocarbons present in UEO significantly affects its engineering properties, manifesting in no plasticity and low specific gravity; the corresponding optimum moisture content is 6.42% and the maximum dry density is 1.770 g/cc, both considerably lower than those of the uncontaminated soil. The results also showed a decrease in the cohesion intercept and an increase in the friction angle. The adopted soil washing technique resulted in an increase in specific gravity from 1.85 to 2.13 and in cohesion from 0.443 to 1.04 kg/cm² and a substantial decrease in the friction angle from 31.16° to 17.14° when washed with the most efficient combination of SDS surfactant along with sodium meta-silicate (salt) and MWCNT. The effectiveness of washing the contaminated soil by batch processing and soil washing techniques has been established qualitatively. The efficiency of surfactant treatment was observed to increase significantly with the addition of salt and MWCNT. Copyright © 2018 Elsevier B.V. All rights reserved.
Analysis of the Impedance Resonance of Piezoelectric Multi-Fiber Composite Stacks
NASA Technical Reports Server (NTRS)
Sherrit, S.; Djrbashian, A.; Bradford, S C
2013-01-01
Multi-Fiber Composites™ (MFCs) produced by Smart Materials Corp behave essentially like thin planar stacks where each piezoelectric layer is composed of a multitude of fibers. We investigate the suitability of using previously published inversion techniques for the impedance resonances of monolithic co-fired piezoelectric stacks to determine the complex material constants of the MFC™ from impedance data. The impedance equations examined in this paper are those based on the derivation. The utility of resonance techniques to invert the impedance data and determine the small signal complex material constants is presented for a series of MFCs. The technique was applied to actuators with different geometries, and the real coefficients were determined to be similar within changes of the boundary conditions due to the change of geometry. The scatter in the imaginary coefficients was found to be larger. The technique was also applied to the same actuator type manufactured in different batches with some design changes in the non-active portion of the actuator, and differences in the dielectric and the electromechanical coupling between the two batches were easily measurable. It is interesting to note that the strain predicted by small signal impedance analysis is much lower than the high-field strains. Since the model is based on material properties rather than circuit constants, it could be used for the direct evaluation of specific aging or degradation mechanisms in the actuator as well as batch sorting and adjustment of manufacturing processes.
Stormwater Pollution Prevention Plan - TA-60 Asphalt Batch Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, Leonard Frank
This Storm Water Pollution Prevention Plan (SWPPP) was developed in accordance with the provisions of the Clean Water Act (33 U.S.C. §§1251 et seq., as amended), and the Multi-Sector General Permit for Storm Water Discharges Associated with Industrial Activity (U.S. EPA, June 2015) issued by the U.S. Environmental Protection Agency (EPA) for the National Pollutant Discharge Elimination System (NPDES), and using the industry specific permit requirements for Sector P-Land Transportation and Warehousing as a guide. This SWPPP applies to discharges of stormwater from the operational areas of the TA-60-01 Asphalt Batch Plant at Los Alamos National Laboratory. Los Alamos National Laboratory (also referred to as LANL or the “Laboratory”) is owned by the Department of Energy (DOE), and is operated by Los Alamos National Security, LLC (LANS). Throughout this document, the term “facility” refers to the TA-60 Asphalt Batch Plant and associated areas. The current permit expires at midnight on June 4, 2020.
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
2009-09-01
The system comprises several TFLOPS of PlayStation 3 (PS3) nodes with IBM Cell Broadband Engine multi-core processors and 15 dual-quad Xeon head nodes; the report also describes the interconnect fabric, the information management scheme used for parallelization and streaming, and the resulting performance.
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-01-01
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722
Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.
Steimers, A; Farnung, W; Kohl-Bareis, M
2016-01-01
We demonstrate an efficient algorithm for the temporal and spatial based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of necessary calculations, facilitates a multi-core and many-core implementation of the speckle analysis and enables an independence of temporal or spatial resolution and SNR. The new algorithm was evaluated for both spatial and temporal based analysis of speckle patterns with different image sizes and amounts of recruited pixels as sequential, multi-core and many-core code.
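For reference, the underlying contrast computation is straightforward; the baseline below (Python with NumPy/SciPy, a naive implementation rather than the optimized algorithm of the paper) computes the spatial and temporal speckle contrast K = sigma/mean.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_speckle_contrast(img, win=7):
        # Spatial LASCA: K = std / mean over a sliding win x win window.
        img = img.astype(np.float64)
        mean = uniform_filter(img, win)
        mean_sq = uniform_filter(img * img, win)
        var = np.maximum(mean_sq - mean * mean, 0.0)
        return np.sqrt(var) / np.maximum(mean, 1e-12)

    def temporal_speckle_contrast(stack):
        # Temporal LASCA: contrast per pixel over a stack of frames (t, y, x).
        stack = stack.astype(np.float64)
        return stack.std(axis=0) / np.maximum(stack.mean(axis=0), 1e-12)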
NASA Astrophysics Data System (ADS)
Aksenov, A. G.; Chechetkin, V. M.
2018-04-01
Most of the energy released in the gravitational collapse of the cores of massive stars is carried away by neutrinos. Neutrinos play a pivotal role in explaining core-collape supernovae. Currently, mathematical models of the gravitational collapse are based on multi-dimensional gas dynamics and thermonuclear reactions, while neutrino transport is considered in a simplified way. Multidimensional gas dynamics is used with neutrino transport in the flux-limited diffusion approximation to study the role of multi-dimensional effects. The possibility of large-scale convection is discussed, which is interesting both for explaining SN II and for setting up observations to register possible high-energy (≳10MeV) neutrinos from the supernova. A new multi-dimensional, multi-temperature gas dynamics method with neutrino transport is presented.
A grid generation system for multi-disciplinary design optimization
NASA Technical Reports Server (NTRS)
Jones, William T.; Samareh-Abolhassani, Jamshid
1995-01-01
A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.
Allesø, Morten; Holm, Per; Carstensen, Jens Michael; Holm, René
2016-05-25
Surface topography, in the context of surface smoothness/roughness, was investigated by the use of an image analysis technique, MultiRay™, related to photometric stereo, on different tablet batches manufactured either by direct compression or roller compaction. In the present study, oblique illumination of the tablet (darkfield) was considered and the area of cracks and pores in the surface was used as a measure of tablet surface topography; the higher a value, the rougher the surface. The investigations demonstrated a high precision of the proposed technique, which was able to rapidly (within milliseconds) and quantitatively measure the obtained surface topography of the produced tablets. Compaction history, in the form of applied roll force and tablet punch pressure, was also reflected in the measured smoothness of the tablet surfaces. Generally it was found that a higher degree of plastic deformation of the microcrystalline cellulose resulted in a smoother tablet surface. This altogether demonstrated that the technique provides the pharmaceutical developer with a reliable, quantitative response parameter for visual appearance of solid dosage forms, which may be used for process and ultimately product optimization. Copyright © 2015 Elsevier B.V. All rights reserved.
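The "area of cracks and pores" measure can be imitated with a simple darkfield-image threshold; the sketch below (Python/NumPy, with an arbitrary mean-plus-two-sigma threshold standing in for whatever segmentation the MultiRay™ system actually applies) returns the fractional defect area as a roughness score.

    import numpy as np

    def defect_area_fraction(darkfield_img):
        # Under oblique (darkfield) illumination, cracks and pores scatter light
        # and appear bright.  Report the fraction of pixels above a brightness
        # threshold as a simple surface-topography score (higher = rougher).
        img = np.asarray(darkfield_img, dtype=float)
        threshold = img.mean() + 2.0 * img.std()   # arbitrary illustrative cutoff
        return float(np.mean(img > threshold))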
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2013-09-01
Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
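The core of such a simulation is one FFT per focus position; a stripped-down sketch is shown below (Python/NumPy, single-threaded, with a simple quadratic defocus phase; it does not reproduce GelOE's equivalent-t-number normalization or its OpenMP/cuFFT back ends).

    import numpy as np

    def incoherent_psf(n=512, defocus_waves=0.0, gaussian=False):
        # Build a circular (optionally Gaussian-apodized) pupil, apply a
        # quadratic defocus phase, and FFT to obtain the incoherent PSF.
        x = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(x, x)
        r2 = X * X + Y * Y
        pupil = (r2 <= 1.0).astype(float)
        if gaussian:
            pupil *= np.exp(-r2)                      # Gaussian apodization
        phase = np.exp(1j * 2.0 * np.pi * defocus_waves * r2)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil * phase)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    # Through-focus stack: one FFT per defocus value (this FFT is the part that
    # benefits from OpenMP threading or a cuFFT-equipped GPU in the paper).
    stack = [incoherent_psf(defocus_waves=w) for w in (-1.0, -0.5, 0.0, 0.5, 1.0)]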
Preparation of microcapsules with self-microemulsifying core by a vibrating nozzle method.
Homar, Miha; Suligoj, Dasa; Gasperlin, Mirjana
2007-02-01
Incorporation of drugs in self-microemulsifying systems (SMES) offers several advantages for their delivery, the main one being faster drug dissolution and absorption. Formulation of SMES in solid dosage forms can be difficult and, to date, most SMES are applied in liquid dosage form or soft gelatin capsules. This study has explored the incorporation of SMES in microcapsules, which could then be used for formulation of solid dosage forms. An Inotech IE-50 R encapsulator equipped with a concentric nozzle was used to produce alginate microcapsules with a self-microemulsifying core. Retention of the core phase was improved by optimization of encapsulator parameters and modification of the shell forming phase and hardening solution. The mean encapsulation efficiency of final batches was more than 87%, which resulted in 0.07% drug loading. It was demonstrated that production of microcapsules with a self-microemulsifying core is possible and that the process is stable and reproducible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wichner, R.P.
The program was designed to determine fission product and aerosol release rates from irradiated fuel under accident conditions, to identify the chemical forms of the released material, and to correlate the results and the experimental and specimen conditions with the data from related experiments. These tests of PWR fuel were conducted, and fuel specimen and test operating data are presented. The nature and rate of fission product vapor interaction with aerosols were studied. Aerosol deposition rates and transport in the reactor vessel during LWR core-melt accidents were studied. The Nuclear Safety Pilot Plant is dedicated to developing an expanded data base on the behavior of aerosols generated during a severe accident.
Pretest and posttest calculations of Semiscale Test S-07-10D with the TRAC computer program. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duerre, K.H.; Cort, G.E.; Knight, T.D.
The Transient Reactor Analysis Code (TRAC) developed at the Los Alamos National Laboratory was used to predict the behavior of the small-break experiment designated Semiscale S-07-10D. This test simulates a 10 per cent communicative cold-leg break with delayed Emergency Core Coolant injection and blowdown of the broken-loop steam generator secondary. Both pretest calculations that incorporated measured initial conditions and posttest calculations that incorporated measured initial conditions and measured transient boundary conditions were completed. The posttest calculated parameters were generally between those obtained from pretest calculations and those from the test data. The results are strongly dependent on depressurization rate and, hence, on break flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brzoska, B.; Depisch, F.; Fuchs, H.P.
To analyze the influence of prepressurization on fuel rod behavior, a parametric study has been performed that considers the effects of as-fabricated fuel rod internal prepressure on the normal operation and postulated loss-of-coolant accident (LOCA) rod behavior of a 1300-MW(electric) Kraftwerk Union (KWU) standard pressurized water reactor nuclear power plant. A variation of the prepressure in the range from 15 to 35 bars has only a slight influence on normal operation behavior. Considering the LOCA behavior, only a small temperature increase results from prepressure reduction, while the core-wide straining behavior is improved significantly. The KWU prepressurization takes both conditions into account.
Moving from Batch to Field Using the RT3D Reactive Transport Modeling System
NASA Astrophysics Data System (ADS)
Clement, T. P.; Gautam, T. R.
2002-12-01
The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs the operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D reaction package. Further, a utility code known as BATCHRXN allows users to independently test and debug their reaction package. To analyze a new reaction system at a batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of Tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of Tetrachloroethane (PCA) and its daughter products. References: 1) Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report PNNL-SA-28967, September 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
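The kind of batch-scale check that BATCHRXN performs on a first-order sequential chain can be reproduced in a few lines of Python/SciPy (the actual reaction package is a Fortran-90 subroutine; the rate constants below are purely illustrative, not calibrated values).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical first-order rate constants (1/day) for the sequential chain
    # PCE -> TCE -> DCE -> VC; values are illustrative, not site data.
    k = {"PCE": 0.05, "TCE": 0.03, "DCE": 0.02, "VC": 0.01}

    def chain(t, c):
        pce, tce, dce, vc = c
        return [-k["PCE"] * pce,
                k["PCE"] * pce - k["TCE"] * tce,
                k["TCE"] * tce - k["DCE"] * dce,
                k["DCE"] * dce - k["VC"] * vc]

    sol = solve_ivp(chain, (0.0, 365.0), [1.0, 0.0, 0.0, 0.0],
                    t_eval=np.linspace(0.0, 365.0, 100))
    print(sol.y[:, -1])   # concentrations after one year of batch reaction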
Modelling of Tc migration in an un-oxidized fractured drill core from Äspö, Sweden
NASA Astrophysics Data System (ADS)
Huber, F. M.; Totskiy, Y.; Montoya Garcia, V.; Enzmann, F.; Trumm, M.; Wenka, A.; Geckeis, H.; Schaefer, T.
2015-12-01
The radionuclide retention of redox sensitive radionuclides (e.g. Pu, Np, U, Tc) in crystalline host rock greatly depends on the rock matrix and the rock redox capacity. Preservation of drill cores concerning oxidation is therefore of paramount importance to reliably predict the near-natural radionuclide retention properties. Here, experimental results of HTO and Tc laboratory migration experiments in a naturally single fractured Äspö un-oxidized drill core are modelled using two different 2D models. Both models employ geometrical information obtained by μ-computed tomography (μCT) scanning of the drill core. The models differ in geometrical complexity meaning the first model (PPM-MD) consists of a simple parallel plate with a porous matrix adjacent to the fracture whereas the second model (MPM) uses the mid-plane of the 3D fracture only (no porous matrix). Simulation results show that for higher flow rates (Peclet number > 1), the MPM satisfactorily describes the HTO breakthrough curves (BTC) whereas the PPM-MD model nicely reproduces the HTO BTC for small Pe numbers (<1). These findings clearly highlight the influence of fracture geometry/flow field complexity on solute transport for Pe numbers > 1 and the dominating effect of matrix diffusion for Peclet numbers < 1. Retention of Tc is modelled using a simple Kd-approach in case of the PPM-MD and including 1st order sorptive reduction/desorption kinetics in case of the MPM. Batch determined sorptive reduction/desorption kinetic rates and Kd values for Tc on non-oxidized Äspö diorite are used in the model and compared to best fit values. By this approach, the transferability of kinetic data concerning sorptive reduction determined in static batch experiments to dynamic transport experiments is examined.
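A batch analogue of the 1st-order sorptive reduction/desorption kinetics used in the MPM can be written as two coupled ODEs; in the sketch below (Python/SciPy) the forward and reverse rate constants are placeholders, not the measured Äspö values.

    import numpy as np
    from scipy.integrate import solve_ivp

    def tc_batch(t, y, k_f=0.1, k_r=0.005):
        # Closed-batch kinetics: dC/dt = -k_f*C + k_r*S, dS/dt = +k_f*C - k_r*S,
        # where C is dissolved Tc and S is the sorbed/reduced fraction.
        C, S = y
        return [-k_f * C + k_r * S, k_f * C - k_r * S]

    sol = solve_ivp(tc_batch, (0.0, 500.0), [1.0, 0.0],
                    t_eval=np.linspace(0.0, 500.0, 51))
    kd_apparent = sol.y[1, -1] / sol.y[0, -1]   # tends to k_f/k_r at equilibrium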
NASA Astrophysics Data System (ADS)
Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide
2015-09-01
The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
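As a reminder of the kernel arithmetic that dominates such benchmarks, a minimal 1D density-summation sketch follows (Python/NumPy; production SPH codes like the one benchmarked use neighbor lists and OpenMP or CUDA parallelization, which this brute-force version does not).

    import numpy as np

    def cubic_spline_kernel(r, h):
        # Standard 1D cubic-spline SPH kernel (normalization 2/(3h)).
        q = np.abs(r) / h
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
             np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return (2.0 / (3.0 * h)) * w

    def sph_density(x, mass, h):
        # Brute-force density summation rho_i = sum_j m_j W(x_i - x_j, h),
        # vectorized over all particle pairs.
        dx = x[:, None] - x[None, :]
        return (mass[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)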
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
The use of a computer program for the analysis of PWR-type core neutronic design parameters has been carried out in several previous studies. These studies included validation of the computer code against neutronic parameter values obtained from measurements and benchmark calculations. In this study, the AP1000 first cycle core radial power peaking factor validation and analysis were performed using the CITATION module of the SRAC2006 computer code. The computer code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through ¼-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code, and also the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without Rod Cluster Control Assembly (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA insertion (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor in the fuel assembly is assumed to be approximately 1.406. The analysis of the calculation results showed that the 2-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, are still below the safety limit (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core in its first operating cycle can therefore be considered safe.
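The quantity being checked against the limit is simply the maximum-to-average power ratio; a minimal illustration (Python/NumPy, with a made-up relative power map rather than the SRAC2006/CITATION output) is:

    import numpy as np

    # Hypothetical assembly-wise relative power map (zeros mark reflector cells).
    power = np.array([[0.00, 0.92, 1.05],
                      [0.98, 1.21, 1.10],
                      [1.02, 1.14, 0.00]])
    fueled = power[power > 0.0]
    peaking_factor = fueled.max() / fueled.mean()
    print(round(peaking_factor, 3))  # compare against the ~1.798 safety limit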
Amaya, N; Irfan, M; Zervas, G; Nejabati, R; Simeonidou, D; Sakaguchi, J; Klaus, W; Puttnam, B J; Miyazawa, T; Awaji, Y; Wada, N; Henning, I
2013-04-08
We present the first elastic, space division multiplexing, and multi-granular network based on two 7-core MCF links and four programmable optical nodes able to switch traffic utilising the space, frequency and time dimensions with over 6000-fold bandwidth granularity. Results show good end-to-end performance on all channels with power penalties between 0.75 dB and 3.7 dB.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
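A rough process-based analogue of the parallel assignment step is sketched below (Python with concurrent.futures); the paper's implementation is Java with transactional shared memory, so this only illustrates the work split, not their synchronization scheme.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def _assign(args):
        # Assign a chunk of rows to the nearest centroid (runs in a worker process).
        chunk, centroids = args
        d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def parallel_kmeans(X, k, n_iter=20, n_workers=4, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        chunks = np.array_split(X, n_workers)
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            for _ in range(n_iter):
                labels = np.concatenate(list(
                    pool.map(_assign, [(c, centroids) for c in chunks])))
                for j in range(k):                      # centroid update (serial)
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(axis=0)
        return centroids, labels

    if __name__ == "__main__":
        X = np.random.default_rng(1).normal(size=(10000, 8))
        centroids, labels = parallel_kmeans(X, k=5)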
Impact of Reprocessed Uranium Management on the Homogeneous Recycling of Transuranics in PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youinou, Gilles J.
This article presents the results of a neutronics analysis related to the homogeneous recycling of transuranics (TRU) in PWRs with a MOX fuel using enriched uranium instead of depleted uranium. It also addresses an often, if not always, overlooked aspect related to the recycling of TRU in PWRs, namely the use of reprocessed uranium. From a neutronics point of view, it is possible to multi-recycle the entirety of the plutonium with or without neptunium and americium in a PWR fleet using MOX-EU fuel in between one third and two thirds of the fleet. Recycling neptunium and americium with plutonium significantly decreases the decay heat of the waste stream between 100 and 1,000 years compared to those of an open fuel cycle or when only plutonium is recycled. The uranium present in MOX-EU used fuel still contains a significant amount of uranium-235, and recycling it makes a major difference on the natural uranium needs. For example, a PWR fleet recycling its plutonium, neptunium and americium in MOX-EU needs 28 percent more natural uranium than a reference UO2 open cycle fleet generating the same energy if the reprocessed uranium is not recycled, and 19 percent less if the reprocessed uranium is recycled back in the reactors, i.e. a 47 percent difference.
Focal ratio degradation in lightly fused hexabundles
NASA Astrophysics Data System (ADS)
Bryant, J. J.; Bland-Hawthorn, J.; Fogarty, L. M. R.; Lawrence, J. S.; Croom, S. M.
2014-02-01
We are now moving into an era where multi-object wide-field surveys, which traditionally use single fibres to observe many targets simultaneously, can exploit compact integral field units (IFUs) in place of single fibres. Current multi-object integral field instruments such as Sydney-AAO Multi-object Integral field spectrograph have driven the development of new imaging fibre bundles (hexabundles) for multi-object spectrographs. We have characterized the performance of hexabundles with different cladding thicknesses and compared them to that of the same type of bare fibre, across the range of fill fractions and input f-ratios likely in an IFU instrument. Hexabundles with 7-cores and 61-cores were tested for focal ratio degradation (FRD), throughput and cross-talk when fed with inputs from F/3.4 to >F/8. The five 7-core bundles have cladding thickness ranging from 1 to 8 μm, and the 61-core bundles have 5 μm cladding. As expected, the FRD improves as the input focal ratio decreases. We find that the FRD and throughput of the cores in the hexabundles match the performance of single fibres of the same material at low input f-ratios. The performance results presented can be used to set a limit on the f-ratio of a system based on the maximum loss allowable for a planned instrument. Our results confirm that hexabundles are a successful alternative for fibre imaging devices for multi-object spectroscopy on wide-field telescopes and have prompted further development of hexabundle designs with hexagonal packing and square cores.
Ni, Yongnian; Zhang, Liangsheng; Churchill, Jane; Kokot, Serge
2007-06-15
In this paper, chemometrics methods were applied to resolve the high performance liquid chromatography (HPLC) fingerprints of complex, many-component substances to compare samples from a batch from a given manufacturer, or from those of different producers. As an example of such complex substances, we used a common Chinese traditional medicine, Huoxiang Zhengqi Tincture (HZT), for this research. Twenty-one samples, each representing a separate HZT production batch from one of three manufacturers, were analyzed by HPLC with the aid of a diode array detector (DAD). An Agilent Zorbax Eclipse XDB-C18 column with an Agilent Zorbax high pressure reliance cartridge guard-column were used. The mobile phase consisted of water (A) and methanol (B) with a gradient program of 25-65% (v/v, B) during 0-30 min, 65-55% (v/v, B) during 30-35 min and 55-100% (v/v, B) during 35-60 min (flow rate, 1.0 ml min⁻¹; injection volume, 20 μl; column temperature, ambient). The detection wavelength was adjusted for maximum sensitivity at different time periods. A peak area matrix of 21 objects × 14 HPLC variables was obtained by sampling each chromatogram at 14 common retention times. Similarities were then calculated to discriminate the batch-to-batch samples and also, a more informative multi-criteria decision making methodology (MCDM), PROMETHEE and GAIA, was applied to obtain more information from the chromatograms in order to rank and compare the complex HZT profiles. The results showed that with the MCDM analysis, it was possible to match and discriminate correctly the batch samples from the three different manufacturers. Fourier transform infrared (FT-IR) spectra taken from samples from several batches were compared by the common similarity method with the HPLC results. It was found that the FT-IR spectra did not discriminate the samples from the different batches.
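The "similarities" referred to are typically congruence (cosine) or correlation coefficients between fingerprint vectors; a small sketch follows (Python/NumPy, with randomly generated stand-in peak areas instead of the real 21 × 14 matrix).

    import numpy as np

    def congruence(a, b):
        # Cosine/congruence similarity between two peak-area fingerprints,
        # one of the common similarity measures for chromatographic profiles.
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical 21 x 14 peak-area matrix (batches x common peaks).
    rng = np.random.default_rng(0)
    areas = rng.lognormal(mean=2.0, sigma=0.3, size=(21, 14))
    reference = areas.mean(axis=0)                 # mean fingerprint as reference
    similarities = np.array([congruence(row, reference) for row in areas])
    print(similarities.round(3))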
Ren, Xi-Dong; Chen, Xu-Sheng; Tang, Lei; Zeng, Xin; Wang, Liang; Mao, Zhong-Gui
2015-11-01
The introduction of an environmental stress, an acidic pH shock, successfully addressed a common deficiency in ε-PL production, viz. the distinct decline of ε-PL productivity in the feeding phase of fed-batch fermentation. To unravel the underlying mechanism, we comparatively studied the physiological changes of Streptomyces sp. M-Z18 during fed-batch fermentations with the pH shock strategy (PS) and the pH non-shock strategy (PNS). Morphology investigation showed that pellet-shape change was negligible throughout both fermentations. In addition, the distribution of pellet size rarely changed in the PS, whereas pellet size and number decreased substantially with time in the PNS. This was consistent with the ε-PL productivity observed with both strategies, demonstrating that morphology could be used as a predictor of ε-PL productivity during fed-batch fermentation. Furthermore, a second growth phase occurred in the PS after pH shock, followed by the re-appearance of live mycelia in the dead core of the pellets. Meanwhile, mycelial respiration and key enzymes in the central metabolic and ε-PL biosynthetic pathways were strengthened overall until the end of the fed-batch fermentation. As a result, the physiological changes induced by the acidic pH shock synergistically and permanently contributed to the stimulation of ε-PL productivity. However, this second growth phase and the re-appearance of live mycelia were absent in the PNS. These results indicated that introducing a short-term suppression of mycelial physiological metabolism can guarantee long-term high ε-PL productivity.
A strategy of gene overexpression based on tandem repetitive promoters in Escherichia coli.
Li, Mingji; Wang, Junshu; Geng, Yanping; Li, Yikui; Wang, Qian; Liang, Quanfeng; Qi, Qingsheng
2012-02-06
For metabolic engineering, many rate-limiting steps may exist in the pathways that accumulate the target metabolites. Increasing the copy number of the desired genes in these pathways is a general method to solve the problem, for example by employing a multi-copy plasmid-based expression system. However, this method may bring genetic instability, structural instability and metabolic burden to the host, while integrating the desired gene into the chromosome may cause inadequate transcription or expression. In this study, we developed a strategy for obtaining gene overexpression by engineering promoter clusters consisting of multiple core-tac-promoters (MCPtacs) in tandem. Through a uniquely designed in vitro assembling process, a series of promoter clusters were constructed. The transcription strength of these promoter clusters showed a stepwise enhancement with the increase of the tandem repeat number until it reached the critical value of five. Application of the MCPtacs promoter clusters in polyhydroxybutyrate (PHB) production proved that it was efficient. Integration of the phaCAB genes with the 5CPtacs promoter cluster resulted in an engineered E. coli that can accumulate PHB to 23.7% of cell dry weight in batch cultivation. The transcription strength of the MCPtacs promoter cluster can be greatly improved by increasing the tandem repeat number of the core-tac-promoter. By integrating the desired gene together with the MCPtacs promoter cluster into the chromosome of E. coli, we can achieve high and stable overexpression with only a small construct size. This strategy has application potential in many fields and can be extended to other bacteria.
NASA Astrophysics Data System (ADS)
Zhang, Ziyang; Fiebrandt, Julia; Haynes, Dionne; Sun, Kai; Madhav, Kalaga; Stoll, Andreas; Makan, Kirill; Makan, Vadim; Roth, Martin
2018-03-01
Three-dimensional multi-mode interference devices are demonstrated using a single-mode fiber (SMF) center-spliced to a section of polygon-shaped core multimode fiber (MMF). This simple structure can effectively generate well-localized self-focusing spots that match to the layout of a chosen multi-core fiber (MCF) as a launcher device. An optimized hexagon-core MMF can provide efficient coupling from a SMF to a 7-core MCF with an insertion loss of 0.6 dB and a power imbalance of 0.5 dB, while a square-core MMF can form a self-imaging pattern with symmetrically distributed 2 × 2, 3 × 3 or 4 × 4 spots. These spots can be directly received by a two-dimensional detector array. The device can work as a vector curvature sensor by comparing the relative power among the spots with a resolution of ∼0.1° over a 1.8 mm-long MMF.
Flow characteristics of Korea multi-purpose research reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heonil Kim; Hee Taek Chae; Byung Jin Jun
1995-09-01
The construction of the Korea Multi-purpose Research Reactor (KMRR), a 30 MWth open-tank-in-pool type, is completed. Various thermal-hydraulic experiments have been conducted to verify the design characteristics of the KMRR. This paper describes the commissioning experiments to determine the flow distribution of the KMRR core and the flow characteristics inside the chimney which stands on top of the core. The core flow is distributed to within ±6% of the average values, which is sufficiently flat in the sense that the design velocity in the fueled region is satisfied. The role of core bypass flow to confine the activated core coolant in the chimney structure is confirmed.
Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; D'Exelle, Ben; Agyepong, Irene; Baltussen, Rob
2010-12-01
To evaluate the effectiveness of three alternative strategies to identify poor households: means testing (MT), proxy means testing (PMT) and participatory wealth ranking (PWR) in urban, rural and semi-urban settings in Ghana. The primary motivation was to inform implementation of the National Health Insurance policy of premium exemptions for the poorest households. Survey of 145-147 households per setting to collect data on consumption expenditure to estimate MT measures and of household assets to estimate PMT measures. We organized focus group discussions to derive PWR measures. We compared errors of inclusion and exclusion of PMT and PWR relative to MT, the latter being considered the gold standard measure to identify poor households. Compared to MT, the errors of exclusion and inclusion of PMT ranged between 0.46-0.63 and 0.21-0.36, respectively, and of PWR between 0.03-0.73 and 0.17-0.60, respectively, depending on the setting. Proxy means testing and PWR have considerable errors of exclusion and inclusion in comparison with MT. PWR is a subjective measure of poverty and has appeal because it reflects community's perceptions on poverty. However, as its definition of the poor varies across settings, its acceptability as a uniform strategy to identify the poor in Ghana may be questionable. PMT and MT are potential strategies to identify the poor, and their relative societal attractiveness should be judged in a broader economic analysis. This study also holds relevance to other programmes that require identification of the poor in low-income countries. © 2010 Blackwell Publishing Ltd.
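Computed against MT as the gold standard, the two error rates reduce to simple proportions; a sketch follows (Python/NumPy; the exact normalization used by the authors is assumed here, not quoted from the paper).

    import numpy as np

    def targeting_errors(gold_poor, test_poor):
        # Exclusion error: share of households that are poor by MT but are
        # missed by the test method (PMT or PWR).
        # Inclusion error: share of households flagged by the test method
        # that are not poor by MT.
        gold_poor = np.asarray(gold_poor, bool)
        test_poor = np.asarray(test_poor, bool)
        exclusion = float(np.mean(~test_poor[gold_poor])) if gold_poor.any() else float("nan")
        inclusion = float(np.mean(~gold_poor[test_poor])) if test_poor.any() else float("nan")
        return exclusion, inclusion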
NASA Astrophysics Data System (ADS)
Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.
2017-11-01
The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-based GPU architectures. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared memory programming in OpenMP and CUDA API based programming.
Qualification of CASMO5 / SIMULATE-3K against the SPERT-III E-core cold start-up experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grandi, G.; Moberg, L.
SIMULATE-3K is a three-dimensional kinetic code applicable to LWR Reactivity Initiated Accidents. S3K has been used to calculate several internationally recognized benchmarks. However, the feedback models in the benchmark exercises are different from the feedback models that SIMULATE-3K uses for LWRs. For this reason, it is worth comparing the SIMULATE-3K capabilities for Reactivity Initiated Accidents against kinetic experiments. The Special Power Excursion Reactor Test III was a pressurized-water nuclear research facility constructed to analyze reactor kinetic behavior under initial conditions similar to those of commercial LWRs. The SPERT III E-core resembles a PWR in terms of fuel type, moderator, coolant flow rate, and system pressure. The initial test conditions (power, core flow, system pressure, core inlet temperature) are representative of cold start-up, hot start-up, hot standby, and hot full power. The qualification of S3K against the SPERT III E-core measurements is an ongoing work at Studsvik. In this paper, the results for the 30 cold start-up tests are presented. The results show good agreement with the experiments for the main reactivity initiated accident parameters: peak power, energy release and compensated reactivity. Predicted and measured peak powers differ at most by 13%. Measured and predicted reactivity compensations at the time of the peak power differ by less than 0.01 $. Predicted and measured energy release differ at most by 13%. All differences are within the experimental uncertainty. (authors)
Multi-photon excited luminescence of magnetic FePt core-shell nanoparticles.
Seemann, K M; Kuhn, B
2014-07-01
We present magnetic FePt nanoparticles with a hydrophilic, inert, and biocompatible silico-tungsten oxide shell. The particles can be functionalized, optically detected, and optically manipulated. To show the functionalization, the fluorescent dye NOPS was bound to the FePt core-shell nanoparticles with propyl-triethoxy-silane linkers, and fluorescence of the labeled particles was observed in ethanol (EtOH). In aqueous dispersion the NOPS fluorescence is quenched, making the particles invisible under one-photon excitation. However, we observe bright luminescence of labeled and even unlabeled magnetic core-shell nanoparticles with multi-photon excitation. Luminescence can be detected in the near ultraviolet and the full visible spectral range by near-infrared multi-photon excitation. For optical manipulation, we were able to drag clusters of particles, and possibly also single particles, with a focused laser beam that acts as optical tweezers by inducing an electric dipole in the insulated metal nanoparticles. In a first application, we show that the luminescence of the core-shell nanoparticles is bright enough for in vivo multi-photon imaging in the mouse neocortex down to cortical layer 5.
Evaluation of a Powered Ankle-Foot Prosthesis during Slope Ascent Gait.
Rábago, Christopher A; Aldridge Whitehead, Jennifer; Wilken, Jason M
2016-01-01
Passive prosthetic feet lack active plantarflexion and push-off power resulting in gait deviations and compensations by individuals with transtibial amputation (TTA) during slope ascent. We sought to determine the effect of active ankle plantarflexion and push-off power provided by a powered prosthetic ankle-foot (PWR) on lower extremity compensations in individuals with unilateral TTA as they walked up a slope. We hypothesized that increased ankle plantarflexion and push-off power would reduce compensations commonly observed with a passive, energy-storing-returning prosthetic ankle-foot (ESR). We compared the temporal spatial, kinematic, and kinetic measures of ten individuals with TTA (age: 30.2 ± 5.3 yrs) to matched abled-bodied (AB) individuals during 5° slope ascent. The TTA group walked with an ESR and separately with a PWR. The PWR produced significantly greater prosthetic ankle plantarflexion and push-off power generation compared to an ESR and more closely matched AB values. The PWR functioned similar to a passive ESR device when transitioning onto the prosthetic limb due to limited prosthetic dorsiflexion, which resulted in similar deviations and compensations. In contrast, when transitioning off the prosthetic limb, increased ankle plantarflexion and push-off power provided by the PWR contributed to decreased intact limb knee extensor power production, lessening demand on the intact limb knee.
Multirecycling of Plutonium from LMFBR Blanket in Standard PWRs Loaded with MOX Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonat Sen; Gilles Youinou
2013-02-01
It is now well known that, from a physics standpoint, Pu, or even TRU (i.e. Pu+M.A.), originating from LEU fuel irradiated in PWRs can be multirecycled in PWRs using MOX fuel. However, the degradation of the isotopic composition during irradiation necessitates using enriched U in conjunction with the MOX fuel, either homogeneously or heterogeneously, to maintain the Pu (or TRU) content at a level allowing safe operation of the reactor, i.e. below about 10%. This study is related to another possible utilization of the excess Pu produced in the blanket of a LMFBR, namely in a PWR(MOX). In this case, the more Pu is bred in the LMFBR, the more PWR(MOX) it can sustain. The important difference between the Pu coming from the blanket of a LMFBR and that coming from a PWR(LEU) is its isotopic composition. The first contains about 95% fissile isotopes whereas the second contains only about 65% fissile isotopes. As shown later, this difference allows the PWR fed by Pu from the LMFBR blanket to operate with natural U instead of the enriched U required when it is fed by Pu from PWR(LEU).
Research and Development in Preventive Dentistry.
1979-12-01
Lidocaine was microencapsulated with a biodegradable polymer, poly-L(-)-lactide, using a fluidized-bed coating technique. A series of microcapsule batches with different characteristics was prepared; material was less than 15 µm (99%), and most of the lidocaine was in the 1 micron range.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
NASA Astrophysics Data System (ADS)
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which is highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese 'HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can manage the multi-core CPU hardware resources effectively and significantly enhance the efficiency of the spectrum reconstruction processing. If the technology is applied to a workstation with more cores for parallel computing, it will be possible to complete Fourier transform imaging spectrometer data processing in real time on a single computer.
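The reconstruction step described above (interferogram to spectrum, repeated independently for every pixel) lends itself to a data-parallel sketch. The following Python analogue uses a process pool in place of the paper's OpenMP threads; the apodization window, array shapes and function names are illustrative assumptions, not the HJ-1 processing chain:

```python
# Illustrative sketch only: per-pixel Fourier-transform spectrum reconstruction,
# parallelized over image pixels with a process pool (one worker per CPU core).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_pixel(interferogram):
    """Apodize one pixel's interferogram and return its magnitude spectrum."""
    window = np.hanning(len(interferogram))          # simple apodization choice
    spectrum = np.fft.rfft(interferogram * window)   # interferogram -> spectrum
    return np.abs(spectrum)

def reconstruct_cube(cube):
    """cube: (n_pixels, n_opd_samples) array of interferograms."""
    with ProcessPoolExecutor() as pool:
        spectra = list(pool.map(reconstruct_pixel, cube, chunksize=256))
    return np.asarray(spectra)

if __name__ == "__main__":
    cube = np.random.rand(1024, 512)                 # toy data stand-in
    spectra = reconstruct_cube(cube)
    print(spectra.shape)                             # (1024, 257)
```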
The benefits of a fast reactor closed fuel cycle in the UK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregg, R.; Hesketh, K.
2013-07-01
The work has shown that starting a fast reactor closed fuel cycle in the UK requires virtually all of Britain's existing and future PWR spent fuel to be reprocessed in order to obtain the plutonium needed. The existing UK Pu stockpile is sufficient to initially support only a modest SFR 'closed' fleet, assuming spent fuel can be reprocessed shortly after discharge (i.e. after two years cooling). For a substantial fast reactor fleet, most Pu will have to originate from reprocessing future spent PWR fuel. Therefore, the maximum fast reactor fleet size will be limited by the preceding PWR fleet size, so scenarios involving fast reactors still require significant quantities of uranium ore indirectly. However, once a fast reactor fuel cycle has been established, the very substantial quantities of uranium tails in the UK would ensure there is sufficient material for several centuries. Both the short and long term impacts on a repository have been considered in this work. Over the short term, the decay heat emanating from the HLW and spent fuel will limit the density of waste within a repository. For scenarios involving fast reactors, the only significant heat-bearing actinide content will be present in the final cores, resulting in a 50% overall reduction in decay energy deposited within the repository when compared with an equivalent open fuel cycle. Over the longer term, radiological dose becomes more important. Total radiotoxicity (normalised by electricity generated) is lower for scenarios with Pu recycle after 2000 years. Scenarios involving fast reactors have the lowest radiotoxicity since the quantities of certain actinides (Np, Pu and Am) eventually stabilise. However, total radiotoxicity as a measure of radiological risk does not account for differences in radionuclide mobility once in the repository. Radiological dose is dominated by a small number of fission products and is therefore not affected significantly by reactor type or recycling strategy (since the fission product inventory is primarily a function of the nuclear energy generated). However, by reprocessing spent fuel, it is possible to immobilise the fission products in a more suitable waste form that has far superior in-repository performance. (authors)
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
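A minimal sketch of the "lightweight MPI wrapper" pattern described above, under assumed file and script names: each MPI rank launches one independent single-node payload, so a single batch allocation runs many serial jobs side by side. This is purely illustrative and not the actual PanDA pilot code:

```python
# Illustrative sketch: one MPI rank per payload; "run_payload.py" is a
# hypothetical stand-in for a single-node production job.
import subprocess
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# One payload command per rank; in practice these would come from the pilot.
payloads = [["python3", "run_payload.py", f"--job-index={i}"]
            for i in range(comm.Get_size())]

result = subprocess.run(payloads[rank], capture_output=True, text=True)
print(f"rank {rank}: exit code {result.returncode}", file=sys.stderr)

comm.Barrier()  # wait for all payloads before the allocation ends
```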
Automatic treatment of the variance estimation bias in TRIPOLI-4 criticality calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumonteil, E.; Malvagi, F.
2012-07-01
The central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The use of Monte Carlo transport codes, such as Tripoli4, relies on those conditions. While these are verified in protection applications (the cycles provide independent measurements of fluxes and related quantities), the hypothesis of independent estimates/cycles is broken in criticality mode. Indeed, the power iteration technique used in this mode couples a generation to its progeny. Often, after what is called 'source convergence', this coupling almost disappears (the solution is close to equilibrium), but for loosely coupled systems, such as PWRs or large nuclear cores, the equilibrium is never found, or at least may take time to reach, and the variance estimate allowed by the CLT is under-evaluated. In this paper we first propose, by means of two different methods, to evaluate the typical correlation length, as measured in number of cycles, and then use this information to diagnose correlation problems and to provide an improved variance estimate. These two methods are based on Fourier spectral decomposition and on the lag-k autocorrelation calculation. A theoretical modeling of the autocorrelation function, based on Gauss-Markov stochastic processes, will also be presented. Tests will be performed with Tripoli4 on a PWR pin cell. (authors)
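A rough sketch of the lag-k autocorrelation idea mentioned above: estimate the cycle-to-cycle autocorrelation of a tallied quantity, form an integrated autocorrelation time, and inflate the naive variance of the mean accordingly. The truncation rule and the AR(1) toy data are simplifying assumptions, not TRIPOLI-4's actual treatment:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Lag-k autocorrelation coefficients of a 1-D series, k = 0..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

def corrected_variance_of_mean(cycle_estimates, max_lag=50):
    """Naive variance of the mean, inflated by the integrated autocorrelation time."""
    rho = autocorrelation(cycle_estimates, max_lag)
    tau = 1.0
    for r in rho[1:]:
        if r <= 0:            # crude cutoff at the first non-positive lag
            break
        tau += 2.0 * r
    n = len(cycle_estimates)
    naive = np.var(cycle_estimates, ddof=1) / n
    return naive * tau, tau

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.empty(2000)        # AR(1) toy series mimicking correlated cycle tallies
    x[0] = 0.0
    for i in range(1, len(x)):
        x[i] = 0.8 * x[i - 1] + rng.normal()
    var_corr, tau = corrected_variance_of_mean(1.0 + 1.0e-3 * x)
    print(f"integrated autocorrelation time ~ {tau:.1f}")
```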
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) constitute the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses arising from the geometric complexity of the core remain unresolved, such as the analysis of the neutron flux behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross sections for the assemblies, condensed to a reduced number of groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to calculate reactor properties at the core level; this step is called the full-core or whole-core calculation. This decoupling of the two calculation steps is the origin of methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to calculate cross section libraries becomes less pertinent for assemblies that are adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector helps to slow down neutrons leaving the reactor and return them to the core. This effect leads to two fission peaks in fuel assemblies located at the core/reflector interface, the fission rate increasing due to the greater proportion of re-entrant neutrons. This change in the neutron spectrum arises deep inside the fuel located on the outskirts of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes into account the environment of the peripheral assemblies and generates equivalent neutronic properties for the reflector. This scheme is tested on a core without control mechanisms and charged with fresh fuel. The results of this study showed that explicit representation of the reflector and calculation of the peripheral assembly with our advanced scheme allow corrections to the energy spectrum at the core interface and increase the peripheral power by up to 12% compared with that of the reference scheme.
SAXS analysis of single- and multi-core iron oxide magnetic nanoparticles
Szczerba, Wojciech; Costo, Rocio; Morales, Maria del Puerto; Thünemann, Andreas F.
2017-01-01
This article reports on the characterization of four superparamagnetic iron oxide nanoparticles stabilized with dimercaptosuccinic acid, which are suitable candidates for reference materials for magnetic properties. Particles p1 and p2 are single-core particles, while p3 and p4 are multi-core particles. Small-angle X-ray scattering analysis reveals a lognormal type of size distribution for the iron oxide cores of the particles. Their mean radii are 6.9 nm (p1), 10.6 nm (p2), 5.5 nm (p3) and 4.1 nm (p4), with narrow relative distribution widths of 0.08, 0.13, 0.08 and 0.12. In the multi-core particles p3 and p4 the cores are arranged as a clustered network in the form of dense mass fractals with a fractal dimension of 2.9, but the cores are well separated from each other by a protecting organic shell. The radii of gyration of the mass fractals are 48 and 44 nm, and each network contains 117 and 186 primary particles, respectively. The radius distributions of the primary particles were confirmed with transmission electron microscopy. All particles consist purely of maghemite, as shown by X-ray absorption fine structure spectroscopy. PMID:28381973
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one core of the CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
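The load-prediction dynamic scheduling idea can be sketched as follows: measure the recent throughput of the CPU and GPU workers and split the next time step's work in proportion. The sketch below times the two parts sequentially for clarity, whereas the paper runs the CPU (OpenMP) and GPU (CUDA) portions concurrently; all names and the toy kernels are illustrative assumptions, not the authors' implementation:

```python
import time

def split_work(n_items, cpu_rate, gpu_rate):
    """Return (n_cpu, n_gpu) so both devices should finish at about the same time."""
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)
    n_gpu = int(round(n_items * gpu_share))
    return n_items - n_gpu, n_gpu

def run_step(n_items, cpu_rate, gpu_rate, cpu_kernel, gpu_kernel):
    """Run one simulation step and return updated throughput estimates."""
    n_cpu, n_gpu = split_work(n_items, cpu_rate, gpu_rate)
    t0 = time.perf_counter(); cpu_kernel(n_cpu); t_cpu = time.perf_counter() - t0
    t0 = time.perf_counter(); gpu_kernel(n_gpu); t_gpu = time.perf_counter() - t0
    cpu_rate = n_cpu / t_cpu if t_cpu > 0 else cpu_rate   # items per second
    gpu_rate = n_gpu / t_gpu if t_gpu > 0 else gpu_rate
    return cpu_rate, gpu_rate

# Toy stand-ins for the CPU and GPU kernels (the "GPU" one just runs faster).
cpu_kernel = lambda n: sum(i * i for i in range(n))
gpu_kernel = lambda n: sum(i * i for i in range(n // 10))

cpu_rate, gpu_rate = 1.0, 10.0       # initial throughput guesses
for step in range(5):
    cpu_rate, gpu_rate = run_step(100_000, cpu_rate, gpu_rate,
                                  cpu_kernel, gpu_kernel)
```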
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored for the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions. The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest includes resonance cross sections, among the isotopes in the composition. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was directly used for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen from VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region.
To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
Performance evaluation of two-stage fuel cycle from SFR to PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fei, T.; Hoffman, E.A.; Kim, T.K.
2013-07-01
One potential fuel cycle option being considered is a two-stage fuel cycle system involving the continuous recycle of transuranics in a fast reactor and the use of bred plutonium in a thermal reactor. The first stage is a Sodium-cooled Fast Reactor (SFR) fuel cycle with metallic U-TRU-Zr fuel. The SFRs need to have a breeding ratio greater than 1.0 in order to produce fissile material for use in the second stage. The second stage is a PWR fuel cycle with uranium and plutonium mixed oxide fuel based on the design and performance of current state-of-the-art commercial PWRs with an average discharge burnup of 50 MWd/kgHM. This paper evaluates the feasibility of this fuel cycle option and discusses its fuel cycle performance characteristics. The study focuses on an equilibrium stage of the fuel cycle. Results indicate that, in order to avoid a positive coolant void reactivity feedback in the stage-2 PWR, the reactor requires high-quality plutonium from the first stage, and the minor actinides in the discharge fuel of the PWR need to be separated and sent back to the stage-1 SFR. The electricity-sharing ratio between the two stages is 87.0% (SFR) to 13.0% (PWR) for a TRU inventory ratio (the mass of TRU in the discharge fuel divided by the mass of TRU in the fresh fuel) of 1.06. A sensitivity study indicated that by increasing the TRU inventory ratio to 1.13, the electricity generation fraction of the stage-2 PWR is increased to 28.9%. The two-stage fuel cycle system considered in this study was found to provide a high uranium utilization (>80%). (authors)
Rode, Line; Kjærgaard, Hanne; Ottesen, Bent; Damm, Peter; Hegaard, Hanne K
2012-02-01
Our aim was to investigate the association between gestational weight gain (GWG) and postpartum weight retention (PWR) in pre-pregnancy underweight, normal weight, overweight or obese women, with emphasis on the American Institute of Medicine (IOM) recommendations. We performed secondary analyses on questionnaire data from 1,898 women from the "Smoke-free Newborn Study" conducted 1996-1999 at Hvidovre Hospital, Denmark. The relationship between GWG and PWR was examined according to BMI as a continuous variable and in four groups. The association between PWR and GWG according to IOM recommendations was tested by linear regression analysis, and the association between PWR ≥ 5 kg (11 lbs) and GWG by logistic regression analysis. Mean GWG and mean PWR were constant for all BMI units until 26-27 kg/m(2). After this cut-off, mean GWG and mean PWR decreased with increasing BMI. Nearly 40% of normal weight, 60% of overweight and 50% of obese women gained more than recommended during pregnancy. For normal weight and overweight women with GWG above recommendations, the OR of gaining ≥ 5 kg (11 lbs) 1 year postpartum was 2.8 (95% CI 2.0-4.0) and 2.8 (95% CI 1.3-6.2), respectively, compared to women with GWG within recommendations. GWG above IOM recommendations significantly increases normal weight, overweight and obese women's risk of retaining weight 1 year after delivery. Health personnel face a challenge in prenatal counseling, as 40-60% of these women gain more weight than recommended for their BMI. As GWG is potentially modifiable, our study should be followed by intervention studies focusing on GWG.
Integration of Panda Workload Management System with supercomputers
NASA Astrophysics Data System (ADS)
De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.
2016-09-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
NASA Astrophysics Data System (ADS)
Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.
2016-10-01
The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputer's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
ERIC Educational Resources Information Center
Rutland Consulting Group Ltd.
The report presents summaries of evaluations of the Coordinated Assessment and Program Planning for Education (CAPE) Program and the Coordinated Rehabilitation and Education (CORE) program for multi-handicapped sensory impaired and/or communication and behavior disordered children and their families in Alberta, Canada. Each program is evaluated…
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. This paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected for the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
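The EFT-based assignment step described above can be illustrated with a small list-scheduling sketch; communication costs and the CQPSO search itself are omitted, and the DAG and cost tables are made-up examples rather than anything from the paper:

```python
# Sketch of the list-scheduling baseline the abstract builds on (not CQPSO):
# tasks are taken in priority order and each is assigned to the heterogeneous
# processor giving the minimum earliest finish time (EFT).

dag = {            # task -> set of predecessor tasks
    "A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"},
}
cost = {           # task -> per-processor execution time (heterogeneous)
    "A": [2, 3], "B": [3, 2], "C": [4, 3], "D": [2, 2],
}
priority = ["A", "B", "C", "D"]      # topological priority list

def schedule(dag, cost, priority, n_proc=2):
    proc_free = [0.0] * n_proc       # time at which each processor becomes free
    finish = {}                      # task -> (processor, finish time)
    for task in priority:
        ready = max((finish[p][1] for p in dag[task]), default=0.0)
        # Earliest finish time of this task on every processor.
        eft = [max(proc_free[p], ready) + cost[task][p] for p in range(n_proc)]
        best = min(range(n_proc), key=lambda p: eft[p])
        proc_free[best] = eft[best]
        finish[task] = (best, eft[best])
    return finish

print(schedule(dag, cost, priority))   # e.g. {'A': (0, 2.0), 'B': (1, 4.0), ...}
```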
Recent operating experiences with steam generators in Japanese NPPs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashima, Seiji
1997-02-01
In 1994, Genkai-3 of Kyushu Electric Power Co., Inc. and Ikata-3 of Shikoku Electric Power Co., Inc. started commercial operation, and now 22 PWR plants are being operated in Japan. Since the first PWR plant started operation, Japanese PWR plants have accumulated approximately 280 reactor-years of operating experience. During that period, many tube degradations have been experienced in steam generators (SGs), and in 1991 a steam generator tube rupture (SGTR) occurred in Mihama-2 of Kansai Electric Power Co., Inc. However, the occurrence of SG tube degradation has decreased through the instructions of MITI as the regulatory authority, the efforts of the electric utilities, and technical support from the SG manufacturers. Here the author describes recent SG experience in Japan on the following points: (1) recent operating experiences, (2) lessons learned from the Mihama-2 SGTR, (3) SG replacement, (4) safety regulations on SGs, and (5) research and development on SGs.
NASA Astrophysics Data System (ADS)
Chen, Xu; Ren, Bin; Yu, Dunji; Xu, Bin; Zhang, Zhe; Chen, Gang
2018-06-01
The uniaxial tension properties and low cycle fatigue behavior of 16MND5 bainitic steel cylinders pre-corroded in a simulated pressurized water reactor (PWR) environment were investigated by fatigue testing at room temperature in air, an immersion test system, scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS). The experimental results indicated that the corrosion fatigue lives of the 16MND5 specimens were significantly affected by the strain amplitude and the simulated PWR environment. Complex corrosion products formed in the simulated PWR environment. The porous corroded surface of the pre-corroded material tended to generate pits, which increased the contact area with the fresh metal and promoted crack initiation. For the original material, fatigue cracks initiated at inclusions embedded in the micro-cracks. Moreover, the simulated PWR environment degraded the mechanical properties and low cycle fatigue behavior of the 16MND5 specimens remarkably. Pre-corrosion of the 16MND5 specimens mainly affected the plastic term of the Coffin-Manson equation.
NASA Astrophysics Data System (ADS)
Nouraei, S.; Tice, D. R.; Mottershead, K. J.; Wright, D. M.
Field experience of 300 series stainless steels in the primary circuit of PWR plants has been good. Stress corrosion cracking of components has been infrequent and mainly associated with contamination by impurities/oxygen in occluded locations. However, some instances of failure have been observed which cannot necessarily be attributed to deviations in the water chemistry. These failures appear to be associated with the presence of cold work produced by surface finishing and/or by welding-induced shrinkage. Recent data indicate that some heats of SS show an increased susceptibility to SCC; relatively high crack growth rates were observed even when the crack growth direction is orthogonal to the cold-work direction. SCC of cold-worked SS in PWR coolant is therefore determined by a complex interaction of material composition, microstructure, prior cold work and heat treatment. This paper focuses on the interacting effects of these parameters on crack propagation in simulated PWR conditions.
Matějíček, David
2012-03-30
A multi-heart-cutting two-dimensional liquid chromatography-tandem mass spectrometry method using atmospheric pressure photoionization has been developed and successfully validated for the determination of nine endocrine disrupting compounds in river water. The method is based on the use of two different reverse-phase columns connected through a six-port two-position switching valve equipped with a 200 μl loop. An orthogonal separation was achieved by proper selection of stationary phases, mobile phases, and the use of gradient elution in both dimensions. The method shows excellent performance in terms of accuracy (86.2-111.1%), precision (intra-batch: 6.7-11.2%, inter-batch: 7.2-13.5%), and sensitivity (1.2-7.1 ng l(-1)). Twenty real samples collected from the Loučka and the Svratka rivers were analyzed; the studied compounds were found in all Svratka samples (9.7-11.2 ng l(-1) for β-estradiol, 7.6-9.3 ng l(-1) for estrone, and 24.6-38.7 ng l(-1) for bisphenol A). Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.
The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest free or based on Islamic Syariah principles and jurisprudence. Interest-free banking has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components, and represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, reflected by the semantic ontology embedded in the parameter module of the system.
Miller, Julie M; Dewey, Marc; Vavere, Andrea L; Rochitte, Carlos E; Niinuma, Hiroyuki; Arbab-Zadeh, Armin; Paul, Narinder; Hoe, John; de Roos, Albert; Yoshioka, Kunihiro; Lemos, Pedro A; Bush, David E; Lardo, Albert C; Texter, John; Brinker, Jeffery; Cox, Christopher; Clouse, Melvin E; Lima, João A C
2009-04-01
Multislice computed tomography (MSCT) for the noninvasive detection of coronary artery stenoses is a promising candidate for widespread clinical application because of its non-invasive nature and high sensitivity and negative predictive value as found in several previous studies using 16 to 64 simultaneous detector rows. A multi-centre study of CT coronary angiography using 16 simultaneous detector rows has shown that 16-slice CT is limited by a high number of nondiagnostic cases and a high false-positive rate. A recent meta-analysis indicated a significant interaction between the size of the study sample and the diagnostic odds ratios suggestive of small study bias, highlighting the importance of evaluating MSCT using 64 simultaneous detector rows in a multi-centre approach with a larger sample size. In this manuscript we detail the objectives and methods of the prospective "CORE-64" trial ("Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography using 64 Detectors"). This multi-centre trial was unique in that it assessed the diagnostic performance of 64-slice CT coronary angiography in nine centres worldwide in comparison to conventional coronary angiography. In conclusion, the multi-centre, multi-institutional and multi-continental trial CORE-64 has great potential to ultimately assess the per-patient diagnostic performance of coronary CT angiography using 64 simultaneous detector rows.
Stamatakis, Alexandros; Ott, Michael
2008-12-27
The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.
Plasmon waveguide resonance sensor using an Au-MgF2 structure.
Zhou, Yanfei; Zhang, Pengfei; He, Yonghong; Xu, Zihao; Liu, Le; Ji, Yanhong; Ma, Hui
2014-10-01
We report an Au − MgF(2) plasmon waveguide resonance (PWR) sensor in this work. The characteristics of this sensing structure are compared with a surface plasmon resonance (SPR) structure theoretically and experimentally. The transverse-magnetic-polarized PWR sensor has a refractive index resolution of 9.3 × 10(-7) RIU, which is 6 times smaller than that of SPR at the incident light wavelength of 633 nm, and the transverse-electric-polarized PWR sensor has a refractive index resolution of 3.0 × 10(-6) RIU. This high-resolution sensor is easy to build and is less sensitive to film coating deviations.
Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.
Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M
2016-10-07
Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
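A much-simplified two-plate illustration of MA-coordinate normalization follows; the paper's multi-MA method treats all plates jointly and the authors provide their implementation in the R package MDimNormn, so the sketch below (with made-up data and a plate-level median in place of a fitted trend) is only meant to convey the idea:

```python
# Simplified illustration only: two-plate MA normalization on log-scale intensities.
import numpy as np

def ma_normalize(plate, reference):
    """plate, reference: arrays of per-antibody median intensities (same order)."""
    lp, lr = np.log2(plate), np.log2(reference)
    m = lp - lr                  # M: log ratio plate vs reference
    a = 0.5 * (lp + lr)          # A: average log intensity (a loess fit of M vs A would use this)
    m_adjusted = m - np.median(m)    # remove the plate-level offset in M
    return 2.0 ** (lr + m_adjusted)

rng = np.random.default_rng(1)
reference = rng.lognormal(mean=8, sigma=0.5, size=100)
plate = reference * 1.3 * rng.lognormal(mean=0, sigma=0.05, size=100)  # simulated plate effect
normalized = ma_normalize(plate, reference)
print(np.median(normalized / reference))   # ~1.0 after normalization
```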
Enhancement of Antiviral Agents through the Use of Controlled-Release Technology
1988-03-11
Microencapsulation solvents and techniques were varied in order to improve the core loading and surface morphology of the JE vaccine microcapsules. The subcutaneous and intraperitoneal routes of poly(I*C) microcapsule administration were compared. Doses of 3.0 mg unencapsulated JE vaccine and 3.0 mg microencapsulated JE vaccine prepared with a 50:50 DL-PLG excipient were administered on Days 0, 14, and 42.
Core-Collapse Supernovae Explored by Multi-D Boltzmann Hydrodynamic Simulations
NASA Astrophysics Data System (ADS)
Sumiyoshi, Kohsuke; Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Matsufuru, Hideo; Imakura, Akira; Yamada, Shoichi
We report the latest results of numerical simulations of core-collapse supernovae obtained by solving multi-D neutrino-radiation hydrodynamics with Boltzmann equations. One of the longstanding issues in the explosion mechanism of supernovae has been uncertainty in the approximations used for neutrino transfer in multi-D, such as the diffusion approximation and the ray-by-ray method. The neutrino transfer is essential, together with 2D/3D hydrodynamical instabilities, to evaluate the neutrino heating behind the shock wave for successful explosions and to predict the neutrino burst signals. We tackled this difficult problem by utilizing our solver of the 6D Boltzmann equation for neutrinos in 3D space and 3D neutrino momentum space, coupled with multi-D hydrodynamics and including special and general relativistic extensions. We have performed a set of 2D core-collapse simulations of 11 M⊙ and 15 M⊙ stars on the K computer in Japan, following long-term evolution over 400 ms after bounce to reveal the outcome of the full Boltzmann hydrodynamic simulations with a sophisticated equation of state with multi-nuclear species and updated rates for electron captures on nuclei.
NASA Astrophysics Data System (ADS)
Liu, Ting; Qu, Yunhuan; Meng, De; Zhang, Qiaoer; Lu, Xinhua
2018-01-01
Spent fuel from China's pressurized water reactors (PWRs) is currently held in wet storage. With the rapid development of the nuclear power industry, China's nuclear power plants (NPPs) will not be able to cope with the growing inventory of spent fuel. The world's major nuclear power countries currently use dry storage for spent fuel, so in recent years China has mainly studied additional spent fuel dry storage systems, and some PWR NPPs are ready to apply for them. Spent fuel dry storage facilities at PWRs also need to be safety classified, but there is no standard for the safety classification of spent fuel dry storage facilities in China. Because dry storage facilities are not part of the NPP, the classification standard for China's NPPs is not applicable. This paper proposes a safety classification approach for spent fuel dry storage at China's PWR NPPs, based on a study of China's safety classification principles for PWR NPPs in "Classification for the items of pressurized water reactor nuclear power plants" (GB/T 17569-2013) and of the safety classification of spent fuel dry storage systems in NUREG/CR-6407 in the United States.
Annual report, FY 1979 Spent fuel and fuel pool component integrity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, A.B. Jr.; Bailey, W.J.; Schreiber, R.E.
International meetings under the BEFAST program and under INFCE Working Group No. 6 during 1978 and 1979 continue to indicate that no cases of fuel cladding degradation have developed on pool-stored fuel from water reactors. A section from a spent fuel rack stand, exposed for 1.5 y in the Yankee Rowe (PWR) pool, had 0.001- to 0.003-in.-deep (25- to 75-µm) intergranular corrosion in weld heat-affected zones but no evidence of stress corrosion cracking. A section of a 304 stainless steel spent fuel storage rack exposed 6.67 y in the Point Beach reactor (PWR) spent fuel pool showed no significant corrosion. A section of 304 stainless steel 8-in.-dia pipe from the Three Mile Island No. 1 (PWR) spent fuel pool heat exchanger plumbing developed a through-wall crack. The crack was intergranular, initiating from the inside surface in a weld heat-affected zone. The zone where the crack occurred was severely sensitized during field welding. The Kraftwerk Union (Erlangen, GFR) disassembled a stainless-steel fuel-handling machine that operated for 12 y in a PWR (boric acid) spent fuel pool. There was no evidence of deterioration, and the fuel-handling machine was reassembled for further use. A spent fuel pool at a Swedish PWR was decontaminated. The procedure is outlined in this report.
NASA Astrophysics Data System (ADS)
Kim, Kyungnam; Jeong, Sohee; Woo, Ju Yeon; Han, Chang-Soo
2012-02-01
We report successive and large-scale synthesis of InP/ZnS core/shell nanocrystal quantum dots (QDs) using a customized hybrid flow reactor, which is based on serial combination of a batch-type mixer and a flow-type furnace. InP cores and InP/ZnS core/shell QDs were successively synthesized in the hybrid reactor in a simple one-step process. In this reactor, the flow rate of the solutions was typically 1 ml min-1, 100 times larger than that of conventional microfluidic reactors. In order to synthesize high-quality InP/ZnS QDs, we controlled both the flow rate and the crystal growth temperature. Finally, we obtained high-quality InP/ZnS QDs in colors from bluish green to red, and we demonstrated that these core/shell QDs could be incorporated into white-light-emitting diode (LED) devices to improve color rendering performance.
Nonlinear Light Dynamics in Multi-Core Structures
2017-02-27
Light bullets can be generated in continuous-discrete optical media such as multi-core optical fiber or waveguide arrays. Detailed theoretical analysis is presented of the existence and stability of discrete-continuous light bullets, and of pulse compression using wave-collapse (self-focusing) energy localisation dynamics in a continuous-discrete nonlinear system.
In-Pile Instrumentation Multi-Parameter System Utilizing Photonic Fibers and Nanovision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgett, Eric
2015-10-13
An advanced in-pile multi-parameter reactor monitoring system is proposed in this funding opportunity. The proposed effort brings cutting-edge, high-fidelity optical measurement systems into the reactor environment in an unprecedented fashion, including in-core, in-cladding and in the fuel pellet itself. Unlike instrumented leads, the proposed system provides a unique solution to a multi-parameter in-core monitoring need while being minimally intrusive in the reactor core. Detector designs proposed herein can monitor fuel compression and expansion in both the radial and axial dimensions as well as monitor linear power profiles and fission rates during the operation of the reactor. In addition, pressure, stress, strain, compression, neutron flux, neutron spectra, and temperature can be observed inside the fuel bundle and fuel rod using the proposed system. The proposed research aims at developing radiation-hard, harsh-environment multi-parameter systems for insertion into the reactor environment. The proposed research holds the potential to drastically increase the fidelity and precision of in-core instrumentation with little or no impact on the neutron economy in the reactor environment while providing a measurement system capable of operation for entire operating cycles.
Shape sensing using multi-core fiber optic cable and parametric curve solutions.
Moore, Jason P; Rogge, Matthew D
2012-01-30
The shape of a multi-core optical fiber is calculated by numerically solving a set of Frenet-Serret equations describing the path of the fiber in three dimensions. Included in the Frenet-Serret equations are curvature and bending direction functions derived from distributed fiber Bragg grating strain measurements in each core. The method offers advantages over prior art in that it determines complex three-dimensional fiber shape as a continuous parametric solution rather than an integrated series of discrete planar bends. Results and error analysis of the method using a tri-core optical fiber are presented. Maximum error, expressed as a percentage of fiber length, was found to be 7.2%.
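A minimal numerical sketch of this reconstruction, assuming the curvature kappa(s) and bending direction theta(s) have already been derived from the per-core fiber Bragg grating strains; the torsion is approximated here as the derivative of the bending direction, and all names are illustrative rather than taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

def fiber_shape(s, kappa, theta):
    """Integrate the Frenet-Serret equations for curvature kappa(s) and
    bending direction theta(s) (both callables); torsion is taken as the
    numerical derivative of theta, an assumption of this sketch."""
    def torsion(si, h=1e-4):
        return (theta(si + h) - theta(si - h)) / (2.0 * h)

    def rhs(si, y):
        T, N, B = y[3:6], y[6:9], y[9:12]
        k, tau = kappa(si), torsion(si)
        # dr/ds = T, dT/ds = k N, dN/ds = -k T + tau B, dB/ds = -tau N
        return np.concatenate([T, k * N, -k * T + tau * B, -tau * N])

    # initial frame: fiber starts at the origin pointing along +z
    y0 = np.array([0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0], dtype=float)
    sol = solve_ivp(rhs, (s[0], s[-1]), y0, t_eval=s, max_step=s[1] - s[0])
    return sol.y[0:3].T  # x, y, z coordinates along the fiber

# example: constant curvature and no twist should trace a circular arc
s = np.linspace(0.0, 1.0, 200)
xyz = fiber_shape(s, kappa=lambda si: 5.0, theta=lambda si: 0.0)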
MetAlign 3.0: performance enhancement by efficient use of advances in computer hardware.
Lommen, Arjen; Kools, Harrie J
2012-08-01
A new, multi-threaded version of the GC-MS and LC-MS data processing software, metAlign, has been developed which is able to utilize multiple cores on one PC. This new version was tested using three different multi-core PCs with different operating systems. The performance of noise reduction, baseline correction and peak-picking was 8-19 fold faster compared to the previous version on a single core machine from 2008. The alignment was 5-10 fold faster. Factors influencing the performance enhancement are discussed. Our observations show that performance scales with the increase in processor core numbers we currently see in consumer PC hardware development.
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but the parameters of the distribution function may be estimated in a fuzzy manner, so a random fuzzy variable is an appropriate way to model product demand. The objective of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on order quantities, and a budget restriction, and we also consider the batch size for product orders. We then introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP) and reformulate it as a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally, an illustrative example is presented to show the performance of the developed model and algorithm.
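The random fuzzy formulation and the hybrid genetic/TOPSIS algorithm are beyond a short example, but the stochastic core of the problem can be sketched as follows: estimate the expected profit of a candidate order vector by Monte Carlo sampling of demand, and search only over order vectors that satisfy a budget constraint. All prices, demands and bounds below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical data for 3 products: price, cost, salvage value, mean demand
price   = np.array([10.0, 15.0,  8.0])
cost    = np.array([ 6.0,  9.0,  5.0])
salvage = np.array([ 2.0,  3.0,  1.0])
mu      = np.array([50.0, 30.0, 80.0])
budget  = 900.0                      # total purchasing budget constraint

def expected_profit(q, n_samples=20_000):
    """Monte Carlo estimate of the expected newsboy profit for order vector q."""
    d = rng.poisson(mu, size=(n_samples, len(q)))      # sampled demands
    sold   = np.minimum(d, q)
    unsold = q - sold
    profit = (price * sold + salvage * unsold - cost * q).sum(axis=1)
    return profit.mean()

def feasible(q):
    return (cost * q).sum() <= budget

# crude grid search over candidate order quantities (the paper's hybrid
# intelligent algorithm would replace this step)
best = max((q for q in
            [np.array([a, b, c]) for a in range(30, 71, 10)
                                 for b in range(10, 51, 10)
                                 for c in range(50, 111, 20)]
            if feasible(q)),
           key=expected_profit)
print(best, expected_profit(best))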
Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems
NASA Astrophysics Data System (ADS)
Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo
2017-07-01
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their large size and by interference accesses, which lead to both performance degradation and wasted energy. In this paper, we propose a behavior-aware cache hierarchy (BACH) which optimally allocates the multi-level cache resources to many cores and greatly improves the efficiency of the cache hierarchy, resulting in low energy consumption. BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system improves slightly even after accounting for hardware overhead.
A Multi-Stage Wear Model for Grid-to-Rod Fretting of Nuclear Fuel Rods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blau, Peter Julian
The wear of fuel rod cladding against the supporting structures in the cores of pressurized water nuclear reactors (PWRs) is an important and potentially costly tribological issue. Grid-to-rod fretting (GTRF), as it is known, involves not only time-varying contact conditions, but also elevated temperatures, flowing hot water, aqueous tribo-corrosion, and the embrittling effects of neutron fluences. The multi-stage, closed-form analytical model described in this paper relies on published out-of-reactor wear and corrosion data and a set of simplifying assumptions to portray the conversion of frictional work into wear depth. The cladding material of interest is a zirconium-based alloy called Zircaloy-4, and the grid support is made of a harder and more wear-resistant material. The focus is on the wear of the cladding. The model involves an incubation stage, a surface oxide wear stage, and a base alloy wear stage. The wear coefficient, which is a measure of the efficiency of conversion of frictional work into wear damage, can change to reflect the evolving metallurgical condition of the alloy. Wear coefficients for Zircaloy-4 and for a polyphase zirconia layer were back-calculated for a range of times required to wear to a critical depth. Inputs for the model, like the friction coefficient, are taken from the tribology literature in lieu of in-reactor tribological data. Concepts of classical fretting were used as a basis, but are modified to enable the model to accommodate the complexities of the PWR environment. Factors like grid spring relaxation, pre-oxidation of the cladding, multiple oxide phases, gap formation, impact, and hydrogen embrittlement are part of the problem definition, but uncertainties in their relative roles limit the ability to validate the model. Sample calculations of wear depth versus time in the cladding illustrate how GTRF wear might occur in a discontinuous fashion during months-long reactor operating cycles. A means to account for grid/rod gaps and repetitive impact effects on GTRF wear is proposed.
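The report's closed-form model itself is not reproduced here, but the general idea of accumulating wear depth from frictional work with a stage-dependent wear coefficient (incubation, oxide, base alloy) can be sketched as below; every numeric value is a placeholder for illustration, not a value from the report.

import numpy as np

# placeholder parameters (illustrative only, not values from the report)
mu             = 0.3        # friction coefficient
normal_force   = 5.0        # N, grid spring contact force
slip_per_cycle = 50e-6      # m, relative slip amplitude per fretting cycle
cycles_per_day = 1.0e6
contact_area   = 1.0e-6     # m^2
oxide_thickness = 2.0e-6    # m, pre-oxidised zirconia layer

# stage-dependent "wear coefficients": wear volume per unit frictional work (m^3/J)
K_INCUBATION, K_OXIDE, K_ALLOY = 0.0, 1.0e-15, 5.0e-15
incubation_days = 10.0

def wear_depth(days):
    """Cumulative wear depth (m) after a given number of operating days."""
    depth = 0.0
    work_per_day = mu * normal_force * slip_per_cycle * cycles_per_day
    for d in np.arange(1.0, days + 1.0):
        if d <= incubation_days:
            k = K_INCUBATION                 # incubation stage: no measurable wear
        elif depth < oxide_thickness:
            k = K_OXIDE                      # wearing through the oxide layer
        else:
            k = K_ALLOY                      # wearing the base alloy
        depth += k * work_per_day / contact_area
    return depth

print(f"depth after 200 days: {wear_depth(200):.2e} m")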
Bender, P.; Bogart, L. K.; Posth, O.; Szczerba, W.; Rogers, S. E.; Castro, A.; Nilsson, L.; Zeng, L. J.; Sugunan, A.; Sommertune, J.; Fornara, A.; González-Alonso, D.; Barquín, L. Fernández; Johansson, C.
2017-01-01
The structural and magnetic properties of magnetic multi-core particles were determined by numerical inversion of small angle scattering and isothermal magnetisation data. The investigated particles consist of iron oxide nanoparticle cores (9 nm) embedded in poly(styrene) spheres (160 nm). A thorough physical characterisation of the particles included transmission electron microscopy, X-ray diffraction and asymmetrical flow field-flow fractionation. Their structure was ultimately disclosed by an indirect Fourier transform of static light scattering, small angle X-ray scattering and small angle neutron scattering data of the colloidal dispersion. The extracted pair distance distribution functions clearly indicated that the cores were mostly accumulated in the outer surface layers of the poly(styrene) spheres. To investigate the magnetic properties, the isothermal magnetisation curves of the multi-core particles (immobilised and dispersed in water) were analysed. The study stands out by applying the same numerical approach to extract the apparent moment distributions of the particles as for the indirect Fourier transform. It could be shown that the main peak of the apparent moment distributions correlated to the expected intrinsic moment distribution of the cores. Additional peaks were observed which signaled deviations of the isothermal magnetisation behavior from the non-interacting case, indicating weak dipolar interactions. PMID:28397851
NASA Technical Reports Server (NTRS)
Rogge, Matthew D. (Inventor); Moore, Jason P. (Inventor)
2014-01-01
Shape of a multi-core optical fiber is determined by positioning the fiber in an arbitrary initial shape and measuring strain over the fiber's length using strain sensors. A three-coordinate p-vector is defined for each core as a function of the distance of the corresponding cores from a center point of the fiber and a bending angle of the cores. The method includes calculating, via a controller, an applied strain value of the fiber using the p-vector and the measured strain for each core, and calculating strain due to bending as a function of the measured and the applied strain values. Additionally, an apparent local curvature vector is defined for each core as a function of the calculated strain due to bending. Curvature and bend direction are calculated using the apparent local curvature vector, and fiber shape is determined via the controller using the calculated curvature and bend direction.
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
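A stripped-down sketch of the genetic-algorithm core of such a mapper (without the eight heuristics, preemptive remapping or the distributed runtime): individuals are task-to-processor mappings and fitness is the makespan on a hypothetical heterogeneous execution-time matrix.

import numpy as np

rng = np.random.default_rng(1)

n_tasks, n_procs = 40, 6
# exec_time[t, p]: time of task t on heterogeneous processor p (illustrative)
exec_time = rng.uniform(1.0, 20.0, size=(n_tasks, n_procs))

def makespan(mapping):
    """Total execution time = finish time of the most loaded processor."""
    loads = np.zeros(n_procs)
    np.add.at(loads, mapping, exec_time[np.arange(n_tasks), mapping])
    return loads.max()

def evolve(pop_size=60, generations=200, p_mut=0.05):
    pop = rng.integers(0, n_procs, size=(pop_size, n_tasks))
    for _ in range(generations):
        fitness = np.array([makespan(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        # one-point crossover between each survivor and a random partner
        children = parents.copy()
        for child in children:
            other = parents[rng.integers(len(parents))]
            cut = rng.integers(1, n_tasks)
            child[cut:] = other[cut:]
        # mutation: reassign a few tasks to random processors
        mask = rng.random(children.shape) < p_mut
        children[mask] = rng.integers(0, n_procs, size=mask.sum())
        pop = np.vstack([parents, children])
    return min(pop, key=makespan)

best = evolve()
print("best makespan:", makespan(best))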
Flexible architecture of data acquisition firmware based on multi-behaviors finite state machine
NASA Astrophysics Data System (ADS)
Arpaia, Pasquale; Cimmino, Pasquale
2016-11-01
A flexible firmware architecture for different kinds of data acquisition systems, ranging from high-precision bench instruments to low-cost wireless transducers networks, is presented. The key component is a multi-behaviors finite state machine, easily configurable to both low- and high-performance requirements, to diverse operating systems, as well as to on-line and batch measurement algorithms. The proposed solution was validated experimentally on three case studies with data acquisition architectures: (i) concentrated, in a high-precision instrument for magnetic measurements at CERN, (ii) decentralized, for telemedicine remote monitoring of patients at home, and (iii) distributed, for remote monitoring of building's energy loss.
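A toy sketch of the configurable state-machine idea in plain Python: the same dispatch loop runs different acquisition behaviours simply by swapping the transition table. The state, event and behaviour names are invented for illustration and are not those of the cited firmware.

class MultiBehaviorFSM:
    """Finite state machine whose transition table is selected at run time."""

    # each behaviour is just a different (state, event) -> state table
    BEHAVIORS = {
        "online": {("idle", "start"): "acquiring",
                   ("acquiring", "sample_ready"): "streaming",
                   ("streaming", "sample_done"): "acquiring",
                   ("acquiring", "stop"): "idle"},
        "batch":  {("idle", "start"): "acquiring",
                   ("acquiring", "buffer_full"): "processing",
                   ("processing", "done"): "idle"},
    }

    def __init__(self, behavior):
        self.table = self.BEHAVIORS[behavior]
        self.state = "idle"

    def dispatch(self, event):
        # unknown events are ignored, keeping the firmware loop simple
        self.state = self.table.get((self.state, event), self.state)
        return self.state

fsm = MultiBehaviorFSM("batch")
for ev in ["start", "buffer_full", "done"]:
    print(ev, "->", fsm.dispatch(ev))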
Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V
2015-01-01
Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747
Software Aids Visualization of Computed Unsteady Flow
NASA Technical Reports Server (NTRS)
Kao, David; Kenwright, David
2003-01-01
Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow visualization software developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extractions of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.
2008-07-01
... generation of process partitioning, a thread pipelining becomes possible. In this paper we briefly summarize the requirements and trends for FADEC based ... FADEC environment, presenting a hypothetical realization of an example application. Finally we discuss the application of Time-Triggered ... based control applications of the future. Subject terms: gas turbine, FADEC, multi-core processing technology, distributed based control
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-13
... (iPWR). This guidance applies to environmental reviews associated with iPWR applications for limited...
NASA Astrophysics Data System (ADS)
Wells, R. K.; Xiong, W.; Bae, Y.; Sesti, E.; Skemer, P. A.; Giammar, D.; Conradi, M.; Ellis, B. R.; Hayes, S. E.
2015-12-01
The injection of CO2 into fractured basalts is one of several possible solutions to mitigate global climate change; however, research on carbonation in natural basalts in relation to carbon sequestration is limited, which impedes our understanding of the processes that may influence the viability of this strategy. We are conducting bench-scale experiments to characterize the mineral dissolution and precipitation and the evolution of permeability in synthetic and natural basalts exposed to CO2-rich fluids. Analytical methods include optical and electron microscopy, electron microprobe, Raman spectroscopy, nuclear magnetic resonance (NMR), and micro X-ray computed tomography (μCT) with variable flow rates. Reactive rock and mineral samples consist of 1) packed powders of olivine or natural basalt, and 2) sintered cores of olivine or a synthetic basalt mixture. Each sample was reacted in a batch reactor at 100 °C, and 100 bars CO2. Magnesite is detected within one day in olivine packed beds, and within 15 days in olivine sintered cores. Forsterite and synthetic basalt sinters were also reacted in an NMR apparatus at 102 °C and 65 bars CO2. Carbonate signatures are observed within 72 days of reaction. Longer reaction times are needed for carbonate precipitation in natural basalt samples. Cores from the Columbia River flood basalt flows that contain Mg-rich olivine and a serpentinized basalt from Colorado were cut lengthwise, the interface mechanically roughened or milled, and edges sealed with epoxy to simulate a fractured interface. The cores were reacted in a batch reactor at 50-150 °C and 100 bars CO2. At lower temperatures, calcite precipitation is rare within the fracture after 4 weeks. At higher temperatures, numerous calcite and aragonite crystals are observed within 1 mm of the fracture entrance along the roughened fracture surface. In flow-through experiments, permeability decreased along the fracture paths within a few hours to several days of flow.
Architecture of optical sensor for recognition of multiple toxic metal ions from water.
Shenashen, M A; El-Safty, S A; Elshehy, E A
2013-09-15
Here, we designed a novel optical sensor based on wormhole hexagonal mesoporous core/multi-shell silica nanoparticles that enables the selective recognition and removal of extremely toxic metals from drinking water. The surface coating of mesoporous core/double-shell silica platforms by several consecutive decorations, using a cationic surfactant with double alkyl tails (CS-DAT) and then a synthesized dicarboxylate 1,5-diphenyl-3-thiocarbazone (III) signaling probe, enabled us to create a unique hierarchical multi-shell sensor. In this design, high loading capacity and wrapping of the CS-DAT and III organic moieties could be achieved, leading to the formation of a silica core with multiple shells formed from double-silica, CS-DAT, and III dressing layers. In this sensing system, notable changes in the color and reflectance intensity of the multi-shelled sensor for Cu(2+), Co(2+), Cd(2+), and Hg(2+) ions were observed at pH 2, 8, 9.5 and 11.5, respectively. The multi-shelled sensor also enables continuous monitoring of several different toxic metal ions and efficient multi-ion sensing and removal, with good reversibility, selectivity, and signal stability. Copyright © 2013 Elsevier B.V. All rights reserved.
Data Parallel Bin-Based Indexing for Answering Queries on Multi-Core Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosink, Luke; Wu, Kesheng; Bethel, E. Wes
2009-06-02
The multi-core trend in CPUs and general purpose graphics processing units (GPUs) offers new opportunities for the database community. The increase of cores at exponential rates is likely to affect virtually every server and client in the coming decade, and presents database management systems with a huge, compelling disruption that will radically change how processing is done. This paper presents a new parallel indexing data structure for answering queries that takes full advantage of the increasing thread-level parallelism emerging in multi-core architectures. In our approach, our Data Parallel Bin-based Index Strategy (DP-BIS) first bins the base data, and then partitions and stores the values in each bin as a separate, bin-based data cluster. In answering a query, the procedures for examining the bin numbers and the bin-based data clusters offer the maximum possible level of concurrency; each record is evaluated by a single thread and all threads are processed simultaneously in parallel. We implement and demonstrate the effectiveness of DP-BIS on two multi-core architectures: a multi-core CPU and a GPU. The concurrency afforded by DP-BIS allows us to fully utilize the thread-level parallelism provided by each architecture--for example, our GPU-based DP-BIS implementation simultaneously evaluates over 12,000 records with an equivalent number of concurrently executing threads. In comparing DP-BIS's performance across these architectures, we show that the GPU-based DP-BIS implementation requires significantly less computation time to answer a query than the CPU-based implementation. We also demonstrate in our analysis that DP-BIS provides better overall performance than the commonly utilized CPU and GPU-based projection index. Finally, due to data encoding, we show that DP-BIS accesses significantly smaller amounts of data than index strategies that operate solely on a column's base data; this smaller data footprint is critical for parallel processors that possess limited memory resources (e.g., GPUs).
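A serial NumPy sketch of the data layout described above: bin the base data, store each bin's values as its own cluster, and answer a range query by accepting interior bins wholesale while scanning only the two boundary bins. This is merely an illustration of the indexing idea on synthetic data, not the paper's CPU/GPU implementation.

import numpy as np

rng = np.random.default_rng(2)
base = rng.uniform(0.0, 100.0, size=1_000_000)

# build the index: per-record bin numbers plus per-bin data clusters
n_bins = 64
edges = np.linspace(base.min(), base.max(), n_bins + 1)
bin_no = np.clip(np.digitize(base, edges) - 1, 0, n_bins - 1)
order = np.argsort(bin_no, kind="stable")
sorted_ids, sorted_vals = order, base[order]            # record ids and values
bin_start = np.searchsorted(bin_no[order], np.arange(n_bins + 1))

def range_query(lo, hi):
    """Return record ids with lo <= value < hi using the bin-based index."""
    b_lo, b_hi = np.searchsorted(edges, [lo, hi], side="right") - 1
    hits = []
    for b in range(max(b_lo, 0), min(b_hi, n_bins - 1) + 1):
        ids = sorted_ids[bin_start[b]:bin_start[b + 1]]
        if b_lo < b < b_hi:                 # interior bin: take it wholesale
            hits.append(ids)
        else:                               # boundary bin: scan its data cluster
            vals = sorted_vals[bin_start[b]:bin_start[b + 1]]
            hits.append(ids[(vals >= lo) & (vals < hi)])
    return np.concatenate(hits) if hits else np.array([], dtype=int)

ids = range_query(25.0, 30.0)
assert np.all((base[ids] >= 25.0) & (base[ids] < 30.0))

In the parallel versions described in the paper, the per-record candidate checks in the boundary bins are what the many hardware threads evaluate concurrently.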
Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation
NASA Technical Reports Server (NTRS)
Leachman, Jonathan
2010-01-01
A three-channel data acquisition system was developed for the NASA Multi-Frequency Radar (MFR) system. The system is based on a commercial-off-the-shelf (COTS) industrial PC (personal computer) and two dual-channel 14-bit digital receiver cards. The decimated complex envelope representations of the three radar signals are passed to the host PC via the PCI bus, and then processed in parallel by multiple cores of the PC CPU (central processing unit). The innovation is this parallelization of the radar data processing using multiple cores of a standard COTS multi-core CPU. The data processing portion of the data acquisition software was built using autonomous program modules or threads, which can run simultaneously on different cores. A master program module calculates the optimal number of processing threads, launches them, and continually supplies each with data. The benefit of this new parallel software architecture is that COTS PCs can be used to implement increasingly complex processing algorithms on an increasing number of radar range gates and data rates. As new PCs become available with higher numbers of CPU cores, the software will automatically utilize the additional computational capacity.
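A rough sketch of the master/worker idea using Python's standard thread pool: the master sizes the pool from the available cores and keeps workers supplied with blocks of complex-envelope data, while a placeholder pulse-compression routine stands in for the real per-channel processing. The block sizes, reference chirp and data generator are hypothetical.

import os
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_block(block):
    """Placeholder per-channel processing: pulse compression via FFT."""
    ref = np.ones(64, dtype=complex)                  # hypothetical reference chirp
    spec = np.fft.fft(block, axis=-1) * np.conj(np.fft.fft(ref, block.shape[-1]))
    return np.fft.ifft(spec, axis=-1)

def acquire_blocks(n_blocks, gates=4096, pulses=128):
    """Stand-in for the digital receiver cards delivering complex envelopes."""
    rng = np.random.default_rng(3)
    for _ in range(n_blocks):
        yield rng.standard_normal((pulses, gates)) + 1j * rng.standard_normal((pulses, gates))

# master module: size the pool from the available cores and keep it fed
n_workers = max(1, (os.cpu_count() or 2) - 1)   # leave one core for acquisition/I/O
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(process_block, acquire_blocks(32)))
print(len(results), results[0].shape)

NumPy releases the interpreter lock in much of its compiled code, so a thread pool can scale here; a production system might instead pin worker processes to cores.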
Integral Inherently Safe Light Water Reactor (I 2S-LWR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrovic, Bojan; Memmott, Matthew; Boy, Guy
This final report summarizes results of the multi-year effort performed during the period 2/2013-12/2016 under the DOE NEUP IRP Project “Integral Inherently Safe Light Water Reactors (I 2S-LWR)”. The goal of the project was to develop a concept of a 1 GWe PWR with integral configuration and inherent safety features, at the same time accounting for lessons learned from the Fukushima accident, and keeping in mind the economic viability of the new concept. Essentially (see Figure 1-1) the project aimed to implement attractive safety features, typically found only in SMRs, to a larger power (1 GWe) reactor, to address the preference of some utilities in the US power market for unit power level on the order of 1 GWe.
NASA Astrophysics Data System (ADS)
Ali, Amir R.; Kamel, Mohamed A.
2017-05-01
This paper studies the effect of the electrostriction force on a single optical dielectric core coated with multiple layers, based on whispering gallery modes (WGM). The sensing element is a dielectric core made of polymeric material coated with multiple layers having different dielectric and mechanical properties. The external electric field deforms the sensing element, causing shifts in its WGM spectrum. The multi-layer structure enhances the body and pressure forces acting on the core of the sensing element. Due to the gradient in dielectric permittivity, pressure forces are created at the interface between adjacent layers. Also, the gradient in Young's modulus affects the overall stiffness of the optical sensor. In turn, the sensitivity of the optical sensor to the electric field is increased when the material of each layer is selected properly. A mathematical model is used to study the effect of the multi-layer structure. Two layering techniques are considered to increase the sensor's sensitivity: (i) a pressure force enhancement technique, and (ii) a Young's modulus reduction technique. In the first technique, Young's modulus is kept constant for all layers while the dielectric permittivity varies, and the results are affected by the dielectric permittivity of the outer medium surrounding the cavity. If the medium's dielectric permittivity is greater than that of the cavity, then ordering the layers with ascending permittivity (the core having the smallest dielectric permittivity) yields the highest sensitivity to the applied electric field, and vice versa. In the second technique, Young's modulus varies along the layers while the dielectric permittivity has a constant value per layer; here, a descending order enhances the sensitivity. Overall, results show that a multi-layer cavity based on these techniques enhances the sensitivity compared to a typical polymeric optical sensor.
2013-01-01
Background In this study, a multi-parent population of barley cultivars was grown in the field for two consecutive years and then straw saccharification (sugar release by enzymes) was subsequently analysed in the laboratory to identify the cultivars with the highest consistent sugar yield. This experiment was used to assess the benefit of accounting for both the multi-phase and multi-environment aspects of large-scale phenotyping experiments with field-grown germplasm through sound statistical design and analysis. Results Complementary designs at both the field and laboratory phases of the experiment ensured that non-genetic sources of variation could be separated from the genetic variation of cultivars, which was the main target of the study. The field phase included biological replication and plot randomisation. The laboratory phase employed re-randomisation and technical replication of samples within a batch, with a subset of cultivars chosen as duplicates that were randomly allocated across batches. The resulting data was analysed using a linear mixed model that incorporated field and laboratory variation and a cultivar by trial interaction, and ensured that the cultivar means were more accurately represented than if the non-genetic variation was ignored. The heritability detected was more than doubled in each year of the trial by accounting for the non-genetic variation in the analysis, clearly showing the benefit of this design and approach. Conclusions The importance of accounting for both field and laboratory variation, as well as the cultivar by trial interaction, by fitting a single statistical model (multi-environment trial, MET, model), was evidenced by the changes in list of the top 40 cultivars showing the highest sugar yields. Failure to account for this interaction resulted in only eight cultivars that were consistently in the top 40 in different years. The correspondence between the rankings of cultivars was much higher at 25 in the MET model. This approach is suited to any multi-phase and multi-environment population-based genetic experiment. PMID:24359577
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Andrs; Ray Berry; Derek Gaston
The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) and experimental based closure models. RELAP-7 will eventually utilize well posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written with object oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open source software packages, such as PETSC (a nonlinear solver developed at Argonne National Laboratory) and LibMesh (a Finite Element Analysis package developed at University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore RELAP-7 code developers only need to focus on physics and user experiences. By using the MOOSE development environment, RELAP-7 code is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE based applications ranging from 3-D transient neutron transport, detailed 3-D transient fuel performance analysis, to long-term material aging. Multi-physics and multiple dimensional analyses capabilities can be obtained by coupling RELAP-7 and other MOOSE based applications and by leveraging with capabilities developed by other DOE programs. This allows restricting the focus of RELAP-7 to systems analysis-type simulations and gives priority to retain and significantly extend RELAP5's capabilities.
Hadoop distributed batch processing for Gaia: a success story
NASA Astrophysics Data System (ADS)
Riello, Marco
2015-12-01
The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data including the low resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI has been the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice allowing DPCI unmatched processing throughput and reliability within DPAC to the point that other DPCs have started following our footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI and the excellent results in terms of performance of the system.
Multi-view L2-SVM and its multi-view core vector machine.
Huang, Chengquan; Chung, Fu-lai; Wang, Shitong
2016-03-01
In this paper, a novel L2-SVM based classifier Multi-view L2-SVM is proposed to address multi-view classification tasks. The proposed Multi-view L2-SVM classifier does not have any bias in its objective function and hence has the flexibility like μ-SVC in the sense that the number of the yielded support vectors can be controlled by a pre-specified parameter. The proposed Multi-view L2-SVM classifier can make full use of the coherence and the difference of different views through imposing the consensus among multiple views to improve the overall classification performance. Besides, based on the generalized core vector machine GCVM, the proposed Multi-view L2-SVM classifier is extended into its GCVM version MvCVM which can realize its fast training on large scale multi-view datasets, with its asymptotic linear time complexity with the sample size and its space complexity independent of the sample size. Our experimental results demonstrated the effectiveness of the proposed Multi-view L2-SVM classifier for small scale multi-view datasets and the proposed MvCVM classifier for large scale multi-view datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
Li, Xiangyu; Xie, Nijie; Tian, Xinyue
2017-01-01
This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications has the characteristic of data-dependent behavior, we introduce a branch handling mechanism into the solution as well. The experimental result shows that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and use more than 99.9% of the power budget. PMID:28208730
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okrent, D.
1997-06-23
This is the final report on DOE Award No. DE-FG03-92ER75838 A000, a three year matching grant program with Pacific Gas and Electric Company (PG and E) to support strengthening of the fission reactor nuclear science and engineering program at UCLA. The program began on September 30, 1992. The program has enabled UCLA to use its strong existing background to train students in technological problems which simultaneously are of interest to the industry and of specific interest to PG and E. The program included undergraduate scholarships, graduate traineeships and distinguished lecturers. Four topics were selected for research the first year, with the benefit of active collaboration with personnel from PG and E. These topics remained the same during the second year of this program. During the third year, two topics ended with the departure of the students involved (reflux cooling in a PWR during a shutdown and erosion/corrosion of carbon steel piping). Two new topics (long-term risk and fuel relocation within the reactor vessel) were added; hence, the topics during the third year award were the following: reflux condensation and the effect of non-condensable gases; erosion/corrosion of carbon steel piping; use of artificial intelligence in severe accident diagnosis for PWRs (diagnosis of plant status during a PWR station blackout scenario); the influence on risk of organization and management quality; considerations of long term risk from the disposal of hazardous wastes; and a probabilistic treatment of fuel motion and fuel relocation within the reactor vessel during a severe core damage accident.
Polytopol computing for multi-core and distributed systems
NASA Astrophysics Data System (ADS)
Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan
2009-05-01
Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, that takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected to such hardware that a system function looks as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation), they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, and give the network the ability to organize itself into some of many topologies. Finally we will discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.
Pressurized-water reactor internals aging degradation study. Phase 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luk, K.H.
1993-09-01
This report documents the results of a Phase I study on the effects of aging degradation on PWR internals. Primary stressors for the internals are generated by the primary coolant flow; they include unsteady hydrodynamic forces and pump-generated pressure pulsations. Other stressors are applied loads, manufacturing processes, impurities in the coolant and exposure to fast neutron fluxes. A survey of reported aging-related failure information indicates that fatigue, stress corrosion cracking (SCC) and mechanical wear are the three major aging-related degradation mechanisms for PWR internals. Significant reported failures include thermal shield flow-induced vibration problems, SCC in guide tube support pins and core support structure bolts, fatigue-induced core baffle water-jet impingement problems and excess wear in flux thimbles. Many of the reported problems have been resolved by accepted engineering practices. Uncertainties remain in the assessment of long-term neutron irradiation effects and environmental factors in high-cycle fatigue failures. Reactor internals are examined by visual inspections, and this technique is access limited. Improved inspection methods, especially ones with an early failure detection capability, can enhance the safety and efficiency of reactor operations.
Fault Tolerance Middleware for a Multi-Core System
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Springer, Paul L.; Zima, Hans P.; James, Mark; Wagner, David A.
2012-01-01
Fault Tolerance Middleware (FTM) provides a framework to run on a dedicated core of a multi-core system and handles detection of single-event upsets (SEUs), and the responses to those SEUs, occurring in an application running on multiple cores of the processor. This software was written expressly for a multi-core system and can support different kinds of fault strategies, such as introspection, algorithm-based fault tolerance (ABFT), and triple modular redundancy (TMR). It focuses on providing fault tolerance for the application code, and represents the first step in a plan to eventually include fault tolerance in message passing and the FTM itself. In the multi-core system, the FTM resides on a single, dedicated core, separate from the cores used by the application. This is done in order to isolate the FTM from application faults and to allow it to swap out any application core for a substitute. The structure of the FTM consists of an interface to a fault tolerant strategy module, a responder module, a fault manager module, an error factory, and an error mapper that determines the severity of the error. In the present reference implementation, the only fault tolerant strategy implemented is introspection. The introspection code waits for an application node to send an error notification to it. It then uses the error factory to create an error object, and at this time, a severity level is assigned to the error. The introspection code uses its built-in knowledge base to generate a recommended response to the error. Responses might include ignoring the error, logging it, rolling back the application to a previously saved checkpoint, swapping in a new node to replace a bad one, or restarting the application. The original error and recommended response are passed to the top-level fault manager module, which invokes the response. The responder module also notifies the introspection module of the generated response. This provides additional information to the introspection module that it can use in generating its next response. For example, if the responder triggers an application rollback and errors are still occurring, the introspection module may decide to recommend an application restart.
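A toy sketch of the control flow described above (error factory, severity mapping, knowledge-base lookup, and responder feedback to the introspection module); the class names, severity levels and responses are invented for illustration and are not the flight software's.

from dataclasses import dataclass

@dataclass
class Error:
    node: int
    kind: str
    severity: str

SEVERITY_MAP = {"bitflip": "minor", "hang": "major", "crash": "critical"}

RESPONSES = {            # introspection knowledge base: severity -> response
    "minor":    "log",
    "major":    "rollback_to_checkpoint",
    "critical": "swap_node_and_restart",
}

def error_factory(node, kind):
    """Create an error object and assign its severity (error mapper)."""
    return Error(node, kind, SEVERITY_MAP.get(kind, "critical"))

class Introspector:
    def __init__(self):
        self.history = []            # responder feedback informs later decisions

    def recommend(self, err):
        response = RESPONSES[err.severity]
        # escalate if the same node keeps failing despite earlier responses
        if sum(e.node == err.node for e in self.history) >= 3:
            response = "swap_node_and_restart"
        self.history.append(err)
        return response

def responder(err, response):
    print(f"node {err.node}: {err.kind} ({err.severity}) -> {response}")

ftm = Introspector()
for node, kind in [(2, "bitflip"), (2, "bitflip"), (2, "hang"), (2, "bitflip")]:
    e = error_factory(node, kind)
    responder(e, ftm.recommend(e))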
NASA Astrophysics Data System (ADS)
Ren, Lijiao; Ahn, Yongtae; Hou, Huijie; Zhang, Fang; Logan, Bruce E.
2014-07-01
Power production of four hydraulically connected microbial fuel cells (MFCs) was compared when the reactors were operated with individual electrical circuits (individual), and when the four anodes were wired together and connected to the four cathodes, also wired together (combined), under fed-batch or continuous flow conditions. Power production under these different conditions could not be compared on the basis of a single resistance; instead, polarization tests were required to assess individual performance relative to the combined MFCs. Based on the power curves, the power produced by the combined MFCs (2.12 ± 0.03 mW, 200 Ω) was the same as the summed power (2.13 mW, 50 Ω) produced by the four individual reactors in fed-batch mode. With continuous flow through the four MFCs, the maximum power (0.59 ± 0.01 mW) produced by the combined MFCs was slightly lower than the summed maximum power of the four individual reactors (0.68 ± 0.02 mW). There was a small parasitic current flow from adjacent anodes and cathodes, but overall performance was relatively unaffected. These findings demonstrate that optimal power production by reactors connected hydraulically and electrically can be predicted from the performance of the individual reactors.
Diverse Expected Gradient Active Learning for Relative Attributes.
You, Xinge; Wang, Ruxin; Tao, Dacheng
2014-06-02
The use of relative attributes for semantic understanding of images and videos is a promising way to improve communication between humans and machines. However, it is extremely labor- and time-consuming to define multiple attributes for each instance in large amounts of data. One option is to incorporate active learning, so that informative samples can be actively discovered and then labeled. However, most existing active-learning methods select samples one at a time (serial mode), and may therefore lose efficiency when learning multiple attributes. In this paper, we propose a batch-mode active-learning method, called Diverse Expected Gradient Active Learning (DEGAL). This method integrates an informativeness analysis and a diversity analysis to form a diverse batch of queries. Specifically, the informativeness analysis employs the expected pairwise gradient length as a measure of informativeness, while the diversity analysis enforces a constraint on the proposed diverse gradient angle. Since simultaneous optimization of these two parts is intractable, we utilize a two-step procedure to obtain the diverse batch of queries. A heuristic method is also introduced to suppress imbalanced multi-class distributions. Empirical evaluations on three different databases demonstrate the effectiveness and efficiency of the proposed approach.
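A simplified sketch of the two-step batch selection: rank candidates by an informativeness score (standing in for the expected pairwise gradient length) and then greedily keep only those whose feature directions remain sufficiently diverse under an angle constraint. The features, scores and thresholds below are synthetic and illustrative.

import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 32))          # candidate (unlabeled) feature vectors
informativeness = rng.random(500)           # stand-in for expected gradient length

def select_batch(X, scores, batch_size=10, min_angle_deg=30.0):
    """Two-step selection: informativeness ranking, then a diversity filter."""
    cos_max = np.cos(np.deg2rad(min_angle_deg))
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    chosen = []
    for idx in np.argsort(-scores):                 # most informative first
        if all(abs(unit[idx] @ unit[j]) < cos_max for j in chosen):
            chosen.append(idx)
        if len(chosen) == batch_size:
            break
    return chosen

batch = select_batch(X, informativeness)
print(batch)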
Development of an extensible dual-core wireless sensing node for cyber-physical systems
NASA Astrophysics Data System (ADS)
Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Mortiz; Lynch, Jerome P.; Wang, Yang; Swartz, A.
2014-04-01
The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyberphysical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback controls demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independent of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core microcontroller design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing the Martlet's ability to monitor and control physical systems such as wind turbines and buildings.
Fabrication of SiO2@ZrO2@Y2O3:Eu3+ core-multi-shell structured phosphor.
Gao, Xuan; He, Diping; Jiao, Huan; Chen, Juan; Meng, Xin
2011-08-01
A ZrO2 interface was designed to block the reaction between SiO2 and Y2O3 in the SiO2@Y2O3:Eu core-shell structured phosphor. SiO2@ZrO2@Y2O3:Eu core-multi-shell phosphors were successfully synthesized by combining an LBL method with a sol-gel process. Based on electron microscopy, X-ray diffraction, and spectroscopy experiments, compelling evidence for the formation of the Y2O3:Eu outer shell on ZrO2 was presented. The presence of the ZrO2 layer on the SiO2 core can effectively block the reaction between the SiO2 core and the Y2O3 shell. With this structure, the reaction temperature of the SiO2 core and Y2O3 shell in the SiO2@Y2O3:Eu core-shell structured phosphor can be increased by about 200-300 degrees C, and the luminescent intensity of the phosphor is clearly improved. Under ultraviolet excitation (254 nm), the Eu3+ ion mainly shows its characteristic red (611 nm, 5D0-7F2) emission in the core-multi-shell particles from the Y2O3:Eu3+ shells. The emission intensity of the Eu3+ ions can be tuned by the annealing temperature, the number of coating cycles, and the thickness of the ZrO2 interface.
Neutron Collar Evolution and Fresh PWR Assembly Measurements with a New Fast Neutron Passive Collar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menlove, Howard Olsen; Geist, William H.; Root, Margaret A.
The passive neutron collar approach removes the effect of poison rods when using a 1 mm Gd liner. This project sets out to address the following challenges: BWR fuel assemblies have less mass and less neutron multiplication than PWR assemblies, and cosmic ray spallation neutron bursts must be removed effectively via QC tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Lanning, Donald D.
2010-02-01
The hypothesis of this paper is that the Zircaloy clad fuel source is minimal and that secondary startup neutron sources are the significant contributors to the tritium in the RCS that was previously attributed to release from fuel. Currently there are large uncertainties in the attribution of tritium in a Pressurized Water Reactor (PWR) Reactor Coolant System (RCS). The measured amount of tritium in the coolant cannot be separated out empirically into its individual sources. Therefore, to quantify individual contributors, all sources of tritium in the RCS of a PWR must be understood theoretically and verified by the sum of the individual components equaling the measured values.
Tank 241-AY-101 Privatization Push Mode Core Sampling and Analysis Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
TEMPLETON, A.M.
2000-01-12
This sampling and analysis plan (SAP) identifies characterization objectives pertaining to sample collection, laboratory analytical evaluation, and reporting requirements for samples obtained from tank 241-AY-101. The purpose of this sampling event is to obtain information about the characteristics of the contents of 241-AY-101 required to satisfy Data Quality Objectives For RPP Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For High-Level Waste Feed Batch X (HLW DQO) (Nguyen 1999a), Data Quality Objectives For TWRS Privatization Phase I: Confirm Tank T Is An Appropriate Feed Source For Low-Activity Waste Feed Batch X (LAW DQO) (Nguyen 1999b), Low Activity Waste and High-Level Waste Feed Data Quality Objectives (L and H DQO) (Patello et al. 1999), and Characterization Data Needs for Development, Design, and Operation of Retrieval Equipment Developed through the Data Quality Objective Process (Equipment DQO) (Bloom 1996). Special instructions regarding support to the LAW and HLW DQOs are provided by Baldwin (1999). Push mode core samples will be obtained from risers 15G and 150 to provide sufficient material for the chemical analyses and tests required to satisfy these data quality objectives. The 222-S Laboratory will extrude core samples; composite the liquids and solids; perform chemical analyses on composite and segment samples; archive half-segment samples; and provide subsamples to the Process Chemistry Laboratory. The Process Chemistry Laboratory will prepare test plans and perform process tests to evaluate the behavior of the 241-AY-101 waste undergoing the retrieval and treatment scenarios defined in the applicable DQOs. Requirements for analyses of samples originating in the process tests will be documented in the corresponding test plans and are not within the scope of this SAP.
NASA Astrophysics Data System (ADS)
Khairy, Mohamed; El-Safty, Sherif A.; Shenashen, Mohamed. A.; Elshehy, Emad A.
2013-08-01
The highly toxic properties, bioavailability, and adverse effects of Pb2+ species on the environment and living organisms necessitate periodic monitoring and removal whenever possible of Pb2+ concentrations in the environment. In this study, we designed a novel optical multi-shell nanosphere sensor that enables selective recognition, unrestrained accessibility, continuous monitoring, and efficient removal (on the order of minutes) of Pb2+ ions from water and human blood, i.e., red blood cells (RBCs). The consequent decoration of the mesoporous core/double-shell silica nanospheres through a chemically responsive azo-chromophore with a long hydrophobic tail enabled us to create a unique hierarchical multi-shell sensor. We examined the efficiency of the multi-shell sensor in removing lead ions from the blood to ascertain the potential use of the sensor in medical applications. The lead-induced hemolysis of RBCs in the sensing/capture assay was inhibited by the ability of the hierarchical sensor to remove lead ions from blood. The results suggest the higher flux and diffusion of Pb2+ ions into the mesopores of the core/multi-shell sensor than into the RBC membranes. These findings indicate that the sensor could be used in the prevention of health risks associated with elevated blood lead levels such as anemia.
Modelling Agent-Environment Interaction in Multi-Agent Simulations with Affordances
2010-04-01
... allow operations analysts to conduct statistical studies comparing the effectiveness of different systems or tactics in different scenarios. Instead of ... in a Monte-Carlo batch mode, producing statistical outcomes for particular measures of effectiveness. They typically also run at many times faster ... Combined with annotated signs, the affordances allowed the traveller agents to find their way around the virtual airport and to conduct their business ...
Pei, Ke; Duan, Yu; Qiao, Feng-Xian; Tu, Si-Cong; Liu, Xiao; Wang, Xiao-Li; Song, Xiao-Qing; Fan, Kai-Lei; Cai, Bao-Chang
2016-01-01
An accurate and reliable method combining high-performance liquid chromatographic fingerprinting with multi-ingredient determination was developed and validated to evaluate the influence of sulfur-fumigated Paeoniae Radix Alba on the quality and chemical constituents of Si Wu Tang. Multivariate data analysis, including hierarchical cluster analysis and principal component analysis integrated with the high-performance liquid chromatographic fingerprint and multi-ingredient determination, was employed to evaluate Si Wu Tang in a more objective and scientific way. Interestingly, a total of 37 and 36 peaks were marked as common peaks in ten batches of Si Wu Tang containing sun-dried Paeoniae Radix Alba and ten batches of Si Wu Tang containing sulfur-fumigated Paeoniae Radix Alba, respectively, which indicated a changed fingerprint profile of Si Wu Tang when it contained the sulfur-fumigated herb. Furthermore, the results of simultaneous determination of multiple ingredients showed that the contents of albiflorin and paeoniflorin decreased significantly (P < 0.01), and the contents of gallic acid and Z-ligustilide also decreased to some extent, when Si Wu Tang contained sulfur-fumigated Paeoniae Radix Alba. Therefore, sulfur-fumigation processing may have a great influence on the quality of Chinese herbal prescriptions. PMID:27034892
NASA Astrophysics Data System (ADS)
Li, Min; Meng, Xiaojing; Yuan, Jinhai; Deng, Wenwen; Liang, Xiuke
2018-01-01
In the present study, the adsorption behavior of cadmium (II) ion from aqueous solution onto multi-carboxylic-functionalized silica gel (SG-MCF) has been investigated in detail by means of batch and column experiments. Batch experiments were performed to evaluate the effects of various experimental parameters, such as pH value, contact time and initial concentration, on the adsorption capacity for cadmium (II) ion. The kinetic data were analyzed on the basis of the pseudo-first-order and pseudo-second-order kinetic models; the pseudo-second-order model describes the adsorption process better than the pseudo-first-order model. Equilibrium isotherms for the adsorption of cadmium (II) ion were analyzed with the Freundlich and Langmuir isotherm models, and the results indicate that the Langmuir model credibly expresses the data for cadmium (II) ion adsorption from aqueous solution onto SG-MCF. Various thermodynamic parameters of the adsorption process, including the free energy of adsorption (ΔG0), the enthalpy of adsorption (ΔH0) and the standard entropy change (ΔS0), were calculated to predict the nature of adsorption. The positive value of the enthalpy change and the negative value of the free energy change indicate that the process is endothermic and spontaneous.
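As an aside on how such kinetic fits are typically done, the pseudo-second-order model q_t = k*qe^2*t / (1 + k*qe*t) can be fitted directly with SciPy; the uptake data below are synthetic placeholders, not measurements from the paper.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """q_t = k*qe^2*t / (1 + k*qe*t), uptake (mg/g) as a function of time (min)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# synthetic uptake data (placeholders, not measured values)
t = np.array([5, 10, 20, 30, 60, 90, 120, 180], dtype=float)
q = np.array([12.1, 18.0, 23.5, 26.2, 29.4, 30.5, 31.0, 31.4])

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q, p0=(q.max(), 0.01))
print(f"q_e = {qe_fit:.2f} mg/g, k2 = {k_fit:.4f} g/(mg*min)")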
Adsorption kinetic and desorption studies of Cd2+ on Multi-Carboxylic-Functionalized Silica Gel
NASA Astrophysics Data System (ADS)
Li, Min; Wei, Jian; Meng, Xiaojing; Wu, Zhuqiang; Liang, Xiuke
2018-01-01
In the present study, the adsorption behavior of cadmium(II) ions from aqueous solution onto multi-carboxylic-functionalized silica gel (SG-MCF) was investigated in detail by means of batch and column experiments. Batch experiments were performed to evaluate the effect of contact time on the adsorption capacity for cadmium(II). The kinetic data were analyzed with the pseudo-first-order and pseudo-second-order kinetic models; the pseudo-second-order model described the adsorption process better than the pseudo-first-order model. The adsorption mechanism was examined with intra-particle and film diffusion models, and the adsorption rate onto SG-MCF was found to be governed primarily by film diffusion. In addition, column experiments were conducted to assess the effects of initial inlet concentration and flow rate on breakthrough time and adsorption capacity, thereby ascertaining the practical applicability of the adsorbent. The results suggest that the total amount of adsorbed cadmium(II) increased with decreasing flow rate and increasing inlet concentration. The adsorption-desorption experiment confirmed that the adsorption capacity for cadmium(II) did not decrease appreciably after five cycles.
Adsorption kinetic and desorption studies of Cu2+ on Multi-Carboxylic-Functionalized Silica Gel
NASA Astrophysics Data System (ADS)
Li, Min; Meng, Xiaojing; Liu, Yushuang; Hu, Xinju; Liang, Xiuke
2018-01-01
In the present study, the adsorption behavior of copper(II) ions from aqueous solution onto multi-carboxylic-functionalized silica gel (SG-MCF) was investigated in detail by means of batch and column experiments. Batch experiments were performed to evaluate the effect of contact time on the adsorption capacity for copper(II). The kinetic data were analyzed with the pseudo-first-order and pseudo-second-order kinetic models; the pseudo-second-order model described the adsorption process better than the pseudo-first-order model. The adsorption mechanism was examined with intra-particle and film diffusion models, and the adsorption rate onto SG-MCF was found to be governed primarily by film diffusion. In addition, column experiments were conducted to assess the effects of initial inlet concentration and flow rate on breakthrough time and adsorption capacity, thereby ascertaining the practical applicability of the adsorbent. The results suggest that the total amount of adsorbed copper(II) increased with decreasing flow rate and increasing inlet concentration. The adsorption-desorption experiment confirmed that the adsorption capacity for copper(II) did not decrease appreciably after five cycles.
The parallel algorithm for the 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel
2018-04-01
The discrete wavelet transform lies at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing on single-core CPUs. However, for parallel processing on multi-core processors, this scheme is unsuitable because of its large number of steps. On such architectures, the number of steps corresponds to the number of synchronization points at which data are exchanged, and these points often form a performance bottleneck. Our approach appropriately rearranges the calculations inside the transform and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. In evaluations on multi-core CPUs, it consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.
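The rearranged scheme of the paper is not reproduced here; as a minimal illustration of why the separable transform parallelizes naturally across rows, the sketch below computes a single-level 2-D Haar transform with NumPy, handing the independent row (and column) strips to a thread pool. The Haar filter and the thread-pool granularity are illustrative choices, not the authors' method.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def haar_1d(x):
    # Single-level 1-D Haar transform of an even-length signal:
    # first half = approximation coefficients, second half = detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def dwt2_haar(image, workers=4):
    # Separable 2-D transform: rows first, then columns.
    # Row (and column) strips are independent, so they can be mapped onto a pool.
    rows = np.asarray(image, dtype=float)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        row_pass = np.vstack(list(pool.map(haar_1d, rows)))
        col_pass = np.column_stack(list(pool.map(haar_1d, row_pass.T)))
    return col_pass

if __name__ == "__main__":
    img = np.random.rand(512, 512)
    coeffs = dwt2_haar(img)
    print(coeffs.shape)  # (512, 512): low-/high-pass quadrants packed together
```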
Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun
2015-08-10
With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces challenges such as finding efficient strategies for virtual node mapping, virtual link mapping, and spectrum assignment. The problem becomes even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation, and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and dedicated crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. Simulation experiments are conducted on three widely used networks, and the results show the effectiveness of the proposed model and algorithm.
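The paper's actual encoding, crossover, and mutation operators are not available from the abstract; purely as a rough illustration of evolving a virtual-to-physical node mapping, the self-contained Python sketch below runs a toy genetic algorithm against a made-up distance matrix. All data and operators are hypothetical.

```python
import random

# Hypothetical physical network: distance matrix between 6 physical nodes.
DIST = [[0, 2, 9, 4, 7, 3],
        [2, 0, 6, 3, 8, 5],
        [9, 6, 0, 5, 2, 7],
        [4, 3, 5, 0, 6, 2],
        [7, 8, 2, 6, 0, 4],
        [3, 5, 7, 2, 4, 0]]
VIRTUAL_LINKS = [(0, 1), (1, 2), (0, 2)]   # virtual topology over 3 virtual nodes

def cost(mapping):
    # Total physical distance spanned by the mapped virtual links.
    return sum(DIST[mapping[a]][mapping[b]] for a, b in VIRTUAL_LINKS)

def random_individual():
    return random.sample(range(len(DIST)), 3)      # 3 distinct physical nodes

def crossover(p1, p2):
    # Keep the first gene of p1, fill the rest from p2 without duplicates.
    return p1[:1] + [g for g in p2 if g not in p1[:1]][:2]

def mutate(ind, rate=0.2):
    if random.random() < rate:
        i = random.randrange(len(ind))
        choices = [n for n in range(len(DIST)) if n not in ind]
        ind = ind[:i] + [random.choice(choices)] + ind[i + 1:]
    return ind

def evolve(pop_size=30, generations=100):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("best node mapping:", best, "cost:", cost(best))
```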
Jiang, Qian; Zeng, Wenxia; Zhang, Canying; Meng, Zhaoguo; Wu, Jiawei; Zhu, Qunzhi; Wu, Daxiong; Zhu, Haitao
2017-12-19
Photothermal conversion materials have promising applications in many fields and have therefore attracted tremendous attention. However, the multi-functionalization of a single nanostructure to meet the requirements of multiple photothermal applications is still a challenge. The difficulty is that most nanostructures have a specific absorption band and cannot be adapted flexibly to different demands. In the current work, we report the synthesis and multi-band photothermal conversion of Ag@Ag2S core@shell structures with gradually varying shell thickness. We synthesized the core@shell structures through the sulfidation of Ag nanocubes by taking advantage of their spatially different reactivity. The resulting core@shell structures show an octopod-like morphology with an Ag2S bulge sitting at each corner of the Ag nanocubes. The thickness of the Ag2S shell gradually increases from the central surface towards the corners of the structure. The synthesized core@shell structures show a broad-band absorption spectrum from 300 to 1100 nm. An enhanced photothermal conversion effect is observed under illumination with 635, 808, and 1064 nm lasers. The results indicate that the octopod-like Ag@Ag2S core@shell structures exhibit multi-band photothermal conversion. The current work might provide guidance for the design and synthesis of multifunctional photothermal conversion materials.
Multi-phase model development to assess RCIC system capabilities under severe accident conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkland, Karen Vierow; Ross, Kyle; Beeny, Bradley
The Reactor Core Isolation Cooling (RCIC) System is a safety-related system that provides makeup water for core cooling of some Boiling Water Reactors (BWRs) with a Mark I containment. The RCIC System consists of a steam-driven Terry turbine that powers a centrifugal, multi-stage pump for providing water to the reactor pressure vessel. The Fukushima Dai-ichi accidents demonstrated that the RCIC System can play an important role under accident conditions in removing core decay heat. The unexpectedly sustained, good performance of the RCIC System in the Fukushima reactor demonstrates, firstly, that its capabilities are not well understood, and secondly, that the system has high potential for extended core cooling in accident scenarios. Better understanding and analysis tools would allow for more options to cope with a severe accident situation and to reduce the consequences. The objectives of this project were to develop physics-based models of the RCIC System, incorporate them into a multi-phase code and validate the models. This Final Technical Report details the progress throughout the project duration and the accomplishments.
Terascale Cluster for Advanced Turbulent Combustion Simulations
2008-07-25
We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ...InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active...compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and
Bele, M; Jovanovič, P; Pavlišič, A; Jozinović, B; Zorko, M; Rečnik, A; Chernyshova, E; Hočevar, S; Hodnik, N; Gaberšček, M
2014-11-07
We present a novel, scaled-up sol-gel synthesis which enables one to produce 20 g batches of highly active and stable carbon supported PtCu3 nanoparticles as cathode materials for low temperature fuel cell application. We confirm the presence of an ordered intermetallic phase underneath a multilayered Pt-skin together with firm embedment of nanoparticles in the carbon matrix.
Integration of Weather Avoidance and Traffic Separation
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Chamberlain, James P.; Wilson, Sara R.
2011-01-01
This paper describes a dynamic convective weather avoidance concept that compensates for weather motion uncertainties; the integration of this weather avoidance concept into a prototype 4-D trajectory-based Airborne Separation Assurance System (ASAS) application; and test results from a batch (non-piloted) simulation of the integrated application with high traffic densities and a dynamic convective weather model. The weather model can simulate a number of pseudo-random hazardous weather patterns, such as slow- or fast-moving cells and opening or closing weather gaps, and also allows for modeling of onboard weather radar limitations in range and azimuth. The weather avoidance concept employs nested "core" and "avoid" polygons around convective weather cells, and the simulations assess the effectiveness of various avoid polygon sizes in the presence of different weather patterns, using traffic scenarios representing approximately two times the current traffic density in en-route airspace. Results from the simulation experiment show that the weather avoidance concept is effective over a wide range of weather patterns and cell speeds. Avoid polygons that are only 2-3 miles larger than their core polygons are sufficient to account for weather uncertainties in almost all cases, and traffic separation performance does not appear to degrade with the addition of weather polygon avoidance. Additional "lessons learned" from the batch simulation study are discussed in the paper, along with insights for improving the weather avoidance concept.
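As a rough illustration of the nested core/avoid polygon idea (not the prototype ASAS implementation or its geometry engine), the sketch below uses the shapely package to grow an "avoid" polygon a few nautical miles beyond a hypothetical convective "core" polygon and to test whether a trajectory point falls inside it. The coordinates and the 3 NM margin are invented for the example.

```python
from shapely.geometry import Point, Polygon

# Hypothetical convective-weather "core" polygon, in nautical-mile coordinates.
core = Polygon([(0, 0), (8, 0), (10, 6), (4, 9), (-2, 5)])

# "Avoid" polygon: the core grown by a 3 NM margin to absorb motion uncertainty.
avoid = core.buffer(3.0)

waypoint = Point(11.5, 7.0)
print("inside core region :", core.contains(waypoint))
print("inside avoid region:", avoid.contains(waypoint))
print("area ratio avoid/core:", round(avoid.area / core.area, 2))
```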
NASA Astrophysics Data System (ADS)
Gomez, Lizabeth
Gold nanoshells can be designed to possess high light scattering and strong absorption of near-infrared light. Thus, they have the potential to be used in biological applications as contrast agents for diagnostic imaging as well as for thermal ablation of tumor cells in future cancer treatments. In this study, gold nanoshells with dye-loaded star polymer cores were investigated. Uniform near-infrared gold nanoshells with 100 nm diameters were successfully generated using different batches of star polymer templates and were characterized by UV-visible spectroscopy and scanning electron microscopy. The star polymers used were block copolymer structures with a hydrophobic polystyrene (PS) core and a hydrophilic poly(N,N-dimethylaminoethyl methacrylate) (DMAEMA) outer shell. Within this work, a general procedure was established in order to achieve a desired gold nanoshell size regardless of the star polymer batch used, since the synthesis process conditions can cause star polymers to vary in size as well as in the number and length of amino-functionalized arms. Control of the gold nanoshell diameter was optimized after an in-depth analysis of the synthesis parameters that affected the formation and final size of the dye-loaded star polymer gold nanoshells. The main parameters examined were the pH of the gold seeds used to nucleate the templates and the ratio of star polymer to gold hydroxide used during the growth of the outer gold shell.
NASA Astrophysics Data System (ADS)
Rivas Rojas, P. C.; Tancredi, P.; Moscoso Londoño, O.; Knobel, M.; Socolovsky, L. M.
2018-04-01
Single-core, core-shell nanoparticles of iron oxide with a fixed core size, coated with a silica layer of tunable thickness, were prepared by chemical routes, aiming to provide a framework for studying magnetic nanoparticles with controlled dipolar interactions. A single batch of iron oxide nanoparticles of 4.5 nm radius was employed as the cores for all the coated samples. These cores were obtained via thermal decomposition of organic precursors, resulting in nanoparticles covered with an organic layer that was subsequently used to promote the ligand exchange in the inverse microemulsion process employed to coat each nanoparticle with silica. The amount of precursor and the reaction time were varied to obtain different silica shell thicknesses, ranging from 0.5 nm to 19 nm. The formation of the desired structures was corroborated by TEM and SAXS measurements, the single-phase spinel structure of the core was confirmed by XRD, and superparamagnetic features with gradual changes related to dipolar interaction effects were obtained from the study of the applied-field and temperature dependence of the magnetization. To illustrate that the dipolar interactions are consistently controlled, the main magnetic properties are presented and analyzed as a function of the minimum center-to-center distance between the magnetic cores.
Developing core-shell upconversion nanoparticles for optical encoding
NASA Astrophysics Data System (ADS)
Huang, Kai
Lanthanide-doped upconversion nanoparticles (UCNPs) are an emerging class of luminescent materials that emit UV or visible light under near-infrared (NIR) excitation, thereby possessing a large anti-Stokes shift. Considering also their sharp emission bands, excellent photo- and chemical stability, and the almost zero auto-fluorescence under NIR excitation, UCNPs are advantageous for optical encoding. Fabricating core-shell structured UCNPs provides a promising strategy to tune and enhance their upconverting luminescence. However, the energy transfer between core and shell has rarely been studied. Moreover, this strategy has been limited by the difficulty of coating thick shells onto the large cores of UCNPs. To overcome these constraints, the overall aim of this project is to study the inter-layer energy transfer in core-shell UCNPs and to develop an approach for coating thicker shells onto the core UCNPs, in order to fabricate UCNPs with enhanced and tunable luminescence for optical encoding. A strategy for encapsulating UCNPs into hydrogel droplets to fabricate multi-color bead barcodes has also been developed. Firstly, to study the inter-layer energy transfer between the core and shell of core-shell UCNPs, the activator and sensitizer ions were separately doped in the core or shell by fabricating NaYF4:Er@NaYF4:Yb and NaYF4:Yb@NaYF4:Er UCNPs. This eliminated the intra-layer energy transfer, resulting in luminescence based solely on the energy transfer between layers, which facilitated the study of inter-layer energy transfer. The results demonstrated that the NaYF4:Yb@NaYF4:Er structure, with sensitizer ions doped in the core, was preferable because of its strong luminescence, achieved by minimizing the cross relaxations between Er3+ and Yb3+ and the surface quenching. Based on this information, a strategy of enhancing and tuning the upconversion luminescence of core-shell UCNPs by accumulating the sensitizer in the core has been developed. Next, a strategy of coating a thick shell by lutetium doping has been developed. Since Lu3+ has a smaller ionic radius than Y3+, when Lu3+ partially replaces Y3+ in the NaYF4 UCNPs during nanoparticle synthesis, the nucleation process is suppressed and the growth process is promoted, which is favorable for increasing the nanoparticle size and coating a thicker shell onto the core UCNPs. Through the rational doping of Lu3+, core UCNPs with bigger sizes and enhanced luminescence were produced. Using NaLuF4 as the shell material, very thick shells were coated onto core UCNPs, with shell/core ratios of up to 10:1. This led to the fabrication of multi-color UCNPs with well-designed core-shell structures with multiple layers and controllable thicknesses. Finally, a strategy of encapsulating these UCNPs to produce optically encoded micro-beads through high-throughput microfluidics has been developed. The hydrophobic UCNPs were first modified with Pluronic F127 to render them hydrophilic and uniformly distributed in the poly(ethylene glycol) diacrylate (PEGDA) hydrogel precursor. Droplets of the hydrogel precursor were formed in a microfluidic device and cross-linked into micro-beads under UV irradiation. Through encapsulation of multi-color UCNPs and by controlling their ratio, optically encoded multi-color micro-beads have been easily fabricated. These multi-color UCNPs and micro-bead barcodes have great potential for use in multiplexed bioimaging and detection.
Dual-core optical fiber based strain sensor for remote sensing in hard-to-reach areas
NASA Astrophysics Data System (ADS)
Mąkowska, Anna; Szostkiewicz, Łukasz; Kołakowska, Agnieszka; Budnicki, Dawid; Bieńkowska, Beata; Ostrowski, Łukasz; Murawski, Michał; Napierała, Marek; Mergo, Paweł; Nasiłowski, Tomasz
2017-10-01
We present research on optical fiber sensors based on a microstructured multi-core fiber. The elaborated sensor can be used advantageously in hard-to-reach areas because optical fibers can act both as sensing elements and as the signal-delivery medium. By using the sensor, it is possible to increase the level of safety in explosion-endangered areas, e.g. in mine-like objects. As the basis for the remote strain sensor we use dual-core fibers. Multi-core fibers possess a characteristic parameter called crosstalk, which is a measure of the amount of signal that can pass to the adjacent core. The strain-sensitive area is made by creating a tapered section, in which the level of crosstalk is changed. On this basis, we present a broadened concept of fiber-optic sensor design. Strain measurement relies on the fact that the power distribution between the cores of the dual-core fiber changes with the applied strain. The principle of operation allows measurements in both the wavelength and power domains.
2016-05-07
REPORT DOCUMENTATION PAGE (OMB No. 0704-0188). Student Support for Application of Advanced Multi-Core Processor Technologies to Oceanographic Research; Grant Number N00014-12-1-0298. ...communications protocols (i.e. UART, I2C, and SPI), through the handing off of the data to the server APIs. By providing a common set of tools
NASA Astrophysics Data System (ADS)
Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.
2011-12-01
With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Unit) and GPUs (Graphics Processing Unit) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively-parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPU's, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
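Neither the NRM framework nor the JPPF API is reproduced here; purely as a generic illustration of farming out independent, parallelizable tasks of the kind described above (for example, correlating blocks of waveforms against archival data), a Python sketch using only the standard library might look like the following. The task body and chunking are hypothetical.

```python
from concurrent.futures import ProcessPoolExecutor
import math

def correlate_chunk(chunk_id):
    # Stand-in for an expensive, independent task such as correlating one
    # block of waveforms against archival data (the actual work is hypothetical).
    total = sum(math.sin(i) * math.cos(i) for i in range(200_000))
    return chunk_id, total

if __name__ == "__main__":
    chunk_ids = range(16)
    # Each chunk runs in its own process, so the work spreads across CPU cores.
    with ProcessPoolExecutor() as pool:
        for chunk_id, value in pool.map(correlate_chunk, chunk_ids):
            print(f"chunk {chunk_id:2d} -> {value:.3f}")
```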
Development a computer codes to couple PWR-GALE output and PC-CREAM input
NASA Astrophysics Data System (ADS)
Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.
2018-02-01
Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around the nuclear reactor can be obtained. The radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. These input data are derived from the output of another program, PWR-GALE, and from population distribution data written in a specific format. Compiling inputs for PC-CREAM manually requires high accuracy, since it involves large amounts of data in fixed formats, and manual preparation is prone to errors. To minimize errors in input generation, a coupling program between PWR-GALE and PC-CREAM and a program for writing the population distribution in the PC-CREAM input format were created. This work was conducted to develop the coupling between PWR-GALE output and PC-CREAM input and to write the population data in the required formats. The programming was done in Python, which has the advantages of being multi-platform, object-oriented, and interactive. The result of this work is software that couples the source-term data and writes the population distribution data, so that inputs to PC-CREAM can be prepared easily and formatting errors are avoided. The source-term coupling program between PWR-GALE and PC-CREAM is complete, so PC-CREAM inputs for the source term and population distribution can be generated easily and in the desired format.
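The actual PWR-GALE and PC-CREAM file layouts are not given in the abstract, so the Python sketch below only illustrates the general shape of such a coupling script: read a source-term table from one file, write a population table in another fixed-width format. The field layouts, file names, and values are entirely hypothetical.

```python
def read_source_term(path):
    # Hypothetical PWR-GALE-style output: one "nuclide  release_rate" pair per line.
    source_term = {}
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) == 2:
                nuclide, rate = parts
                source_term[nuclide] = float(rate)
    return source_term

def write_population(path, rows):
    # Hypothetical PC-CREAM-style population block: fixed-width sector/distance table.
    with open(path, "w") as fh:
        fh.write("SECTOR  DISTANCE_KM  POPULATION\n")
        for sector, distance_km, population in rows:
            fh.write(f"{sector:<8}{distance_km:>11.1f}{population:>12d}\n")

if __name__ == "__main__":
    # Create a tiny stand-in for a PWR-GALE output file so the sketch runs end to end.
    with open("pwr_gale_output.txt", "w") as fh:
        fh.write("I-131 3.2e-2\nCs-137 1.1e-3\n")
    term = read_source_term("pwr_gale_output.txt")
    print("nuclides read:", sorted(term))
    write_population("pc_cream_population.inp",
                     [("N", 1.0, 1200), ("NE", 1.0, 800), ("E", 1.0, 450)])
```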
Design of Multi-core Fiber Patch Panel for Space Division Multiplexing Implementations
NASA Astrophysics Data System (ADS)
González, Luz E.; Morales, Alvaro; Rommel, Simon; Jørgensen, Bo F.; Porras-Montenegro, N.; Tafur Monroy, Idelfonso
2018-03-01
A multi-core fiber (MCF) patch panel was designed, allowing easy coupling of individual signals to and from a 7-core MCF. The device was characterized, measuring insertion loss and cross talk, finding highest insertion loss and lowest crosstalk at 1300 nm with values of 9.7 dB and -36.5 dB respectively, while at 1600 nm insertion loss drops to 4.8 dB and crosstalk increases to -24.1 dB. Two MCF splices between the fan-in module, the MCF, and the fan-out module are included in the characterization, and splicing parameters are discussed.
Software Defined Radio with Parallelized Software Architecture
NASA Technical Reports Server (NTRS)
Heckler, Greg
2013-01-01
This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which arranges the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to ~50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
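The NASA implementation described above uses C/C++ threads and POSIX pipes; as a loose analogue only, the Python sketch below wires two processing blocks into a small flow graph with one thread and one bounded queue per block. The block names and the toy "processing" are invented for illustration and are not part of the original software.

```python
import threading
import queue

class Block(threading.Thread):
    """One processing step with an input queue and an optional downstream block."""

    def __init__(self, name, func, downstream=None):
        super().__init__(daemon=True)
        self.name, self.func, self.downstream = name, func, downstream
        self.inbox = queue.Queue(maxsize=8)   # bounded buffer between blocks

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:                  # sentinel: shut the chain down
                if self.downstream:
                    self.downstream.inbox.put(None)
                break
            result = self.func(item)
            if self.downstream:
                self.downstream.inbox.put(result)
            else:
                print(f"{self.name}: {result}")

# Toy "demodulator" flow graph: scale samples, then threshold them to bits.
sink = Block("slicer", lambda x: [1 if v > 0 else 0 for v in x])
source = Block("agc", lambda x: [v * 2.0 for v in x], downstream=sink)

for blk in (sink, source):
    blk.start()
for frame in ([0.1, -0.3, 0.7], [-0.2, 0.5, -0.9]):
    source.inbox.put(frame)
source.inbox.put(None)
sink.join()
```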
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Song, Shuaiwen; Fu, Haohuan
2014-08-16
Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MICSVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).
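MICSVM itself cannot be sketched from the abstract; as a minimal point of reference for the kind of model being parallelized, a standard scikit-learn SVM fit on synthetic data is shown below. This is purely illustrative and unrelated to the paper's implementation or datasets.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class problem standing in for a power/performance-prediction dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```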
Data management integration for biomedical core facilities
NASA Astrophysics Data System (ADS)
Zhang, Guo-Qiang; Szymanski, Jacek; Wilson, David
2007-03-01
We present the design, development, and pilot-deployment experiences of MIMI, a web-based, Multi-modality Multi-Resource Information Integration environment for biomedical core facilities. This is an easily customizable, web-based software tool that integrates scientific and administrative support for a biomedical core facility involving a common set of entities: researchers; projects; equipment and devices; support staff; services; samples and materials; experimental workflow; large and complex data. With this software, one can: register users; manage projects; schedule resources; bill services; perform site-wide search; archive, back-up, and share data. With its customizable, expandable, and scalable characteristics, MIMI not only provides a cost-effective solution to the overarching data management problem of biomedical core facilities unavailable in the market place, but also lays a foundation for data federation to facilitate and support discovery-driven research.
Dietrich, Philipp-Immanuel; Harris, Robert J; Blaicher, Matthias; Corrigan, Mark K; Morris, Tim M; Freude, Wolfgang; Quirrenbach, Andreas; Koos, Christian
2017-07-24
Coupling of light into multi-core fibers (MCF) for spatially resolved spectroscopy is of great importance to astronomical instrumentation. To achieve high coupling efficiencies along with fill-fractions close to unity, micro-optical elements are required to concentrate the incoming light to the individual cores of the MCF. In this paper we demonstrate facet-attached lens arrays (LA) fabricated by two-photon polymerization. The LA provide close to 100% fill-fraction along with efficiencies of up to 73% (down to 1.4 dB loss) for coupling of light from free space into an MCF core. We show the viability of the concept for astrophotonic applications by integrating an MCF-LA assembly in an adaptive-optics test bed and by assessing its performance as a tip/tilt sensor.
Joshi, Anuradha; Ganjiwale, Jaishree
2015-07-01
Various studies in medical education have shown that active learning strategies should be incorporated into the teaching-learning process to make learning more effective, efficient and meaningful. The aim of this study was to evaluate students' perceptions of an innovative revision method conducted in Pharmacology, i.e. in the form of Autobiography of Drugs. The main objective of the study was to help students revise the core topics in Pharmacology in an interesting way. Questionnaire-based survey on a newer method of pharmacology revision in two batches of second year MBBS students of a tertiary care teaching medical college. Various sessions on Autobiography of Drugs were conducted amongst two batches of second year MBBS students, during their Pharmacology revision classes. Students' perceptions were documented with the help of a five-point Likert scale through a questionnaire regarding the quality, content and usefulness of this method. Descriptive analysis. Students of both batches appreciated the innovative method taken up for revision. The median scores in most of the domains in both batches were four out of five, indicative of a good response. Feedback from open-ended questions also revealed that the innovative module on "Autobiography of Drugs" was taken as a positive learning experience by students. Autobiography of drugs has been used to help students recall topics that they have learnt through other teaching methods. Autobiography sessions in Pharmacology during revision slots can be one of the interesting ways of helping students revise and recall topics which have already been taught in theory classes.
Shuttle Engine Designs Revolutionize Solar Power
NASA Technical Reports Server (NTRS)
2014-01-01
The Space Shuttle Main Engine was built under contract to Marshall Space Flight Center by Rocketdyne, now part of Pratt & Whitney Rocketdyne (PWR). PWR applied its NASA experience to solar power technology and licensed the technology to Santa Monica, California-based SolarReserve. The company now develops concentrating solar power projects, including a plant in Nevada that has created 4,300 jobs during construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim
2014-12-16
Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements is verified by comparing the results to data from profilometry and atomic force microscopy (AFM).
Bobbitt, Jonathan M; Weibel, Stephen C; Elshobaki, Moneim; Chaudhary, Sumit; Smith, Emily A
2014-12-16
Fourier transform (FT)-plasmon waveguide resonance (PWR) spectroscopy measures light reflectivity at a waveguide interface as the incident frequency and angle are scanned. Under conditions of total internal reflection, the reflected light intensity is attenuated when the incident frequency and angle satisfy conditions for exciting surface plasmon modes in the metal as well as guided modes within the waveguide. Expanding upon the concept of two-frequency surface plasmon resonance developed by Peterlinz and Georgiadis [Opt. Commun. 1996, 130, 260], the apparent index of refraction and the thickness of a waveguide can be measured precisely and simultaneously by FT-PWR with an average percent relative error of 0.4%. Measuring reflectivity for a range of frequencies extends the analysis to a wide variety of sample compositions and thicknesses since frequencies with the maximum attenuation can be selected to optimize the analysis. Additionally, the ability to measure reflectivity curves with both p- and s-polarized light provides anisotropic indices of refraction. FT-PWR is demonstrated using polystyrene waveguides of varying thickness, and the validity of FT-PWR measurements are verified by comparing the results to data from profilometry and atomic force microscopy (AFM).
NASA Astrophysics Data System (ADS)
Rodrigues, Manuel J.; Fernandes, David E.; Silveirinha, Mário G.; Falcão, Gabriel
2018-01-01
This work introduces a parallel computing framework to characterize the propagation of electron waves in graphene-based nanostructures. The electron wave dynamics is modeled using both "microscopic" and effective medium formalisms and the numerical solution of the two-dimensional massless Dirac equation is determined using a Finite-Difference Time-Domain scheme. The propagation of electron waves in graphene superlattices with localized scattering centers is studied, and the role of the symmetry of the microscopic potential in the electron velocity is discussed. The computational methodologies target the parallel capabilities of heterogeneous multi-core CPU and multi-GPU environments and are built with the OpenCL parallel programming framework which provides a portable, vendor agnostic and high throughput-performance solution. The proposed heterogeneous multi-GPU implementation achieves speedup ratios up to 75x when compared to multi-thread and multi-core CPU execution, reducing simulation times from several hours to a couple of minutes.
A theoretical framework for negotiating the path of emergency management multi-agency coordination.
Curnin, Steven; Owen, Christine; Paton, Douglas; Brooks, Benjamin
2015-03-01
Multi-agency coordination represents a significant challenge in emergency management. The need for liaison officers working in strategic level emergency operations centres to play organizational boundary spanning roles within multi-agency coordination arrangements that are enacted in complex and dynamic emergency response scenarios creates significant research and practical challenges. The aim of the paper is to address a gap in the literature regarding the concept of multi-agency coordination from a human-environment interaction perspective. We present a theoretical framework for facilitating multi-agency coordination in emergency management that is grounded in human factors and ergonomics using the methodology of core-task analysis. As a result we believe the framework will enable liaison officers to cope more efficiently within the work domain. In addition, we provide suggestions for extending the theory of core-task analysis to an alternate high reliability environment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
An Energy-Aware Runtime Management of Multi-Core Sensory Swarms.
Kim, Sungchan; Yang, Hoeseok
2017-08-24
In sensory swarms, minimizing energy consumption under a performance constraint is one of the key objectives. One possible approach to this problem is to monitor the application workload, which is subject to change at runtime, and to adjust the system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for the memory intensity of a given workload, enabling accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy savings of up to 45% compared to the state-of-the-art multi-core energy management technique.
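The paper's exact model is not given in the abstract; the sketch below only illustrates the flavor of such a heuristic: predict execution time from a measured IPC, then pick the lowest-energy (frequency, core-count) pair that still meets a deadline. All constants, the scaling law, and the power equation are made up for the example.

```python
# Candidate operating points: (clock frequency in GHz, number of active cores).
FREQS = [0.8, 1.2, 1.6, 2.0]
CORES = [1, 2, 4]

def predicted_time(instructions, ipc, freq_ghz, cores, scaling=0.85):
    # Execution time from instruction count, measured IPC, and a crude
    # parallel-scaling exponent (hypothetical model, not the paper's).
    cycles = instructions / (ipc * (cores ** scaling))
    return cycles / (freq_ghz * 1e9)

def predicted_energy(freq_ghz, cores, seconds, c_dyn=0.9, p_static=0.4):
    # Dynamic power ~ f^3 per core plus a static floor per core (made-up equation).
    power = cores * (c_dyn * freq_ghz ** 3 + p_static)
    return power * seconds

def pick_operating_point(instructions, ipc, deadline_s):
    best = None
    for f in FREQS:
        for c in CORES:
            t = predicted_time(instructions, ipc, f, c)
            if t > deadline_s:
                continue                      # violates the performance constraint
            e = predicted_energy(f, c, t)
            if best is None or e < best[0]:
                best = (e, f, c, t)
    return best  # (energy J, frequency GHz, cores, time s) or None if infeasible

print(pick_operating_point(instructions=5e9, ipc=1.4, deadline_s=1.5))
```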
An Energy-Aware Runtime Management of Multi-Core Sensory Swarms
Kim, Sungchan
2017-01-01
In sensory swarms, minimizing energy consumption under performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today's sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and power equation derived empirically. The use of IPC accounts for memory intensities of a given workload, enabling the accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique. PMID:28837094
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces additional problems, such as selecting the optimal multicast domain sequence and deciding in which domains the core nodes should reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks, and mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms designed for single-domain network environments.
Cyclic and SCC Behavior of Alloy 690 HAZ in a PWR Environment
NASA Astrophysics Data System (ADS)
Alexandreanu, Bogdan; Chen, Yiren; Natesan, Ken; Shack, Bill
The objective of this work is to determine the cyclic and stress corrosion cracking (SCC) crack growth rates (CGRs) in a simulated PWR water environment for the Alloy 690 heat affected zone (HAZ). In order to meet this objective, an Alloy 152 J-weld was produced on a piece of Alloy 690 tubing, and the test specimens were aligned with the HAZ. The environmental enhancement of cyclic CGRs for the Alloy 690 HAZ was comparable to that measured for the same alloy in the as-received condition. The two Alloy 690 HAZ samples tested exhibited maximum SCC CGRs on the order of 10^-11 m/s in the simulated PWR environment at 320°C; however, on average, these rates are similar to or only slightly higher than those for the as-received alloy.
Multivariate analysis of gamma spectra to characterize used nuclear fuel
Coble, Jamie; Orton, Christopher; Schwantes, Jon
2017-01-17
The Multi-Isotope Process (MIP) Monitor provides an efficient means to monitor the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of key stages in the reprocessing stream in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor; PWR and BWR, respectively), initial enrichment, burn up, and cooling time. Simulated gamma spectra were used in this paper to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type for the three PWR and three BWR reactor designs studied. Locally weighted PLS models were fitted on-the-fly to estimate the remaining fuel characteristics. For the simulated gamma spectra considered, burn up was predicted with 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment with approximately 2% RMSPE. Finally, this approach to automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and to inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters that may indicate issues with operational control or malicious activities.
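As a generic illustration of PLS-based classification of simulated spectra (not the MIP Monitor's actual pipeline, spectra, or models), scikit-learn's PLSRegression can serve as a PLS discriminant analysis classifier by regressing one-hot class labels and taking the larger predicted score. The "spectra" below are entirely synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "gamma spectra": 300 channels, two classes (e.g. PWR vs BWR fuel)
# that differ slightly in a handful of channels. Entirely made-up data.
n_per_class, n_channels = 120, 300
spectra = rng.normal(0.0, 1.0, (2 * n_per_class, n_channels))
spectra[n_per_class:, 40:45] += 1.5           # class-dependent peak region
labels = np.array([0] * n_per_class + [1] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)

# PLS-DA: regress one-hot targets, then classify by the larger predicted score.
Y_train = np.eye(2)[y_train]
pls = PLSRegression(n_components=5)
pls.fit(X_train, Y_train)
y_pred = pls.predict(X_test).argmax(axis=1)
print("classification accuracy:", round((y_pred == y_test).mean(), 3))
```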
Multi-level Hierarchical Poly Tree computer architectures
NASA Technical Reports Server (NTRS)
Padovan, Joe; Gute, Doug
1990-01-01
Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
NASA Astrophysics Data System (ADS)
Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.
2018-04-01
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
NASA Astrophysics Data System (ADS)
Coppock, Matthew B.; Farrow, Blake; Warner, Candice; Finch, Amethist S.; Lai, Bert; Sarkes, Deborah A.; Heath, James R.; Stratis-Cullum, Dimitra
2014-05-01
Current biodetection assays that employ monoclonal antibodies as primary capture agents exhibit limited fieldability, shelf life, and performance due to batch-to-batch production variability and restricted thermal stability. In order to improve upon the detection of biological threats in fieldable assays and systems for the Army, we are investigating protein catalyzed capture (PCC) agents as drop-in replacements for the existing antibody technology through iterative in situ click chemistry. The PCC agent oligopeptides are developed against known protein epitopes and can be mass produced using robotic methods. In this work, a PCC agent under development will be discussed. The performance, including affinity, selectivity, and stability of the capture agent technology, is analyzed by immunoprecipitation, western blotting, and ELISA experiments. The oligopeptide demonstrates superb selectivity coupled with high affinity through multi-ligand design, and improved thermal, chemical, and biochemical stability due to non-natural amino acid PCC agent design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Colby B.; Folsom, Charles P.; Davis, Cliff B.
Experimental testing in the Multi-Static Environment Rodlet Transient Test Apparatus (SERTTA) will lead the rebirth of transient fuel testing in the United States as part of the Accident Tolerant Fuels (ATF) program. The Multi-SERTTA comprises four isolated pressurized environments capable of a wide variety of working fluids and thermal conditions. Ultimately, the TREAT reactor as well as the Multi-SERTTA test vehicle serve the purpose of providing the desired thermal-hydraulic boundary conditions to the test specimen. The initial ATF testing in TREAT will focus on reactivity insertion accident (RIA) events using both gas and water environments, including typical PWR operating pressures and temperatures. For the water test environment, a test configuration is envisioned using the expansion tank as part of the gas-filled expansion volume seen by the test to provide additional pressure relief. The heat transfer conditions during the high-energy power pulses of RIA events remain a subject of large uncertainty and great importance for fuel performance predictions. To support transient experiments, the Multi-SERTTA vehicle has been modeled using RELAP5 with a baseline test specimen composed of UO2 fuel in zircaloy cladding. The modeling results show the influence of the designs of the specimen, vehicle, and transient power pulses. The primary purpose of this work is to provide input and boundary conditions to the fuel performance code BISON. Therefore, studies of parameters influencing specimen performance during RIA transients are presented, including cladding oxidation, power pulse magnitude and width, cladding-to-coolant heat fluxes, fuel-to-cladding gap, transient boiling effects (modified CHF values), etc. The results show the great flexibility and capacity of the TREAT Multi-SERTTA test vehicle to provide testing under a wide range of prototypic thermal-hydraulic conditions as never done before.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishii, Mamoru
The NEUP funded project, NEUP-3496, aims to experimentally investigate two-phase natural circulation flow instability that could occur in Small Modular Reactors (SMRs), especially natural circulation SMRs. The objective has been achieved by systematically performing tests to study the general natural circulation instability characteristics and the natural circulation behavior under start-up or design basis accident conditions. Experimental data sets highlighting the effect of void reactivity feedback as well as the effect of power ramp-up rate and system pressure have been used to develop a comprehensive stability map. The safety analysis code, RELAP5, has been used to evaluate experimental results and models. Improvements to the constitutive relations for flashing have been made in order to develop a reliable analysis tool. This research has been focusing on two generic SMR designs, i.e., a small modular Simplified Boiling Water Reactor (SBWR)-like design and a small integral Pressurized Water Reactor (PWR)-like design. A BWR-type natural circulation test facility was first built based on the three-level scaling analysis of the Purdue Novel Modular Reactor (NMR) with an electric output of 50 MWe, namely NMR-50, which represents a BWR-type SMR with a significantly reduced reactor pressure vessel (RPV) height. The experimental facility was installed with various equipment to measure thermal-hydraulic parameters such as pressure, temperature, mass flow rate and void fraction. Characterization tests were performed before the startup transient tests and quasi-steady tests to determine the loop flow resistance. The control system and data acquisition system were programmed with LabVIEW to realize the real-time control and data storage. The thermal-hydraulic and nuclear coupled startup transients were performed to investigate the flow instabilities at low pressure and low power conditions for NMR-50. Two different power ramps were chosen to study the effect of startup power density on the flow instability. The experimental startup transient results showed the existence of three different flow instability mechanisms, i.e., flashing instability, condensation induced flow instability, and density wave oscillations. In addition, the void-reactivity feedback did not have significant effects on the flow instability during the startup transients for NMR-50. Several initial startup procedures with different power ramp rates were experimentally investigated to eliminate the flow instabilities observed during the startup transients. In particular, the very slow startup transient and the pressurized startup transient tests were performed and compared. It was found that very slow startup transients, applying a very small power density, can eliminate the flashing oscillations in the single-phase natural circulation and stabilize the flow oscillations in the phase of net vapor generation. The initially pressurized startup procedure was also tested to eliminate the flashing instability during the startup transients. The pressurized startup procedure included the initial pressurization, heat-up, and venting process. The startup transient tests showed that the pressurized startup procedure could eliminate the flow instability during the transition from single-phase flow to two-phase flow at low pressure conditions. The experimental results indicated that both startup procedures were applicable to the initial startup of the NMR.
However, the pressurized startup procedure might be preferred due to the shorter operating hours required. In order to have a deeper understanding of natural circulation flow instability, quasi-steady tests were performed using the test facility installed with a preheater and subcooler. The effects of system pressure, core inlet subcooling, core power density, inlet flow resistance coefficient, and void reactivity feedback were investigated in the quasi-steady state tests. The experimental stability boundaries were determined between unstable and stable flow conditions in the dimensionless stability plane of inlet subcooling number and Zuber number. To predict the stability boundary theoretically, linear stability analysis in the frequency domain was performed for four sections of the natural circulation test loop. The flashing phenomenon in the chimney section was treated as an axially uniform heat source, and the dimensionless characteristic equation of the pressure drop perturbation was obtained by considering the void fraction effect and outlet flow resistance in the core section. The theoretical flashing boundary showed some discrepancies with previous experimental data from the quasi-steady state tests. Accounting for thermal non-equilibrium is recommended to improve the accuracy of the flashing instability boundary in future work. As another part of the funded research, flow instabilities of a PWR-type SMR under low pressure and low power conditions were investigated experimentally as well. The NuScale reactor design was selected as the prototype for the PWR-type SMR. In order to experimentally study the natural circulation behavior of the NuScale reactor during accident scenarios, detailed scaling analyses are necessary to ensure that the scaled phenomena can be obtained in a laboratory test facility. The three-level scaling method is used here as well to obtain the scaling ratios derived from various non-dimensional numbers. The design of the ideally scaled facility (ISF) was initially accomplished based on these scaling ratios. Then the engineering scaled facility (ESF) was designed and constructed based on the ISF by considering engineering limitations including laboratory space, pipe size, and pipe connections, etc. PWR-type SMR experiments were performed in this well-scaled test facility to investigate the potential thermal-hydraulic flow instability during blowdown events, which might occur during the loss of coolant accident (LOCA) and loss of heat sink accident (LOHS) of the prototype PWR-type SMR. Two kinds of experiments, a normal blowdown event and a cold blowdown event, were experimentally investigated and compared with code predictions. The normal blowdown event was experimentally simulated from an initial condition in which the pressure was lower than the design pressure of the experimental facility, while the code prediction of the blowdown started from the normal operation condition. Important thermal-hydraulic parameters including reactor pressure vessel (RPV) pressure, containment pressure, local void fraction and temperature, pressure drop and natural circulation flow rate were measured and analyzed during the blowdown event. The pressure and water level transients are similar to the experimental results published by NuScale [51], which demonstrates the capability of the current loop in simulating the thermal-hydraulic transients of a real PWR-type SMR.
During the 20,000 s blowdown experiment, the water level in the core remained above the active fuel assembly throughout the experiment, demonstrating the safety of the natural circulation cooling and water recycling design of the PWR-type SMR. In addition, the pressure, temperature, and water level transients were accurately predicted by the RELAP5 code. However, oscillations of the natural circulation flow rate, water level, and pressure drops were observed during the blowdown transients. These flow oscillations are related to the water level and the location of the upper plenum, which is the path for coolant flow from the chimney to the steam generator and downcomer. In order to investigate the transients starting from the opening of the ADS valve both experimentally and numerically, a cold blowdown experiment was conducted. For the cold blowdown event, instead of setting both the reactor pressure vessel (RPV) and the containment at high temperature and pressure, only the RPV was heated to close to the highest design pressure before the ADS valve was opened; the same process was predicted using the RELAP5 code. With the cold blowdown experiment, the entire transient from the opening of the ADS valve can be investigated with the code and benchmarked against experimental data. Similar flow instability was observed in the cold blowdown experiment. The comparison between the code predictions and the experimental data showed that the RELAP5 code can successfully predict the pressure, void fraction, and temperature transients during the cold blowdown event with limited error, but numerical instability exists in the prediction of the natural circulation flow rate. In addition, the code lacks the capability to predict the water-level-related flow instability observed in the experiments.
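For orientation, the dimensionless stability plane mentioned above is conventionally built from the inlet subcooling number and the Zuber (phase-change) number. One standard formulation is sketched below; the symbols are generic and not taken from the report (Δh_sub is the core inlet subcooling, h_fg the latent heat of vaporization, Q the core power, ṁ the loop mass flow rate, and ρ_f, ρ_g the saturated liquid and vapor densities):

    N_{sub} = \frac{\Delta h_{sub}}{h_{fg}} \, \frac{\rho_f - \rho_g}{\rho_g}, \qquad
    N_{Zu} = \frac{Q}{\dot{m}\, h_{fg}} \, \frac{\rho_f - \rho_g}{\rho_g}

Stability boundaries between stable and unstable flow are then drawn in the (N_{sub}, N_{Zu}) plane, as was done for the quasi-steady tests described above.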
Sniegowski, Jeffrey J.; Rodgers, Murray S.; McWhorter, Paul J.; Aeschliman, Daniel P.; Miller, William M.
2002-01-01
A microturbine is fabricated by a three-level semiconductor batch-fabrication process based on polysilicon surface micromachining. The microturbine comprises microelectromechanical elements formed from three multi-layer polysilicon surfaces applied to a silicon substrate. Interleaved sacrificial oxide layers provide electrical and physical isolation, and selective etching of both the sacrificial layers and the polysilicon layers allows formation of individual mechanical and electrical elements as well as the clearance required for the movement of rotating turbine parts and linear elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodiac, F.; Hudelot, JP.; Lecerf, J.
CABRI is an experimental pulse reactor operated by CEA at the Cadarache research center. Since 1978, its experimental programs have aimed at studying fuel behavior under Reactivity Initiated Accident (RIA) conditions. Since 2003, the facility has been refurbished in order to be able to provide RIA and LOCA (Loss Of Coolant Accident) experiments in prototypical PWR conditions (155 bar, 300 deg. C). This project is part of a broader scope including an overall facility refurbishment and a safety review. The global modification is conducted by the CEA project team. It is funded by IRSN, which is conducting the CIP experimental program in the framework of the OECD/NEA CIP project, and is financed through an international collaboration. During the reactor restart, commissioning tests are performed for all equipment, systems, and circuits of the reactor. In particular, neutronics and power commissioning tests will be performed in 2015 and 2016, respectively. This paper focuses on the design of a complete and original dosimetry program built in support of the CABRI core characterization and the power calibration. Each of the above experimental goals is fully described, as well as the target uncertainties and the planned experimental techniques and data treatment. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barry, Kenneth
The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has spent the past three years evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC). Papers have been developed and submitted to the NRC in a range of areas including the Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high-importance areas where additional research would have a significant impact on source terms. The highest-ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in a greater than ten-fold reduction of the containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms, with a concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry's goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood and may be the dominant process for a dry environment. Diffusiophoresis and enhanced settling by particle growth are the dominant processes determining DFs for expected conditions in an iPWR containment. These processes depend on the area-to-volume (A/V) ratio, which should benefit iPWR designs because these reactors have higher A/V ratios compared to existing LWRs.
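As a rough illustration of why the area-to-volume ratio matters, a textbook well-mixed-volume sketch (not one of the correlations discussed above) treats natural deposition as a first-order removal process, where C is the airborne aerosol concentration, v_d an effective deposition velocity, and DF the decontamination factor:

    \frac{dC}{dt} = -\lambda_d C, \qquad \lambda_d = v_d \frac{A}{V}, \qquad
    \mathrm{DF}(t) = \frac{C(0)}{C(t)} = e^{\lambda_d t}

A larger A/V directly increases the deposition rate constant and hence the decontamination factor, consistent with the benefit claimed for iPWR containments.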
Method and apparatus for monitoring two-phase flow. [PWR
Sheppard, J.D.; Tong, L.S.
1975-12-19
A method and apparatus for monitoring two-phase flow is provided that is particularly related to the monitoring of transient two-phase (liquid-vapor) flow rates such as may occur during a pressurized water reactor core blow-down. The present invention essentially comprises the use of flanged wire screens or similar devices, such as perforated plates, to produce certain desirable effects in the flow regime for monitoring purposes. One desirable effect is a measurable and reproducible pressure drop across the screen. The pressure drop can be characterized for various known flow rates and then used to monitor nonhomogeneous flow regimes. Another useful effect of the use of screens or plates in nonhomogeneous flow is that such apparatus tends to create a uniformly dispersed flow regime in the immediate downstream vicinity. This is a desirable effect because it usually increases the accuracy of flow rate measurements determined by conventional methods.
AREVA Team Develops Sump Strainer Blockage Solution for PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phan, Ray
2006-07-01
The purpose of this paper is to discuss the methodology, testing challenges, and test results developed by a team of experts from AREVA NP, Alden Research Laboratory, Inc. (ALDEN), and Performance Contracting, Inc. (PCI). The team is currently implementing a comprehensive solution to the issue of Emergency Core Cooling System (ECCS) sump strainer blockage facing Pressurized Water Reactor (PWR) nuclear plants. The team has successfully demonstrated two key results from the testing of passive Sure-Flow(TM) strainers, which were designed to distribute the required flow over a large surface area, resulting in extremely low approach velocities. First, the actual head loss (pressure drop) measured across the prototype strainers was much lower than the head loss calculated using the Nuclear Regulatory Commission (NRC)-approved NUREG/CR-6224 head loss correlation. Second, the penetration fractions were much lower than those seen in the NRC-sponsored debris penetration tests. (author)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek, M. H.; Kim, S. J.; Yoo, J.
The major roles of a prototype SFR are to provide irradiation test capability for fuel and structural materials and to obtain operational experience of the systems. As a compromise between irradiation capability and construction cost, the power level should be properly determined. In this paper, a trade-off study on the power level of the prototype SFR was performed from a neutronics viewpoint. To select candidate cores, a parametric study of pin diameters was carried out using 20 wt.% uranium fuel. Candidate cores of different power levels, 125 MWt, 250 MWt, 400 MWt, and 500 MWt, were compared with the 1500 MWt reference core. The resulting core performance and economic efficiency indices became insensitive to power at about 400-500 MWt and deteriorated sharply at about 125-250 MWt with decreasing core size. The fuel management scheme, the performance of a TRU core compared with the uranium core, and the sodium void reactivity were also evaluated with increasing power level. It was found that increasing the number of batches improved burnup performance and economic efficiency, whereas increasing the cycle length tended to lower economic efficiency. The irradiation performance of the TRU and enriched TRU cores was improved by about 20% and 50%, respectively. The maximum sodium void reactivity of 5.2$ was confirmed to be less than the design limit of 7.5$. As a result, the power capacity of the prototype SFR should not be less than 250 MWt and would be appropriate at approximately 500 MWt considering performance and economic efficiency. (authors)
Interactive Parallel Data Analysis within Data-Centric Cluster Facilities using the IPython Notebook
NASA Astrophysics Data System (ADS)
Pascoe, S.; Lansdowne, J.; Iwi, A.; Stephens, A.; Kershaw, P.
2012-12-01
The data deluge is making traditional analysis workflows for many researchers obsolete. Support for parallelism within popular tools such as Matlab, IDL, and NCO is not well developed and rarely used. However, parallelism is necessary for processing modern data volumes on a timescale conducive to curiosity-driven analysis. Furthermore, for peta-scale datasets such as the CMIP5 archive, it is no longer practical to bring an entire dataset to a researcher's workstation for analysis, or even to their institutional cluster. Therefore, there is an increasing need to develop new analysis platforms which both enable processing at the point of data storage and provide parallelism. Such an environment should, where possible, maintain the convenience and familiarity of our current analysis environments to encourage curiosity-driven research. We describe how we are combining the interactive Python shell (IPython) with our JASMIN data-cluster infrastructure. IPython has been specifically designed to bridge the gap between HPC-style parallel workflows and the opportunistic curiosity-driven analysis usually carried out using domain-specific languages and scriptable tools. IPython offers a web-based interactive environment, the IPython notebook, and a cluster engine for parallelism, all underpinned by the well-respected Python/SciPy scientific programming stack. JASMIN is designed to support the data analysis requirements of the UK and European climate and earth system modeling community. JASMIN, with its sister facility CEMS focusing on the earth observation community, has 4.5 PB of fast parallel disk storage alongside over 370 computing cores providing local computation. Through the IPython interface to JASMIN, users can make efficient use of JASMIN's multi-core virtual machines to perform interactive analysis on all cores simultaneously, or can configure IPython clusters across multiple VMs. Larger-scale clusters can be provisioned through JASMIN's batch scheduling system. Outputs can be summarised and visualised using the full power of Python's many scientific tools, including SciPy, Matplotlib, Pandas, and CDAT. This rich user experience is delivered through the user's web browser, maintaining the interactive feel of a workstation-based environment with the parallel power of a remote data-centric processing facility.
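A minimal sketch of the kind of workflow described, assuming an ipyparallel (formerly IPython.parallel) cluster has already been started on the data-centre virtual machines; the file names and analysis function are hypothetical:

    import ipyparallel as ipp
    import numpy as np

    rc = ipp.Client()        # connect to the running IPython cluster
    view = rc[:]             # direct view over all available engines

    def chunk_mean(path):
        # analysis applied to one data chunk on whichever engine receives it
        import numpy as np
        return np.load(path).mean(axis=0)

    files = ["cmip5_chunk_%03d.npy" % i for i in range(64)]   # hypothetical chunks
    results = view.map_sync(chunk_mean, files)                # blocking parallel map
    print(np.mean(results, axis=0))

The same client can address a pool of engines on one multi-core VM or a cluster spanning several VMs, which is the usage pattern described above.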
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
NASA Astrophysics Data System (ADS)
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation onto optimal 3D thread blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak-scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulation capability to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
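For reference, the replica-swap step at the heart of an Exchange (parallel tempering) Monte Carlo scheme of this kind uses the standard Metropolis criterion. A host-side sketch in Python (illustrative only, not the authors' CUDA implementation):

    import math
    import random

    def attempt_swap(beta_i, beta_j, energy_i, energy_j):
        # Accept the exchange of configurations between neighboring temperatures
        # with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0.0 or random.random() < math.exp(delta)

    # Example: a swap attempt between adjacent replicas of a temperature ladder
    print(attempt_swap(beta_i=1.00, beta_j=0.95, energy_i=-1200.0, energy_j=-1180.0))

Temperature gaps where this acceptance rate collapses are where the adaptive mid-point insertions described above would be placed.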
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Christopher; Might, Matthew; Bagusetty, Abhishek
This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
NASA Astrophysics Data System (ADS)
Tu, Yiyou; Tong, Zhen; Jiang, Jianqing
2013-04-01
The effect of microstructure on clad/core interactions during the brazing of 4343/3005/4343 multi-layer aluminum brazing sheet was investigated employing differential scanning calorimetry (DSC) and electron backscatter diffraction (EBSD). The thickness of the melted clad layer gradually decreased during the brazing operation and could be completely removed isothermally as a result of diffusional solidification at the brazing temperature. During the brazing cycle, the rate of loss of the melt was higher in the brazing sheet with a core layer of small equiaxed grains than in that with a core layer consisting of large elongated grains. This difference in microstructure affected the amount of liquid formed during brazing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Klimentov, A
2016-01-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe, and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for the ATLAS experiment since September 2015. We will present our current accomplishments with running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities' infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
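The "light-weight MPI wrapper" idea can be illustrated with a short mpi4py sketch (an illustration only, not the PanDA pilot code; the payload command file passed on the command line is hypothetical). Each MPI rank launched on a multi-core worker node runs its own single-threaded payload:

    from mpi4py import MPI
    import subprocess
    import sys

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # One payload command per line, e.g. single-threaded Monte Carlo jobs
    with open(sys.argv[1]) as f:
        jobs = [line.strip() for line in f if line.strip()]

    for cmd in jobs[rank::size]:          # round-robin assignment across ranks
        subprocess.run(cmd, shell=True, check=False)

    comm.Barrier()                        # all ranks finish before the batch job exits

Submitting one such MPI job to the batch queue fills a node with many independent serial payloads, which is the pattern described for the leadership-class machines.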
INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
De, K; Jha, S; Maeno, T
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms. We will present our current accomplishments in running PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure for High Energy and Nuclear Physics, as well as other data-intensive science applications, such as bioinformatics and astro-particle physics.
NASA Technical Reports Server (NTRS)
Go, B. M.; Righter, K.; Danielson, L.; Pando, K.
2015-01-01
Previous geochemical and geophysical experiments have proposed the presence of a small, metallic lunar core, but its composition is still being investigated. Knowledge of the core composition can have a significant effect on understanding the thermal history of the Moon, the conditions surrounding the liquid-solid or liquid-liquid field, and siderophile element partitioning between the mantle and core. However, experiments on complex bulk core compositions are very limited. One limitation comes from numerous studies that have only considered two- or three-element systems such as Fe-S or Fe-C, which do not provide a comprehensive understanding of complex systems such as Fe-Ni-S-Si-C. Recent geophysical data suggest the presence of up to 6% lighter elements. Reassessments of Apollo seismological analyses and samples have also shown the need to acquire more data for a broader range of pressures, temperatures, and compositions. This study considers a complex multi-element system (Fe-Ni-S-C) over a pressure and temperature range relevant to the Moon's core conditions.
Coropceanu, Igor; Rossinelli, Aurelio; Caram, Justin R; Freyria, Francesca S; Bawendi, Moungi G
2016-03-22
A two-step process has been developed for growing the shell of CdSe/CdS core/shell nanorods. The method combines an established fast-injection-based step to create the initial elongated shell with a second slow-injection growth that allows a systematic variation of the shell thickness while maintaining a high degree of monodispersity at the batch level and enhancing the uniformity at the single-nanorod level. The second growth step resulted in nanorods exhibiting a fluorescence quantum yield of up to 100% as well as effectively complete energy transfer from the shell to the core. This improvement suggests that the second step is associated with a strong suppression of the nonradiative channels operating both before and after the thermalization of the exciton. This hypothesis is supported by the suppression, after the second growth, of a defect band ubiquitous to CdSe-based nanocrystals.
Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro
2012-10-15
There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.
Report on the PWR-radiation protection/ALARA Committee
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, D.J.
1995-03-01
In 1992, representatives from several utilities with operating Pressurized Water Reactors (PWRs) formed the PWR Radiation Protection/ALARA Committee. The mission of the Committee is to facilitate open communication between member utilities on radiation protection and ALARA issues so that cost-effective dose reduction and radiation protection measures may be instituted. While industry deregulation appears inevitable and inter-utility competition is on the rise, Committee members are fully committed to sharing both positive and negative experiences for the benefit of the health and safety of the radiation worker. Committee meetings provide current operational experience through member plant status reports, and information on programmatic improvements through member presentations and topic-specific workshops. The most recent Committee workshop was held to provide members with documented experiences that support cost-effective ALARA performance.
Physical and Electronic Isolation of Carbon Nanotube Conductors
NASA Technical Reports Server (NTRS)
OKeeffe, James; Biegel, Bryan (Technical Monitor)
2001-01-01
Multi-walled nanotubes are proposed as a method to electrically and physically isolate nanoscale conductors from their surroundings. We use tight binding (TB) and density functional theory (DFT) to simulate the effects of an external electric field on multi-wall nanotubes. Two categories of multi-wall nanotube are investigated, those with metallic and semiconducting outer shells. In the metallic case, simulations show that the outer wall effectively screens the inner core from an applied electric field. This offers the ability to reduce crosstalk between nanotube conductors. A semiconducting outer shell is found not to perturb an electric field incident on the inner core, thereby providing physical isolation while allowing the tube to remain electrically coupled to its surroundings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward-Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight window) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally of O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
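For readers unfamiliar with CADIS, the method is commonly summarized as follows (standard notation from the hybrid-methods literature, not reproduced from this paper): given an adjoint flux φ† computed by a fast deterministic calculation for a detector response R with source q, the Monte Carlo source is biased and the weight-window targets are set consistently:

    R = \int \phi^\dagger(\vec r, E)\, q(\vec r, E)\, dV\, dE, \qquad
    \hat q = \frac{\phi^\dagger q}{R}, \qquad
    \bar w(\vec r, E) = \frac{R}{\phi^\dagger(\vec r, E)}

FW-CADIS generalizes this by constructing the adjoint source from a forward flux estimate so that multiple tallies or mesh tallies converge with comparable statistical quality.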
Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Harris, James Austin; Hix, William Raphael
Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs to and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.
Large Scale Document Inversion using a Multi-threaded Computing System
Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won
2018-01-01
Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to create the index. The performance of document inversion can be improved by multi-threaded or multi-core GPU computation. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations. PMID:29861701
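As a point of reference for the algorithm being parallelized, a minimal hash-based inverted index in sequential Python might look like the sketch below (illustrative only; the paper's contribution is the GPU/CUDA SPMD version of this idea):

    from collections import defaultdict

    def invert(documents):
        # term -> postings list of document ids (hash-based dictionary)
        index = defaultdict(list)
        for doc_id, text in enumerate(documents):
            for term in set(text.lower().split()):
                index[term].append(doc_id)
        return index

    docs = ["GPU computing for document retrieval",
            "parallel document inversion on the GPU"]
    print(invert(docs)["gpu"])   # -> [0, 1]

The GPU formulation distributes documents (or document blocks) across threads that build partial postings lists, which are then merged, giving the reported 2-3x speed-up over the sequential version.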
Optimization of the coherence function estimation for multi-core central processing unit
NASA Astrophysics Data System (ADS)
Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.
2017-02-01
The paper considers the use of parallel processing on a multi-core central processing unit for optimization of the coherence function evaluation arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its individual components. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational issues. Optimization measures are described, including algorithmic, architectural, and compiler optimization, and their results are assessed for multi-core processors from different manufacturers. The speed-up of the parallel execution with respect to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization have been significantly improved, showing a high degree of parallelism in the constructed computational routines. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location using the acoustic correlation method.
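A minimal sketch of the computation being optimized, using SciPy's Welch-based estimator and a process pool to evaluate several signal pairs concurrently. The magnitude-squared coherence is C_xy(f) = |P_xy|^2 / (P_xx P_yy); the sampling rate, segment length, and synthetic signals below are assumptions for illustration, not values from the paper:

    import numpy as np
    from scipy.signal import coherence
    from multiprocessing import Pool

    FS = 10000                           # sampling rate in Hz (assumed)

    def pair_coherence(pair):
        x, y = pair
        # Welch-averaged magnitude-squared coherence of one signal pair
        f, cxy = coherence(x, y, fs=FS, nperseg=1024)
        return f, cxy

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pairs = [(rng.standard_normal(2**15), rng.standard_normal(2**15))
                 for _ in range(8)]
        with Pool() as pool:             # one worker per available CPU core
            results = pool.map(pair_coherence, pairs)
        print(len(results), results[0][1].max())

The FFT-heavy per-segment work inside each estimate is where the architecture- and compiler-level optimizations discussed above pay off.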
Performance implications from sizing a VM on multi-core systems: A data analytic application's view
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Horey, James L; Begoli, Edmon
In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not heavily depend on the processing capacity of a system, while the size of the data set relative to the allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Antonacci, M.; Bagnasco, S.; Boccali, T.; Bucchi, R.; Caballer, M.; Costantini, A.; Donvito, G.; Gaido, L.; Italiano, A.; Michelotto, D.; Panella, M.; Salomoni, D.; Vallero, S.
2017-10-01
One of the challenges a scientific computing center has to face is to keep delivering well-consolidated computational frameworks (i.e., the batch computing farm), while conforming to modern computing paradigms. The aim is to ease system administration at all levels (from hardware to applications) and to provide a smooth end-user experience. Within the INDIGO-DataCloud project, we adopt two different approaches to implement a PaaS-level, on-demand Batch Farm Service based on HTCondor and Mesos. In the first approach, described in this paper, the various HTCondor daemons are packaged inside pre-configured Docker images and deployed as Long Running Services through Marathon, profiting from its health checks and failover capabilities. In the second approach, we are going to implement an ad-hoc HTCondor framework for Mesos. Container-to-container communication and isolation have been addressed by exploring a solution based on overlay networks (based on the Calico project). Finally, we have studied the possibility of deploying an HTCondor cluster that spans different sites, exploiting the Condor Connection Broker component, which allows communication across a private network boundary or firewall, as in the case of multi-site deployments. In this paper, we describe and motivate our implementation choices and show the results of the first tests performed.
Experimental design of a twin-column countercurrent gradient purification process.
Steinebach, Fabian; Ulmer, Nicole; Decker, Lara; Aumann, Lars; Morbidelli, Massimo
2017-04-07
As is typical for separation processes, single-unit batch chromatography exhibits a trade-off between purity and yield. The twin-column MCSGP (multi-column countercurrent solvent gradient purification) process allows such trade-offs to be alleviated, particularly in the case of difficult separations. In this work, an efficient and reliable procedure for the design of the twin-column MCSGP process is developed. It is based on a single batch chromatogram, which is selected as the design chromatogram. The derived MCSGP operation is not intended to provide optimal performance, but it provides the target product corresponding to the selected fraction of the batch chromatogram at a higher yield. The design procedure is illustrated for the isolation of the main charge isoform of a monoclonal antibody from Protein A eluate with ion-exchange chromatography. The main charge isoform was obtained at a purity and yield greater than 90%. At the same time, process-related impurities such as HCP and leached Protein A, as well as aggregates, were at least equally well removed. Additionally, the impact of several design parameters on the process performance in terms of purity, yield, productivity, and buffer consumption is discussed. The results obtained can be used for further fine-tuning of the process parameters so as to improve performance.
Primary water chemistry improvement for radiation exposure reduction at Japanese PWR Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishizawa, Eiichi
1995-03-01
Radiation exposure during refueling outages at Japanese Pressurized Water Reactor (PWR) plants has gradually decreased through continuous efforts to keep radiation dose rates at a relatively low level. The improvement of primary water chemistry with respect to the reduction of radiation sources is one of the most important contributions to the achieved results and can be classified by plant operating condition as follows.
Comparison of Measures of Vibration Affecting Occupants of Military Vehicles
1986-12-01
The WES equipment consisted of a battery-operated absorbed power (ABS-PWR) meter with signal conditioning... West Germany. These will be referred to as the ISO ride meter and the ABS-PWR ride meter, respectively. The first implemented the vibration measure... the ABS-PWR algorithms were used with each acceleration signal source (analog and digital) to provide a comprehensive basis for comparing the vibration
NASA Astrophysics Data System (ADS)
Hicks, P. D.; Robinson, F. P. A.
1986-10-01
Corrosion fatigue (CF) tests have been carried out on SA508 Cl 3 pressure vessel steel in simulated PWR environments. The test variables investigated included air and PWR water environments, frequency variation over the range 1 Hz to 10 Hz, transverse and longitudinal crack growth directions, temperatures of 20 °C and 50 °C, and R-ratios of 0.2 and 0.7. It was found that decreasing the test frequency increased fatigue crack growth rates (FCGRs) in PWR environments, PWR environment testing gave enhanced crack growth compared with air tests, FCGRs were greater for cracks growing in the longitudinal direction, slight increases in temperature gave noticeable accelerations in FCGR, and several air tests gave FCGRs greater than those predicted by the existing ASME codes. Fractographic evidence indicates that FCGRs were accelerated by a hydrogen embrittlement mechanism. The presence of elongated MnS inclusions aided both the mechanical fatigue and hydrogen embrittlement processes, thus producing synergistically fast FCGRs. Both anodic dissolution and hydrogen embrittlement mechanisms have been proposed for the environmental enhancement of crack growth rates. Electrochemical potential measurements and potentiostatic tests have shown that isolation of the test specimens from the clevises in the apparatus is not essential during low-temperature corrosion fatigue testing.
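The FCGR comparisons above are conventionally expressed in Paris-law form, with the environmental effect appearing as changes in the fitted constants (a standard relation, not one quoted in the abstract; C and m are material- and environment-dependent):

    \frac{da}{dN} = C\,(\Delta K)^m

Here da/dN is the crack growth per cycle and ΔK the stress intensity factor range; the ASME reference curves provide bounding da/dN versus ΔK relations against which the measured air and PWR-water data were compared.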
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
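The computational core being vectorized is essentially a coherent sum of complex returns over all point scatterers. A NumPy sketch of that inner loop (illustrative only, not the AMRDEC code; the 94 GHz carrier and the scatterer parameters below are assumptions):

    import numpy as np

    def scene_return(amps, ranges_m, dopplers_hz, t, fc=94e9, c=3.0e8):
        # Coherent sum over N point scatterers at each sample time t:
        # amplitude * exp(-j*4*pi*fc*R/c) * exp(j*2*pi*fd*t)
        range_phase = -4j * np.pi * fc * ranges_m[:, None] / c
        doppler_phase = 2j * np.pi * dopplers_hz[:, None] * t[None, :]
        return (amps[:, None] * np.exp(range_phase + doppler_phase)).sum(axis=0)

    t = np.arange(0.0, 1e-3, 1e-6)                       # 1 ms of samples at 1 MHz
    sig = scene_return(np.ones(1000),
                       np.linspace(50.0, 60.0, 1000),    # scatterer ranges, m
                       np.linspace(-200.0, 200.0, 1000), # Doppler shifts, Hz
                       t)
    print(sig.shape)                                      # (1000,)

High computing intensity comes from keeping this sum in registers and SIMD lanes; the same structure maps naturally onto OpenMP threads over blocks of scatterers.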
Kim, Hyerin; Kang, NaNa; An, KyuHyeon; Koo, JaeHyung; Kim, Min-Soo
2016-01-01
Design of high-quality primers for multiple target sequences is essential for qPCR experiments, but is challenging due to the need to consider both homology tests on off-target sequences and the same stringent filtering constraints on the primers. Existing web servers for primer design have major drawbacks, including requiring the use of BLAST-like tools for homology tests, lack of support for ranking of primers, TaqMan probes and simultaneous design of primers against multiple targets. Due to the large-scale computational overhead, the few web servers supporting homology tests use heuristic approaches or perform homology tests within a limited scope. Here, we describe the MRPrimerW, which performs complete homology testing, supports batch design of primers for multi-target qPCR experiments, supports design of TaqMan probes and ranks the resulting primers to return the top-1 best primers to the user. To ensure high accuracy, we adopted the core algorithm of a previously reported MapReduce-based method, MRPrimer, but completely redesigned it to allow users to receive query results quickly in a web interface, without requiring a MapReduce cluster or a long computation. MRPrimerW provides primer design services and a complete set of 341 963 135 in silico validated primers covering 99% of human and mouse genes. Free access: http://MRPrimerW.com. PMID:27154272
Bapat, Prashant M; Das, Debasish; Dave, Nishant N; Wangikar, Pramod P
2006-12-15
Antibiotic fermentation processes are raw material cost intensive and the profitability is greatly dependent on the product yield per unit substrate consumed. In order to reduce costs, industrial processes use organic nitrogen substrates (ONS) such as corn steep liquor and yeast extract. Thus, although the stoichiometric analysis is the first logical step in process development, it is often difficult to achieve due to the ill-defined nature of the medium. Here, we present a black-box stoichiometric model for rifamycin B production via Amycolatopsis mediterranei S699 fermentation in complex multi-substrate medium. The stoichiometric coefficients have been experimentally evaluated for nine different media compositions. The ONS was quantified in terms of the amino acid content that it provides. Note that the black box stoichiometric model is an overall result of the metabolic reactions that occur during growth. Hence, the observed stoichiometric coefficients are liable to change during the batch cycle. To capture the shifts in stoichiometry, we carried out the stoichiometric analysis over short intervals of 8-16 h in a batch cycle of 100-200 h. An error analysis shows that there are no systematic errors in the measurements and that there are no unaccounted products in the process. The growth stoichiometry shows a shift from one substrate combination to another during the batch cycle. The shifts were observed to correlate well with the shifts in the trends of pH and exit carbon dioxide profiles. To exemplify, the ammonia uptake and nitrate uptake phases were marked by a decreasing pH trend and an increasing pH trend, respectively. Further, we find the product yield per unit carbon substrate to be greatly dependent on the nature of the nitrogen substrate. The analysis presented here can be readily applied to other fermentation systems that employ multi-substrate complex media.
Microfluidics for producing poly (lactic-co-glycolic acid)-based pharmaceutical nanoparticles.
Li, Xuanyu; Jiang, Xingyu
2017-12-24
Microfluidic chips allow the rapid production of a library of nanoparticles (NPs) with distinct properties by changing the precursors and the flow rates, significantly decreasing the time needed to screen optimal formulations as drug delivery carriers compared to conventional methods. Batch-to-batch reproducibility, which is essential for clinical translation, is achieved by precisely controlling the precursors and the flow rates, regardless of the operator. Poly(lactic-co-glycolic acid) (PLGA) is the most widely used Food and Drug Administration (FDA)-approved biodegradable polymer. Researchers often combine PLGA with lipids or amphiphilic molecules to assemble a core/shell structure in order to exploit the potential of PLGA-based NPs as powerful carriers for cancer-related drug delivery. In this review, we discuss the advantages of microfluidic chips for producing PLGA-based functional nanocomplexes for drug delivery. These laboratory-based methods can readily be scaled up to provide sufficient amounts of PLGA-based NPs in microfluidic chips for clinical studies and industrial-scale production.
Ganjiwale, Jaishree
2015-01-01
Introduction: Various studies in medical education have shown that active learning strategies should be incorporated into the teaching-learning process to make learning more effective, efficient, and meaningful. Objectives: The aim of this study was to evaluate students' perceptions of an innovative revision method conducted in Pharmacology, i.e., in the form of an 'Autobiography of Drugs'. The main objective of the study was to help students revise the core topics in Pharmacology in an interesting way. Settings and Design: Questionnaire-based survey on a newer method of Pharmacology revision in two batches of second-year MBBS students of a tertiary care teaching medical college. Materials and Methods: Various sessions on the Autobiography of Drugs were conducted among two batches of second-year MBBS students during their Pharmacology revision classes. Students' perceptions regarding the quality, content, and usefulness of this method were documented with the help of a five-point Likert scale through a questionnaire. Statistical Analysis Used: Descriptive analysis. Results: Students of both batches appreciated the innovative method taken up for revision. The median scores in most domains in both batches were four out of five, indicative of a good response. Feedback from open-ended questions also revealed that the innovative module on 'Autobiography of Drugs' was taken as a positive learning experience by students. Conclusions: An autobiography of drugs has been used to help students recall topics that they have learnt through other teaching methods. Autobiography sessions in Pharmacology during revision slots can be one of the interesting ways of helping students revise and recall topics which have already been taught in theory classes. PMID:26393138
Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors
Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee
2012-01-01
In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181
Turkbey, Baris; Xu, Sheng; Kruecker, Jochen; Locklin, Julia; Pang, Yuxi; Shah, Vijay; Bernardo, Marcelino; Baccala, Angelo; Rastinehad, Ardeshir; Benjamin, Compton; Merino, Maria J; Wood, Bradford J; Choyke, Peter L; Pinto, Peter A
2011-03-29
During transrectal ultrasound (TRUS)-guided prostate biopsies, the actual location of the biopsy site is rarely documented. Here, we demonstrate the capability of TRUS-magnetic resonance imaging (MRI) image fusion to document the biopsy site and correlate biopsy results with multi-parametric MRI findings. Fifty consecutive patients (median age 61 years) with a median prostate-specific antigen (PSA) level of 5.8 ng/ml underwent 12-core TRUS-guided biopsy of the prostate. Pre-procedural T2-weighted magnetic resonance images were fused to TRUS. A disposable needle guide with miniature tracking sensors was attached to the TRUS probe to enable fusion with MRI. Real-time TRUS images during biopsy and the corresponding tracking information were recorded. Each biopsy site was superimposed onto the MRI. Each biopsy site was classified as positive or negative for cancer based on the results of each MRI sequence. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) values were calculated for multi-parametric MRI. Gleason scores for each multi-parametric MRI pattern were also evaluated. A total of 605 systematic biopsy cores were analyzed in 50 patients, of whom 20 patients had 56 positive cores. MRI identified 34 of 56 positive cores. Overall, sensitivity, specificity, and ROC area values for multi-parametric MRI were 0.607, 0.727, and 0.667, respectively. TRUS-MRI fusion after biopsy can be used to document the location of each biopsy site, which can then be correlated with MRI findings. Based on correlation with tracked biopsies, T2-weighted MRI and apparent diffusion coefficient maps derived from diffusion-weighted MRI are the most sensitive sequences, whereas the addition of delayed contrast-enhanced MRI and three-dimensional magnetic resonance spectroscopy demonstrated higher specificity, consistent with results obtained using radical prostatectomy specimens.
Till, Ugo; Gaucher-Delmas, Mireille; Saint-Aguet, Pascale; Hamon, Glenn; Marty, Jean-Daniel; Chassenieux, Christophe; Payré, Bruno; Goudounèche, Dominique; Mingotaud, Anne-Françoise; Violleau, Frédéric
2014-12-01
Polymersomes formed from amphiphilic block copolymers, such as poly(ethylene oxide-b-ε-caprolactone) (PEO-b-PCL) or poly(ethylene oxide-b-methyl methacrylate), were characterized by asymmetrical flow field-flow fractionation coupled with quasi-elastic light scattering (QELS), multi-angle light scattering (MALS), and refractive index detection, leading to the determination of their size, shape, and molecular weight. The method was cross-examined against more classical ones, such as batch dynamic and static light scattering, electron microscopy, and atomic force microscopy. The results show good complementarity between all the techniques, with asymmetrical flow field-flow fractionation being the most pertinent when the sample exhibits several different populations.
Direct data access protocols benchmarking on DPM
NASA Astrophysics Data System (ADS)
Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina
2015-12-01
The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. One source of information is the set of continuous tests that are run against the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.
NASA Astrophysics Data System (ADS)
Wang, Yaping; Pan, Anqiang; Zhu, Qinyu; Nie, Zhiwei; Zhang, Yifang; Tang, Yan; Liang, Shuquan; Cao, Guozhong
2014-12-01
In this work, we report a novel strategy for the controlled synthesis of nanorod-assembled multi-shelled cobalt oxide (Co3O4) hollow microspheres (HSs). The Co2CO3(OH)2 nanorods (NRs) are first grown vertically on carbon microspheres (CS) to form core-shell composites by a low-temperature solution route. The multi-shelled hollow interiors within the Co3O4 microspheres are unconventionally obtained by annealing the as-prepared core-shell structured CS@Co2CO3(OH)2 composite in air. When evaluated for supercapacitive performance, the multi-shelled Co3O4 hollow microspheres exhibit high capacitances of 394.4 and 360 F g-1 at current densities of 2 A g-1 and 10 A g-1, respectively. The superior electrochemical performance can be attributed to the multi-shelled hollow structures, which facilitate electrolyte penetration and provide more active sites for the electrochemical reactions.
NASA Astrophysics Data System (ADS)
Thomas, D.; Garing, C.; Zahasky, C.; Harrison, A. L.; Bird, D. K.; Benson, S. M.; Oelkers, E. H.; Maher, K.
2017-12-01
Predicting the timing and magnitude of CO2 storage in basaltic rocks relies partly on quantifying the dependence of reactivity on flow path and mineral distribution. Flow-through experiments that use intact cores are advantageous because the spatial heterogeneity of pore space and reactive phases is preserved. Combining aqueous geochemical analyses and petrologic characterization with non-destructive imaging techniques (e.g. micro-computed tomography) constrains the relationship between irreversible reactions, pore connectivity and accessible surface area. Our work enhances these capabilities by dynamically imaging flow through vesicular basalts with Positron Emission Tomography (PET) scanning. PET highlights the path a fluid takes by detecting photons produced during radioactive decay of an injected radiotracer (FDG). We have performed single-phase, CO2-saturated flow-through experiments with a basaltic core from Iceland at CO2 sequestration conditions (50 °C; 76-90 bar Ptot). Constant flow rate and continuous pressure measurements at the inlet and outlet of the core constrain permeability. We monitor geochemical evolution through cation and anion analysis of outlet fluid sampled periodically. Before and after reaction, we perform PET scans and characterize the core using micro-CT. The PET scans indicate a discrete, localized flow path that appears to be a micro-crack connecting vesicles, suggesting that vesicle-lining minerals are immediately accessible and important reactants. Rapid increases in aqueous cation concentration, pH and HCO3- indicate that the rock reacts nearly immediately after CO2 injection. After 24 hours the solute release decreases, which may reflect a transition to reaction with phases with slower kinetic dissolution rates (e.g. zeolites and glasses to feldspar), a decrease in available reactive surface area, or precipitation. We have performed batch experiments using crushed material of the same rock to elucidate the effect of flow path geometry and mineral accessibility on geochemical evolution. Interestingly, surface area-normalized dissolution rates as evinced by SiO2 release in all experiments approach similar values (~10^-15 mol/cm2/s). Our experiments show how imaging techniques are helpful in interpreting path-dependent processes in open systems.
Wang, Rongming; Yang, Wantai; Song, Yuanjun; Shen, Xiaomiao; Wang, Junmei; Zhong, Xiaodi; Li, Shuai; Song, Yujun
2015-01-01
A new methodology based on core alloying and shell gradient-doping is developed for the synthesis of nanohybrids, realized by coupled competitive reactions, or sequenced reducing-nucleation and co-precipitation reactions of mixed metal salts in a microfluidic and batch-cooling process. The latent time of nucleation and the growth of the nanohybrids can be well controlled due to the formation of controllable intermediates in the coupled competitive reactions. Thus, spatiotemporally resolved synthesis can be realized by the hybrid process, which enables us to investigate nanohybrid formation at each stage through solution color changes and TEM images. By adjusting the bi-channel solvents and the kinetic parameters of each stage, the primary components of the alloyed cores and the secondary components of transition-metal-doped ZnO or Al2O3 surface coatings can be formed successively. The core-alloying and shell gradient-doping strategy can efficiently eliminate the crystal lattice mismatch between different components. Consequently, varieties of gradient core-shell nanohybrids can be synthesized using CoM, FeM, AuM, AgM (M = Zn or Al) alloys as cores and transition-metal gradient-doped ZnO or Al2O3 as shells, endowing these nanohybrids with unique magnetic and optical properties (e.g., high-temperature ferromagnetism and enhanced blue emission). PMID:25818342
On the Composition and Temperature of the Terrestrial Planetary Core
NASA Astrophysics Data System (ADS)
Fei, Yingwei
2013-06-01
The existence of liquid cores in terrestrial planets such as the Earth, Mars, and Mercury has been supported by various observations. The liquid state of the core provides a unique opportunity for us to estimate the temperature of the core if we know the melting temperature of the core materials at core pressure. Dynamic compression by shock wave, laser heating in the diamond-anvil cell, and resistance heating in the multi-anvil device can melt core materials over a wide pressure range. There have been significant advances in both dynamic and static experimental techniques and characterization tools. In this talk, I will review some of the recent advances and results relevant to the composition and thermal state of the terrestrial core. I will also present new developments in analyzing the quenched samples recovered from laser-heated diamond-anvil cell experiments using a combination of focused ion beam milling, high-resolution SEM imaging, and quantitative chemical analysis. With precision milling of the laser-heating spot, the melting point and element partitioning between solid and liquid can be precisely determined. It is also possible to reconstruct a 3D image of the laser-heating spot at multi-megabar pressures to better constrain the melting point and understand the melting process. The new techniques allow us to extend precise measurements of melting relations to core pressures, providing better constraints on the temperature of the core. The research is supported by NASA and NSF grants.
Design of batch audio/video conversion platform based on JavaEE
NASA Astrophysics Data System (ADS)
Cui, Yansong; Jiang, Lianpin
2018-03-01
With the rapid development of the digital publishing industry, audio/video publishing shows significant features such as a diversity of coding standards for audio and video files and massive data volumes. Faced with massive and diverse data, quickly and efficiently converting it to a unified coding format poses great difficulties for digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+MyBatis development architecture and combined with the open-source FFmpeg format conversion tool. Based on the Java language, the key technologies and strategies used in the design of the platform architecture are analyzed emphatically, and an efficient audio and video format conversion system is designed and developed, composed of a front-end display system, a core scheduling server, and conversion servers. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technology discussed in this paper can be applied in the field of large-batch file processing and has practical application value.
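A minimal sketch of the batch-conversion idea using the FFmpeg command-line tool driven from Python; the directory names, codec choices, and worker count are assumptions for illustration and this is not the JavaEE platform described in the paper.

```python
# Illustrative batch transcode to a unified H.264/AAC MP4 target using the FFmpeg CLI.
# Paths and codec settings are assumptions; a real platform would add job scheduling,
# progress reporting, and error handling.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC_DIR, DST_DIR = Path("incoming"), Path("converted")

def convert(src: Path) -> int:
    dst = DST_DIR / (src.stem + ".mp4")
    cmd = ["ffmpeg", "-y", "-i", str(src),
           "-c:v", "libx264", "-c:a", "aac", str(dst)]
    return subprocess.run(cmd, capture_output=True).returncode

if __name__ == "__main__":
    DST_DIR.mkdir(exist_ok=True)
    files = sorted(SRC_DIR.glob("*.*"))
    # FFmpeg itself is CPU-bound, so keep the worker pool small.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for f, rc in zip(files, pool.map(convert, files)):
            print(f, "ok" if rc == 0 else f"failed ({rc})")
```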
SAS2H Generated Isotopic Concentrations For B&W 15X15 PWR Assembly (SCPB:N/A)
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.W. Davis
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide pressurized water reactor (PWR) isotopic composition data as a function of time for use in criticality analyses. The objectives of this evaluation are to generate burnup- and decay-dependent isotopic inventories and to provide these inventories in a form which can easily be utilized in subsequent criticality calculations.
Dong, Bo; Zhou, Da-Peng; Wei, Li; Liu, Wing-Ki; Lit, John W Y
2008-11-10
A novel lateral force sensor based on a core-offset multi-mode fiber (MMF) interferometer is reported. A high extinction ratio can be obtained by misaligning the fused cross section between the single-mode fiber (SMF) and the MMF. As the lateral force applied to a short section of the MMF varies, the extinction ratio changes while the interference phase remains almost constant. The change of the extinction ratio is independent of temperature variations. The proposed force sensor has the advantages of temperature and phase independence, high extinction ratio sensitivity, good repeatability, low cost, and simple structure. Moreover, the core-offset MMF interferometer is expected to have applications in fiber filters and tunable phase-independent attenuators.
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang
2017-10-01
Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on N-PROSAIL model parameter settings at different growth stages was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with determination coefficient (R2) and corresponding root mean square error (RMSE) values of 0.80 and 1.16 g m-2, respectively. The time needed for one sample estimation was only 6 ms on a test machine with an Intel(R) Core(TM) i5-2430 quad-core CPU at 2.40 GHz. These results confirmed the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
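The core of a lookup-table retrieval is a nearest-match search over pre-simulated spectra. The sketch below illustrates this with random placeholder data; a real table would be generated by running the N-PROSAIL model over stage-specific parameter ranges, and the band count, entry count, and k value here are assumptions.

```python
# Sketch of lookup-table (LUT) retrieval: pick the table entries whose simulated
# reflectance best matches an observed spectrum and return their canopy N density.
import numpy as np

rng = np.random.default_rng(0)
n_entries, n_bands = 10_000, 10
lut_reflectance = rng.uniform(0.0, 0.6, size=(n_entries, n_bands))  # simulated spectra
lut_cnd = rng.uniform(0.5, 8.0, size=n_entries)                     # g m^-2, simulated CND

def retrieve_cnd(observed, k=50):
    """Average CND of the k best-matching LUT entries (cost = spectral RMSE)."""
    cost = np.sqrt(((lut_reflectance - observed) ** 2).mean(axis=1))
    best = np.argsort(cost)[:k]
    return lut_cnd[best].mean()

observed_spectrum = rng.uniform(0.0, 0.6, size=n_bands)
print(f"estimated CND = {retrieve_cnd(observed_spectrum):.2f} g m^-2")
```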
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Cyrus; Larsen, Matt; Brugger, Eric
Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, this recent progress is due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
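Purely as a conceptual illustration of splitting independent Monte Carlo histories across the cores of a single node (the general idea that both parallelization modes exploit), the sketch below estimates pi with separate worker processes; it is not PHITS code and does not use MPI or OpenMP.

```python
# Conceptual illustration only: distribute Monte Carlo histories over worker
# processes on a multi-core machine and combine the tallies at the end.
import random
from multiprocessing import Pool

def run_histories(args):
    n, seed = args
    rng = random.Random(seed)
    # Count samples falling inside the unit quarter circle.
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 < 1.0)

if __name__ == "__main__":
    n_workers, n_per_worker = 4, 250_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(run_histories, [(n_per_worker, s) for s in range(n_workers)]))
    print("pi estimate:", 4 * hits / (n_workers * n_per_worker))
```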
1998-04-01
The result of the project is a demonstration of the fusion process, the sensors management and the real-time capabilities using simulated sensors...demonstrator (TAD) is a system that demonstrates the core element of a battlefield ground surveillance system by simulation in near real-time. The core...Management and Sensor/Platform simulation. The surveillance system observes the real world through a non-collocated heterogeneous multisensory system
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1.0)) and electron-scale (k_θ ρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion to electron mass ratio ((m_i/m_e)^{1/2} = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, which are indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
Ellborg, Anders; Ferreira, Denise; Mohammadnejad, Javad; Wärnheim, Torbjörn
2010-06-15
The droplet size distribution of 50 batches of multi-chamber bags containing the parenteral nutrition emulsions Intralipid (Kabiven and Kabiven Peripheral) or Structolipid (StructoKabiven and StructoKabiven Peripheral), respectively, has been investigated. The results show that the non-compounded lipid emulsions analysed are in compliance with the United States Pharmacopeia (USP) chapter 729, Method II limit for the droplet size distribution, PFAT(5)<0.05%. Copyright 2010 Elsevier B.V. All rights reserved.
Solute-Gas Equilibria in Multi-Organic Aqueous Systems
1981-11-30
Henry's constants for selected organic solvents were determined at ionic strengths up to 1.0 M (KCl) using an equilibrium batch stripping reactor, and the data were fit to a regression equation. The presence of additional organics (some perhaps not even strippable) likely to be found along with a particular volatile should be...
Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.
1987-06-01
computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and
MELCOR model for an experimental 17x17 spent fuel PWR assembly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey
2010-11-01
A MELCOR model has been developed to simulate a pressurized water reactor (PWR) 17 x 17 assembly in a spent fuel pool rack cell undergoing severe accident conditions. To the extent possible, the MELCOR model reflects the actual geometry, materials, and masses present in the experimental arrangement for the Sandia Fuel Project (SFP). The report presents an overview of the SFP experimental arrangement, the MELCOR model specifications, demonstration calculation results, and the input model listing.
AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading
NASA Astrophysics Data System (ADS)
Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration
2017-10-01
ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
Interactive high-resolution isosurface ray casting on multicore processors.
Wang, Qin; JaJa, Joseph
2008-01-01
We present a new method for the interactive rendering of isosurfaces using ray casting on multi-core processors. This method consists of a combination of an object-order traversal that coarsely identifies possible candidate 3D data blocks for each small set of contiguous pixels, and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. While static screen partitioning is widely used in the literature, our scheme performs dynamic allocation of groups of ray casting tasks to ensure almost equal loads among the different threads running on the multi-cores while maintaining spatial locality. We also make careful use of the memory management environment commonly present in multi-core processors. We test our system on a two-processor Clovertown platform, each processor consisting of a quad-core 1.86-GHz Intel Xeon, for a number of widely different benchmarks. The detailed experimental results show that our system is efficient and scalable, and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a 1024² screen for all the datasets tested, up to the maximum size of the main memory of our platform.
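A toy sketch of the dynamic load-balancing idea described above: worker threads pull small tile tasks from a shared queue instead of receiving a fixed static screen partition. The tile size, thread count, and the placeholder per-tile work are assumptions; a real ray caster would implement this in native code rather than Python threads.

```python
# Conceptual sketch of dynamic task allocation for tile-based rendering.
import queue
import threading

def render_tile(tile):
    # Placeholder for: find candidate 3D blocks for this tile, then ray cast it.
    x0, y0 = tile
    return (tile, sum((x0 + i) * (y0 + j) % 7 for i in range(8) for j in range(8)))

def worker(tasks, results):
    while True:
        try:
            tile = tasks.get_nowait()   # grab the next available tile
        except queue.Empty:
            return
        results.append(render_tile(tile))

tasks, results = queue.Queue(), []
for x in range(0, 1024, 8):
    for y in range(0, 1024, 8):
        tasks.put((x, y))

threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "tiles rendered")
```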
Ping, Yanyan; Deng, Yulan; Wang, Li; Zhang, Hongyi; Zhang, Yong; Xu, Chaohan; Zhao, Hongying; Fan, Huihui; Yu, Fulong; Xiao, Yun; Li, Xia
2015-01-01
Driver genetic aberrations collectively regulate core cellular processes underlying cancer development. However, identifying the modules of driver genetic alterations and characterizing their functional mechanisms are still major challenges for cancer studies. Here, we developed an integrative multi-omics method, CMDD, to identify driver modules and the dysregulated genes they affect by characterizing genetic alteration-induced dysregulated networks. Applied to glioblastoma (GBM), CMDD identified a core gene module of 17 genes, including seven known GBM drivers, and their dysregulated genes. The module showed significant association with shorter survival of GBM. When classifying driver genes in the module into two gene sets according to their genetic alteration patterns, we found that one gene set directly participated in the glioma pathway, while the other indirectly regulated the glioma pathway, mostly via their dysregulated genes. Both gene sets were significant contributors to survival and helpful for classifying GBM subtypes, suggesting their critical roles in GBM pathogenesis. Also, by applying CMDD to six other cancers, we identified some novel core modules associated with overall survival of patients. Together, these results demonstrate that integrative multi-omics data can identify driver modules and uncover their dysregulated genes, which is useful for interpreting the cancer genome. PMID:25653168
Miyauchi, Minoru; Miao, Jianjun; Simmons, Trevor J.; Lee, Jong-Won; Doherty, Thomas V.; Dordick, Jonathan S.; Linhardt, Robert J.
2010-01-01
Core-sheath multi-walled carbon nanotube (MWNT)-cellulose fibers with diameters from several hundred nm to several µm were prepared by co-axial electrospinning from a non-volatile, non-flammable ionic liquid (IL) solvent, 1-ethyl-3-methylimidazolium acetate ([EMIM][Ac]). MWNTs were dispersed in the IL to form a gel solution. This gel core solution was electrospun surrounded by a sheath solution of cellulose dissolved in the same IL. Electrospun fibers were collected in a coagulation bath containing ethanol-water to completely remove the IL and dried to form core-sheath MWNT-cellulose fibers having a cable structure with a conductive core and an insulating sheath. Enzymatic treatment of a portion of a mat of these fibers with cellulase selectively removed the cellulose sheath, exposing the MWNT core for connection to an electrode. These MWNT-cellulose fiber mats demonstrated excellent conductivity due to a conductive pathway of bundled MWNTs. Fiber mat conductivity increased with increasing ratio of MWNT in the fibers, with a maximum conductivity of 10.7 S/m obtained at 45 wt% MWNT loading. PMID:20690644
Towards multi-decadal to multi-millennial ice core records from coastal west Greenland ice caps
NASA Astrophysics Data System (ADS)
Das, Sarah B.; Osman, Matthew B.; Trusel, Luke D.; McConnell, Joseph R.; Smith, Ben E.; Evans, Matthew J.; Frey, Karen E.; Arienzo, Monica; Chellman, Nathan
2017-04-01
The Arctic region, and Greenland in particular, is undergoing dramatic change as characterized by atmospheric warming, decreasing sea ice, shifting ocean circulation patterns, and rapid ice sheet mass loss, but longer records are needed to put these changes into context. Ice core records from the Greenland ice sheet have yielded invaluable insight into past climate change both regionally and globally, and provided important constraints on past surface mass balance more directly, but these ice cores are most often from the interior ice sheet accumulation zone, at high altitude and hundreds of kilometers from the coast. Coastal ice caps, situated around the margins of Greenland, have the potential to provide novel high-resolution records of local and regional maritime climate and sea surface conditions, as well as contemporaneous glaciological changes (such as accumulation and surface melt history). But obtaining these records is extremely challenging. Most of these ice caps are unexplored, and thus their thickness, age, stratigraphy, and utility as sites of new and unique paleoclimate records are largely unknown. Access is severely limited due to their high altitude, steep relief, small surface area, and inclement weather. Furthermore, their relatively low elevation and marine-moderated climate can contribute to significant surface melting and degradation of the ice stratigraphy. We recently targeted areas near the Disko Bay region of central west Greenland where maritime ice caps are prevalent but unsampled, as potential sites for new multi-decadal to multi-millennial ice core records. In 2014 & 2015 we identified two promising ice caps, one on Disko Island (1250 m a.s.l.) and one on Nuussuaq Peninsula (1980 m a.s.l.), based on airborne and ground-based geophysical observations and physical and glaciochemical stratigraphy from shallow firn cores. In spring 2015 we collected ice cores at both sites using the Badger-Eclipse electromechanical drill, transported by a medley of small fixed wing and helicopter aircraft, and working out of small tent camps. On Disko Island, despite high accumulation rates and an ice thickness of 250 meters, drilling was halted twice due to encountering liquid water at depths ranging from 18-20 meters, limiting the depth of the final core to 21 m and providing a multi-decadal record (1980-2015). On Nuussuaq Peninsula, we collected a 138 m ice core, almost to bedrock, representing a 2500-year record. The ice cores were subsequently analyzed using a continuous flow analysis (CFA) system. Age-depth profiles and accumulation histories were determined by combining annual layer counting and an ice flow thinning model, both constrained by glaciochemical tie points to other well-dated Greenland ice core records (e.g. volcanic horizons and continuous heavy metal records). Here we will briefly provide an overview of the project and the new sites, and the novel dating methodology, and describe the latest stratigraphic, isotopic and glaciochemical results. We will also provide a particular focus on new regional climatological insight gained from our records during three climatically sensitive time periods: the late 20th & early 21st centuries; the Little Ice Age; and the Medieval Climate Anomaly.
Park, Tae-Min; Kang, Donggu; Jang, Ilho; Yun, Won-Soo; Shim, Jin-Hyung; Jeong, Young Hun; Kwak, Jong-Young; Yoon, Sik; Jin, Songwan
2017-01-01
In general, a drug candidate is evaluated using 2D-cultured cancer cells followed by an animal model. Despite successful preclinical testing, however, most drugs that enter human clinical trials fail. The high failure rates are mainly caused by incompatibility between the responses of the current models and humans. Here, we fabricated a cancer microtissue array in a multi-well format that exhibits a heterogeneous yet batch-to-batch consistent structure by continuous deposition of collagen-suspended HeLa cells on a fibroblast-layered nanofibrous membrane via inkjet printing. Expression of both Matrix Metalloproteinase 2 (MMP2) and Matrix Metalloproteinase 9 (MMP9) was higher in cancer microtissues than in fibroblast-free microtissues. The fabricated microtissues were treated with an anticancer drug, and high drug resistance to doxorubicin was observed in cancer microtissues but not in fibroblast-free microtissues. These results introduce an inkjet printing fabrication method for cancer microtissue arrays, which can be used for various applications such as early drug screening and gradual 3D cancer studies. PMID:29112150
Adsorption of Zn(II) and Cd(II) ions in batch system by using the Eichhornia crassipes.
Módenes, A N; Espinoza-Quiñones, F R; Borba, C E; Trigueros, D E G; Lavarda, F L; Abugderah, M M; Kroumov, A D
2011-01-01
In this work, the displacement effects on the sorption capacities for zinc and cadmium ions of the Eichhornia crassipes-type biosorbent in a batch binary system have been studied. Preliminary single-metal sorption experiments were carried out. An improvement in Zn(II) and Cd(II) ion removal was achieved by working at a temperature of 30 °C and with non-uniform biosorbent grain sizes. A 60 min equilibrium time was reached for both Zn(II) and Cd(II) ions. Furthermore, it was found that the overall kinetic data were best described by the pseudo-second-order kinetic model. Classical multi-component adsorption isotherms were tested, as well as a modified extended Langmuir isotherm model, showing good agreement with the equilibrium binary data. A maximum metal uptake of around 0.65 mequiv./g was attained with the E. crassipes biosorbent, which showed higher adsorption affinity for zinc ions than for cadmium ions in the binary system.
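For reference, the pseudo-second-order model mentioned above has the closed form q(t) = k qe² t / (1 + k qe t). The sketch below fits it to uptake data with SciPy; the data points and initial guesses are synthetic placeholders, not the study's measurements.

```python
# Fit the pseudo-second-order kinetic model to synthetic batch uptake data.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    # q(t) = k * qe^2 * t / (1 + k * qe * t)
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([5, 10, 20, 30, 45, 60, 90], dtype=float)      # min
q = np.array([0.30, 0.45, 0.55, 0.60, 0.63, 0.64, 0.65])    # mequiv/g (synthetic)

(qe_fit, k_fit), _ = curve_fit(pso, t, q, p0=(0.7, 0.05))
print(f"qe = {qe_fit:.3f} mequiv/g, k = {k_fit:.4f} g/(mequiv*min)")
```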
Xu, Min; Zhang, Lei; Yue, Hong-Shui; Pang, Hong-Wei; Ye, Zheng-Liang; Ding, Li
2017-10-01
To establish an on-line monitoring method for the extraction process of Schisandrae Chinensis Fructus, a formula medicinal material of Yiqi Fumai lyophilized injection, near infrared spectroscopy was combined with multivariate data analysis technology. A multivariate statistical process control (MSPC) model was established based on 5 normal production batches, and 2 test batches were monitored using PC scores, DModX, and Hotelling T2 control charts. The results showed that the MSPC model had good monitoring ability for the extraction process. The application of the MSPC model to the actual production process could effectively achieve on-line monitoring of the extraction process of Schisandrae Chinensis Fructus and reflect changes in material properties during production in real time. This established process monitoring method could provide a reference for the application of process analytical technology in the process quality control of traditional Chinese medicine injections. Copyright© by the Chinese Pharmaceutical Association.
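A minimal sketch of the two monitoring statistics named above, computed from a PCA model fitted on normal batches; the random matrices stand in for NIR spectra and the component count is an assumption, so this only illustrates the mechanics of Hotelling T² and a DModX-style residual distance.

```python
# Fit PCA on spectra from normal batches, then monitor new spectra via
# Hotelling T^2 (distance within the model) and a DModX-like residual
# (distance to the model). Data are random placeholders, not NIR spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(50, 200))    # training spectra from normal batches
X_new = rng.normal(size=(5, 200))        # spectra from a batch being monitored

pca = PCA(n_components=3).fit(X_normal)
scores = pca.transform(X_new)

t2 = (scores**2 / pca.explained_variance_).sum(axis=1)           # Hotelling T^2
residual = X_new - pca.inverse_transform(scores)
dmodx = np.sqrt((residual**2).sum(axis=1) / residual.shape[1])   # DModX-like distance

print("T^2:", np.round(t2, 2))
print("DModX:", np.round(dmodx, 3))
```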
Using OpenMP vs. Threading Building Blocks for Medical Imaging on Multi-cores
NASA Astrophysics Data System (ADS)
Kegel, Philipp; Schellmann, Maraike; Gorlatch, Sergei
We compare two parallel programming approaches for multi-core systems: the well-known OpenMP and the recently introduced Threading Building Blocks (TBB) library by Intel®. The comparison is made using the parallelization of a real-world numerical algorithm for medical imaging. We develop several parallel implementations, and compare them w.r.t. programming effort, programming style and abstraction, and runtime performance. We show that TBB requires a considerable program re-design, whereas with OpenMP simple compiler directives are sufficient. While TBB appears to be less appropriate for parallelizing existing implementations, it fosters a good programming style and higher abstraction level for newly developed parallel programs. Our experimental measurements on a dual quad-core system demonstrate that OpenMP slightly outperforms TBB in our implementation.
Improved Sensitivity Spontaneous Raman Scattering Multi-Gas Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buric, Michael P.; Chen, Kevin P.; Falk, Joel
2009-01-01
We report a backward-wave spontaneous-Raman multi-gas sensor employing a hollow-core photonic-bandgap fiber to contain gases and increase the interaction length. Silica Raman noise and detection speed are reduced using a digital spatial filter and a cladding seal.
Benchmarking NWP Kernels on Multi- and Many-core Processors
NASA Astrophysics Data System (ADS)
Michalakes, J.; Vachharajani, M.
2008-12-01
Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
Scoping Study Investigating PWR Instrumentation during a Severe Accident Scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rempe, J. L.; Knudson, D. L.; Lutz, R. J.
The accidents at the Three Mile Island Unit 2 (TMI-2) and Fukushima Daiichi Units 1, 2, and 3 nuclear power plants demonstrate the critical importance of accurate, relevant, and timely information on the status of reactor systems during a severe accident. These events also highlight the critical importance of understanding and focusing on the key elements of system status information in an environment where operators may be overwhelmed with superfluous and sometimes conflicting data. While progress in these areas has been made since TMI-2, the events at Fukushima suggest that there may still be a potential need to ensure that critical plant information is available to plant operators. Recognizing the significant technical and economic challenges associated with plant modifications, it is important to focus on instrumentation that can address these critical information needs. As part of a program initiated by the Department of Energy, Office of Nuclear Energy (DOE-NE), a scoping effort was initiated to assess critical information needs identified for severe accident management and mitigation in commercial Light Water Reactors (LWRs), to quantify the environment that instruments monitoring these data would have to survive, and to identify gaps where predicted environments exceed instrumentation qualification envelope (QE) limits. Results from the Pressurized Water Reactor (PWR) scoping evaluations are documented in this report. The PWR evaluations were limited in this scoping evaluation to quantifying the environmental conditions for an unmitigated Short-Term Station BlackOut (STSBO) sequence in one unit at the Surry nuclear power station. Results were obtained using the MELCOR models developed for the US Nuclear Regulatory Commission (NRC)-sponsored State of the Art Consequence Assessment (SOARCA) project. Results from this scoping evaluation indicate that some instrumentation identified to provide critical information would be exposed to conditions that significantly exceed QE limits for extended time periods for the low-frequency STSBO sequence evaluated in this study. It is recognized that the core damage frequency (CDF) of the sequence evaluated in this scoping effort would be considerably lower if the evaluations considered new FLEX equipment being installed by industry. Nevertheless, because of uncertainties in instrumentation response when exposed to conditions beyond QE limits, and alternate challenges associated with different sequences that may impact sensor performance, it is recommended that additional evaluations of instrumentation performance be completed to provide confidence that operators have access to accurate, relevant, and timely information on the status of reactor systems for a broad range of challenges associated with risk-important severe accident sequences.
NASA Astrophysics Data System (ADS)
Wongsawaeng, Doonyapong; Jumpee, Chayanit; Jitpukdee, Manit
2014-08-01
In conventional nuclear fuel rods for light-water reactors, a helium-filled as-fabricated gap between the fuel and the cladding inner surface accommodates fuel swelling and cladding creep-down. Because helium exhibits a very low thermal conductivity, it results in a large temperature rise across the gap. Liquid metal (LM; 1/3 weight portion each of lead, tin, and bismuth) has been proposed as a gap filler because of its high thermal conductivity (∼100 times that of He), low melting point (∼100 °C), and lack of chemical reactivity with UO2 and water. With the presence of LM, the temperature drop across the gap is virtually eliminated and the fuel is operated at a lower temperature at the same power output, resulting in safer fuel, delayed fission gas release, and prevention of massive secondary hydriding. During normal reactor operation, should an LM-bonded fuel rod failure occur, resulting in a discharge of liquid metal into the bottom of the reactor pressure vessel, it should not corrode stainless steel. An experiment was conducted to confirm that at 315 °C, LM in contact with 304 stainless steel in the PWR water chemistry environment for up to 30 days resulted in no observable corrosion. Moreover, for a hypothetical core-melt accident in which liquid metal at elevated temperatures between 1000 and 1600 °C is spread on the high-density concrete basement of the power plant, a small-scale experiment was performed to demonstrate that the LM-concrete interaction at 1000 °C for as long as 12 h resulted in no penetration. At 1200 °C for 5 h, the LM penetrated a distance of ∼1.3 cm, but the penetration appeared to stop. At 1400 °C the penetration rate was ∼0.7 cm/h. At 1600 °C, the penetration rate was ∼17 cm/h. No corrosion based on chemical reactions with high-density concrete occurred, and, hence, the only physical interaction between high-temperature LM and high-density concrete was from tiny cracks generated by thermal stress. Moreover, at temperatures as high as 1600 °C, the non-reactive LM was experimentally confirmed not to show any chemical reaction with air or moisture in the air. This experimental work confirmed the excellent compatibility of the LM as a PWR fuel gap filler with stainless steel and high-density concrete in the high-temperature regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Rosa, Felice
2006-07-01
In the ambit of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC). One of the reference codes used to compare ASTEC results, coming from experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations referring to a French PWR 900 MWe and to the accident sequence of 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a Station Blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 °C, a total opening of the three relief valves when Tric (core maximal outlet temperature) is above 603 K (330 °C), and accumulator isolation when the primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the structure, data, and same conditions as those found in the ASTEC input decks. The main goal of the work presented in this paper is to show where and when MELCOR provides good enough results and why, in some cases mainly related to its specific models (candling, corium pool behaviour, etc.), they were less good. Future work will be the preparation of an input deck for the new MELCOR 1.8.6 and a code-to-code comparison with ASTEC v1.2 rev. 1. (author)
The last Deglaciation in the Mediterranean region: a multi-archives synthesis
NASA Astrophysics Data System (ADS)
Bazin, Lucie; Siani, Giuseppe; Landais, Amaelle; Bassinot, Frank; Genty, Dominique; Govin, Aline; Michel, Elisabeth; Nomade, Sebastien; Waelbroeck, Claire
2016-04-01
Multiple proxies record past climatic changes in different climate archives. These proxies are influenced by different components of the climate system and bring complementary information on past climate variability. The major limitation when combining proxies from different archives comes from the coherency of their chronologies. Indeed, each climate archive possesses its own dating methods, which are not necessarily coherent with one another. Consequently, when we want to assess the latitudinal changes and mechanisms behind a climate event, we often have to rely on assumptions of synchronisation between the different archives, such as synchronous temperature changes during warming events (Austin and Hibbert 2010). Recently, a dating method originally developed to produce coherent chronologies for ice cores (Datice, Lemieux-Dudon et al., 2010) has been adapted to integrate different climate archives (ice cores, sediment cores and speleothems; Lemieux-Dudon et al., 2015, Bazin et al., in prep). In this presentation we present the validation of this multi-archive dating tool with a first application covering the last Deglaciation in the Mediterranean region. For this experiment, we consider the records from Monticchio, the MD90-917, Tenaghi Philippon and Lake Ohrid sediment cores, as well as continuous speleothems from Sofular, Soreq and La Mine caves. Using the Datice dating tool, and with the identification of common tephra layers between the cores considered, we are able to produce a coherent multi-archive chronology for this region, independently of any climatic assumption. Using this common chronological framework, we show that the usual climatic synchronisation assumptions are not valid over this region for the last glacial-interglacial transition. Finally, we compare our coherent Mediterranean chronology with Greenland ice core records in order to discuss the sequence of events of the last Deglaciation between these two regions.
A multi-archive coherent chronology: from Greenland to the Mediterranean sea
NASA Astrophysics Data System (ADS)
Bazin, Lucie; Landais, Amaelle; Lemieux-Dudon, Bénédicte; Siani, Giuseppe; Michel, Elisabeth; Combourieu-Nebout, Nathalie; Blamart, Dominique; Genty, Dominique
2015-04-01
Understanding climate mechanisms requires a precise knowledge of the sequence of events during major climate changes. In order to provide precise relationships between changes in orbital and/or greenhouse gas concentration forcing, sea level changes, and high- vs low-latitude temperatures, a common chronological framework for different paleoclimatic archives is required. Coherent chronologies for ice cores have recently been produced using a Bayesian dating tool, DATICE (Lemieux-Dudon et al., 2010; Bazin et al., 2013; Veres et al., 2013). This tool has recently been developed further to include marine cores and speleothems in addition to ice cores. This new development should enable one to test the coherency of different chronologies using absolute and stratigraphic links, as well as to provide relationships between climatic changes recorded in different archives. We present here a first application of multi-archive coherent dating including paleoclimatic archives from (1) Greenland (NGRIP ice core), (2) the Mediterranean Sea (marine core MD90-917, 41°N, 17°E, 1010 m) and (3) speleothems from the South of France and North Tunisia (Chauvet, Villars and La Mine speleothems; Genty et al., 2006). Thanks to the good absolute chronological constraints from annual layer counting in NGRIP, 14C and tephra layers in MD90-917, and U-Th dating in speleothems, we can provide a precise chronological framework for the last 50 ka (i.e. thousand years before present). Then, we present different tests on how to combine the records from the different archives and give the most plausible scenario for the sequence of events at different latitudes over the last deglaciation. Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Kele, H. T. M.; Veres, D.; Parrenin, F.; Martinerie, P.; Ritz, C.; Capron, E.; Lipenkov, V.; Loutre, M.-F.; Raynaud, D.; Vinther, B.; Svensson, A.; Rasmussen, S.; Severi, M.; Blunier, T.; Leuenberger, M.; Fischer, H.; Masson-Delmotte, V.; Chappellaz, J. & Wolff, E., An optimized multi-proxy, multi-site Antarctic ice and gas orbital chronology (AICC2012): 120-800 ka, Clim. Past 9, 1715-1731, 2013. Genty, D.; Blamart, D.; Ghaleb, B.; Plagnes, V.; Causse, Ch.; Bakalowicz, M.; Zouari, K.; Chkir, N.; Hellstrom, J.; Wainer, K.; Bourges, F., Timing and dynamics of the last deglaciation from European and North African δ13C stalagmite profiles - comparison with Chinese and South Hemisphere stalagmites, Quat. Sci. Rev. 25, 2118-2142, 2006. Lemieux-Dudon, B.; Blayo, E.; Petit, J.-R.; Waelbroeck, C.; Svensson, A.; Ritz, C.; Barnola, J.-M.; Narcisi, B.M.; Parrenin, F., Consistent dating for Antarctic and Greenland ice cores, Quat. Sci. Rev. 29(1-2), 2010. Veres, D.; Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Parrenin, F.; Martinerie, P.; Toyé Mahamadou Kele, H.; Capron, E.; Chappellaz, J.; Rasmussen, S.; Severi, M.; Svensson, A.; Vinther, B. & Wolff, E., The Antarctic ice core chronology (AICC2012): an optimized multi-parameter and multi-site dating approach for the last 120 thousand years, Clim. Past 9, 1733-1748, 2013.
NASA Astrophysics Data System (ADS)
Chen, W.; Jiang, M.; Xu, Y.; Shi, P. W.; Yu, L. M.; Ding, X. T.; Shi, Z. B.; Ji, X. Q.; Yu, D. L.; Li, Y. G.; Yang, Z. C.; Zhong, W. L.; Qiu, Z. Y.; Li, J. Q.; Dong, J. Q.; Yang, Q. W.; Liu, Yi.; Yan, L. W.; Xu, M.; Duan, X. R.
2017-11-01
Multi-scale interactions have been observed recently in HL-2A core NBI plasmas, including the synchronous coupling between the m/n=1/1 kink mode and the m/n=2/1 tearing mode, nonlinear couplings of TAE/BAE with the m/n=2/1 TM near the q=2 surface, of AITG/KBM/BAE with the m/n=1/1 kink mode near the q=1 surface, and between the m/n=1/1 kink mode and high-frequency turbulence. Experimental results suggest that several couplings can exist simultaneously, that Alfvenic fluctuations make an important contribution to the high-frequency turbulence spectra, and that the couplings have an electromagnetic character. Multi-scale interactions via the nonlinear modulation process may enhance plasma transport and trigger sawtooth-crash onset.
Spinelli, L.; Botwicz, M.; Zolek, N.; Kacprzak, M.; Milej, D.; Sawosz, P.; Liebert, A.; Weigel, U.; Durduran, T.; Foschum, F.; Kienle, A.; Baribeau, F.; Leclair, S.; Bouchard, J.-P.; Noiseux, I.; Gallant, P.; Mermut, O.; Farina, A.; Pifferi, A.; Torricelli, A.; Cubeddu, R.; Ho, H.-C.; Mazurenka, M.; Wabnitz, H.; Klauenberg, K.; Bodnar, O.; Elster, C.; Bénazech-Lavoué, M.; Bérubé-Lauzière, Y.; Lesage, F.; Khoptyar, D.; Subash, A. A.; Andersson-Engels, S.; Di Ninni, P.; Martelli, F.; Zaccanti, G.
2014-01-01
A multi-center study has been set up to accurately characterize the optical properties of diffusive liquid phantoms based on Intralipid and India ink at near-infrared (NIR) wavelengths. Nine research laboratories from six countries adopting different measurement techniques, instrumental set-ups, and data analysis methods determined at their best the optical properties and relative uncertainties of diffusive dilutions prepared with common samples of the two compounds. By exploiting a suitable statistical model, comprehensive reference values at three NIR wavelengths for the intrinsic absorption coefficient of India ink and the intrinsic reduced scattering coefficient of Intralipid-20% were determined with an uncertainty of about 2% or better, depending on the wavelength considered, and 1%, respectively. Even if in this study we focused on particular batches of India ink and Intralipid, the reference values determined here represent a solid and useful starting point for preparing diffusive liquid phantoms with accurately defined optical properties. Furthermore, due to the ready availability, low cost, long-term stability and batch-to-batch reproducibility of these compounds, they provide a unique fundamental tool for the calibration and performance assessment of diffuse optical spectroscopy instrumentation intended to be used in laboratory or clinical environment. Finally, the collaborative work presented here demonstrates that the accuracy level attained in this work for optical properties of diffusive phantoms is reliable. PMID:25071947
Zhang, Yi-Bei; DA, Juan; Zhang, Jing-Xian; Li, Shang-Rong; Chen, Xin; Long, Hua-Li; Wang, Qiu-Rong; Cai, Lu-Ying; Yao, Shuai; Hou, Jin-Jun; Wu, Wan-Ying; Guo, De-An
2017-04-01
Aconiti Lateralis Radix Praeparata (Fuzi) is a traditional Chinese medicine commonly used in the clinic for its potency in restoring yang and rescuing from collapse. Aconitum alkaloids, mainly including monoester-diterpenoid aconitines (MDAs) and diester-diterpenoid aconitines (DDAs), are considered to act as both bioactive and toxic constituents. In the present study, a feasible, economical, and accurate HPLC method for the simultaneous determination of six alkaloid markers using the Single Standard for Determination of Multi-Components (SSDMC) approach was developed and fully validated. Benzoylmesaconine was used as the unique reference standard. This method was proven to be accurate (recoveries between 97.5% and 101.8%, RSD < 3%), precise (RSD 0.63%-2.05%), and linear (R > 0.9999) over the concentration ranges, and was subsequently applied to the quantitative evaluation of 62 batches of samples, among which 45 batches were from good manufacturing practice (GMP) facilities and 17 batches from the drug market. The contents were then analyzed by principal component analysis (PCA) and a homogeneity test. The present study provides valuable information for improving the quality standard of Aconiti Lateralis Radix Praeparata. The developed method also has potential for the analysis of other Aconitum species, such as Aconitum carmichaelii (prepared parent root) and Aconitum kusnezoffii (prepared root). Copyright © 2017 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
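A sketch of the single-standard, multi-component quantification idea: other markers are quantified from the benzoylmesaconine calibration via relative correction factors. The marker names, peak areas, and factor values below are hypothetical placeholders, not validated figures from the study.

```python
# Single-standard, multi-component estimate: concentration of marker i is derived
# from the reference standard's response and a relative correction factor RCF_i
# (ratio of the marker's response per unit concentration to the reference's).
ref_conc = 50.0      # ug/mL benzoylmesaconine standard (hypothetical)
ref_area = 1200.0    # its peak area in the same run (hypothetical)

# peak areas of other markers in a sample, with assumed RCFs
sample = {"benzoylaconine": (850.0, 0.92), "benzoylhypaconine": (430.0, 1.10)}

ref_response = ref_area / ref_conc               # area per unit concentration
for name, (area, rcf) in sample.items():
    conc = area / (rcf * ref_response)           # estimated ug/mL from one standard
    print(f"{name}: {conc:.1f} ug/mL")
```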
NASA Astrophysics Data System (ADS)
Xiao, J.; Qiu, S. Y.; Chen, Y.; Fu, Z. H.; Lin, Z. X.; Xu, Q.
2015-01-01
Alloy 690(TT) is widely used for steam generator tubes in pressurized water reactors (PWRs), where it is susceptible to corrosion fatigue. In this study, the corrosion fatigue behavior of Alloy 690(TT) in simulated PWR environments was investigated. The microstructure of the plastic zone near the crack tip was examined and labyrinth structures were observed. The relationship between the crack-tip plastic zone, the fatigue crack growth rates, and the environmental factor Fen was elucidated.
Waterside corrosion of Zircaloy-clad fuel rods in a PWR environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzarolli, F.; Jorde, D.; Manzel, R.
A data base of Zircaloy corrosion behavior under PWR operating conditions has been established from previously published reports as well as from new Kraftwerk Union (KWU) fuel examinations. The data show that the reactor environment increases the corrosion. ZrO2 film thermal conductivity is another major factor that influences corrosion behavior. It was inferred from KWU film thickness data that the oxide film thermal conductivity may decrease once circumferential cracks develop in the layer. 57 refs.
Chemical Agonists of the PML/Daxx Pathway for Prostate Cancer Therapy
2011-04-01
positive nuclei. These data suggest that the assay is highly specific and will not suffer from promiscuous reactivity with NIH library compounds...Figure 16B). Strikingly, when we compared Daxx levels in PCa cell lines to a nontumorigenic human prostatic epithelial line, PWR-1E, they were...Lysates from six different cell types (PWR-1E, ALVA-31 Daxx K/D, ALVA-31 WT, DU145, LNCaP, and PC3) were normalized for total protein content (60 μg
NASA Astrophysics Data System (ADS)
Tamer, Ugur; Onay, Aykut; Ciftci, Hakan; Bozkurt, Akif Göktuğ; Cetin, Demet; Suludere, Zekiye; Hakkı Boyacı, İsmail; Daniel, Philippe; Lagarde, Fabienne; Yaacoub, Nader; Greneche, Jean-Marc
2014-10-01
Multi-branched core-shell Fe3-xO4@Au magnetic nanoparticles were synthesized in high yield and used as a magnetic separation platform and as surface-enhanced Raman scattering (SERS) substrates. The multi-branched magnetic nanoparticles were prepared by a seed-mediated growth approach using magnetic gold nanospheres as the seeds, with subsequent reduction of metal salt by ascorbic acid in the presence of the stabilizing agent chitosan biopolymer and silver ions. Anisotropic growth of the nanoparticles was observed in the presence of the chitosan polymer matrix, resulting in multi-branched nanoparticles with diameters over 100 nm; silver ions also play a crucial role in the growth of the multi-branched nanoparticles. We propose a mechanism for the formation of the multi-branched nanoparticles, and the properties of nanoparticles embedded in the chitosan matrix are discussed. The surface morphology of the nanoparticles was characterized by transmission electron microscopy, scanning electron microscopy, ultraviolet-visible spectroscopy (UV-Vis), X-ray diffraction, Fourier transform infrared spectroscopy, and 57Fe Mössbauer spectrometry. Additionally, the magnetic properties of the nanoparticles were examined. We also demonstrate that the synthesized Fe3-xO4@Au multi-branched nanoparticles are capable of targeted separation of pathogens from a matrix and of sensing as SERS substrates.
IGA-ADS: Isogeometric analysis FEM using ADS solver
NASA Astrophysics Data System (ADS)
Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhammad Amber; Pingali, Keshav
2017-08-01
In this paper we present a fast explicit solver for the solution of non-stationary problems using L2 projections with the isogeometric finite element method. The solver has been implemented within the GALOIS framework. It enables parallel multi-core simulations of different time-dependent problems in 1D, 2D, or 3D. We have prepared the solver framework in a way that enables direct implementation of the selected PDE and corresponding boundary conditions. In this paper we describe the installation, the implementation of three exemplary PDEs, and the execution of the simulations on multi-core Linux cluster nodes. We consider three case studies, including heat transfer, linear elasticity, and non-linear flow in heterogeneous media. The presented package generates output suitable for interfacing with Gnuplot and ParaView visualization software. The exemplary simulations show near-perfect scalability on the Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each possessing 10 physical cores (for a total of 40 cores).
Time-resolved photoluminescence study of CdSe/CdMnS/CdS core/multi-shell nanoplatelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, J. R.; Department of Physics, State University of New York, University at Buffalo, Buffalo, New York 14260; Delikanli, S.
2016-06-13
We used photoluminescence spectroscopy to resolve two emission features in CdSe/CdMnS/CdS and CdSe/CdS core/multi-shell nanoplatelet heterostructures. The photoluminescence from the magnetic sample has a positive circular polarization with a maximum centered at the position of the lower energy feature. The higher energy feature has a corresponding signature in the absorption spectrum; this is not the case for the low-energy feature. We have also studied the temporal evolution of these features using a pulsed-excitation/time-resolved photoluminescence technique to investigate their corresponding recombination channels. A model was used to analyze the temporal dynamics of the photoluminescence which yielded two distinct timescales associated with these recombination channels. The above results indicate that the low-energy feature is associated with recombination of electrons with holes localized at the core/shell interfaces; the high-energy feature, on the other hand, is excitonic in nature with the holes confined within the CdSe cores.
Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel
2015-01-01
We carry out a comparative performance study of multi-core CPUs, GPUs and Intel Xeon Phi (Many Integrated Core-MIC) with a microscopy image analysis application. We experimentally evaluate the performance of the computing devices on the core operations of the application. We correlate the observed performance with the characteristics of the computing devices and with the data access patterns, computation complexities, and parallelization forms of the operations. The results show a significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access on a MIC is comparable to, and sometimes better than, that on a GPU. GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variabilities in operation speedups. Our scheduling strategies significantly improve application performance compared to classic strategies in hybrid configurations. PMID:28239253
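The performance-aware scheduling idea can be sketched as follows; the device names and speedup numbers are invented placeholders rather than measurements from the study, and the real scheduler operates on a hybrid CPU/GPU/MIC pipeline rather than this toy queue.

```python
# Greedy, speedup-aware assignment: each operation goes to the device that would
# finish it earliest given that device's current backlog.  Values are illustrative.
SPEEDUP = {                     # relative throughput vs. a single CPU core (assumed)
    "regular_stencil":  {"cpu": 1.0, "gpu": 9.0, "mic": 10.0},
    "irregular_access": {"cpu": 1.0, "gpu": 7.0, "mic": 3.0},
}

def schedule(tasks, devices=("cpu", "gpu", "mic")):
    """Assign (op_type, base_cost) tasks to devices, minimizing each task's finish time."""
    free_at = {d: 0.0 for d in devices}   # time at which each device becomes free
    plan = []
    for op, base_cost in tasks:
        dev = min(devices, key=lambda d: free_at[d] + base_cost / SPEEDUP[op][d])
        free_at[dev] += base_cost / SPEEDUP[op][dev]
        plan.append((op, dev, free_at[dev]))
    return plan

tasks = [("regular_stencil", 10.0), ("irregular_access", 10.0), ("regular_stencil", 5.0)]
for op, dev, t in schedule(tasks, ("cpu", "gpu", "mic")):
    print(f"{op:18s} -> {dev:3s} (finishes at t={t:.2f})")
```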
The role of heterotrophic microorganism Galactomyces sp. Z3 in improving pig slurry bioleaching.
Zhou, Jun; Zheng, Guanyu; Zhou, Lixiang; Liu, Fenwu; Zheng, Chaocheng; Cui, Chunhong
2013-01-01
The feasibility of removing heavy metals and eliminating pathogens from pig slurry through bioleaching involving the fungus Galactomyces sp. Z3 and two acidophilic Acidithiobacillus species (A. ferrooxidans LX5 and A. thiooxidans TS6) was investigated. The isolated degrader of pig slurry dissolved organic matter (DOM) was identified as Galactomyces sp. Z3, which grew well at pH 2.5-7 and degraded pig slurry DOM from 1973 to 942 mg/l within 48 h. In successive multi-batch bioleaching systems, co-inoculation of Galactomyces sp. Z3 with the two Acidithiobacillus species improved bioleaching efficiency compared with the system lacking Galactomyces sp. Z3. The removal efficiencies of Zn and Cu exceeded 94% and 85%, respectively. In addition, the elimination efficiencies of pathogens, including both total coliform and faecal coliform counts, exceeded 99% after bioleaching treatment. However, the counts of Galactomyces sp. Z3 decreased with the fall in pH and did not recover to the initial level during successive multi-batch bioleaching, so Galactomyces sp. Z3 cells must be re-inoculated into the bioleaching system to maintain its role in degrading pig slurry DOM. A bioleaching technique involving both Galactomyces sp. Z3 and Acidithiobacillus species is therefore an efficient method for removing heavy metals and eliminating pathogens from pig slurry.
Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J
2012-11-09
A new alternative data-processing tool set, metAlignID, has been developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software runs multi-threaded (one thread per processor core) on a standard PC (personal computer) under different operating systems and can therefore process multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then pre-processed using previously developed algorithms originating from the metAlign software. Next, the resulting reduced data files are searched against a user-composed library (derived from user or commercial NIST-compatible libraries; NIST = National Institute of Standards and Technology), and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batch-wise. The overall time needed for conversion, processing, and searching of 30 raw data sets against 560 compounds is routinely within an hour. The screening performance is evaluated for the detection of pesticides and contaminants in raw data obtained from the analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than with the standard processing, and a quantitative estimate is added. The software tool set, in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry, shows great potential as a highly automated and fast multi-residue instrumental screening method. Copyright © 2012 Elsevier B.V. All rights reserved.
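The "one thread per processor core" batch layout can be sketched roughly as below; convert_to_netcdf() and search_library() are hypothetical stand-ins for the actual metAlignID processing steps, shown only to illustrate how independent raw files map onto a pool of workers.

```python
# Sketch of per-core batch processing of raw data files (placeholder processing steps).
import multiprocessing as mp
from pathlib import Path

def convert_to_netcdf(raw_file: Path) -> Path:
    # hypothetical stand-in for the raw-data -> netCDF conversion step
    return raw_file.with_suffix(".cdf")

def search_library(reduced: Path, library: str) -> list:
    # hypothetical stand-in for the NIST-compatible library search; returns dummy hits
    return ["compound_A", "compound_B"]

def process_one(raw_file: Path) -> str:
    reduced = convert_to_netcdf(raw_file)
    hits = search_library(reduced, "library.msl")   # hypothetical library file name
    return f"{raw_file.name}: {len(hits)} compounds identified"

def process_batch(raw_files):
    # one worker per core; each worker handles whole raw files independently
    with mp.Pool(processes=mp.cpu_count()) as pool:
        return pool.map(process_one, list(raw_files))

if __name__ == "__main__":
    for line in process_batch(sorted(Path("raw_data").glob("*.raw"))):
        print(line)
```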
MC3: Multi-core Markov-chain Monte Carlo code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan
2016-10-01
MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share a single value among multiple parameters and fix parameters to constant values, and it offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
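For readers unfamiliar with the underlying machinery, the following is a generic random-walk Metropolis sketch of the kind of posterior sampling a tool such as MC3 automates; it is not MC3's API, and the model, prior, and data are invented for illustration.

```python
# Generic random-walk Metropolis sampler with a Gaussian-informative prior (illustrative).
import numpy as np

def log_prior(theta, mu=1.0, sigma=0.5):
    return -0.5 * ((theta - mu) / sigma) ** 2

def log_likelihood(theta, data, noise=0.3):
    model = theta * np.ones_like(data)          # deliberately simple model: a constant level
    return -0.5 * np.sum(((data - model) / noise) ** 2)

def metropolis(data, n_steps=20000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    theta = 0.0
    logp = log_prior(theta) + log_likelihood(theta, data)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()
        logp_prop = log_prior(prop) + log_likelihood(prop, data)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain

data = 1.2 + 0.3 * np.random.default_rng(0).standard_normal(50)
chain = metropolis(data)
print(f"posterior mean = {chain[5000:].mean():.3f} +/- {chain[5000:].std():.3f}")
```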
Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhary, Alok; Samatova, Nagiza; Wu, Kesheng
This project developed a generic and optimized set of core data analytics functions. These functions consolidate a broad constellation of high-performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data-analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprising a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives the advances in our performance-energy tradeoff analysis framework, which enables our data-analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.
High-temperature Gas Reactor (HTGR)
NASA Astrophysics Data System (ADS)
Abedi, Sajad
2011-05-01
General Atomics (GA) has over 35 years of experience in prismatic-block High-temperature Gas Reactor (HTGR) technology design. During this period the design has evolved into a modular concept, and fuel cycle studies have been performed to demonstrate its versatility. This versatility is directly related to the refractory TRISO coated-particle fuel, which can contain any type of fuel. This paper summarizes GA's fuel cycle studies individually and compares each, based upon its cycle sustainability, proliferation-resistance capabilities, and other performance data, against pressurized water reactor (PWR) fuel cycle data. The fuel cycle studies cover commercial LEU-NV; commercial HEU-Th; commercial LEU-Th; weapons-grade plutonium consumption; and burning of LWR waste, including plutonium and minor actinides, in the MHR. The results show that all commercial MHR options, with the exception of HEU-Th, are more sustainable than a PWR fuel cycle, with LEU-NV being the most sustainable commercial option. In addition, all commercial MHR options outperform the PWR with regard to proliferation resistance, with the thorium fuel cycle having the best proliferation-resistance characteristics.
Review of PWR fuel rod waterside corrosion behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzarolli, F.; Jorde, D.; Manzel, R.
Waterside corrosion of Zircaloy has generally not been a problem under normal PWR operating conditions, although some instances of accelerated corrosion have been reported. However, an incentive exists to extend average fuel rod discharge burnups to about 50,000 MWd/MTU. To minimize corrosion at these extended burnups, the factors that influence Zircaloy corrosion need to be better understood. A database of Zircaloy corrosion behavior under PWR operating conditions has been established. The data are compiled from previously published reports as well as from new Kraftwerk Union examinations. A non-destructive eddy-current technique is used to measure the oxide layer thickness on fuel rods. Comparisons of measurements made using this eddy-current technique with those made by the usual metallographic methods indicate good agreement. The data were evaluated by defining a fitting factor F, which describes the increase in corrosion rate observed in-reactor over that observed from measurements of ex-reactor corrosion coupons.
Protocol for quantitative tracing of surface water with synthetic DNA
NASA Astrophysics Data System (ADS)
Foppen, J. W.; Bogaard, T. A.
2012-04-01
Based on experiments we carried out in 2010 with various synthetic single-stranded DNA markers with a size of 80 nucleotides (ssDNA; Foppen et al., 2011), we concluded that ssDNA can be used to carry out spatially distributed multi-tracer experiments in the environment. The main advantages are an in principle unlimited number of tracers, environmental friendliness, and tracer recovery at very high dilution rates (the detection limit is very low). However, when ssDNA was injected in headwater streams, we found that at selected downstream locations the total mass recovery was less than 100%. The exact reason for the low mass recovery was unknown. In order to start identifying the cause of the loss of mass in these surface waters, and to increase our knowledge of the behaviour of synthetic ssDNA in the environment, we examined the effect of laboratory and field protocols for working with artificial DNA by performing numerous batch experiments. We then carried out several field tests in different headwater streams in the Netherlands and in Luxembourg. The laboratory experiments consisted of a batch of water in a vessel with on the order of 10^10 ssDNA molecules injected into the batch. The total duration of each experiment was 10 hours, and, at regular time intervals, 100 µl samples were collected in 1.5 ml Eppendorf vials for qPCR analyses. The waters we used ranged from milliQ water to river water with an electrical conductivity of around 400 μS/cm. The batch experiments were performed in different vessel types: polyethylene bottles, polypropylene copolymer bottles, and glass bottles. In addition, two filter types were tested: 1 µm pore size glass fibre filters and 0.2 µm pore size cellulose acetate filters. Lastly, stream bed sediment was added to the batch experiments to quantify interaction of the DNA with sediment. For each field experiment around 10^15 ssDNA molecules were injected, and water samples were collected 100-600 m downstream of the point of injection. The field tests were additionally performed with salt and deuterium as tracers. To study possible decay of the synthetic DNA by sunlight and/or microbial activity, we carried out batch experiments in the field, starting immediately and lasting for the duration of the entire experiment. All samples were stored in 1.5 ml Eppendorf vials in a cool-box on dry ice (-80°C). Quantitative PCR on a Mini Opticon (Bio-Rad, Hercules, CA, USA) was carried out to determine DNA concentrations in the samples. The results showed the importance of a strict protocol for working with ssDNA as a tracer for quantitative tracing, since ssDNA interacts with glass and plastic surfaces, depending on water quality and ionic strength. Interaction with the sediment and decay due to sunlight and/or microbial activity were negligible in most cases. The ssDNA protocol was then tested in natural streams. Promising results were obtained using ssDNA as a quantitative tracer: the breakthrough curves of ssDNA were similar to those of salt or deuterium. We will present the revised protocol for using ssDNA in multi-tracing experiments in natural streams and discuss the opportunities and limitations.
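The mass-recovery figure referred to above amounts to integrating the measured breakthrough curve (concentration times discharge) over time and dividing by the injected number of molecules; a small sketch with invented numbers is given below.

```python
# Mass recovery from a breakthrough curve; all values are synthetic placeholders.
import numpy as np

t = np.linspace(0, 3600, 61)                                # time since injection [s]
discharge = 0.05                                            # stream discharge [m3/s], assumed constant
conc = 2.0e13 * np.exp(-0.5 * ((t - 1200) / 300.0) ** 2)    # ssDNA copies per m3 (synthetic curve)

flux = conc * discharge                                     # copies per second passing the sampler
recovered = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t))   # trapezoidal integration over time
injected = 1.0e15                                           # copies injected upstream
print(f"mass recovery = {100.0 * recovered / injected:.1f} %")
```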
Critical induction: a key quantity for the optimisation of transformer core operation
NASA Astrophysics Data System (ADS)
Ilo, A.; Pfützner, H.; Nakata, T.
2000-06-01
No-load losses P of transformer cores have been considerably decreased through the introduction of so-called multi-step-lap designs. However, profound guidelines for the optimum step number N do not exist. This study shows that the combination of both N and the working induction B characterises the flux distribution. Transformer cores can operate in an over-critical or an under-critical way depending on N and B.
Advanced Wireless Integrated Navy Network - AWINN
2005-09-30
progress report No. 3 on AWINN hardware and software configurations of smart, wideband, multi-function antennas, secure configurable platform, close-in...results to the host PC via a UART soft core. The UART core used is a proprietary Xilinx core which incorporates features described in National...current software uses wheel odometry and visual landmarks to create a map and estimate position on an internal x, y grid. The wheel odometry provides a
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Fu, Haohuan; Song, Shuaiwen
2014-07-18
Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits the application's performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For the Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.
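As a point of reference for the kind of iterative stencil loop being accelerated, the following is a minimal, unoptimized sketch of a second-order acoustic wave-equation update on a 3D grid; the grid size, velocity, and step sizes are illustrative only, and the paper's tuned CPU/GPU/MIC kernels are far more elaborate.

```python
# Explicit 2nd-order-in-time, 2nd-order-in-space acoustic wave update (illustrative).
import numpy as np

def wave_step(p_prev, p_curr, vel, dt, dx):
    """One explicit time step of the acoustic wave equation on interior grid points."""
    lap = (-6.0 * p_curr[1:-1, 1:-1, 1:-1]
           + p_curr[2:, 1:-1, 1:-1] + p_curr[:-2, 1:-1, 1:-1]
           + p_curr[1:-1, 2:, 1:-1] + p_curr[1:-1, :-2, 1:-1]
           + p_curr[1:-1, 1:-1, 2:] + p_curr[1:-1, 1:-1, :-2]) / dx**2
    p_next = p_curr.copy()
    p_next[1:-1, 1:-1, 1:-1] = (2.0 * p_curr[1:-1, 1:-1, 1:-1]
                                - p_prev[1:-1, 1:-1, 1:-1]
                                + (vel[1:-1, 1:-1, 1:-1] * dt) ** 2 * lap)
    return p_next

n, dt, dx = 64, 1e-3, 10.0
vel = np.full((n, n, n), 2000.0)          # homogeneous velocity model [m/s]
p_prev = np.zeros((n, n, n))
p_curr = np.zeros((n, n, n))
p_curr[n // 2, n // 2, n // 2] = 1.0      # point source at the grid center

for _ in range(100):                      # the iterative stencil loop over time steps
    p_prev, p_curr = p_curr, wave_step(p_prev, p_curr, vel, dt, dx)
```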
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Tian, Rui; Yu, Xiaosong; Zhang, Jiawei; Zhang, Jie
2017-03-01
A proper traffic grooming strategy in dynamic optical networks can improve the utilization of bandwidth resources. An auxiliary graph (AG) is designed to solve the traffic grooming problem under a dynamic traffic scenario in spatial division multiplexing enabled elastic optical networks (SDM-EON) with multi-core fibers. Five traffic grooming policies, achieved by adjusting the edge weights of the AG, are proposed and evaluated through simulation: maximal electrical grooming (MEG), maximal optical grooming (MOG), maximal SDM grooming (MSG), minimize virtual hops (MVH), and minimize physical hops (MPH). Numerical results show that each traffic grooming policy has its own features: among the different policies, MPH achieves the lowest bandwidth blocking ratio, MEG saves the most transponders, and MSG occupies the fewest cores per request.
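A compact sketch of how edge-weight tuning on an auxiliary graph selects a grooming policy is shown below; the topology and weights are invented, and the "MPH-like" and "MEG-like" labels are illustrative analogues of the policies described rather than reproductions of the paper's exact weight settings.

```python
# Shortest-path routing over a tiny auxiliary graph; changing one edge weight
# switches the preferred grooming behaviour.  All numbers are illustrative.
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over a dict {node: [(neighbor, weight), ...]}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

def auxiliary_graph(lightpath_weight):
    # fiber edges A-B, B-D, C-D carry weight 1 (one physical hop each);
    # A-C is an existing groomed lightpath whose weight encodes the policy
    return {
        "A": [("B", 1.0), ("C", lightpath_weight)],
        "B": [("D", 1.0)],
        "C": [("D", 1.0)],
    }

for policy, w in [("MPH-like (lightpath weighted by its physical hops)", 3.0),
                  ("MEG-like (existing lightpath made cheap to reuse)", 0.1)]:
    path, cost = shortest_path(auxiliary_graph(w), "A", "D")
    print(f"{policy}: {' -> '.join(path)} (cost {cost:.1f})")
```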
Hingerl, Ferdinand F.; Yang, Feifei; Pini, Ronny; ...
2016-02-02
In this paper we present the results of an extensive multiscale characterization of the flow properties and the structural and capillary heterogeneities of the Heletz sandstone. We performed petrographic, porosity, and capillary pressure measurements on several subsamples. We quantified mm-scale heterogeneity in saturation distributions in a rock core during multi-phase flow using conventional X-ray CT scanning. Core-flooding experiments were conducted under reservoir conditions (9 MPa, 50 °C) to obtain primary drainage and secondary imbibition relative permeabilities, and residual trapping was analyzed and quantified. We provide parameters for relative permeability, capillary pressure, and trapping models for further modeling studies. A synchrotron-based microtomography study complements our cm- to mm-scale investigation by providing links between the micromorphology and the mm-scale saturation heterogeneities.
Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balcas, J.; Belforte, S.; Bockelman, B.
2015-12-23
CMS will require access to more than 125k processor cores for the beginning of Run 2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We address the challenges associated with integrating the various types of resources and the efficiency gains and simplifications associated with using a common resource provisioning layer, and we discuss the solutions found. We finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
Argon/UF6 plasma experiments: UF6 regeneration and product analysis
NASA Technical Reports Server (NTRS)
Roman, W. C.
1980-01-01
An experimental and analytical investigation was conducted to aid in developing some of the technology necessary for designing self-critical fissioning uranium plasma core reactors (PCRs). This technology is applicable to gaseous uranium hexafluoride nuclear-pumped laser systems. The principal equipment used included a 1.2 MW RF induction heater, a d.c. plasma torch, a uranium tetrafluoride feeder system, and batch-type fluorine/UF6 regeneration systems. The overall objectives were to continue to develop and test materials and handling techniques suitable for use with high-temperature, high-pressure, gaseous UF6, and to continue development of complementary diagnostic instrumentation and measurement techniques to characterize the effluent exhaust gases and the residue deposited on the test chamber and exhaust system components. Specific objectives included development of a batch-type UF6 regeneration system employing pure high-temperature fluorine, and development of a ruggedized time-of-flight mass spectrometer and associated data acquisition system capable of making on-line concentration measurements of the volatile effluent exhaust gas species in a high-RF environment and the corrosive environment of UF6 and related halide compounds.
NASA Astrophysics Data System (ADS)
Cha, Joon-Hyeon; Kim, Su-Hyeon; Lee, Yun-Soo; Kim, Hyoung-Wook; Choi, Yoon Suk
2016-09-01
Multi-layered Al alloy sheets can exhibit unique properties through the combination of the properties of their component materials. The poor corrosion resistance of high-strength Al alloys can be compensated by a protective surface of corrosion-resistant Al alloys. Special care must be taken regarding the heat treatment of multi-layered Al alloy sheets, because dissimilar Al alloys may exhibit unexpected interfacial reactions upon heat treatment. In the present study, A6022/A7075/A6022 sheets were fabricated by a cold roll-bonding process, and the effect of heat treatment on the microstructure and mechanical properties was examined. The solution treatment gave rise to the diffusion of Zn, Mg, Cu, and Si across the core/clad interface. In particular, the pronounced diffusion of Zn, a major alloying element (for solid-solution strengthening) of the A7075 core, resulted in a gradual hardness change across the core/clad interface. Mg2Si precipitates and a precipitate-free zone were also formed near the interface after the heat treatment. The heat-treated sheet showed high strength and reasonable elongation without apparent deformation misfit or interfacial delamination during tensile deformation. The high strength of the sheet was mainly due to the T4 and T6 heat treatment of the A7075 core.
2014-01-01
Background X-ray mammography remains the predominant test for screening for breast cancer, with the aim of reducing breast cancer mortality. In the English NHS Breast Screening Programme each woman's mammograms are examined separately by two expert readers. The two readers read each batch in the same order and each indicates whether there should be a recall for further tests. This is a highly skilled, pressurised, repetitive and frequently intellectually unchallenging activity in which readers examine one or more batches of 30-50 women's mammograms in each session. A vigilance decrement, or performance decrease over time, has been observed in similar repetitive visual tasks such as radar operation. Methods/Design The CO-OPS study is a pragmatic, multi-centre, two-arm, double-blind cluster randomised controlled trial of a computer software intervention designed to reduce the effects of a vigilance decrement in breast cancer screening. The unit of randomisation is the batch. Intervention batches will be examined in the opposite order by the two readers (one forwards, one backwards). Control batches will be read in the same order by both readers, as is current standard practice. The hypothesis is that cancer detection rates will be higher in the intervention group because each reader's peak performance will occur while examining different women's mammograms. The trial will take place in 44 English breast screening centres for 1 year and 4 months. The primary outcome is the cancer detection rate, which will be extracted from computer records after 1 year of the trial. The secondary outcomes include the rate of disagreement between readers (a more statistically powerful surrogate for cancer detection rate), recall rate, positive predictive value, and interval cancer rate (cancers found between screening rounds, which will be measured three years after the end of the trial). Discussion This is the first trial of an intervention to ameliorate a vigilance decrement in breast cancer screening. Trial registration ISRCTN46603370 (submitted: 24 October 2012, date of registration: 26 March 2013). PMID:24411004
Testing the SLURM open source batch system for a Tier1/Tier2 HEP computing facility
NASA Astrophysics Data System (ADS)
Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro
2014-06-01
In this work we present the testing activities carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center. SLURM (Simple Linux Utility for Resource Management) is an open source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing focused both on verifying the functionalities of the batch system and on the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started by configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, such as serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues and, in general, on other resources are then described. A peculiar SLURM feature we also verified is event triggers, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance, since a mandatory requirement in our scenarios is to have a working farm cluster even in the case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, with controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before the execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we present several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
Consistent criticality and radiation studies of Swiss spent nuclear fuel: The CS2M approach.
Rochman, D; Vasiliev, A; Ferroukhi, H; Pecchia, M
2018-06-15
In this paper, a new method is proposed to systematically calculate at the same time canister loading curves and radiation sources, based on the inventory information from an in-core fuel management system. As a demonstration, the isotopic contents of the assemblies come from a Swiss PWR, considering more than 6000 cases from 34 reactor cycles. The CS2M approach consists in combining four codes: CASMO and SIMULATE to extract the assembly characteristics (based on validated models), the SNF code for source emission, and MCNP for criticality calculations for specific canister loadings. The considered cases cover enrichments from 1.9 to 5.0% for the UO2 assemblies and 4.8% for the MOX, with assembly burnup values from 7 to 74 MWd/kgU. Because such a study is based on individual fuel assembly histories, it opens the possibility to optimize canister loadings from the point of view of criticality, decay heat, and emission sources. Copyright © 2018 Elsevier B.V. All rights reserved.
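Once a loading curve has been computed, applying it is straightforward; the toy sketch below (with an invented curve, not values from the paper) shows the acceptance test for candidate assemblies.

```python
# Applying a burnup-credit loading curve: an assembly is acceptable only if its achieved
# burnup exceeds the minimum required at its initial enrichment.  Curve values invented.
import numpy as np

# hypothetical loading curve: initial enrichment [wt% U-235] -> minimum burnup [MWd/kgU]
curve_enrichment = np.array([1.9, 3.0, 4.0, 5.0])
curve_min_burnup = np.array([0.0, 12.0, 25.0, 38.0])

def acceptable(enrichment, burnup):
    required = np.interp(enrichment, curve_enrichment, curve_min_burnup)
    return burnup >= required, required

assemblies = [("A01", 4.3, 45.0), ("A02", 4.9, 33.0), ("A03", 2.5, 7.0)]
for name, enr, bu in assemblies:
    ok, req = acceptable(enr, bu)
    print(f"{name}: enrichment {enr:.1f} wt%, burnup {bu:.1f} MWd/kgU, "
          f"required {req:.1f} -> {'accept' if ok else 'reject'}")
```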
NASA Astrophysics Data System (ADS)
Anderson, Stan L.; Fero, Arnold H.; Roberts, George K.
2003-06-01
The neutron fluence associated with each material in the pressure vessel beltline region is determined on a plant-specific basis at each surveillance capsule withdrawal. Based on an assumed mode of operation, fluence projections to account for future operation are then made for use in vessel integrity evaluations. The applicability of these projections is normally verified and updated, if necessary, at each subsequent surveillance capsule withdrawal. However, following the last scheduled withdrawal of a surveillance capsule, there is generally no formal mechanism in place to ensure that fluence projections for the remainder of the plant operating lifetime remain valid. This paper reviews a methodology that can be used efficiently in conjunction with future fuel loading patterns or on-line core power distribution monitoring systems to track the actual fluence accrued by each of the pressure vessel beltline materials in the operating period following the last capsule withdrawal.
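The bookkeeping implied by such a fluence-tracking methodology can be sketched as follows; the per-cycle fluxes and effective full-power days are illustrative placeholders, with the fluxes assumed to come from transport calculations scaled by the actual core power distribution.

```python
# Accumulating fast fluence at a beltline location, cycle by cycle (illustrative values only).
SECONDS_PER_DAY = 86400.0

# (cycle id, assumed fast flux at the limiting weld [n/cm2-s], effective full-power days)
cycles = [
    ("cycle 18", 2.1e10, 480.0),
    ("cycle 19", 1.9e10, 505.0),   # e.g. a low-leakage loading pattern lowers the flux
    ("cycle 20", 1.8e10, 495.0),
]

fluence = 0.0
for name, flux, efpd in cycles:
    fluence += flux * efpd * SECONDS_PER_DAY
    print(f"after {name}: accumulated fast fluence = {fluence:.3e} n/cm2")
```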