Sample records for the search query "mox core computational"

  1. MOX fuel arrangement for nuclear core

    DOEpatents

    Kantrowitz, M.L.; Rosenstein, R.G.

    1998-10-13

    In order to use up a stockpile of weapons-grade plutonium, the plutonium is converted into a mixed oxide (MOX) fuel form wherein it can be disposed in a plurality of different fuel assembly types. Depending on the equilibrium cycle that is required, a predetermined number of one or more of the fuel assembly types are selected and arranged in the core of the reactor in accordance with a selected loading schedule. Each of the fuel assemblies is designed to produce different combustion characteristics whereby the appropriate selection and disposition in the core enables the resulting equilibrium cycle to closely resemble that which is produced using urania fuel. The arrangement of the MOX rods and burnable absorber rods within each of the fuel assemblies, in combination with a selective control of the amount of plutonium which is contained in each of the MOX rods, is used to tailor the combustion characteristics of the assembly. 38 figs.

  2. Mox fuel arrangement for nuclear core

    DOEpatents

    Kantrowitz, Mark L.; Rosenstein, Richard G.

    2001-05-15

    In order to use up a stockpile of weapons-grade plutonium, the plutonium is converted into a mixed oxide (MOX) fuel form wherein it can be disposed in a plurality of different fuel assembly types. Depending on the equilibrium cycle that is required, a predetermined number of one or more of the fuel assembly types are selected and arranged in the core of the reactor in accordance with a selected loading schedule. Each of the fuel assemblies is designed to produce different combustion characteristics whereby the appropriate selection and disposition in the core enables the resulting equilibrium cycle to closely resemble that which is produced using urania fuel. The arrangement of the MOX rods and burnable absorber rods within each of the fuel assemblies, in combination with a selective control of the amount of plutonium which is contained in each of the MOX rods, is used to tailor the combustion characteristics of the assembly.

  3. MOX fuel arrangement for nuclear core

    DOEpatents

    Kantrowitz, Mark L.; Rosenstein, Richard G.

    2001-07-17

    In order to use up a stockpile of weapons-grade plutonium, the plutonium is converted into a mixed oxide (MOX) fuel form wherein it can be disposed in a plurality of different fuel assembly types. Depending on the equilibrium cycle that is required, a predetermined number of one or more of the fuel assembly types are selected and arranged in the core of the reactor in accordance with a selected loading schedule. Each of the fuel assemblies is designed to produce different combustion characteristics whereby the appropriate selection and disposition in the core enables the resulting equilibrium cycle to closely resemble that which is produced using urania fuel. The arrangement of the MOX rods and burnable absorber rods within each of the fuel assemblies, in combination with a selective control of the amount of plutonium which is contained in each of the MOX rods, is used to tailor the combustion characteristics of the assembly.

  4. MOX fuel arrangement for nuclear core

    DOEpatents

    Kantrowitz, Mark L.; Rosenstein, Richard G.

    1998-01-01

    In order to use up a stockpile of weapons-grade plutonium, the plutonium is converted into a mixed oxide (MOX) fuel form wherein it can be disposed in a plurality of different fuel assembly types. Depending on the equilibrium cycle that is required, a predetermined number of one or more of the fuel assembly types are selected and arranged in the core of the reactor in accordance with a selected loading schedule. Each of the fuel assemblies is designed to produce different combustion characteristics whereby the appropriate selection and disposition in the core enables the resulting equilibrium cycle to closely resemble that which is produced using urania fuel. The arrangement of the MOX rods and burnable absorber rods within each of the fuel assemblies, in combination with a selective control of the amount of plutonium which is contained in each of the MOX rods, is used to tailor the combustion characteristics of the assembly.

  5. Performance of the MTR core with MOX fuel using the MCNP4C2 code.

    PubMed

    Shaaban, Ismail; Albarhoum, Mohamad

    2016-08-01

    The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8&PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. Its new characteristics were compared to its original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the ²³⁵U enrichment of the uranium fuel and the amount of ²³⁵U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Isotopic Details of the Spent Catawba-1 MOX Fuel Rods at ORNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Ronald James

    The United States Department of Energy funded Shaw/AREVA MOX Services LLC to fabricate four MOX Lead Test Assemblies (LTAs) from weapons-grade plutonium. A total of four MOX LTAs (including MX03) were irradiated in the Catawba Nuclear Station (Unit 1) Catawba-1 PWR, which operated at a total thermal power of 3411 MWt and had a core with 193 fuel assemblies. The MOX LTAs were irradiated along with Duke Energy's irradiation of eight Westinghouse Next Generation Fuel (NGF) LEU LTAs (ref. 1) and the remaining 181 LEU fuel assemblies. The MX03 LTA was irradiated in the Catawba-1 PWR core (refs. 2,3) during cycles C-16 and C-17. C-16 began on June 5, 2005, and ended on November 11, 2006, after 499 effective full power days (EFPDs). C-17 started on December 29, 2006 (after a shutdown of 48 days) and continued for 485 EFPDs. The MX03 and three other MOX LTAs (and other fuel assemblies) were discharged at the end of C-17 on May 3, 2008. The design of the MOX LTAs was based on the (Framatome ANP, Inc.) Mark-BW/MOX1 17×17 fuel assembly design (refs. 4,5,6) for use in Westinghouse PWRs, but with MOX fuel rods with three Pu loading ranges: the nominal Pu loadings are 4.94 wt%, 3.30 wt%, and 2.40 wt% for high, medium, and low Pu content, respectively. The Mark-BW/MOX1 (MOX LTA) fuel assembly design is the same as the Advanced Mark-BW fuel assembly design but with the LEU fuel rods replaced by MOX fuel rods (ref. 5). The fuel pellets and fuel rods for the MOX LTAs were fabricated at the Cadarache facility in France, and the LTAs were assembled at the MELOX facility, also in France.

  7. All About MOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-07-29

    In 1999, the National Nuclear Security Administration (NNSA) signed a contract with a consortium, now called Shaw AREVA MOX Services, LLC, to design, build, and operate a Mixed Oxide (MOX) Fuel Fabrication Facility. This facility will be a major component in the United States program to dispose of surplus weapon-grade plutonium. The facility will take surplus weapon-grade plutonium, remove impurities, and mix it with uranium oxide to form MOX fuel pellets for reactor fuel assemblies. These assemblies will be irradiated in commercial nuclear power reactors.

  8. All About MOX

    ScienceCinema

    None

    2018-01-16

    In 1999, the National Nuclear Security Administration (NNSA) signed a contract with a consortium, now called Shaw AREVA MOX Services, LLC, to design, build, and operate a Mixed Oxide (MOX) Fuel Fabrication Facility. This facility will be a major component in the United States program to dispose of surplus weapon-grade plutonium. The facility will take surplus weapon-grade plutonium, remove impurities, and mix it with uranium oxide to form MOX fuel pellets for reactor fuel assemblies. These assemblies will be irradiated in commercial nuclear power reactors.

  9. Environment-based pin-power reconstruction method for homogeneous core calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-07-01

    Core calculation schemes are usually based on a classical two-step approach involving separate assembly and core calculations. In the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods, and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3×3 assemblies exhibiting burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)
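
    The two-step logic above lends itself to a compact sketch. Below is a minimal, illustrative Python reconstruction of pin powers from a homogeneous nodal solution: the nodal power is modulated by pin-wise form factors from a lattice calculation, and the only difference between the classical and the environment-based method is which lattice calculation supplies those factors (infinite lattice versus a cluster with the true UOX/MOX neighbours). The function names and the 17×17 layout are assumptions for illustration, not APOLLO-2's API.

    ```python
    import numpy as np

    # Minimal sketch of two-step pin-power reconstruction. The homogeneous
    # core solve yields one average power per node; pin detail is recovered
    # by modulating it with lattice form factors.

    def reconstruct_pin_power(node_power, form_factors):
        """node_power: nodal power from the homogeneous core calculation.
        form_factors: pin-wise form factors normalized to a mean of 1.0,
        from an infinite-lattice calculation (classical method) or from a
        lattice calc with the true neighbour assemblies (environment-based)."""
        ff = form_factors / form_factors.mean()  # enforce normalization
        return node_power * ff                   # pin-wise power map

    # Toy example: a MOX node; environment-based factors would be steeper
    # at a UOX/MOX interface than infinite-lattice ones.
    rng = np.random.default_rng(0)
    ff = 1.0 + 0.05 * rng.standard_normal((17, 17))
    pins = reconstruct_pin_power(1.0, ff)
    print(float(pins.max()))                     # magnitude of the power peak
    ```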

  10. NNSA B-Roll: MOX Facility

    ScienceCinema

    None

    2017-12-09

    In 1999, the National Nuclear Security Administration (NNSA) signed a contract with a consortium, now called Shaw AREVA MOX Services, LLC to design, build, and operate a Mixed Oxide (MOX) Fuel Fabrication Facility. This facility will be a major component in the United States program to dispose of surplus weapon-grade plutonium. The facility will take surplus weapon-grade plutonium, remove impurities, and mix it with uranium oxide to form MOX fuel pellets for reactor fuel assemblies. These assemblies will be irradiated in commercial nuclear power reactors.

  11. NNSA B-Roll: MOX Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-05-21

    In 1999, the National Nuclear Security Administration (NNSA) signed a contract with a consortium, now called Shaw AREVA MOX Services, LLC to design, build, and operate a Mixed Oxide (MOX) Fuel Fabrication Facility. This facility will be a major component in the United States program to dispose of surplus weapon-grade plutonium. The facility will take surplus weapon-grade plutonium, remove impurities, and mix it with uranium oxide to form MOX fuel pellets for reactor fuel assemblies. These assemblies will be irradiated in commercial nuclear power reactors.

  12. Canadian experience in irradiation and testing of MOX fuel

    NASA Astrophysics Data System (ADS)

    Yatabe, S.; Floyd, M.; Dimayuga, F.

    2018-04-01

    Experimental irradiation and performance testing of Mixed OXide (MOX) fuel at the Canadian Nuclear Laboratories (CNL) has taken place for more than 40 years. These experiments investigated MOX fuel behaviour and compared it with UO2 behaviour to develop and verify fuel performance models. This article compares the performance of MOX of various concentrations and homogeneities, under different irradiation conditions. These results can be applied to future fuel designs. MOX fuel irradiated by CNL was found to be comparable in performance to similarly designed and operated UO2 fuel. MOX differs in behaviour from UO2 fuel in several ways. Fission-gas release, grain growth and the thickness of zirconium oxide on the inner sheath appear to be related to MOX fuel homogeneity. Columnar grains formed at the pellet centre begin to develop at lower powers in MOX than in UO2 fuel.

  13. New approaches for MOX multi-recycling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gain, T.; Bouvier, E.; Grosman, R.

    Due to its low fissile content after irradiation, Pu from used MOX fuel is considered by some as not recyclable in LWRs (Light Water Reactors). The point of this paper is to revisit those statements and provide a new analysis based on AREVA's extended experience in the fields of fissile and fertile material management and optimized waste management. This is done using the current US fuel inventory as a case study. MOX multi-recycling in LWRs is a closed-cycle scenario where U and Pu management through reprocessing and recycling leads to a significant reduction of the used assemblies to be stored. The recycling of Pu in MOX fuel is moreover a way to maintain the self-protection of the Pu-bearing assemblies. With this scenario, Pu content is also reduced repeatedly via multi-recycling of MOX in LWRs. Simultaneously, ²³⁸Pu content decreases. All along this scenario, HLW (High-Level Radioactive Waste) vitrified canisters are produced and planned for deep geological disposal. Contrary to used fuel, HLW vitrified canisters do not contain proliferation materials. Moreover, the reprocessing of used fuel limits the space needed in current interim storage. With MOX multi-recycling in LWRs, Pu isotopy needs to be managed carefully throughout the scenario. The early introduction of a limited number of SFRs (Sodium Fast Reactors) can therefore be a real asset for the overall system. A few SFRs would be enough to improve the Pu isotopy from used LWR MOX fuel and provide a Pu isotopic composition that could be blended back with multi-recycled Pu from LWRs, thus increasing the Pu multi-recycling potential in LWRs.

  14. A computationally simple model for determining the time dependent spectral neutron flux in a nuclear reactor core

    NASA Astrophysics Data System (ADS)

    Schneider, E. A.; Deinert, M. R.; Cady, K. B.

    2006-10-01

    The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time- and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time, and target burnup, and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision-probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy-dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions, is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel-to-moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy-dependent neutron flux, and the results of several simulations are compared with benchmarked standards.
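
    A toy version of the collision-probability decoupling described above can be written down directly: spatial coupling reduces to first-flight probabilities between the two regions, so group fluxes follow from small algebraic balances rather than a full transport solve. All cross sections and probabilities in this sketch are illustrative placeholders, not the paper's model or the OECD/NEA benchmark data.

    ```python
    import numpy as np

    # Two-region (fuel/moderator), two-group sketch of the collision-
    # probability idea: region-to-region coupling enters only through
    # first-flight probabilities. All numbers are illustrative.

    sig_t_f = np.array([0.30, 0.60])   # total cross sections in fuel (1/cm)
    sig_t_m = np.array([0.25, 1.10])   # total cross sections in moderator
    P_fm    = np.array([0.55, 0.25])   # escape probability, fuel -> moderator
    P_mf    = np.array([0.20, 0.35])   # transport probability, moderator -> fuel
    src_f   = np.array([1.00, 0.00])   # fission neutrons born fast, in fuel
    scat_m  = np.array([[0.0, 0.0],    # scat_m[g, g']: moderator scattering g'->g
                        [0.2, 0.0]])   # (downscatter fast -> thermal only)

    phi_f = np.ones(2)
    phi_m = np.ones(2)
    for _ in range(200):               # fixed-point ("source") iteration
        q_m   = scat_m @ (sig_t_m * phi_m)                   # scatter source
        phi_f = (src_f * (1 - P_fm) + q_m * P_mf) / sig_t_f  # collisions, fuel
        phi_m = (src_f * P_fm + q_m * (1 - P_mf)) / sig_t_m  # collisions, mod.
    print(phi_f, phi_m)
    ```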

  15. Low-power lead-cooled fast reactor loaded with MOX-fuel

    NASA Astrophysics Data System (ADS)

    Sitdikov, E. R.; Terekhova, A. M.

    2017-01-01

    A fast reactor intended for research, for the education of undergraduate and doctoral students in handling innovative fast reactors, and for training specialists for atomic research centers and nuclear power plants (BRUTs) was considered. A hard neutron spectrum is achieved in the fast reactor through its compact core and lead coolant. Prompt-neutron runaway of the reactor is excluded because the reactivity margin is kept below the effective fraction of delayed neutrons. The possibility of using MOX fuel in the BRUTs reactor was examined, and the growth in k-eff from replacing the natural-lead coolant with ²⁰⁸Pb coolant was evaluated. The calculations and the reactor core model were performed using the Serpent Monte Carlo code.

  16. Issues in the use of Weapons-Grade MOX Fuel in VVER-1000 Nuclear Reactors: Comparison of UO2 and MOX Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, J.J.

    2005-05-27

    The purpose of this report is to quantify the differences between mixed oxide (MOX) and low-enriched uranium (LEU) fuels and to assess in reasonable detail the potential impacts of MOX fuel use in VVER-1000 nuclear power plants in Russia. This report is a generic tool to assist in the identification of plant modifications that may be required to accommodate receiving, storing, handling, irradiating, and disposing of MOX fuel in VVER-1000 reactors. The report is based on information from work performed by Russian and U.S. institutions. The report quantifies each issue, and the differences between LEU and MOX fuels are described as accurately as possible, given the current sources of data.

  17. Identification of putative methanol dehydrogenase (moxF) structural genes in methylotrophs and cloning of moxF genes from Methylococcus capsulatus Bath and Methylomonas albus BG8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, R.L.; Haygood, M.G.; Lidstrom, M.E.

    An open-reading-frame fragment of a Methylobacterium sp. strain AM1 gene (moxF) encoding a portion of the methanol dehydrogenase structural protein has been used as a hybridization probe to detect similar sequences in a variety of methylotrophic bacteria. This hybridization was used to isolate clones containing putative moxF genes from two obligate methanotrophic bacteria, Methylococcus capsulatus Bath and Methylomonas albus BG8. The identity of these genes was confirmed in two ways. A T7 expression vector was used to produce methanol dehydrogenase protein in Escherichia coli from the cloned genes, and in each case the protein was identified by immunoblotting with antiserum against the Methylomonas albus methanol dehydrogenase. In addition, a moxF mutant of Methylobacterium strain AM1 was complemented to a methanol-positive phenotype that partially restored methanol dehydrogenase activity, using broad-host-range plasmids containing the moxF genes from each methanotroph. The partial complementation of a moxF mutant in a facultative serine-pathway methanol utilizer by moxF genes from type I and type X obligate methane utilizers suggests broad functional conservation of the methanol oxidation system among gram-negative methylotrophs.

  18. Development of ORIGEN Libraries for Mixed Oxide (MOX) Fuel Assembly Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertyurek, Ugur; Gauld, Ian C.

    In this research, ORIGEN cross-section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict the nuclide inventories, radiation sources, and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross-section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup-dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. Finally, the nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.

  19. Development of ORIGEN Libraries for Mixed Oxide (MOX) Fuel Assembly Designs

    DOE PAGES

    Mertyurek, Ugur; Gauld, Ian C.

    2015-12-24

    In this research, ORIGEN cross-section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict the nuclide inventories, radiation sources, and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross-section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup-dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. Finally, the nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.
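
    The computational pattern these two records describe (tabulate once with lattice physics, then interpolate during depletion) can be sketched in a few lines of Python. The burnup grid, cross sections, and one-nuclide depletion chain below are illustrative stand-ins, not ORIGEN/SCALE data or its interface.

    ```python
    import numpy as np

    # Sketch of the library pattern: lattice physics tabulates burnup-
    # dependent one-group cross sections once; depletion then interpolates
    # the table instead of re-running transport.

    burnup_grid = np.array([0.0, 10.0, 20.0, 40.0, 60.0])      # GWd/tHM
    sig_a_pu239 = np.array([2.7, 2.5, 2.4, 2.2, 2.1]) * 1e-24  # cm^2, one group

    def interp_sigma(bu):
        """Problem-dependent cross section by linear table lookup."""
        return np.interp(bu, burnup_grid, sig_a_pu239)

    phi = 3e14          # n/cm^2/s, assumed constant over each step
    n239 = 1.0e21       # Pu-239 atoms/cm^3
    bu, dt = 0.0, 86400.0
    for _ in range(500):                   # forward-Euler depletion, 500 days
        n239 -= n239 * interp_sigma(bu) * phi * dt
        bu += 0.04                         # GWd/tHM per day (illustrative rating)
    print(f"Pu-239 remaining: {n239:.3e} atoms/cm^3 after {bu:.0f} GWd/tHM")
    ```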

  20. Thermal conductivity of heterogeneous LWR MOX fuels

    NASA Astrophysics Data System (ADS)

    Staicu, D.; Barker, M.

    2013-11-01

    It is generally observed that the thermal conductivity of LWR MOX fuel is lower than that of pure UO2. For MOX, the degradation is usually interpreted solely as an effect of the substitution of U atoms by Pu. This hypothesis is, however, in contradiction with the observations of Duriez and Philipponneau showing that the thermal conductivity of MOX is independent of the Pu content in the ranges 3-15 and 15-30 wt.% PuO2, respectively. Attributing this degradation to Pu alone implies that stoichiometric heterogeneous MOX can be obtained, whereas we show that any heterogeneity in the plutonium distribution in the sample introduces a variation in the local stoichiometry, which in turn has a strong impact on the thermal conductivity. A model quantifying this effect is obtained, and a new set of experimental results for homogeneous and heterogeneous MOX fuels is presented and used to validate the proposed model. In irradiated fuels, this effect is predicted to disappear early during irradiation. The 3, 6 and 10 wt.% Pu samples have a similar thermal conductivity. Comparison of the results for this homogeneous microstructure with MIMAS (heterogeneous) fuel of the same composition showed no difference for Pu contents of 3, 5.9, 6, 7.87 and 10 wt.%. A small increase of the thermal conductivity was obtained for 15 wt.% Pu; this increase is about 6% relative to the average of the values obtained for 3, 6 and 10 wt.% Pu. For comparison purposes, Duriez also measured the thermal conductivity of FBR MOX with 21.4 wt.% Pu, O/M = 1.982 and a density close to 95% TD, and found a value in good agreement with the estimate obtained using the formula of Philipponneau [8] for FBR MOX, and significantly lower than his results for the range 3-15 wt.% Pu. This difference in thermal conductivity is about 20%, i.e. higher than the measurement uncertainties. Thus, a significant difference was observed between FBR and PWR MOX fuels, but was not explained. This difference
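
    The stoichiometry argument can be made concrete with the usual phonon-conduction form k = 1/(A + B·T), letting the defect term A grow with the local deviation from O/M = 2.00, so Pu-rich (locally hypostoichiometric) zones conduct worse. The coefficients in the sketch below are illustrative placeholders of plausible magnitude, not the model fitted in the paper.

    ```python
    # Sketch: phonon conductivity k = 1/(A + B*T), with the point-defect
    # term A increasing with the deviation x = |2.00 - O/M|. Coefficients
    # are illustrative placeholders.

    def k_mox(T, x, A0=0.035, alpha=2.85, B=2.6e-4):
        """Thermal conductivity in W/m/K; T in kelvin; x = |2.00 - O/M|."""
        return 1.0 / (A0 + alpha * x + B * T)

    for x in (0.00, 0.01, 0.02):   # stoichiometric vs. increasingly deviated
        print(f"x = {x:.2f}: k(1000 K) = {k_mox(1000.0, x):.2f} W/m/K")
    ```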

  21. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. The material composition was defined for each assembly ring separately, allowing us to decompose the sensitivities not only by isotope and reaction but also by spatial region. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. The similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified the main contributors to the calculation bias.
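
    The uncertainty propagation underlying such an analysis is the first-order "sandwich rule", var(k) = SᵀCS, with S the sensitivity profile and C the nuclear-data covariance matrix; direct perturbation (recomputing k with one cross section changed) checks individual entries of S. A minimal numeric sketch with made-up sensitivities and covariances follows.

    ```python
    import numpy as np

    # Sandwich rule: var(k) = S^T C S. All numbers are illustrative.

    S = np.array([0.30, -0.12, 0.05])   # dk/k per dsigma/sigma for three
                                        # (nuclide, reaction, region) triplets
    C = np.array([[4.0, 0.5, 0.0],      # relative covariances, in (%)^2
                  [0.5, 9.0, 0.0],
                  [0.0, 0.0, 1.0]])

    var_k = S @ C @ S
    print(f"nuclear-data uncertainty on k-eff: {np.sqrt(var_k):.2f} %")
    ```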

  22. The Mars oxidant experiment (MOx) for Mars '96

    NASA Technical Reports Server (NTRS)

    McKay, C. P.; Grunthaner, F. J.; Lane, A. L.; Herring, M.; Bartman, R. K.; Ksendzov, A.; Manning, C. M.; Lamb, J. L.; Williams, R. M.; Ricco, A. J.

    1998-01-01

    The MOx instrument was developed to characterize the reactive nature of the martian soil. The objectives of MOx were: (1) to measure the rate of degradation of organics in the martian environment; (2) to determine if the reactions seen by the Viking biology experiments were caused by a soil oxidant, and to measure the reactivity of the soil and atmosphere; (3) to monitor the degradation, when exposed to the martian environment, of materials of potential use in future missions; and, finally, (4) to develop technologies and approaches that can be part of future soil analysis instrumentation. The basic approach taken in the MOx instrument was to place a variety of materials, composed as thin films, in contact with the soil and monitor the physical and chemical changes that result. The optical reflectance of the thin films was the primary sensing mode. Thin films of organic materials, metals, and semiconductors were prepared. Laboratory simulations demonstrated the response of the thin films to active oxidants.

  23. Experience from start-ups of the first ANITA Mox plants.

    PubMed

    Christensson, M; Ekström, S; Andersson Chan, A; Le Vaillant, E; Lemaire, R

    2013-01-01

    ANITA™ Mox is a new one-stage deammonification Moving-Bed Biofilm Reactor (MBBR) developed for partial nitrification to nitrite and autotrophic N-removal from N-rich effluents. This deammonification process offers many advantages, such as dramatically reduced oxygen requirements, no chemical oxygen demand requirement, lower sludge production, and no pre-treatment or chemical additions, making it an energy- and cost-efficient nitrogen removal process. An innovative seeding strategy, the 'BioFarm concept', has been developed in order to decrease the start-up time of new ANITA Mox installations. New ANITA Mox installations are started with typically 3-15% of the added carriers coming from the 'BioFarm', with already established anammox biofilm, the rest being new carriers. The first ANITA Mox plant, started up in 2010 at Sjölunda wastewater treatment plant (WWTP) in Malmö, Sweden, proved this seeding concept, reaching an ammonium removal rate of 1.2 kg N/m³·d and approximately 90% ammonia removal within 4 months of start-up. This first ANITA Mox plant is also the BioFarm used for forthcoming installations. Typical features of this first installation were low energy consumption (1.5 kW/NH4-N-removed), low N₂O emissions (<1% of the reduced nitrogen), and a very stable and robust process towards variations in loads and process conditions. The second ANITA Mox plant, started up at Sundets WWTP in Växjö, Sweden, reached full capacity with more than 90% ammonia removal within 2 months of start-up. By applying a nitrogen loading strategy to the reactor that matches the capacity of the seeding carriers, more than 80% nitrogen removal could be obtained throughout the start-up period.

  24. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core™ 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
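
    The headline numbers follow from a simple load-balancing argument: if the work in each time step is split between devices in proportion to their predicted throughput, both finish together and the speedups add. The sketch below reproduces that arithmetic with illustrative throughput figures; it is not the authors' scheduler, which re-predicted the load dynamically at run time.

    ```python
    # Load-balancing arithmetic behind setup (c): split the cells between
    # CPU cores and GPU in proportion to predicted throughput so both
    # finish together. Throughput figures are illustrative, not measured.

    def split_work(n_cells, rate_cpu, rate_gpu):
        """Return (cpu_share, gpu_share) equalizing predicted finish times."""
        n_gpu = round(n_cells * rate_gpu / (rate_cpu + rate_gpu))
        return n_cells - n_gpu, n_gpu

    n = 100_000                  # model cells per time step
    rate_cpu = 4 * 1.0           # four cores, relative throughput 1.0 each
    rate_gpu = 16.8              # GPU throughput relative to one core
    n_cpu, n_gpu = split_work(n, rate_cpu, rate_gpu)
    t_step = max(n_cpu / rate_cpu, n_gpu / rate_gpu)
    print(n_cpu, n_gpu, f"speedup vs. one core: {n / t_step:.1f}x")
    ```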

  25. moxFG region encodes four polypeptides in the methanol-oxidizing bacterium Methylobacterium sp. strain AM1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, D.J.; Lidstrom, M.E.

    The polypeptides encoded by a putative methanol oxidation (mox) operon of Methylobacterium sp. strain AM1 were expressed in Escherichia coli, using a coupled in vivo T7 RNA polymerase/promoter gene expression system. Two mox genes had been previously mapped to this region: moxF, the gene encoding the methanol dehydrogenase (MeDH) polypeptide, and moxG, a gene believed to encode a soluble type c cytochrome, cytochrome cL. In this study, four polypeptides of Mr 60,000, 30,000, 20,000, and 12,000 were found to be encoded by the moxFG region and were tentatively designated moxF, -J, -G, and -I, respectively. The arrangement of the genes (5' to 3') was found to be moxFJGI. The identities of three of the four polypeptides were determined by protein immunoblot analysis. The product of moxF, the Mr-60,000 polypeptide, was confirmed to be the MeDH polypeptide. The product of moxG, the Mr-20,000 polypeptide, was identified as mature cytochrome cL, and the product of moxI, the Mr-12,000 polypeptide, was identified as a MeDH-associated polypeptide that copurifies with the holoenzyme. The identity of the Mr-30,000 polypeptide (the moxJ gene product) could not be determined. The function of the Mr-12,000 MeDH-associated polypeptide is not yet clear. However, it is not present in mutants that lack the Mr-60,000 MeDH subunit, and it appears that the stability of the MeDH-associated polypeptide is dependent on the presence of the Mr-60,000 MeDH polypeptide. Our data suggest that both the Mr-30,000 and -12,000 polypeptides are involved in methanol oxidation, which would bring to 12 the number of mox genes in Methylobacterium sp. strain AM1.

  26. ANALYSIS AND EXAMINATION OF MOX FUEL FROM NONPROLIFERATION PROGRAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, Kevin; Machut, McLean; Morris, Robert Noel

    The U.S. Department of Energy has decided to dispose of a portion of the nation's surplus plutonium by reconstituting it into mixed oxide (MOX) fuel and irradiating it in commercial power reactors. Four lead assemblies were manufactured and irradiated to a maximum fuel rod burnup of 47.3 MWd/kg heavy metal. This was the first commercial irradiation of MOX fuel with a ²⁴⁰Pu/²³⁹Pu ratio of less than 0.10. Five fuel rods with varying burnups and plutonium contents were selected from one of the assemblies and shipped to Oak Ridge National Laboratory for hot cell examination. The performance of the rods was analyzed with AREVA's next-generation GALILEO code. The results of the analysis confirmed that the fuel rods had performed safely and predictably, and that GALILEO is applicable to MOX fuel with a low ²⁴⁰Pu/²³⁹Pu ratio as well as to standard MOX. The results are presented and compared to the GALILEO database. In addition, the fuel cladding was tested to confirm that traces of gallium in the fuel pellets had not affected the mechanical properties of the cladding. The irradiated cladding was found to remain ductile at both room temperature and 350 °C in both the axial and circumferential directions.

  27. Cloning of a methanol-inducible moxF promoter and its analysis in moxB mutants of Methylobacterium extorquens AM1rif.

    PubMed Central

    Morris, C J; Lidstrom, M E

    1992-01-01

    In Methylobacterium extorquens AM1, genes encoding methanol dehydrogenase polypeptides are transcriptionally regulated in response to C1 compounds, including methanol (M. E. Lidstrom and D. I. Stirling, Annu. Rev. Microbiol. 44:27-57, 1990). In order to study this regulation, a transcriptional fusion was constructed between a beta-galactosidase reporter gene and a 1.55-kb XhoI-SalI fragment of M. extorquens AM1rif DNA encoding the N terminus of the methanol dehydrogenase large subunit (moxF) and 1,289 bp of upstream DNA. The fusion exhibited orientation-specific promoter activity in M. extorquens AM1rif but was expressed constitutively when the transcriptional fusion was located on the plasmid. However, correct regulation was restored when the construction was inserted in the M. extorquens AM1rif chromosome. This DNA fragment was shown to contain both the moxFJGI promoter and the sequences necessary in cis for its transcriptional regulation by methanol. Transcription from this promoter was studied in the M. extorquens AM1rif moxB mutant strains UV4rif and UV25rif, which have a pleiotropic phenotype with regard to the components of methanol oxidation. In these mutants, beta-galactosidase activity from the fusion was reduced to a level equal to that of the vector background when the fusion was present in both plasmid and chromosomal locations. Since both constitutive and methanol-inducible promoter activities were lost in the mutants, moxB appears to be required for transcription of the genes encoding the methanol dehydrogenase polypeptides. PMID: 1624436

  28. Opportunities for the Multi Recycling of Used MOX Fuel in the US - 12122

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, P.; Bailly, F.; Bouvier, E.

    Over the last 50 years the US has accumulated an inventory of used nuclear fuel (UNF) in the region of 64,000 metric tons (as of 2010), and adds an additional 2,200 metric tons each year from the current fleet of 104 Light Water Reactors. This paper considers a fuel cycle option that would be available for a future pilot U.S. recycling plant that could take advantage of the unique opportunities offered by the age and size of the large U.S. UNF inventory. For the purpose of this scenario, recycling of UNF must use the available reactor infrastructure, currently LWRs, and the main product of recycling is considered to be plutonium (Pu), recycled into MOX fuel for use in these reactors. Use of MOX fuels must provide the service (burn-up) expected by the reactor operator, with the required level of safety. To do so, the fissile material concentration (Pu-239, Pu-241) in the MOX must be high enough to maintain criticality, while, in current recycle facilities, the Pu-238 content has to be kept low enough to prevent excessive heat load, neutron emission, and neutron capture during recycle operations. In most countries, used MOX fuel (MOX UNF) is typically stored after one irradiation in an LWR, pending the development of the GEN IV reactors, since it is considered difficult to directly reuse the recycled MOX fuel in LWRs due to the degraded Pu fissile isotopic composition. In the US, it is possible to blend MOX UNF with LEUOx UNF from the large inventory, using the oldest UNF first. Blending at a ratio of about one MOX UNF assembly to 15 LEUOx UNF assemblies would achieve a fissile plutonium concentration sufficient for re-irradiation in new MOX fuel. The Pu-238 yield in the new fuel will be sufficiently low to meet current fuel fabrication standards. Therefore, it should be possible in the context of the US for discharged MOX fuel to be recycled back into LWRs, using only technologies already industrially deployed worldwide. Building on that possibility, two
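
    The blending argument reduces to a mass balance on fissile plutonium. The sketch below runs that balance for the 1:15 MOX:LEU ratio cited above; the per-assembly plutonium masses and fissile fractions are illustrative assumptions, not the paper's data.

    ```python
    # Mass balance for the blending claim: mix 1 used MOX assembly with 15
    # used LEU assemblies and compute the fissile fraction of the combined
    # Pu. Per-assembly masses and fissile fractions are assumptions.

    pu_mox, fissile_mox = 20.0, 0.50   # kg Pu per used MOX assembly, fissile frac
    pu_leu, fissile_leu = 5.0, 0.65    # kg Pu per used LEU assembly, fissile frac

    n_mox, n_leu = 1, 15
    pu_total   = n_mox * pu_mox + n_leu * pu_leu
    pu_fissile = n_mox * pu_mox * fissile_mox + n_leu * pu_leu * fissile_leu
    print(f"blended fissile Pu fraction: {pu_fissile / pu_total:.1%}")
    ```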

  29. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-MOX, R2-UO2 and MORGANE/R configurations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Klann, R. T.; Nuclear Engineering Division

    2007-08-03

    An initial series of calculations of the reactivity worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.
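
    For context, the quantity compared in such studies is the sample reactivity worth, obtained from the multiplication factors of the reference and sample-loaded configurations; a minimal sketch with illustrative k-eff values follows.

    ```python
    # Sample reactivity worth from the multiplication factors of the
    # reference and sample-loaded configurations, in pcm (1e-5).
    # Values are illustrative; OSMOSE sample worths are small.

    def worth_pcm(k_ref, k_sample):
        return (1.0 / k_ref - 1.0 / k_sample) * 1e5

    print(f"{worth_pcm(1.00000, 1.00012):+.1f} pcm")   # ~ +12 pcm
    ```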

  30. MOXE: An X-ray all-sky monitor for Soviet Spectrum-X-Gamma Mission

    NASA Technical Reports Server (NTRS)

    Priedhorsky, W.; Fenimore, E. E.; Moss, C. E.; Kelley, R. L.; Holt, S. S.

    1989-01-01

    The Monitoring X-Ray Equipment (MOXE) is being developed for the Soviet Spectrum-X-Gamma Mission. MOXE is an X-ray all-sky monitor based on an array of pinhole cameras, to be provided via a collaboration between Goddard Space Flight Center and Los Alamos National Laboratory. The objectives are to alert other observers on Spectrum-X-Gamma and other platforms to interesting transient activity, and to synoptically monitor the X-ray sky and study long-term changes in X-ray binaries. MOXE will be sensitive to sources as faint as 2 milliCrab (5 sigma) in 1 day, and cover the 2 to 20 keV band.

  31. Thermal property change of MOX and UO2 irradiated up to high burnup of 74 GWd/t

    NASA Astrophysics Data System (ADS)

    Nakae, Nobuo; Akiyama, Hidetoshi; Miura, Hiromichi; Baba, Toshikazu; Kamimura, Katsuichiro; Kurematsu, Shigeru; Kosaka, Yuji; Yoshino, Aya; Kitagawa, Takaaki

    2013-09-01

    Thermal properties are important because they control fuel behavior under irradiation. The thermal property change at high burnups of more than 70 GWd/t is examined. Two kinds of MOX fuel rods, fabricated by the MIMAS and SBR methods, and one reference UO2 fuel rod were used in the experiment. These rods were taken from pre-irradiated rods (IFA 609/626, whose irradiation tests were carried out by the Japanese PWR group), then re-fabricated and re-irradiated in the HBWR as IFA 702 by JNES. The fuel specification corresponds to that of 17 × 17 PWR-type fuel, and the axially averaged linear heat rates (LHR) of the MOX rods are 25 kW/m (BOL of IFA 702) and 20 kW/m (EOL of IFA 702). The axial peak burnups achieved are about 74 GWd/t for both MOX and UO2. Centerline temperature and plenum gas pressure were measured in situ during irradiation. The measured centerline temperature is plotted against LHR at the position where the thermocouples are fixed. The slopes for the MOX rods correspond to each other, but that of UO2 is higher than those of MOX. This implies that the thermal conductivity of MOX is higher than that of UO2 at high burnup under the condition that the pellet-cladding gap is closed during irradiation. Gap closure is confirmed by the metallography of the post-irradiation examinations. It is understood that the thermal conductivity of MOX is lower than that of UO2 before irradiation, since phonon scattering by plutonium in MOX is significant. Phonon scattering by plutonium decreases in MOX as burnup proceeds; thus, the thermal conductivity of MOX becomes close to that of UO2. A reverse phenomenon is observed in the high-burnup region. Phonon scattering by fission products such as Nd and Zr causes a degradation of the thermal conductivity of burnt fuel. It might be speculated that this scattering effect causes the phenomenon, and the mechanism is discussed here.

  32. Analysis on fuel breeding capability of FBR core region based on minor actinide recycling doping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permana, Sidik; Novitrian; Waris, Abdul

    Nuclear fuel breeding can be achieved through the conversion of fertile materials into fissile materials during nuclear reaction processes: the main fissile materials are U-233, U-235, Pu-239 and Pu-241, and the fertile materials are Th-232, U-238, and Pu-240 as well as Pu-238. A minor actinide (MA) loading option, consisting of neptunium, americium and curium, gives an additional contribution from MA converted into plutonium, such as the conversion of Np-237 into Pu-238; the produced Pu-238 then converts to Pu-239 via neutron capture. The increased Pu-238 composition can thus be used to produce the fissile material Pu-239 as an additional contribution. Trans-uranium (TRU) fuel (a mixed loading of MOX (U-Pu) and MA) and mixed oxide (MOX) fuel compositions are analyzed comparatively in order to show the effect of MA on plutonium production in the core, in terms of reactor criticality and fuel breeding capability. In the present study, the neptunium (Np) nuclide is used as a representative of MA in the TRU fuel composition, as an Np-MOX fuel type. Loading it into the core region contributes significantly to reducing the excess reactivity in comparison with MOX fuel and at the same time increases the nuclear fuel breeding capability of the reactor. The neptunium loading scheme in the FBR core region gives significant production of Pu-238, a fertile material that absorbs neutrons, reducing excess reactivity and contributing additionally to fuel breeding.
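
    The breeding capability invoked above is usually quantified by the conversion ratio: fissile atoms produced by capture in fertile nuclides per fissile atom destroyed by absorption. The one-group reaction rates in the sketch below are illustrative, not the paper's results.

    ```python
    # Conversion (breeding) ratio from one-group reaction rates.
    # All rates are illustrative placeholders.

    capture_fertile    = {"U-238": 0.52, "Pu-240": 0.08, "Np-237/Pu-238": 0.03}
    absorption_fissile = {"Pu-239": 0.55, "Pu-241": 0.07, "U-235": 0.02}

    cr = sum(capture_fertile.values()) / sum(absorption_fissile.values())
    print(f"conversion ratio: {cr:.2f}")   # > 1.0 indicates net breeding
    ```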

  33. The phase state at high temperatures in the MOX-SiO2 system

    NASA Astrophysics Data System (ADS)

    Nakamichi, S.; Kato, M.; Sunaoshi, T.; Uchida, T.; Morimoto, K.; Kashimura, M.; Kihara, Y.

    2009-06-01

    The influence of impurity Si on the microstructure of a plutonium and uranium mixed oxide (MOX), which is used for fast breeder reactor fuel, was investigated, and the phase state in 25% SiO2-(U0.7Pu0.3)O2 was observed as a function of oxygen chemical potential. Compounds composed of Pu and Si with other elements were observed at grain boundaries of the MOX parent phase in the specimens after annealing. These compounds were not observed in the grain interior, and the MOX phase was not affected significantly by the impurity Si. It was found that the compounds tended to form more readily with decreasing O/M ratio and with increasing annealing temperature.

  34. Computer-assisted design of flux-cored wires

    NASA Astrophysics Data System (ADS)

    Dubtsov, Yu N.; Zorin, I. V.; Sokolov, G. N.; Antonov, A. A.; Artem'ev, A. A.; Lysak, V. I.

    2017-02-01

    The algorithm and a description of the AlMe-WireLaB software for the computer-assisted design of flux-cored wires are introduced. The software functionality is illustrated by the selection of components for a flux-cored wire that yields deposited metal of the Fe-Cr-C-Mo-Ni-Ti-B system. It is demonstrated that the developed software enables a technologically reliable flux-cored wire to be designed for surfacing, resulting in deposited metal of the specified composition.

  35. Asymmetric Core Computing for U.S. Army High-Performance Computing Applications

    DTIC Science & Technology

    2009-04-01

    [Garbled excerpt of the report's documentation form and table of contents. Recoverable content: the report covers asymmetric core computing technologies, including the Cell processor (with a note on a then-unannounced PlayStation 4) and FPGA-based reconfigurable computing, organized as Introduction; Relevant Technologies; Technical Approach; Research and Development Highlights.]

  36. Modeling of the structure and interactions of the B. anthracis antitoxin, MoxX: deletion mutant studies highlight its modular structure and repressor function

    NASA Astrophysics Data System (ADS)

    Chopra, Nikita; Agarwal, Shivangi; Verma, Shashikala; Bhatnagar, Sonika; Bhatnagar, Rakesh

    2011-03-01

    Our previous report on the Bacillus anthracis toxin-antitoxin module (MoxXT) identified it to be a two-component system wherein the PemK-like toxin (MoxT) functions as a ribonuclease (Agarwal S et al. JBC 285:7254-7270, 2010). The labile antitoxin (MoxX) can bind to/neutralize the action of the toxin and is also a DNA-binding protein mediating autoregulation. In this study, molecular modeling of MoxX in its biologically active dimeric form was done. It was found that it contains a conserved Ribbon-Helix-Helix (RHH) motif, consistent with its DNA-binding function. The modeled MoxX monomers dimerize to form a two-stranded antiparallel ribbon, while the C-terminal region adopts an extended conformation. Knowledge-guided protein-protein docking, molecular dynamics simulation, and energy minimization were performed to obtain the structure of the MoxXT complex, which was exploited for the de novo design of a peptide capable of binding to MoxT. It was found that the designed peptide caused a decrease in MoxX binding to MoxT by 42% at a concentration of 2 μM in vitro. We also show that MoxX mediates negative transcriptional autoregulation by binding to its own upstream DNA. The interacting regions of both MoxX and DNA were identified in order to model their complex. The repressor activity of MoxX was found to be mediated by the 16 N-terminal residues that contain the ribbon of the RHH motif. Based on homology with other RHH proteins and deletion mutant studies, we propose a model of the MoxX-DNA interaction, with the antiparallel β-sheet of the MoxX dimer inserted into the major groove of its cognate DNA. The structure of the complex of MoxX with MoxT and with its own upstream regulatory region will facilitate the design of molecules that can disrupt these interactions, a strategy for the development of novel antibacterials.

  37. Characterization of un-irradiated MIMAS MOX fuel by Raman spectroscopy and EPMA

    NASA Astrophysics Data System (ADS)

    Talip, Zeynep; Peuget, Sylvain; Magnin, Magali; Tribet, Magaly; Valot, Christophe; Vauchy, Romain; Jégou, Christophe

    2018-02-01

    In this study, the Raman spectroscopy technique was used to characterize un-irradiated MIMAS (MIcronized MASter blend) MOX fuel samples with an average 7 wt.% Pu content and different damage levels: 13 years after fabrication, one year after thermal recovery, and soon after annealing, respectively. The impacts of local Pu content, deviation from stoichiometry, and self-radiation damage on the Raman spectrum of the studied MIMAS MOX samples were assessed. MIMAS MOX fuel has three different phases: a Pu-rich agglomerate phase, a coating phase, and the uranium matrix. In order to distinguish these phases, the Raman results were combined with Pu content measurements performed by Electron Microprobe Analysis. The Raman results show that the T2g frequency shifts significantly, from 445 to 453 cm⁻¹, as the Pu content increases from 0.2 to 25 wt.%. These data are satisfactorily consistent with calculations based on Grüneisen parameters. It was concluded that the position of the T2g band is mainly controlled by Pu content and self-radiation damage; deviation from stoichiometry does not have a significant influence on the T2g band position. Self-radiation damage leads to a shift of the T2g band towards lower frequency (~1-2 cm⁻¹ for the UO2 matrix of the damaged sample). However, this shift is difficult to quantify for the coating phase and the Pu agglomerates, given the dispersion of the high Pu concentrations. In addition, the 525 cm⁻¹ band, attributed to sub-stoichiometric structural defects, is presented for the first time for the self-radiation-damaged MOX sample. Thanks to the different oxidation resistance of each phase, it was shown that laser-induced oxidation can alternatively be used to identify the phases. It is demonstrated that micro-Raman spectroscopy is an efficient technique for the characterization of heterogeneous MOX samples, owing to its fine spatial resolution.
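
    The reported calibration points (T2g at ~445 cm⁻¹ for 0.2 wt.% Pu and ~453 cm⁻¹ for 25 wt.%) invite a simple linear estimate of local Pu content from band position. The sketch below assumes linearity between those two anchors, which is an illustration only; as noted above, self-radiation damage shifts the band by ~1-2 cm⁻¹ and would bias such an estimate.

    ```python
    # Linear calibration of local Pu content from the T2g band position,
    # using the two reported anchor points. Linearity is an assumption of
    # this sketch; radiation damage adds a ~1-2 cm^-1 downshift (a bias).

    def pu_from_t2g(nu, nu0=445.0, pu0=0.2, nu1=453.0, pu1=25.0):
        """Estimate Pu content (wt.%) from T2g position nu (cm^-1)."""
        return pu0 + (nu - nu0) * (pu1 - pu0) / (nu1 - nu0)

    for nu in (446.0, 449.0, 452.0):   # matrix, coating, Pu-rich agglomerate
        print(f"T2g = {nu} cm^-1  ->  ~{pu_from_t2g(nu):.1f} wt.% Pu")
    ```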

  38. Irradiation performance of PFBR MOX fuel after 112 GWd/t burn-up

    NASA Astrophysics Data System (ADS)

    Venkiteswaran, C. N.; Jayaraj, V. V.; Ojha, B. K.; Anandaraj, V.; Padalakshmi, M.; Vinodkumar, S.; Karthik, V.; Vijaykumar, Ran; Vijayaraghavan, A.; Divakar, R.; Johny, T.; Joseph, Jojo; Thirunavakkarasu, S.; Saravanan, T.; Philip, John; Rao, B. P. C.; Kasiviswanathan, K. V.; Jayakumar, T.

    2014-06-01

    The 500 MWe Prototype Fast Breeder Reactor (PFBR), which is at an advanced stage of construction at Kalpakkam, India, will use mixed oxide (MOX) fuel with a target burnup of 100 GWd/t. The fuel pellet is of annular design to enable operation at a peak linear power of 450 W/cm with a minimum duration of pre-conditioning. The performance of the MOX fuel and of the D9 clad and wrapper material was assessed through Post-Irradiation Examinations (PIE) after test irradiation of a 37-fuel-pin subassembly in the Fast Breeder Test Reactor (FBTR) to a burn-up of 112 GWd/t. Fission product distribution, swelling and fuel-clad gap evolution, central hole diameter variation, restructuring, fission gas release, and clad wastage due to fuel-clad chemical interaction were evaluated through non-destructive and destructive examinations. The examinations indicate that the MOX fuel can safely attain the desired target burn-up in the PFBR.

  39. Multirecycling of Plutonium from LMFBR Blanket in Standard PWRs Loaded with MOX Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonat Sen; Gilles Youinou

    2013-02-01

    It is now well known that, from a physics standpoint, Pu, or even TRU (i.e. Pu + M.A.), originating from LEU fuel irradiated in PWRs can be multirecycled in PWRs using MOX fuel. However, the degradation of the isotopic composition during irradiation necessitates using enriched U in conjunction with the MOX fuel, either homogeneously or heterogeneously, to maintain the Pu (or TRU) content at a level allowing safe operation of the reactor, i.e. below about 10%. This study is related to another possible utilization of the excess Pu produced in the blanket of an LMFBR, namely in a PWR(MOX). In this case, the more Pu is bred in the LMFBR, the more PWR(MOX)s it can sustain. The important difference between the Pu coming from the blanket of an LMFBR and that coming from a PWR(LEU) is its isotopic composition. The first contains about 95% fissile isotopes, whereas the second contains only about 65% fissile isotopes. As shown later, this difference allows the PWR fed by Pu from the LMFBR blanket to operate with natural U instead of the enriched U needed when it is fed by Pu from a PWR(LEU).

  40. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, which takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free-moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected into such hardware so that a system function appears as a unity again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues, and an inquiry-interface provider. Sensors provide functions such as anomaly detection (communicating only singularities, not continuous observations); they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field-reprogrammable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton-injector mechanism, added to every node, which gives the network the ability to organize itself into one of many topologies. Finally, we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  41. [Three-dimensional computer aided design for individualized post-and-core restoration].

    PubMed

    Gu, Xiao-yu; Wang, Ya-ping; Wang, Yong; Lü, Pei-jun

    2009-10-01

    To develop a method of three-dimensional computer-aided design (CAD) of post-and-core restorations. Two plaster casts with extracted natural teeth were used in this study. The extracted teeth were prepared and scanned using a tomography method to obtain three-dimensional digitalized models. According to the basic rules of post-and-core design, the posts, cores, and cavity surfaces of the teeth were designed using the tools for processing point clouds, curves, and surfaces in the forward-engineering software of the Tanglong prosthodontic system. The three-dimensional figures of the final restorations were then adjusted according to the configurations of anterior teeth, premolars, and molars, respectively. Computer-aided design of 14 post-and-core restorations was completed, and a good fit between the restorations and the three-dimensional digital models was obtained. Appropriate retention forms and enough space for the full crown restorations can be obtained with this method. The CAD of three-dimensional figures of post-and-core restorations can fulfill clinical requirements, and they can therefore be used in the computer-aided manufacture (CAM) of post-and-core restorations.

  42. Correlation between electronic structure and electron conductivity in MoX2 (X = S, Se, and Te)

    NASA Astrophysics Data System (ADS)

    Muzakir, Saifful Kamaluddin

    2017-12-01

    Layered-structure molybdenum dichalcogenides, MoX2 (X = S, Se, and Te), are in focus as reversible charge-storage electrodes for pseudocapacitor applications. A correlation between the number of layers and the bandgap of these materials has been established by previous researchers. Such a correlation would reveal a connection between the bandgap and charge-storage properties, i.e., the amount of charge that can be stored and the speed of storage or dissociation. In this work, fundamental parameters, viz. (i) the size offset between a monolayer and the exciton Bohr radius of MoX2 and (ii) the ground- and excited-state electron densities, have been studied. We have identified realistic monolayer models of MoX2 using quantum chemical calculations, which explain a correlation between size offset and charge-storage properties. We conclude that the smaller the size offset, the higher the possibility of wave-function overlap between excited-state and ground-state electrons, and therefore the higher the electron mobility and conductivity of the MoX2.

  3. Multiple core computer processor with globally-accessible local memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalf, John; Donofrio, David; Oliker, Leonid

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  4. Many-core computing for space-based stereoscopic imaging

    NASA Astrophysics Data System (ADS)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single-threaded systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from higher measurement frequencies, allowing them to determine the associated statistical properties of the relative orbital elements more accurately. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time-consuming processes for single- (or few-) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor created by Intel Labs as a platform for many-core software research, equipped with a high-speed on-chip network for sharing information, advanced power-management technologies, and support for message passing. The results of utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. A comparison between the SCC results and those obtained from executing the same application on a commercial PC is also presented, showing the potential benefits of utilizing the SCC in particular, and many-core platforms in general, for real-time processing of visual-based satellite proximity operations missions.
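
    The energy-delay product quoted above is simply energy multiplied by runtime. As a minimal illustration of how such figures of merit compare two platforms, the sketch below tabulates throughput, energy and EDP for one hypothetical workload; the numbers are invented placeholders, not the paper's measurements.

        # Hedged sketch: Performance, Energy, and Energy-Delay-Product (EDP)
        # figures of merit for a workload run. Illustrative numbers only.

        def edp_metrics(runtime_s, avg_power_w, workload_units):
            """Return (throughput, energy in joules, EDP) for one run."""
            energy_j = avg_power_w * runtime_s          # E = P * t
            edp = energy_j * runtime_s                  # EDP = E * t
            throughput = workload_units / runtime_s     # e.g. pixel pairs / s
            return throughput, energy_j, edp

        # Hypothetical runs of the same stereo-matching job:
        for name, t, p in [("SCC, 48 cores", 12.0, 35.0),
                           ("Commodity PC", 9.0, 110.0)]:
            thr, e, edp = edp_metrics(t, p, workload_units=1e6)
            print(f"{name:14s} throughput={thr:9.0f}/s energy={e:7.1f} J EDP={edp:8.0f} J*s")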

  5. A Clear Success for International Transport of Plutonium and MOX Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blachet, L.; Jacot, P.; Bariteau, J.P.

    2006-07-01

    An agreement between the United States and Russia to eliminate 68 metric tons of surplus weapons-grade plutonium provided the basis for the United States government and its agency, the Department of Energy (DOE), to enter into contracts with industry leaders to fabricate mixed oxide (MOX) fuels (a blend of uranium oxide and plutonium oxide) for use in existing domestic commercial reactors. DOE contracted with Duke, COGEMA, Stone and Webster (DCS), a limited liability company comprising Duke Energy, COGEMA Inc. and Stone and Webster, to design a Mixed Oxide Fuel Fabrication Facility (MFFF), which would be built and operated at the DOE Savannah River Site (SRS) near Aiken, South Carolina. During this same time frame, DOE commissioned the fabrication and irradiation of lead test assemblies in one of the Mission Reactors to assist in obtaining NRC approval for batch implementation of MOX fuel prior to the operations phase of the MFFF facility. In February 2001, DOE directed DCS to initiate a pre-decisional investigation to determine means of obtaining lead assemblies, including all international options for manufacturing MOX fuels. This led to the implementation of the EUROFAB project, and work was initiated in earnest on EUROFAB by DCS on November 7, 2003. (authors)

  6. Structural, electronic, magnetic and optical properties of semiconductor Zn1-xMoxTe compound

    NASA Astrophysics Data System (ADS)

    Feng, Zhong-Ying; Zhang, Jian-Min

    2018-03-01

    The structural, electronic, magnetic and optical properties of Zn1-xMoxTe (x = 0.00, 0.25, 0.50, 0.75, 1.00) have been investigated by spin-polarized first-principles calculations. Zn0.50Mo0.50Te has a tetragonal structure, while Zn1-xMoxTe (x = 0.00, 0.25, 0.75, 1.00) crystallizes in cubic structures. For the Zn1-xMoxTe (x = 0.25, 0.50, 0.75, 1.00) alloys, the lattice constant and the volume are found to be larger than those of the pure ZnTe alloy. Zn1-xMoxTe (x = 0.25, 0.50, 0.75, 1.00) is magnetic, and the Mo element is found to dominate the bands crossing the Fermi level in the spin-up channel. Zn0.75Mo0.25Te and MoTe show half-metallic (HM) behavior. In the spin-down channel of Zn0.75Mo0.25Te, the Zn atoms contribute mainly to the conduction band minimum (CBM), while the valence band maximum (VBM) arises mainly from the contribution of the Te element. A positive spin splitting and crystal-field splitting of the d-states of the Mo atoms is observed for the Zn0.75Mo0.25Te alloy. The maximum values of the absorption coefficients αMAX(ω) of the Zn0.50Mo0.50Te alloy along the a or b axes are smaller than the absorption coefficient along the c axis. The first absorption peak, appearing in the energy range of 0.000-1.000 eV for the Zn1-xMoxTe (x = 0.25, 0.50, 0.75 or 1.00) alloys, is a new peak that is not observed in ZnTe.

  7. An FPGA computing demo core for space charge simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jinyuan; Huang, Yifei; /Fermilab

    2009-01-01

    In accelerator physics, space-charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field-programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular microprocessors (e.g., instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing Coulomb's force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz microprocessor by about a factor of 10. Temperature and power consumption of the FPGAs were also lower than those of microprocessors. Fast and convenient, FPGAs can serve as alternatives to time-consuming microprocessors for space-charge simulation.
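
    The look-up idea is easy to emulate in software: normalize the operand so that roughly its nine to ten most significant bits address a precomputed table of x**-1.5 values (the inverse square-root cube appearing in Coulomb-force kernels). The sketch below is our own illustration of that scheme, not the demo core's hardware design; the normalization and table depth are assumptions.

        import math

        # Hedged sketch of the table-look-up trick: approximate x**-1.5 by
        # normalizing x into [1, 4) (even exponent) and addressing a table
        # with the leading mantissa bits. Table depth mirrors the 9-10
        # address bits quoted above; the scaling scheme is our own choice.

        NBITS = 10
        TABLE = [(1.0 + (i + 0.5) / 2**NBITS * 3.0) ** -1.5   # x in [1, 4)
                 for i in range(2**NBITS)]

        def inv_sqrt_cube(x):
            """Approximate x**-1.5 for x > 0 via table look-up."""
            e = math.frexp(x)[1]          # x = m * 2**e with 0.5 <= m < 1
            e_even = (e - 1) & ~1         # even shift: mantissa lands in [1, 4)
            m = x / 2.0 ** e_even
            idx = int((m - 1.0) / 3.0 * 2**NBITS)
            return TABLE[min(idx, 2**NBITS - 1)] * 2.0 ** (-1.5 * e_even)

        print(inv_sqrt_cube(2.0), 2.0 ** -1.5)   # table value vs exact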

  8. Using E-mail in a Math/Computer Core Course.

    ERIC Educational Resources Information Center

    Gurwitz, Chaya

    This paper notes the advantages of using e-mail in computer literacy classes, and discusses the results of incorporating an e-mail assignment in the "Introduction to Mathematical Reasoning and Computer Programming" core course at Brooklyn College (New York). The assignment consisted of several steps. The students first read and responded…

  9. Test Anxiety, Computer-Adaptive Testing and the Common Core

    ERIC Educational Resources Information Center

    Colwell, Nicole Makas

    2013-01-01

    This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…

  10. Multiphysics Computational Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2007-01-01

    The objective of this effort is to develop an efficient and accurate computational heat-transfer methodology to predict thermal, fluid, and hydrogen environments for a hypothetical solid-core nuclear thermal engine, the Small Engine. In addition, the effects of the power profile and of hydrogen conversion on heat-transfer efficiency and thrust performance were investigated. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting computational fluid dynamics platform, while conjugate heat-transfer formulations were implemented to describe the heat transfer from solid to hydrogen inside the solid-core reactor. The computational domain covers the entire thrust chamber so that the aforementioned heat-transfer effects impact the thrust performance directly. The results show that the computed core-exit gas temperature, specific impulse, and core pressure drop agree well with the design data for the Small Engine. Finite-rate chemistry is very important in predicting the proper energy balance, as naturally occurring hydrogen decomposition is endothermic. Locally strong hydrogen conversion associated with a centralized power profile gives poor heat-transfer efficiency and lower thrust performance. On the other hand, uniform hydrogen conversion associated with a more uniform radial power profile achieves higher heat-transfer efficiency and higher thrust performance.

  11. A Noise Spectroscopy-Based Selective Gas Sensing with MOX Gas Sensors

    NASA Astrophysics Data System (ADS)

    Gomri, S.; Seguin, J.; Contaret, T.; Fiorido, T.; Aguir, K.

    We propose a new method for obtaining a fluctuation-enhanced sensing (FES) signature of a gas using a single metal oxide (MOX) gas microsensor. Starting from our previously developed model of adsorption-desorption (A-D) noise, we show theoretically that the product of frequency and the power spectral density (PSD) of the gas-sensing layer resistance fluctuations often has a maximum that is characteristic of the gas. This property was experimentally confirmed for the detection of NO2 and O3 using a WO3 sensing layer. This method could be useful for classifying gases. Furthermore, our noise measurements confirm our previous model showing that the PSD of the A-D noise in a MOX gas sensor is a combination of Lorentzians having a low-frequency magnitude and a cut-off frequency that depends on the nature of the detected gas.
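
    For a single Lorentzian component S(f) = A / (1 + (f/f_c)^2), the product f*S(f) reaches its maximum exactly at f = f_c, which is why the peak position can serve as a gas signature. A minimal sketch of that signature extraction, assuming a synthetic spectrum with illustrative parameters:

        import numpy as np

        # Hedged sketch: locate the maximum of f * PSD(f) for a Lorentzian
        # adsorption-desorption noise spectrum. The peak sits at the cut-off
        # frequency f_c, reported above as gas-characteristic. Parameters
        # below are illustrative, not measured values.

        f = np.logspace(-1, 4, 2000)            # Hz
        f_c, A = 120.0, 1e-12                   # cut-off and low-f magnitude
        psd = A / (1.0 + (f / f_c) ** 2)        # Lorentzian PSD
        signature = f * psd

        f_peak = f[np.argmax(signature)]
        print(f"f*PSD peaks at ~{f_peak:.0f} Hz (true cut-off {f_c} Hz)")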

  12. Microwave-assisted hydrothermal synthesis of Ag₂(W(1-x)Mox)O₄ heterostructures: Nucleation of Ag, morphology, and photoluminescence properties.

    PubMed

    Silva, M D P; Gonçalves, R F; Nogueira, I C; Longo, V M; Mondoni, L; Moron, M G; Santana, Y V; Longo, E

    2016-01-15

    Ag2W(1-x)MoxO4 (x=0.0 and 0.50) powders were synthesized by the co-precipitation (drop-by-drop) method and processed using a microwave-assisted hydrothermal method. We report the real-time in situ formation and growth of Ag filaments on the Ag2W(1-x)MoxO4 crystals using an accelerated electron beam under high vacuum. Various techniques were used to evaluate the influence of the network-former substitution on the structural and optical properties, including photoluminescence (PL) emission, of these materials. X-ray diffraction results confirmed the phases obtained by the synthesis methods. Raman spectroscopy revealed significant changes in local order-disorder as a function of the network-former substitution. Field-emission scanning electron microscopy was used to determine the shape as well as dimensions of the Ag2W(1-x)MoxO4 heterostructures. The PL spectra showed that the PL-emission intensities of Ag2W(1-x)MoxO4 were greater than those of pure Ag2WO4, probably because of the increase of intermediary energy levels within the band gap of the Ag2W(1-x)MoxO4 heterostructures, as evidenced by the decrease in the band-gap values measured by ultraviolet-visible spectroscopy.

  13. Phenotypic characterization of ten methanol oxidation (Mox) mutant classes in methylobacterium AM1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunn, D.N.; Lidstrom, M.E.

    Twenty-five methanol oxidation mutants of the facultative methylotroph Methylobacterium strain AM1 have been characterized by complementation analysis and assigned to ten complementation groups, Mox A1, A2, A3 and B-H. We have characterized each of the mutants belonging to the ten Mox complementation groups by PMS-DCPIP dye-linked methanol dehydrogenase activity, by methanol-dependent whole-cell oxygen consumption, by the presence or absence of methanol dehydrogenase protein on SDS-polyacrylamide gels and Western blots, by the absorption spectra of purified mutant methanol dehydrogenase proteins, and by the presence or absence of the soluble cytochrome c proteins of Methylobacterium AM1. We propose functions for each of the genes deficient in the mutants of the ten Mox complementation groups. These functions include two linked genes that encode the methanol dehydrogenase structural protein and the soluble cytochrome c_L, a gene encoding a secretion function essential for the synthesis and export of methanol dehydrogenase and cytochrome c_L, three gene functions responsible for the proper association of the PQQ prosthetic group with the methanol dehydrogenase apoprotein, and four positive regulatory gene functions controlling the expression of the ability to oxidize methanol. 24 refs., 5 figs., 2 tabs.

  14. Further observations on OCOM MOX fuel: microstructure in the vicinity of the pellet rim and fuel-cladding interaction

    NASA Astrophysics Data System (ADS)

    Walker, C. T.; Goll, W.; Matsumura, T.

    1997-06-01

    The fuel investigated was manufactured by Siemens-KWU and irradiated at low rating in the KWO reactor in Germany. The MOX agglomerates in the cold outer region of the fuel shared several common features with the high burn-up structure at the rim of UO2 fuel. It is proposed that in both cases the mechanism producing the microstructure change is recrystallisation. Further, it is shown that surface MOX agglomerates do not noticeably retard cladding creepdown although they swell into the gap. The contracting cladding appears able to push the agglomerates back into the fuel. The thickness of the oxide layer on the inner cladding surface increased at points where contact with surface MOX agglomerates had occurred. Despite this, the mean thickness of the oxide did not differ significantly from that found in UO2 fuel rods of like design. It is judged that the high burn-up structure will form in the UO2 matrix when the local burn-up there reaches 60 to 80 GWd/tM. Limiting the MOX scrap addition in the UO2 matrix will delay its formation.

  15. Fault-Tolerant, Real-Time, Multi-Core Computer System

    NASA Technical Reports Server (NTRS)

    Gostelow, Kim P.

    2012-01-01

    A document discusses a fault-tolerant, self-aware, low-power, multi-core computer for space missions, with thousands of simple cores achieving speed through concurrency. The proposed machine decides how to achieve concurrency in real time, rather than depending on programmers. The driving features of the system are simple hardware that is modular in the extreme, with no shared memory, and software with significant runtime reorganizing capability. The document describes a mechanism for moving ongoing computations and data that is based on a functional model of execution. Because there is no shared memory, each processor connects to its neighbors through a high-speed data link. Messages are sent to a neighbor switch, which in turn forwards the message on to its neighbor until it reaches the intended destination. Except for the neighbor connections, processors are isolated and independent of each other. The processors on the periphery also connect chip-to-chip, thus building up a large processor net. There is no particular topology to the larger net, as a function at each processor allows it to forward a message in the correct direction. Some chip-to-chip connections are not nearest neighbors, providing shortcuts across some of the longer physical distances. The peripheral processors also provide the connections to sensors, actuators, radios, science instruments, and other devices with which the computer system interacts.
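
    The hop-by-hop forwarding described above can be caricatured on a 2-D mesh: each node knows only its own coordinates and hands the message to the neighbor that reduces the remaining distance. The sketch below is an illustration under that assumption; the document's actual topology-free forwarding function is not specified in this abstract.

        # Hedged sketch of neighbor-to-neighbor forwarding: each node knows
        # only its own grid coordinates and forwards a message one hop toward
        # the destination. The 2-D mesh and the step rule are illustrative
        # choices, not the document's design.

        def next_hop(node, dest):
            """Pick the neighbor one step closer to dest on a 2-D mesh."""
            x, y = node
            dx, dy = dest[0] - x, dest[1] - y
            if dx:                       # move along x first, then y
                return (x + (1 if dx > 0 else -1), y)
            if dy:
                return (x, y + (1 if dy > 0 else -1))
            return node                  # already at the destination

        def route(src, dest):
            path, node = [src], src
            while node != dest:
                node = next_hop(node, dest)
                path.append(node)
            return path

        print(route((0, 0), (3, 2)))     # hop-by-hop path across the mesh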

  16. Ultrasmall PdmMn1-mOx binary alloyed nanoparticles on graphene catalysts for ethanol oxidation in alkaline media

    NASA Astrophysics Data System (ADS)

    Ahmed, Mohammad Shamsuddin; Park, Dongchul; Jeon, Seungwon

    2016-03-01

    A rare combination of graphene (G)-supported palladium and manganese mixed-oxide binary alloyed catalysts (BACs) has been synthesized with the addition of Pd and Mn metals in various ratios (G/PdmMn1-mOx) through a facile wet-chemical method and employed as an efficient anode catalyst for the ethanol oxidation reaction (EOR) in alkaline fuel cells. The as-prepared G/PdmMn1-mOx BACs have been characterized by several instrumental techniques; transmission electron microscopy images show that the ultrafine alloyed nanoparticles (NPs) are excellently monodispersed on the G. The Pd and Mn in the G/PdmMn1-mOx BACs are homogeneously alloyed, and the Mn is present in a mixed-oxidized form, as shown by X-ray diffraction. The electrochemical performance, kinetics and stability of these catalysts toward the EOR have been evaluated using cyclic voltammetry in 1 M KOH electrolyte. Among all the G/PdmMn1-mOx BACs, the G/Pd0.5Mn0.5Ox catalyst has shown far superior mass activity and stability compared with the pure Pd catalysts (G/Pd1Mn0Ox, Pd/C and Pt/C). The good dispersion, the ultrafine size of the NPs and the higher degree of alloying are the key factors for the enhanced and stable EOR electrocatalysis on G/Pd0.5Mn0.5Ox.

  17. Modeling and Comparison of Options for the Disposal of Excess Weapons Plutonium in Russia

    DTIC Science & Technology

    2002-04-01

    [Garbled extract of a model-parameter table; recoverable entries:] LWR cooling time; LWR Pu load rate; LWR net destruction fraction; LWR reactor operating life; MOX core fraction; excess separated Pu; HTGR cycle; Pu in waste. For the HTGR, the entire core consists of plutonium fuel; therefore a core fraction is not specified. HTGR cooling time (time unloaded spent fuel must cool before permanent storage): 3 years. MOX core fraction: fraction of

  18. Parallel-vector out-of-core equation solver for computational mechanics

    NASA Technical Reports Server (NTRS)

    Qin, J.; Agarwal, T. K.; Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.

    1993-01-01

    A parallel/vector out-of-core equation solver is developed for shared-memory computers, such as the Cray Y-MP machine. The input/output (I/O) time is reduced by using the asynchronous BUFFER IN and BUFFER OUT statements, which can be executed simultaneously with the CPU instructions. The parallel and vector capability provided by the supercomputers is also exploited to enhance performance. Numerical applications in large-scale structural analysis are given to demonstrate the efficiency of the present out-of-core solver.
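
    The asynchronous BUFFER IN/BUFFER OUT overlap is, in modern terms, double buffering: while the CPU works on block k, block k+1 is read in the background. A minimal Python sketch of the pattern (the original is Cray Fortran; the block size, file layout and "work" step here are placeholders):

        import threading
        import numpy as np

        # Hedged sketch of the I/O-compute overlap behind BUFFER IN/OUT: a
        # reader thread fetches block k+1 from disk while the main thread
        # processes block k. Block size and the work step are placeholders.

        BLOCK = 250_000                            # floats per block

        def read_block(mmap, k, out):
            out["data"] = np.array(mmap[k * BLOCK:(k + 1) * BLOCK])

        def process(block):
            return block.sum()                     # stand-in for factorization

        def out_of_core_pass(path, nblocks):
            mmap = np.memmap(path, dtype=np.float64, mode="r")
            nxt = {}
            reader = threading.Thread(target=read_block, args=(mmap, 0, nxt))
            reader.start()
            total = 0.0
            for k in range(nblocks):
                reader.join()                      # wait for block k
                cur, nxt = nxt["data"], {}
                if k + 1 < nblocks:                # prefetch block k+1
                    reader = threading.Thread(target=read_block,
                                              args=(mmap, k + 1, nxt))
                    reader.start()
                total += process(cur)              # compute overlaps the read
            return total

        if __name__ == "__main__":
            data = np.arange(3 * BLOCK, dtype=np.float64)
            data.tofile("blocks.bin")              # small demo file
            print(out_of_core_pass("blocks.bin", 3), data.sum())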

  19. Bi2MoxW1-xO6 solid solutions with tunable band structure and enhanced visible-light photocatalytic activities

    NASA Astrophysics Data System (ADS)

    Li, Wenqi; Ding, Xingeng; Wu, Huating; Yang, Hui

    2018-07-01

    Semiconductor photocatalysis is an effective green way to combat water pollution. For the first time, this study reports a novel method to develop Bi2MoxW1-xO6 solid solutions with a microsphere structure through an anion-exchange method. All Bi2MoxW1-xO6 samples exhibit an Aurivillius-type crystal structure without any secondary phase, confirming the formation of complete solid solutions. As the value of x increases, the band-gap energy of the Bi2MoxW1-xO6 solid solutions decreases, while the optical absorption edge moves to longer wavelength. Raman spectroscopy shows an increase in orthorhombic distortion with progressive replacement of W sites in Bi2WO6 by Mo6+ ions. Compared to the Bi2MoO6 and Bi2WO6 samples, the Bi2Mo0.4W0.6O6 sample displayed the best photocatalytic activity and cycling stability for degradation of RhB dye. The enhanced photocatalytic activity of the Bi2Mo0.4W0.6O6 sample can be synergetically linked to its hierarchical hollow structure, enhanced light absorbance, and high carrier-separation efficiency. Additionally, the formation of the hollow Bi2MoxW1-xO6 microspheres can be attributed to the Kirkendall effect.

  20. Seed robustness of oriented relative fuzzy connectedness: core computation and its applications

    NASA Astrophysics Data System (ADS)

    Tavares, Anderson C. M.; Bejar, Hans H. C.; Miranda, Paulo A. V.

    2017-02-01

    In this work, we present a formal definition and an efficient algorithm to compute the cores of Oriented Relative Fuzzy Connectedness (ORFC), a recent seed-based segmentation technique. The core is a region where the seed can be moved without altering the segmentation, an important property for robust techniques and for reducing user effort. We show how ORFC cores can be used to build a powerful hybrid image segmentation approach. We also provide some new theoretical relations between ORFC and the Oriented Image Foresting Transform (OIFT), as well as between their cores. Experimental results comparing several methods show that the hybrid approach retains high accuracy, avoids the shrinking problem, and provides robustness to seed placement inside the desired object owing to the core properties.

  1. Multivariate estimation of the limit of detection by orthogonal partial least squares in temperature-modulated MOX sensors.

    PubMed

    Burgués, Javier; Marco, Santiago

    2018-08-17

    Metal oxide semiconductor (MOX) sensors are usually temperature-modulated and calibrated with multivariate models such as partial least squares (PLS) to increase the inherently low selectivity of this technology. The multivariate sensor response patterns exhibit heteroscedastic and correlated noise, which suggests that maximum likelihood methods should outperform PLS. One contribution of this paper is the comparison between PLS and maximum likelihood principal components regression (MLPCR) for MOX sensors. PLS is often criticized for its lack of interpretability when the model complexity increases beyond the chemical rank of the problem. This happens in MOX sensors due to cross-sensitivities to interferences, such as temperature or humidity, and to non-linearity. Additionally, the estimation of fundamental figures of merit, such as the limit of detection (LOD), is still not standardized for multivariate models. Orthogonalization methods, such as orthogonal projection to latent structures (O-PLS), have been successfully applied in other fields to reduce the complexity of PLS models. In this work, we propose a LOD estimation method based on applying the well-accepted univariate LOD formulas to the scores of the first component of an orthogonal PLS model. The resulting LOD is compared to the multivariate LOD range derived from error propagation. The methodology is applied to data from temperature-modulated MOX sensors (FIS SB-500-12 and Figaro TGS 3870-A04), aiming at the detection of low concentrations of carbon monoxide in the presence of uncontrolled humidity (chemical noise). We found that the PLS models were simpler and more accurate than the MLPCR models. Average LOD values of 0.79 ppm (FIS) and 1.06 ppm (Figaro) were found using the approach described in this paper. These values were contained within the LOD ranges obtained with the error-propagation approach. The mean LOD increased to 1.13 ppm (FIS) and 1.59 ppm (Figaro) when considering validation samples
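
    The univariate recipe transferred to the first orthogonal-PLS component is commonly LOD = 3.3 * s_blank / b, with s_blank the standard deviation of the blank scores and b the slope of score versus concentration. A minimal sketch under that assumption, with synthetic calibration data standing in for scores from a fitted O-PLS model:

        import numpy as np

        # Hedged sketch of the LOD idea: apply the univariate formula
        # LOD = 3.3 * s_blank / slope to the scores of the first component
        # of an orthogonalized PLS model. The "scores" here are synthetic
        # stand-ins; in practice they come from the fitted O-PLS.

        conc = np.array([0.0, 0.0, 0.5, 1.0, 2.0, 4.0])        # ppm CO (synthetic)
        score = np.array([0.02, -0.01, 0.55, 1.04, 2.1, 3.9])  # 1st O-PLS score

        b, a = np.polyfit(conc, score, 1)                      # score = b*conc + a
        s_blank = score[conc == 0.0].std(ddof=1)               # blank score noise
        lod = 3.3 * s_blank / b
        print(f"slope={b:.3f}, blank sd={s_blank:.3f}, LOD~{lod:.2f} ppm")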

  2. Transportation and storage of MOX and LEU assemblies at the Balakovo Nuclear Power Plant

    DOT National Transportation Integrated Search

    2001-01-01

    The VVER-1000-type Balakovo Nuclear Power Plant has been chosen to dispose of the plutonium created as part of the Russian weapons program. The plutonium will be converted to mixed-oxide (MOX) fuel, fabricated into assemblies and loaded into the reactor. ...

  3. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processing environments have become the norm in generic computing and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment: it consists of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  4. Discrimination of irradiated MOX fuel from UOX fuel by multivariate statistical analysis of simulated activities of gamma-emitting isotopes

    NASA Astrophysics Data System (ADS)

    Åberg Lindell, M.; Andersson, P.; Grape, S.; Hellesen, C.; Håkansson, A.; Thulin, M.

    2018-03-01

    This paper investigates how concentrations of certain fission products and their related gamma-ray emissions can be used to discriminate between uranium oxide (UOX) and mixed oxide (MOX) type fuel. Discrimination of irradiated MOX fuel from irradiated UOX fuel is important in nuclear facilities and for transport of nuclear fuel, for purposes of both criticality safety and nuclear safeguards. Although facility operators keep records on the identity and properties of each fuel, tools for nuclear safeguards inspectors that enable independent verification of the fuel are critical to recovering continuity of knowledge, should it be lost. A discrimination methodology for classification of UOX and MOX fuel, based on passive gamma-ray spectroscopy data and multivariate analysis methods, is presented. Nuclear fuels and their gamma-ray emissions were simulated in the Monte Carlo code Serpent, and the resulting data were used as input to train seven different multivariate classification techniques. The trained classifiers were subsequently implemented and evaluated with respect to their ability to correctly predict the classes of unknown fuel items. The best results for discrimination of UOX and MOX fuel were obtained with non-linear classification techniques, such as the k-nearest-neighbors method and the Gaussian-kernel support vector machine. For fuel with cooling times up to 20 years, when gamma rays from the isotope 134Cs can still be efficiently measured, success rates of 100% were obtained. A sensitivity analysis indicated that these methods were also robust.
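
    A hedged sketch of the classification step, using scikit-learn's k-nearest-neighbors classifier on synthetic two-feature data; the feature definitions and values below are invented placeholders, not the paper's Serpent-derived activities:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Hedged sketch: a k-nearest-neighbors classifier trained on
        # simulated gamma-emitter activity features labeled UOX or MOX.
        # The two "activity ratio" features and their values are synthetic
        # placeholders standing in for the paper's simulated isotopes.

        rng = np.random.default_rng(0)
        n = 200
        uox = rng.normal([0.8, 0.05], 0.05, size=(n, 2))   # hypothetical ratios
        mox = rng.normal([1.1, 0.12], 0.05, size=(n, 2))
        X = np.vstack([uox, mox])
        y = np.array(["UOX"] * n + ["MOX"] * n)

        clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        print(clf.predict([[1.05, 0.11]]))                 # -> likely 'MOX'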

  5. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    NASA Astrophysics Data System (ADS)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social-networking web applications, and big data have created huge demand for data-processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are used to compare the performance of throughput-computing applications using shared-memory programming in OpenMP and CUDA API based programming.

  6. Fabrication of CeO2–MOx (M = Cu, Co, Ni) composite yolk–shell nanospheres with enhanced catalytic properties for CO oxidation

    PubMed Central

    Shi, Jingjing; Cao, Hongxia; Wang, Ruiyu

    2017-01-01

    CeO2–MOx (M = Cu, Co, Ni) composite yolk–shell nanospheres with uniform size were fabricated by a general wet-chemical approach. It involved a non-equilibrium heat treatment of Ce coordination polymer colloidal spheres (Ce-CPCSs) at a proper heating rate to produce CeO2 yolk–shell nanospheres, followed by a solvothermal treatment of the as-synthesized CeO2 with M(CH3COO)2 in ethanol solution. During the solvothermal process, highly dispersed MOx species were decorated onto the surface of the CeO2 yolk–shell nanospheres to form CeO2–MOx composites. As CO oxidation catalysts, the CeO2–MOx composite yolk–shell nanospheres showed strikingly higher catalytic activity than naked CeO2, owing to the strong synergistic interaction at the interface sites between MOx and CeO2. Cycling tests demonstrate the good cycle stability of these yolk–shell nanospheres. The initial concentration of M(CH3COO)2·xH2O in the synthesis process played a significant role in the catalytic performance for CO oxidation. Impressively, complete CO conversion was reached at a relatively low temperature of 145 °C over the CeO2–CuOx-2 sample. Furthermore, the CeO2–CuOx catalyst is more active than the CeO2–CoOx and CeO2–NiO catalysts, indicating that the catalytic activity correlates with the metal oxide. Additionally, this versatile synthesis approach can be expected to create other ceria-based composite oxide systems with various structures for a broad range of technical applications. PMID:29234577

  7. Computational Analysis of a Pylon-Chevron Core Nozzle Interaction

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Kinzie, Kevin W.; Pao, S. Paul

    2001-01-01

    In typical engine installations, the pylon of an engine creates a flow disturbance that interacts with the engine exhaust flow. This interaction of the pylon with the exhaust flow from a dual-stream nozzle was studied computationally. The dual-stream nozzle simulates an engine with a bypass ratio of five. A total of five configurations were simulated, all at the take-off operating point. All computations were performed using the structured PAB3D code, which solves the steady, compressible, Reynolds-averaged Navier-Stokes equations. These configurations included a core nozzle with eight chevron noise-reduction devices built into the nozzle trailing edge. Baseline cases had no chevron devices and were run with and without a pylon. Cases with the chevrons were also studied with and without the pylon. Another case was run with the chevrons rotated relative to the pylon. The fan nozzle did not have chevron devices attached. Solutions showed that the effect of the pylon is to distort the round jet plume and to destroy the symmetrical lobed pattern created by the core chevrons. Several overall flow-field quantities were calculated that might be used in extensions of this work to find flow-field parameters that correlate with changes in noise.

  8. A non-local mixing-length theory able to compute core overshooting

    NASA Astrophysics Data System (ADS)

    Gabriel, M.; Belkacem, K.

    2018-04-01

    Turbulent convection is certainly one of the most important and thorny issues in stellar physics. Our deficient knowledge of this crucial physical process introduces a fairly large uncertainty concerning the internal structure and evolution of stars. A striking example is overshoot at the edge of convective cores. Indeed, nearly all stellar evolutionary codes treat the overshooting zones in a very approximate way that considers both their extent and the profile of the temperature gradient as free parameters. There are only a few sophisticated theories of stellar convection, such as Reynolds-stress approaches, but they also require the adjustment of a non-negligible number of free parameters. We present here a theory based on the plume theory as well as on the mean-field equations, but without relying on the usual Taylor closure hypothesis. It leads us to a set of eight differential equations plus a few algebraic ones. Our theory is essentially a non-local mixing-length theory. It enables us to compute the temperature gradient in a shrinking convective core and its overshooting zone. The case of an expanding convective core is also discussed, though more briefly. Numerical simulations have improved quickly in recent years, allowing us to foresee that they will probably soon provide a model of convection adapted to the computation of 1D stellar models.

  9. 98. View of IBM digital computer model 7090 magnet core ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    98. View of IBM digital computer model 7090 magnet core installation. ITT Artic Services, Inc., Official photograph BMEWS Site II, Clear, AK, by unknown photographer, 17 September 1965. BMEWS, clear as negative no. A-6606. - Clear Air Force Station, Ballistic Missile Early Warning System Site II, One mile west of mile marker 293.5 on Parks Highway, 5 miles southwest of Anderson, Anderson, Denali Borough, AK

  10. BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

    1981-06-01

    This report is condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed, and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

  11. fissioncore: A desktop-computer simulation of a fission-bomb core

    NASA Astrophysics Data System (ADS)

    Cameron Reed, B.; Rohe, Klaus

    2014-10-01

    A computer program, fissioncore, has been developed to deterministically simulate the growth of the number of neutrons within an exploding fission-bomb core. The program allows users to explore the dependence of criticality conditions on parameters such as nuclear cross-sections, core radius, number of secondary neutrons liberated per fission, and the distance between nuclei. Simulations clearly illustrate the existence of a critical radius given a particular set of parameter values, as well as how the exponential growth of the neutron population (the condition that characterizes criticality) depends on these parameters. No understanding of neutron diffusion theory is necessary to appreciate the logic of the program or the results. The code is freely available in FORTRAN, C, and Java and is configured so that modifications to accommodate more refined physical conditions are possible.
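
    The criticality behavior the program exposes can be caricatured in a few lines: if each neutron causes a fission with probability p_f before escaping, and each fission releases NU secondaries, the population multiplies by k = NU * p_f per generation, and exponential growth requires k > 1. In the toy sketch below, the radius dependence of p_f is a rough illustration of that logic, not the program's transport model:

        import math

        # Hedged toy model of the criticality behavior fissioncore lets
        # users explore: per generation each neutron fissions (before
        # escaping) with probability p_f and releases NU secondaries, so the
        # population multiplies by k = NU * p_f. The form
        # p_f = 1 - exp(-r / MFP) is a crude stand-in for real transport,
        # chosen only so that k grows with core radius r.

        NU = 2.5           # secondary neutrons per fission (typical order)
        MFP = 4.0          # effective mean free path in cm (illustrative)

        def k_per_generation(radius_cm):
            p_fission = 1.0 - math.exp(-radius_cm / MFP)
            return NU * p_fission

        for r in (2.0, 4.0, 6.0, 8.0):
            k = k_per_generation(r)
            neutrons = 1000 * k ** 10        # population after 10 generations
            label = "supercritical" if k > 1.0 else "subcritical"
            print(f"r={r:3.1f} cm  k={k:4.2f}  N(10 gen)={neutrons:12.0f}  ({label})")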

  12. 76 FR 22735 - Shaw AREVA MOX Services, Mixed Oxide Fuel Fabrication Facility; License Amendment Request, Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ... NUCLEAR REGULATORY COMMISSION [Docket No. 70-3098; NRC-2011-0081] Shaw AREVA MOX Services, Mixed... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for documents... publicly available documents related to this notice using the following methods: NRC's Public Document Room...

  13. Strength Loss in MA-MOX Green Pellets from Radiation Damage to Binders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul A. Lessing; W.R. Cannon; Gerald W. Egeland

    The fracture strength of green minor actinide (MA)-MOX pellets containing 75 wt.% DUO2, 20 wt.% PuO2, 3 wt.% AmO2 and 2 wt.% NpO2 was studied as a function of storage time, after mixing in the binder and before sintering, to test the effect of radiation damage on binders. Fracture strength degraded continuously over the 10 days of the study for all three binders studied: PEG binder (Carbowax 8000), microcrystalline wax (Mobilcer X) and styrene-acrylic copolymer (Duramax B1022), with the fracture strength of Duramax B1022 degrading the least. For instance, for several hours after mixing Carbowax 8000 with MA-MOX, the fracture strength of a pellet was reasonably high and pellets were easily handled without breaking, but the pellets were too weak to handle after 10 days. Strength measured using the diametral compression test showed that degradation was more rapid in pellets containing 1.0 wt.% Carbowax PEG 8000 than in those containing only 0.2 wt.%, suggesting that irradiation not only left the binder less effective but also reduced the pellet strength. In contrast, the strength of pellets containing Duramax B1022 degraded very little over the 10-day period. It is suggested that the styrene portion of the Duramax B1022 copolymer provided the radiation resistance.

  14. 78 FR 9431 - Shaw AREVA MOX Services, LLC (Mixed Oxide Fuel Fabrication Facility); Order Approving Indirect...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-08

    ... established pursuant to the policies duly authorized under the National Industrial Security Program. The proxy... Influence (FOCI) in order to maintain the Facility Security Clearance held by MOX Services. No physical... Facility Security Clearance, is in accordance with the provisions of the AEA of 1954, as amended. The...

  15. Automatic Quantification of X-ray Computed Tomography Images of Cores: Method and Application to Shimokita Cores (Northeast Coast of Honshu, Japan)

    NASA Astrophysics Data System (ADS)

    Gaillot, P.

    2007-12-01

    X-ray computed tomography (CT) of rock core provides nondestructive cross-sectional or three-dimensional core representations from the attenuation of electromagnetic radiation. Attenuation depends on the density and the atomic constituents of the scanned rock material. Since it can non-invasively measure phase distribution and species concentration, X-ray CT offers significant advantages for characterizing both heterogeneous and apparently homogeneous lithologies. In particular, once empirically calibrated into 3D density images, this scanning technique is useful for observing density variation. In this paper, I present a procedure by which the information contained in the 3D images can be quantitatively extracted and turned into very high-resolution core logs and core image logs, including (1) the radial and angular distributions of density values, (2) the histogram of the density distribution and its related statistical parameters (average; 10, 25, 50, 75 and 90 percentiles; and width at half maximum), and (3) the volume, average density and mass contribution of three core fractions defined by two user-defined density thresholds (voids and vugs < 1.01 g/cc ≤ damaged core material < 1.25 g/cc < non-damaged core material). In turn, these quantitative outputs (1) allow the recognition of bedding and sedimentary features, as well as natural and coring-induced fractures, (2) provide a high-resolution bulk-density core log, and (3) provide quantitative estimates of core voids and core damaged zones that can further be used to characterize core quality and core disturbance and to apply, where appropriate, volume corrections to core physical properties (gamma-ray attenuation density, magnetic susceptibility, natural gamma radiation, non-contact electrical resistivity, P-wave velocity) acquired via Multi-Sensor Core Loggers (MSCL). The procedure is illustrated on core data (XR-CT images, continuous MSCL physical properties and
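
    The two-threshold split maps directly onto voxel masking: label each calibrated density value as void/vug, damaged, or intact, then report the volume and mass of each fraction. A minimal sketch under the thresholds quoted above, with a synthetic voxel array standing in for a real calibrated CT volume:

        import numpy as np

        # Hedged sketch of the two-threshold voxel classification: voids and
        # vugs below 1.01 g/cc, damaged material between 1.01 and 1.25 g/cc,
        # intact core above 1.25 g/cc. The voxel array and voxel volume are
        # synthetic stand-ins for a calibrated CT density volume.

        rng = np.random.default_rng(1)
        rho = np.clip(rng.normal(1.6, 0.35, size=(64, 64, 64)), 0.0, None)
        voxel_cc = 0.01                         # illustrative voxel volume, cc

        voids = rho < 1.01
        damaged = (rho >= 1.01) & (rho < 1.25)
        intact = rho >= 1.25

        for name, mask in [("voids/vugs", voids),
                           ("damaged", damaged),
                           ("intact", intact)]:
            vol = mask.sum() * voxel_cc
            mass = rho[mask].sum() * voxel_cc   # grams: density * voxel volume
            print(f"{name:10s} volume={vol:8.1f} cc  mass={mass:9.1f} g")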

  16. Bi-Modal Model for Neutron Emissions from PuO{sub 2} and MOX Holdup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menlove, Howard; Lafleur, Adrienne

    2015-07-01

    The measurement of uranium and plutonium holdup in plants during process activity and for decommissioning is important for nuclear safeguards and material control. The amount of plutonium and uranium holdup in glove-boxes, pipes, ducts, and other containers has been measured for several decades using both neutron and gamma-ray techniques. For larger containers such as hot cells and glove-boxes that contain processing equipment, the gamma-ray techniques are limited by self-shielding in the sample as well as gamma absorption in the equipment and associated shielding. Neutron emission is more penetrating and has been used extensively to measure holdup in large facilities such as the MOX processing and fabrication facilities in Japan and Europe. In some cases the total neutron emission rates are used to determine the holdup mass, and in other cases the coincidence rates are used, as at the PFPF MOX fabrication plant in Japan. The neutron emission from plutonium and MOX has three primary source terms: 1) spontaneous fission (SF) from the plutonium isotopes; 2) (α,n) reactions from plutonium alpha-particle emission reacting with oxygen and other impurities; and 3) neutron multiplication (M) in the plutonium and uranium as a result of neutrons created by the first two sources. The spontaneous fission yield per gram is independent of thickness, whereas sources 2) and 3) are very dependent on the thickness of the deposit. As the effective thickness of the deposit becomes thin relative to the alpha-particle range, the (α,n) reactions and the neutrons from multiplication (M) approach zero. In any glove-box there will always be two primary modes of holdup accumulation, namely direct powder contact and non-contact accumulation by air dispersal. These regimes correspond to surfaces in the glove-box that have come into direct contact with the process MOX powder versus surface areas that have not had direct contact with the powder. The air

  17. 77 FR 70193 - Shaw Areva MOX Services (Mixed Oxide Fuel Fabrication Facility); Notice of Atomic Safety and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-23

    ... MOX Services (Mixed Oxide Fuel Fabrication Facility); Notice of Atomic Safety and Licensing Board Reconstitution Pursuant to 10 CFR 2.313(c) and 2.321(b), the Atomic Safety and Licensing Board (Board) in the... Rockville, Maryland this 16th day of November 2012. E. Roy Hawkens, Chief Administrative Judge, Atomic...

  18. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

    The continuous accumulation of sequence data, for example due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations, which typically represent over 95 per cent of the computational effort of current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.

  19. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    PubMed

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be

  20. Specific low temperature release of 131Xe from irradiated MOX fuel

    NASA Astrophysics Data System (ADS)

    Hiernaut, J.-P.; Wiss, T.; Rondinella, V. V.; Colle, J.-Y.; Sasahara, A.; Sonoda, T.; Konings, R. J. M.

    2009-08-01

    A particular low-temperature behaviour of the 131Xe isotope was observed during release studies of fission gases from MOX fuel samples irradiated to 44.5 GWd/tHM. A reproducible release peak, representing 2.7% of the total release of 131Xe alone, was observed at ~1000 K, the rest of the release curve being essentially identical to that of all the other xenon isotopes. The integral isotopic composition of the different xenon isotopes is in very good agreement with the inventory calculated using ORIGEN-2. The presence of this particular release is explained by the relation between the thermal diffusion and decay properties of the various iodine radioisotopes, which all decay into xenon.

  1. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach that exploits two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions of a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified that accounts for over 97% of the total computational time using GPROF. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of the 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects a cache miss rate of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network or on multiple compute nodes of a cluster as slaves, using parallel PEST to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
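
    The outer (MPI) level described above is embarrassingly parallel: each Jacobian column of the Levenberg-Marquardt step is one forward run with a single parameter perturbed. The sketch below illustrates only that level, with Python multiprocessing standing in for MPI ranks and a toy function standing in for the HGC5 forward solve:

        import numpy as np
        from multiprocessing import Pool

        # Hedged sketch of the parallel Jacobian: each column is a
        # finite-difference forward run with one parameter perturbed, so
        # columns can be computed by independent workers (multiprocessing
        # here stands in for MPI ranks). forward_model is a placeholder.

        def forward_model(params):
            x = np.linspace(0.0, 1.0, 50)
            return params[0] * np.exp(-params[1] * x)      # toy response

        def jac_column(args):
            params, j, h = args
            p = params.copy()
            p[j] += h
            return (forward_model(p) - forward_model(params)) / h

        def jacobian(params, h=1e-6, nproc=4):
            tasks = [(params, j, h) for j in range(len(params))]
            with Pool(nproc) as pool:
                cols = pool.map(jac_column, tasks)
            return np.column_stack(cols)

        if __name__ == "__main__":
            print(jacobian(np.array([2.0, 5.0])).shape)    # (50, 2)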

  2. Methods for computing comet core temperatures

    NASA Astrophysics Data System (ADS)

    McKay, C. P.; Squyres, S. W.; Reynolds, R. T.

    1986-06-01

    The temperature profile within the comet nucleus provides the key to an understanding of the history of the volatiles within a comet. Certain difficulties arise in connection with current cometary temperature models. It is shown that the constraint of zero net heat flow can be used to derive general analytical expressions which will allow for the determination of comet core temperature for a spherically symmetric comet, taking into account information about the surface temperature and the thermal conductivity. The obtained results are compared with the expression for comet core temperatures considered by Klinger (1981). Attention is given to analytical results, an example case, and numerical models. The formalization developed makes it possible to determine the core temperature on the basis of the numerical models of the surface temperature.

  3. Composite Cores

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Spang & Company's new configuration of converter transformer cores is a composite of gapped and ungapped cores assembled together in concentric relationship. The net effect of the composite design is to combine the protection from saturation offered by the gapped core with the lower magnetizing requirement of the ungapped core. The uncut core functions under normal operating conditions and the cut core takes over during abnormal operation to prevent power surges and their potentially destructive effect on transistors. Principal customers are aerospace and defense manufacturers. Cores also have applicability in commercial products where precise power regulation is required, as in the power supplies for large mainframe computers.

  4. Augmented switching linear dynamical system model for gas concentration estimation with MOX sensors in an open sampling system.

    PubMed

    Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne

    2014-07-11

    In this paper, we introduce a Bayesian time-series model approach for gas concentration estimation using metal oxide (MOX) sensors in an open sampling system (OSS). Our approach focuses on compensating for the slow response of MOX sensors while concurrently estimating the gas concentration in the OSS. The proposed augmented switching linear system model allows all the sources of uncertainty arising at each step of the problem to be included in a single coherent probabilistic formulation. In particular, the problem of detecting the current sensor dynamical regime on-line and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, in which system-modeling approaches had already been introduced but provided only an indirect relative measure proportional to the gas concentration, and the problem of modeling uncertainty was ignored. Our approach is validated experimentally, and its performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector.

  5. X-ray Photoelectron Spectroscopy study of CaV1-xMoxO3-δ

    NASA Astrophysics Data System (ADS)

    Belyakov, S. A.; Kuznetsov, M. V.; Shkerin, S. N.

    2018-06-01

    An investigation was carried out on perovskite-based derivatives of CaV1-xMoxO3-δ using X-ray photoelectron spectroscopy (XPS). According to the XRD patterns, the region of homogeneity covers the range from x = 0 to x = 0.6. Wide XPS peaks of Ca, V, Mo and O are observed, indicating that the elements are present in multiple states. A model is proposed to explain the large chemical shifts of the XPS peaks in terms of different charging effects on different parts of the sample surface.

  6. Characterizing Facesheet/Core Disbonding in Honeycomb Core Sandwich Structure

    NASA Technical Reports Server (NTRS)

    Rinker, Martin; Ratcliffe, James G.; Adams, Daniel O.; Krueger, Ronald

    2013-01-01

    Results are presented from an experimental investigation into facesheet/core disbonding in carbon-fiber-reinforced-plastic/Nomex honeycomb sandwich structures using a Single Cantilever Beam test. Specimens with three-, six- and twelve-ply facesheets were tested. Specimens with honeycomb cores of four different cell sizes were also tested, in addition to specimens of three different widths. Three different data-reduction methods were employed for computing apparent fracture toughness values from the test data, namely an area method, a compliance calibration technique, and a modified beam theory method. The compliance calibration and modified beam theory approaches yielded comparable apparent fracture toughness values, which were generally lower than those computed using the area method. Disbonding in the three-ply facesheet specimens took place at the facesheet/core interface and yielded the lowest apparent fracture toughness values. Disbonding in the six- and twelve-ply facesheet specimens took place within the core, near the facesheet/core interface. Specimen width was not found to have a significant effect on apparent fracture toughness. The amount of scatter in the apparent fracture toughness data was found to increase with honeycomb core cell size.

  7. VORCOR: A computer program for calculating characteristics of wings with edge vortex separation by using a vortex-filament and-core model

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Mehrotra, S. C.; Lan, C. E.

    1982-01-01

    A computer code based on an improved vortex-filament/vortex-core method for predicting the aerodynamic characteristics of slender wings with edge vortex separation is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of the lifting pressure distribution and the computing time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) an arbitrary camber shape may be defined, and an option for exactly defining the leading-edge flap geometry is also provided; (2) the side-edge vortex system is incorporated.

  8. A high converter concept for fuel management with blanket fuel assemblies in boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Frances, N.; Timm, W.; Rossbach, D.

    2012-07-01

    Studies of the natural uranium saving and waste-reduction potential of a multiple-plant BWR system were performed. The BWR High Converter system should enable multiple recycling of MOX fuel in current BWR plants by introducing blanket fuel assemblies and burning uranium and MOX fuel separately. The feasibility of uranium cores with blankets and of full-MOX cores with plutonium qualities as low as 40% was studied. The power concentration due to blanket insertion is manageable with modern fuel, and acceptable values for the thermal limits and reactivity coefficients were obtained. While challenges remain, full-MOX cores also complied with the main design criteria. The combination of uranium and plutonium burners in appropriate proportions could enable obtaining as much as 40% more energy from uranium ore. Moreover, a proper adjustment of the average blanket residence time and of the plutonium qualities could lead to a system with nearly no plutonium left for final disposal. The achievement of such goals with current light-water technology makes the BWR HC concept an attractive option for improving the fuel cycle until Gen-IV designs are mature. (authors)

  9. Development of an integrated, unattended assay system for LWR-MOX fuel pellet trays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J.E.; Hatcher, C.R.; Pollat, L.L.

    1994-08-01

    Four identical unattended plutonium assay systems have been developed for use at the new light-water-reactor mixed oxide (LWR-MOX) fuel fabrication facility at Hanau, Germany. The systems provide quantitative plutonium verification for all MOX pellet trays entering or leaving a large, intermediate store. Pellet-tray transport and storage systems are highly automated. Data from the "I-Point" (information point) assay systems will be shared by the Euratom and International Atomic Energy Agency (IAEA) Inspectorates. The I-Point system integrates, for the first time, passive neutron coincidence counting (NCC) with electro-mechanical sensing (EMS) in unattended mode. Also, provisions have been made for adding high-resolution gamma spectroscopy. The system accumulates data for every tray entering or leaving the store between inspector visits. During an inspection, data are analyzed and compared with operator declarations for the previous inspection period, nominally one month. Specification of the I-Point system resulted from a collaboration between the IAEA, Euratom, Siemens, and Los Alamos. Hardware was developed by Siemens and Los Alamos through a bilateral agreement between the German Federal Ministry of Research and Technology (BMFT) and the US DOE. Siemens also provided the EMS subsystem, including software. Through the US Support Program to the IAEA, Los Alamos developed the NCC software (NCC COLLECT) and also the software for merging and reviewing the EMS and NCC data (MERGE/REVIEW). This paper describes the overall I-Point system, but emphasizes the NCC subsystem, along with the NCC COLLECT and MERGE/REVIEW codes. We also summarize comprehensive testing results that define the quality of assay performance.

  10. The human homeobox genes MSX-1, MSX-2, and MOX-1 are differentially expressed in the dermis and epidermis in fetal and adult skin.

    PubMed

    Stelnicki, E J; Kömüves, L G; Holmes, D; Clavin, W; Harrison, M R; Adzick, N S; Largman, C

    1997-10-01

    In order to identify homeobox genes which may regulate skin development and possibly mediate scarless fetal wound healing, we have screened amplified human fetal skin cDNAs by polymerase chain reaction (PCR) using degenerate oligonucleotide primers designed against highly conserved regions within the homeobox. We identified three non-HOX homeobox genes, MSX-1, MSX-2, and MOX-1, which were differentially expressed in fetal and adult human skin. MSX-1 and MSX-2 were detected in the epidermis, hair follicles, and fibroblasts of the developing fetal skin by in situ hybridization. In contrast, MSX-1 and MSX-2 expression in adult skin was confined to epithelially derived structures. Immunohistochemical analysis of these two genes suggested that their respective homeoproteins may be differentially regulated. While Msx-1 was detected in the cell nucleus of both fetal and adult skin, Msx-2 was detected as a diffuse cytoplasmic signal in fetal epidermis and portions of the hair follicle and dermis, but was localized to the nucleus in adult epidermis. MOX-1 was expressed in a pattern similar to that of the MSX genes early in gestation, but was restricted exclusively to follicular cells in the innermost layer of the outer root sheath by 21 weeks of development. Furthermore, MOX-1 expression was completely absent in adult cutaneous tissue. These data imply that each of these homeobox genes plays a specific role in skin development.

  11. Augmented Switching Linear Dynamical System Model for Gas Concentration Estimation with MOX Sensors in an Open Sampling System

    PubMed Central

    Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne

    2014-01-01

    In this paper, we introduce a Bayesian time series model approach for gas concentration estimation using Metal Oxide (MOX) sensors in an Open Sampling System (OSS). Our approach focuses on compensating for the slow response of MOX sensors while concurrently solving the problem of estimating the gas concentration in an OSS. The proposed Augmented Switching Linear System model makes it possible to include all the sources of uncertainty arising at each step of the problem in a single coherent probabilistic formulation. In particular, the problem of detecting on-line the current sensor dynamical regime and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, in which system modeling approaches had already been introduced but provided only an indirect relative measure proportional to the gas concentration and ignored the problem of modeling uncertainty. Our approach is validated experimentally, and its performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector. PMID:25019637
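
    A minimal sketch of the regime-switching idea (illustrative only: the two first-order regimes, their time constants, the noise levels, and all names below are assumptions, not the paper's actual augmented model) runs one scalar Kalman update per candidate regime and keeps a posterior weight over regimes:

      import numpy as np

      TAUS = {"rise": 2.0, "decay": 20.0}   # assumed sensor time constants (s)

      def switching_filter(y, q=1e-4, r=1e-2, dt=1.0):
          """Toy switching filter: track the gas concentration c from raw MOX
          readings y while inferring which first-order regime (fast rise or
          slow decay) the sensor is currently in."""
          c, P = float(y[0]), 1.0               # concentration estimate, variance
          w = {k: 0.5 for k in TAUS}            # posterior over regimes
          out = []
          for t in range(1, len(y)):
              post = {}
              for k, tau in TAUS.items():
                  a = dt / tau                  # first-order lag coefficient
                  pred = y[t - 1] + a * (c - y[t - 1])
                  S = a * a * P + r             # innovation variance
                  post[k] = w[k] * np.exp(-0.5 * (y[t] - pred) ** 2 / S) / np.sqrt(S) + 1e-12
              z = sum(post.values())
              w = {k: v / z for k, v in post.items()}
              k = max(w, key=w.get)             # most probable regime (MAP)
              a = dt / TAUS[k]
              K = P * a / (a * a * P + r)       # scalar Kalman gain
              c += K * (y[t] - (y[t - 1] + a * (c - y[t - 1])))
              P = (1 - K * a) * P + q
              out.append((k, c))
          return out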

  12. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure on which they run. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data. PMID:22163811
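
    The following toy loop gestures at the monitor/analyze/optimize cycle described above; it is not the RTM itself. It samples per-core load with the third-party psutil package as a stand-in for hardware performance counters, and the threshold and the reaction are invented for illustration:

      import psutil  # third-party package; a stand-in for real counter access

      def watch_cores(samples=10, interval=1.0, imbalance=30.0):
          """Sample per-core utilization and flag load imbalance that a
          scheduler could react to by re-pinning threads or resizing pools."""
          for _ in range(samples):
              load = psutil.cpu_percent(interval=interval, percpu=True)
              spread = max(load) - min(load)
              if spread > imbalance:
                  print(f"imbalance {spread:.0f}%: reconfigure task placement")

      watch_cores()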

  13. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure on which they run. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.

  14. Precision of computer-assisted core decompression drilling of the knee.

    PubMed

    Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-06-01

    Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under the guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany). Ten sawbones were drilled by fluoroscopic control only. A statistically significant difference was found between the navigated group (mean distance 0.58 mm) and the control group (mean distance 0.98 mm) with respect to the distance to the desired mid-point of the lesion. Significant differences were also found in the number of drilling corrections and in the radiation time needed. The fluoroscopy-based VectorVision navigation system thus shows high feasibility and precision for computer-guided drilling, with a simultaneous reduction of radiation time, and could therefore be integrated into clinical routine.

  15. MEGA-CC: computing core of molecular evolutionary genetics analysis program for automated and iterative data analysis.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro

    2012-10-15

    There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.

  16. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
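
    A pure-Python simulation of the logical-ring step may make the patent language concrete (a sketch under assumptions: real implementations pass messages between processing cores over a network, whereas here the n "cores" are just list slots, and the per-node local allreduce mentioned above is omitted):

      def ring_allreduce(values):
          """Each of n cores contributes one value; the contributions rotate
          around the logical ring and every core adds what it receives, so
          after n - 1 steps all cores hold the global sum."""
          n = len(values)
          acc = list(values)        # running result held by each core
          token = list(values)      # contributions circulating around the ring
          for _ in range(n - 1):
              token = [token[(i - 1) % n] for i in range(n)]  # shift one hop
              acc = [acc[i] + token[i] for i in range(n)]     # local reduce
          return acc

      # every core ends with the same global sum:
      assert ring_allreduce([1, 2, 3, 4]) == [10, 10, 10, 10]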

  17. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    NASA Astrophysics Data System (ADS)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down-core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down-core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality TIFF files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variances. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
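
    As a sketch of what such a down-core profile amounts to (illustrative Python rather than the actual MATLAB™ code; the central-region geometry, sample count, and function name are assumptions), one can randomly sample voxels near the core axis in each slice of a CT volume and average their HU values:

      import numpy as np

      def downcore_profile(volume, n_samples=500, seed=0):
          """Mean Hounsfield units per depth slice of a CT volume shaped
          (depth, y, x), estimated from random voxels in the central region
          to stay clear of core-liner and edge artifacts."""
          rng = np.random.default_rng(seed)
          nz, ny, nx = volume.shape
          profile = np.empty(nz)
          for z in range(nz):
              ys = rng.integers(ny // 4, 3 * ny // 4, n_samples)
              xs = rng.integers(nx // 4, 3 * nx // 4, n_samples)
              profile[z] = volume[z, ys, xs].mean()
          return profile

      # e.g. volume = np.stack([d.pixel_array for d in dicom_slices]) after
      # reading the DICOM files with pydicom and sorting them by position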

  18. The use of MOX caramel fuel mixed with 241Am, 242mAm and 243Am as burnable absorber actinides for the MTR research reactors.

    PubMed

    Shaaban, Ismail; Albarhoum, Mohamad

    2017-07-01

    The MOX (UO2 & PuO2) caramel fuel mixed with 241Am, 242mAm and 243Am as burnable absorber actinides was proposed as a fuel for the MTR-22MW reactor. The MCNP4C code was used to simulate the MTR-22MW reactor and to estimate the criticality, the neutronic parameters, and the power peaking factors before and after replacing its original fuel (U3O8-Al) by the MOX caramel fuel mixed with 241Am, 242mAm and 243Am actinides. The obtained results for the criticality, the neutronic parameters, and the power peaking factors for the MOX caramel fuel mixed with 241Am, 242mAm and 243Am actinides were compared with the same parameters for the U3O8-Al original fuel, and a maximum difference of -6.18% was found. Additionally, by recycling 2.65% and 2.71% plutonium together with 241Am, 242mAm and 243Am actinides in the MTR-22MW reactor, the level of 235U enrichment is reduced from 4.48% to 3% and 2.8%, respectively. This also results in a reduction of the 235U loading by 32.75% and 37.22% for the 2.65% and 2.71% plutonium plus 241Am, 242mAm and 243Am actinide loadings, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Constructing Smart Protocells with Built-In DNA Computational Core to Eliminate Exogenous Challenge.

    PubMed

    Lyu, Yifan; Wu, Cuichen; Heinke, Charles; Han, Da; Cai, Ren; Teng, I-Ting; Liu, Yuan; Liu, Hui; Zhang, Xiaobing; Liu, Qiaoling; Tan, Weihong

    2018-06-06

    A DNA reaction network is like a biological algorithm that can respond to "molecular input signals", such as biological molecules, while the artificial cell is like a microrobot whose function is powered by the encapsulated DNA reaction network. In this work, we describe the feasibility of using a DNA reaction network as the computational core of a protocell, which will perform an artificial immune response in a concise way to eliminate a mimicked pathogenic challenge. Such a DNA reaction network (RN)-powered protocell can connect logical computation and biological recognition, owing to the natural programmability and biological properties of DNA. Thus, the biological input molecules can easily be involved in the molecular computation, and the computation process can be spatially isolated and protected by an artificial bilayer membrane. We believe the strategy proposed in the current paper, i.e., using a DNA RN to power artificial cells, will lay the groundwork for understanding the basic design principles of DNA algorithm-based nanodevices which will, in turn, inspire the construction of artificial cells, or protocells, that will find a place in future biomedical research.

  20. The computational core and fixed point organization in Boolean networks

    NASA Astrophysics Data System (ADS)

    Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.

    2006-03-01

    In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, returning thereby the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of and in silico experiments on the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.

  1. Comparison of UWCC MOX fuel measurements to MCNP-REN calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.; Baker, M.; Jie, R.

    1998-12-31

    The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of predictions of detector response obtained with Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying media such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.

  2. Computed Tomography Scanning and Geophysical Measurements of Core from the Coldstream 1MH Well

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crandall, Dustin M.; Brown, Sarah; Moore, Johnathan E.

    The computed tomography (CT) facilities and the Multi-Sensor Core Logger (MSCL) at the National Energy Technology Laboratory (NETL) Morgantown, West Virginia site were used to characterize core of the Marcellus Shale from a vertical well, the Coldstream 1MH Well in Clearfield County, PA. The core is comprised primarily of the Marcellus Shale from a depth of 7,002 to 7,176 ft. The primary impetus of this work is a collaboration between West Virginia University (WVU) and NETL to characterize core from multiple wells to better understand the structure and variation of the Marcellus and Utica shale formations. As part of this effort, bulk scans of core were obtained from the Coldstream 1MH well, provided by the Energy Corporation of America (now Greylock Energy). This report, and the associated scans, provide detailed datasets not typically available from unconventional shales for analysis. The resultant datasets are presented in this report, and can be accessed from NETL's Energy Data eXchange (EDX) online system using the following link: https://edx.netl.doe.gov/dataset/coldstream-1mh-well. All equipment and techniques used were non-destructive, enabling future examinations to be performed on these cores. None of the equipment used was suitable for direct visualization of the shale pore space, although fractures and discontinuities were detectable with the methods tested. Low resolution CT imagery with the NETL medical CT scanner was performed on the entire core. Qualitative analysis of the medical CT images, coupled with x-ray fluorescence (XRF), P-wave, and magnetic susceptibility measurements from the MSCL were useful in identifying zones of interest for more detailed analysis as well as fractured zones. En echelon fractures were observed at 7,100 ft and were CT scanned using NETL's industrial CT scanner at higher resolution. The ability to quickly identify key areas for more detailed study with higher resolution will save time and resources in future

  3. ARCADIA® - A New Generation of Coupled Neutronics/Core Thermal-Hydraulics Code System at AREVA NP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas

    2007-07-01

    Anticipating future needs of our customers and wishing to concentrate the synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulics (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA®, and concludes on customer benefits. ARCADIA® is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA® system features state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g., improved accuracy for heterogeneous cores (MOX/UOX, Gd, ...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)

  4. Computing prokaryotic gene ubiquity: rescuing the core from extinction.

    PubMed

    Charlebois, Robert L; Doolittle, W Ford

    2004-12-01

    The genomic core concept has found several uses in comparative and evolutionary genomics. Defined as the set of all genes common to (ubiquitous among) all genomes in a phylogenetically coherent group, core size decreases as the number and phylogenetic diversity of the relevant group increases. Here, we focus on methods for defining the size and composition of the core of all genes shared by sequenced genomes of prokaryotes (Bacteria and Archaea). There are few (almost certainly less than 50) genes shared by all of the 147 genomes compared, surely insufficient to conduct all essential functions. Sequencing and annotation errors are responsible for the apparent absence of some genes, while very limited but genuine disappearances (from just one or a few genomes) can account for several others. Core size will continue to decrease as more genome sequences appear, unless the requirement for ubiquity is relaxed. Such relaxation seems consistent with any reasonable biological purpose for seeking a core, but it renders the problem of definition more problematic. We propose an alternative approach (the phylogenetically balanced core), which preserves some of the biological utility of the core concept. Cores, however delimited, preferentially contain informational rather than operational genes; we present a new hypothesis for why this might be so.
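
    A minimal sketch of the two definitions discussed above (gene presence reduced to set membership; the gene names and numbers are invented for illustration) shows how relaxing the ubiquity requirement rescues a gene lost to a single missing annotation:

      from collections import Counter

      def core_genes(genomes, ubiquity=1.0):
          """Relaxed genomic core: genes present in at least a fraction
          `ubiquity` of the genomes. With ubiquity=1.0 this is the strict
          intersection; lowering it tolerates annotation errors and rare
          genuine disappearances."""
          counts = Counter(g for genome in genomes for g in set(genome))
          n = len(genomes)
          return {g for g, c in counts.items() if c >= ubiquity * n}

      # toy example: 'rpoB' survives relaxation despite one missing annotation
      genomes = [{"rpoB", "gyrA"}, {"rpoB", "gyrA"}, {"gyrA"}]
      assert core_genes(genomes, ubiquity=1.0) == {"gyrA"}
      assert core_genes(genomes, ubiquity=0.66) == {"gyrA", "rpoB"}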

  5. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two-dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost, and results equivalent to those computed by three-dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283
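
    The post-processing step implied above can be sketched in a few lines (a toy stand-in for the paper's procedure: the flux values, turn count, and function name are hypothetical). Given the per-turn flux phi(i) returned by a 2D field solve at each excitation current i, the apparent inductance of an N-turn winding is L(i) = N*phi(i)/i, which falls off as the ferrite saturates:

      import numpy as np

      def inductance_from_flux(currents, fluxes, turns):
          """Apparent inductance L(i) = N * phi(i) / i from FEA flux sweeps."""
          currents = np.asarray(currents, dtype=float)
          fluxes = np.asarray(fluxes, dtype=float)
          return turns * fluxes / currents

      # hypothetical sweep: flux stops growing past ~1 A as the core saturates
      i = [0.1, 0.5, 1.0, 2.0, 5.0]           # excitation current, A
      phi = [2.0e-5, 9.5e-5, 1.6e-4, 2.0e-4, 2.2e-4]   # per-turn flux, Wb
      print(inductance_from_flux(i, phi, turns=60))     # henries, decreasing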

  6. Mars surface chemistry investigated with the MOx probe: A 1-kg optical microsensor-based chemical analysis instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricco, A.J.; Butler, M.A.; Grunthaner, F.J.

    The authors have designed and built the prototype of an instrument that will use fiber optic micromirror-based chemical sensors to investigate the surprising reactivity of martian soil reported by several Viking Lander experiments in the mid-1970s. The MOx (Mars Oxidant Experiment) Instrument, which will probe the reactivity of the near-surface martian atmosphere as well as soil, utilizes an array of chemically sensitive thin films including metals, organometallics, and organic dyes to produce a pattern of reflectivity changes characteristic of the species interacting with these sensing layers. The 850-g system includes LED light sources, optical fiber light guides, silicon micromachined fixtures, a line-array CCD detector, control-and-measurement electronics, microprocessor, memory, interface, batteries, and housing. This instrument monitors real-time reflectivities from an array of ~200 separate micromirrors. The unmanned Russian Mars 96 mission is slated to carry the MOx Instrument along with experiments from several other nations. The principles of the chemically sensitive micromirror upon which this instrument is based will be described, and preliminary data for reactions of micromirrors with oxidant materials believed to be similar to those on Mars will be presented. The general design of the instrument, including Si micromachined components, as well as the range of coatings and the rationale for their selection, will be discussed as well.

  7. CoreTSAR: Core Task-Size Adapting Runtime

    DOE PAGES

    Scogland, Thomas R. W.; Feng, Wu-chun; Rountree, Barry; ...

    2014-10-27

    Heterogeneity continues to increase at all levels of computing, with the rise of accelerators such as GPUs, FPGAs, and other co-processors into everything from desktops to supercomputers. As a consequence, efficiently managing such disparate resources has become increasingly complex. CoreTSAR seeks to reduce this complexity by adaptively worksharing parallel-loop regions across compute resources without requiring any transformation of the code within the loop. Our results show performance improvements of up to three-fold over a current state-of-the-art heterogeneous task scheduler, as well as linear performance scaling from a single GPU to four GPUs for many codes. In addition, CoreTSAR demonstrates a robust ability to adapt to both a variety of workloads and underlying system configurations.

  8. Computer Simulation To Assess The Feasibility Of Coring Magma

    NASA Astrophysics Data System (ADS)

    Su, J.; Eichelberger, J. C.

    2017-12-01

    Lava lakes on Kilauea Volcano, Hawaii have been successfully cored many times, often with nearly complete recovery and at temperatures exceeding 1100°C. Water exiting nozzles on the diamond core bit face quenches melt to glass just ahead of the advancing bit. The bit readily cuts a clean annulus and the core, fully quenched lava, passes smoothly into the core barrel. The core remains intact after recovery, even when there are comparable amounts of glass and crystals with different coefficients of thermal expansion. The unique resulting data reveal the rate and sequence of crystal growth in cooling basaltic lava and the continuous liquid line of descent as a function of temperature from basalt to rhyolite. Now that magma bodies, rather than lava pooled at the surface, have been penetrated by geothermal drilling, the question arises as to whether similar coring could be conducted at depth, providing fundamentally new insights into the behavior of magma. This situation is considerably more complex because the coring would be conducted at depths exceeding 2 km and drilling fluid pressures of 20 MPa or more. Criteria that must be satisfied include: 1) melt must be quenched ahead of the bit, and the core itself must be quenched before it enters the barrel; 2) circulating drilling fluid must keep the temperature of the coring assembly within operational limits; 3) the drilling fluid column must nowhere exceed the local boiling point. A fluid flow simulation was conducted to estimate the process parameters necessary to maintain workable temperatures during the coring operation. SolidWorks Flow Simulation was used to estimate the effect of process parameters on the temperature distribution of the magma immediately surrounding the borehole and of the drilling fluid within the bottom-hole assembly (BHA). A solid model of the BHA was created in SolidWorks to capture the flow behavior around the BHA components. Process parameters used in the model include the fluid properties and

  9. ElemeNT: a computational tool for detecting core promoter elements.

    PubMed

    Sloutskin, Anna; Danino, Yehuda M; Orenstein, Yaron; Zehavi, Yonathan; Doniger, Tirza; Shamir, Ron; Juven-Gershon, Tamar

    2015-01-01

    Core promoter elements play a pivotal role in the transcriptional output, yet they are often detected manually within sequences of interest. Here, we present 2 contributions to the detection and curation of core promoter elements within given sequences. First, the Elements Navigation Tool (ElemeNT) is a user-friendly web-based, interactive tool for prediction and display of putative core promoter elements and their biologically-relevant combinations. Second, the CORE database summarizes ElemeNT-predicted core promoter elements near CAGE and RNA-seq-defined Drosophila melanogaster transcription start sites (TSSs). ElemeNT's predictions are based on biologically-functional core promoter elements, and can be used to infer core promoter compositions. ElemeNT does not assume prior knowledge of the actual TSS position, and can therefore assist in annotation of any given sequence. These resources, freely accessible at http://lifefaculty.biu.ac.il/gershon-tamar/index.php/resources, facilitate the identification of core promoter elements as active contributors to gene expression.
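
    For flavor, a toy scan in the same spirit (a sketch only: ElemeNT itself uses biologically validated, position-specific models rather than the bare IUPAC regexes below, and the two consensus strings are common textbook motifs, not ElemeNT's element set):

      import re

      IUPAC = {"W": "[AT]", "R": "[AG]", "Y": "[CT]", "K": "[GT]", "N": "[ACGT]"}
      MOTIFS = {"TATA-box": "TATAWAWR", "Inr (Drosophila)": "TCAKTY"}

      def scan(seq, motifs=MOTIFS):
          """Report (element, position, match) for every consensus hit."""
          hits = []
          for name, consensus in motifs.items():
              pattern = "".join(IUPAC.get(ch, ch) for ch in consensus)
              hits += [(name, m.start(), m.group())
                       for m in re.finditer(pattern, seq.upper())]
          return hits

      print(scan("ggctataaaagggg"))   # finds a TATA-box at position 3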

  10. Quantifying multiscale porosity and fracture aperture distribution in granite cores using computed tomography

    NASA Astrophysics Data System (ADS)

    Wenning, Quinn; Madonna, Claudio; Joss, Lisa; Pini, Ronny

    2017-04-01

    Knowledge of porosity and fracture (aperture) distribution is key to a sound description of fluid transport in low-permeability rocks. In the context of geothermal energy development, the ability to quantify the transport properties of fractures is needed to in turn quantify the rate of heat transfer and, accordingly, to optimize the engineering design of the operation. In this context, core-flooding experiments coupled with non-invasive imaging techniques (e.g., X-ray Computed Tomography - X-ray CT) represent a powerful tool for making direct observations of these properties under representative geologic conditions. This study focuses on quantifying porosity and fracture aperture distribution in a fractured Westerly granite core by using two recently developed experimental protocols. The latter include the use of a highly attenuating gas [Vega et al., 2014] and the application of the so-called missing CT attenuation method [Huo et al., 2016] to produce multidimensional maps of the pore space and of the fractures. Prior to the imaging experiments, the Westerly granite core (diameter: 5 cm, length: 10 cm) was thermally shocked to induce micro-fractured pore space; this was followed by the application of the so-called Brazilian method to induce a macroscopic fracture along the length of the core. The sample was then mounted in a high-pressure aluminum core-holder, exposed to a confining pressure and placed inside a medical CT scanner for imaging. An initial compressive pressure cycle was performed to remove weak asperities and reduce the hysteretic behavior of the fracture with respect to effective pressure. The CT scans were acquired at room temperature and 0.5, 5, 7, and 10 MPa effective pressure under loading and unloading conditions. During scanning the pore fluid pressure was undrained and constant, and the confining pressure was regulated at the desired pressure with a high-precision pump. Highly transmissible krypton and helium gases were used as

  11. Minor Actinides-Loaded FBR Core Concept Suitable for the Introductory Period in Japan

    NASA Astrophysics Data System (ADS)

    Fujimura, Koji; Sasahira, Akira; Yamashita, Junichi; Fukasawa, Tetsuo; Hoshino, Kuniyoshi

    According to Japan's Framework for Nuclear Energy Policy(1), a basic scenario for fast breeder reactors (FBRs) is that they will be introduced on a commercial basis starting around 2050, replacing light water reactors (LWRs). During the FBR introduction period, the Pu from LWR spent fuel is used for FBR startup. However, an FBR core loaded with this Pu has a larger burnup reactivity than a core loaded with Pu from an FBR multi-recycling core, due to its larger isotopic content of Pu-241. The increased burnup reactivity may reduce the cycle length of an FBR. We investigated an FBR transitional core concept to confront the issues of the FBR introductory period in Japan. Core specifications are based on the compact-type sodium-cooled mixed oxide (MOX)-fueled core designed in the Japanese FBR cycle feasibility studies, because a lower Pu inventory should be better for the FBR introductory period in view of its flexibility with respect to the amount of LWR spent fuel reprocessing required to start up FBRs. The reference specifications were selected as follows: an output of 1500 MWe and an average discharge fuel burnup of about 150 GWd/t. Minor actinides (MAs) recovered from the LWR spent fuels which provide Pu to start up FBRs are loaded into the initial loading fuels and the exchanged fuels during the few cycles until equilibrium. Four MA contents were considered for the initial loading fuel: 0%, 3%, 4%, and 5%. The average MA content of the initial loading fuel is assumed to be 3%, and that of the exchange fuel is set to 5%. This 5% maximum MA content is based on the irradiation results of the experimental fast reactor Joyo. We evaluated the core performance, including burnup characteristics and reactivity coefficients, and confirmed that the transitional core, from initial loading until the equilibrium cycle, with loaded Pu from LWR spent fuel performs similarly to an FBR multi-recycling core.

  12. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth (maxillary lateral incisor, canine, premolar), each requiring dowel restorations, was made. It was then mounted on an average-value articulator with the mandibular stone model to simulate natural occlusion. Impressions of each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by slicing 3D dowel data from the impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually regulated to coordinate with the adjacent teeth to fulfill the crown restorative requirements. Based on the virtual models, one-piece custom dowel-and-cores for the three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of this new method. The one-piece glass fiber dowel-and-cores made for the experimental teeth fulfilled the clinical requirements for dowel restorations. Moreover, two patients were treated to validate the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  13. A Computational Fluid Dynamic and Heat Transfer Model for Gaseous Core and Gas Cooled Space Power and Propulsion Reactors

    NASA Technical Reports Server (NTRS)

    Anghaie, S.; Chen, G.

    1996-01-01

    A computational model based on the axisymmetric, thin-layer Navier-Stokes equations is developed to predict the convective, radiative and conductive heat transfer in high temperature space nuclear reactors. An implicit-explicit, finite volume, MacCormack method in conjunction with the Gauss-Seidel line iteration procedure is utilized to solve the thermal and fluid governing equations. Simulation of coolant and propellant flows in these reactors involves the subsonic and supersonic flows of hydrogen, helium and uranium tetrafluoride under variable boundary conditions. An enthalpy-rebalancing scheme is developed and implemented to enhance and accelerate the rate of convergence when a wall heat flux boundary condition is used. The model also incorporates the Baldwin and Lomax two-layer algebraic turbulence scheme for the calculation of the turbulent kinetic energy and eddy diffusivity of energy. The Rosseland diffusion approximation is used to simulate the radiative energy transfer in the optically thick environment of gas core reactors. The computational model is benchmarked with experimental data on the flow separation angle and drag force acting on a suspended sphere in a cylindrical tube. The heat transfer is validated by comparing the computed results with the predictions of standard heat transfer correlations. The model is used to simulate flow and heat transfer under a variety of design conditions. The effect of internal heat generation on the heat transfer in gas core reactors is examined for a variety of power densities: 100 W/cc, 500 W/cc and 1000 W/cc. The maximum temperatures corresponding to these heat generation rates are 2150 K, 2750 K and 3550 K, respectively. This analysis shows that the maximum temperature is strongly dependent on the value of the heat generation rate. It also indicates that a heat generation rate higher than 1000 W/cc is necessary to maintain the gas temperature at about 3500 K, which is the typical design temperature required to achieve high

  14. Opportunistic Computing with Lobster: Lessons Learned from Scaling up to 25k Non-Dedicated Cores

    NASA Astrophysics Data System (ADS)

    Wolf, Matthias; Woodard, Anna; Li, Wenzhao; Hurtado Anampa, Kenyi; Yannakopoulos, Anna; Tovar, Benjamin; Donnelly, Patrick; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2017-10-01

    We previously described Lobster, a workflow management tool for exploiting volatile opportunistic computing resources for computation in HEP. We will discuss the various challenges that have been encountered while scaling up the simultaneous CPU core utilization and the software improvements required to overcome these challenges. Categories: Workflows can now be divided into categories based on their required system resources. This allows the batch queueing system to optimize assignment of tasks to nodes with the appropriate capabilities. Within each category, limits can be specified for the number of running jobs to regulate the utilization of communication bandwidth. System resource specifications for a task category can now be modified while a project is running, avoiding the need to restart the project if resource requirements differ from the initial estimates. Lobster now implements time limits on each task category to voluntarily terminate tasks. This allows partially completed work to be recovered. Workflow dependency specification: One workflow often requires data from other workflows as input. Rather than waiting for earlier workflows to be completed before beginning later ones, Lobster now allows dependent tasks to begin as soon as sufficient input data has accumulated. Resource monitoring: Lobster utilizes a new capability in Work Queue to monitor the system resources each task requires in order to identify bottlenecks and optimally assign tasks. The capability of the Lobster opportunistic workflow management system for HEP computation has been significantly increased. We have demonstrated efficient utilization of 25,000 non-dedicated cores and achieved a data input rate of 30 Gb/s and an output rate of 500 GB/h. This has required new capabilities in task categorization, workflow dependency specification, and resource monitoring.

  15. A case study of coupling upflow anaerobic sludge blanket (UASB) and ANITA™ Mox process to treat high-strength landfill leachate.

    PubMed

    Lu, Ting; George, Biju; Zhao, Hong; Liu, Wenjun

    2016-01-01

    A pilot study was conducted to study the treatability of high-strength landfill leachate by a combined process including an upflow anaerobic sludge blanket (UASB) reactor, a carbon removal (C-stage) moving bed biofilm reactor (MBBR) and the ANITA™ Mox process. The major innovation of this pilot study is the patent-pending process invented by Veolia that integrates the above three unit processes with an effluent recycle stream, which not only maintains a low hydraulic retention time to enhance treatment performance but also reduces the inhibiting effect of chemicals present in the high-strength leachate. This pilot study has demonstrated that the combined process was capable of treating high-strength leachate with efficient chemical oxygen demand (COD) and nitrogen removal. The COD removal efficiency of the UASB was 93% (from 45,000 to 3,000 mg/L) at a loading rate of 10 kg/(m³·d). The C-stage MBBR removed an additional 500 to 1,000 mg/L of COD at a surface removal rate (SRR) of 5 g/(m²·d) and precipitated 400 mg/L of calcium. The total inorganic nitrogen removal efficiency of the ANITA Mox reactor was about 70% at an SRR of 1.0 g/(m²·d).

  16. Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer (7th Annual SFAF Meeting, 2012)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Alex

    2012-06-01

    Alex Copeland on "Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  17. Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer (7th Annual SFAF Meeting, 2012)

    ScienceCinema

    Copeland, Alex [DOE JGI

    2017-12-09

    Alex Copeland on "Assembly of large metagenome data sets using a Convey HC-1 hybrid core computer" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  18. CHF considerations for highly moderated 100% MOX fuels PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saphier, D.; Raymond, P.

    1995-09-01

    A feasibility study on using 100% MOX fuel in a PWR with an increased moderating ratio (RMA) was initiated. In the proposed design all the parameters were chosen identical to the French 1450 MW PWR, except the fuel pin diameter, which was reduced to achieve higher moderating ratios V_M/V_F, where V_M and V_F are the moderator and fuel volumes, respectively. Moderating ratios from 2 to 4 were considered. In the present study the thermal-hydraulic feasibility of using fuel assemblies with smaller-diameter fuel pins was investigated. The major design constraint in this study was the critical heat flux (CHF). In order to maintain fuel pin integrity under nominal operating and transient conditions, the minimum DNBR (departure from nucleate boiling ratio, given by CHF/q''_local, where q''_local is the local heat flux) has to remain above a given value. The limitations of the existing CHF correlations for the present study are outlined. Two designs, based on the conventional 17x17 fuel assembly and on the advanced 19x19 assembly, meeting the MDNBR criterion and satisfying the control margin requirements, are proposed.
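
    A one-line illustration of the constraint (all values are invented; a real MDNBR evaluation uses a CHF correlation evaluated along the hot channel): the DNBR profile is CHF/q''_local, and the design check is on its minimum:

      def min_dnbr(chf, q_local):
          """Minimum departure-from-nucleate-boiling ratio along a channel."""
          return min(c / q for c, q in zip(chf, q_local))

      # hypothetical axial profiles in kW/m^2 for a hot channel
      chf = [3200.0, 2900.0, 2600.0, 2500.0]
      q   = [1100.0, 1500.0, 1800.0, 1300.0]
      print(min_dnbr(chf, q))   # ~1.44 here; must stay above the design limit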

  19. Oak Ridge National Laboratory Core Competencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberto, J.B.; Anderson, T.D.; Berven, B.A.

    1994-12-01

    A core competency is a distinguishing integration of capabilities which enables an organization to deliver mission results. Core competencies represent the collective learning of an organization and provide the capacity to perform present and future missions. Core competencies are distinguishing characteristics which offer comparative advantage and are difficult to reproduce. They exhibit customer focus, mission relevance, and vertical integration from research through applications. They are demonstrable by metrics such as level of investment, uniqueness of facilities and expertise, and national impact. The Oak Ridge National Laboratory (ORNL) has identified four core competencies which satisfy the above criteria. Each core competency represents an annual investment of at least $100M and is characterized by an integration of Laboratory technical foundations in physical, chemical, and materials sciences; biological, environmental, and social sciences; engineering sciences; and computational sciences and informatics. The ability to integrate broad technical foundations to develop and sustain core competencies in support of national R&D goals is a distinguishing strength of the national laboratories. The ORNL core competencies are: Energy Production and End-Use Technologies; Biological and Environmental Sciences and Technology; Advanced Materials Synthesis, Processing, and Characterization; and Neutron-Based Science and Technology. The distinguishing characteristics of each ORNL core competency are described. In addition, written material is provided for two emerging competencies: Manufacturing Technologies and Computational Science and Advanced Computing. Distinguishing institutional competencies in the Development and Operation of National Research Facilities, R&D Integration and Partnerships, Technology Transfer, and Science Education are also described. Finally, financial data for the ORNL core competencies are summarized in the appendices.

  20. IMPACT OF FISSION PRODUCTS IMPURITY ON THE PLUTONIUM CONTENT IN PWR MOX FUELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilles Youinou; Andrea Alfonsi

    2012-03-01

    This report presents the results of a neutronics analysis done in response to the charter IFCA-SAT-2 entitled 'Fuel impurity physics calculations'. This charter specifies that the separation of the fission products (FP) during the reprocessing of UOX spent nuclear fuel assemblies (UOX SNF) is not perfect and that, consequently, a certain amount of FP goes into the Pu stream used to fabricate PWR MOX fuel assemblies. Only non-gaseous FP have been considered (see the list of 176 isotopes considered in the calculations in Appendix 1). This mixture of Pu and FP is called PuFP. Note that, in this preliminary analysis, the FP losses are considered element-independent, i.e., for example, 1% of FP losses mean that 1% of all non-gaseous FP leak into the Pu stream.

  1. Determining Reactor Fuel Type from Continuous Antineutrino Monitoring

    NASA Astrophysics Data System (ADS)

    Jaffke, Patrick; Huber, Patrick

    2017-09-01

    We investigate the ability of an antineutrino detector to determine the fuel type of a reactor. A hypothetical 5-ton antineutrino detector is placed 25 m from the core and measures the spectral shape and rate of antineutrinos emitted by fission fragments in the core over a number of 90-d periods. Our results indicate that four major fuel types can be differentiated from the variation of fission fractions over the irradiation time, with a true positive probability of detection of approximately 95%. In addition, we demonstrate that antineutrinos can identify the burnup at which weapons-grade mixed-oxide (MOX) fuel would, on average, be reduced to reactor-grade MOX, providing assurance that plutonium-disposition goals are met. We also investigate removal scenarios where plutonium is purposefully diverted from a mixture of MOX and low-enriched uranium fuel. Finally, we discuss how our analysis is impacted by a spectral distortion around 6 MeV observed in the antineutrino spectrum measured from commercial power reactors.
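
    The hypothesis test can be caricatured in a few lines (all numbers are invented for illustration, and the real analysis works with fission fractions and full spectral information, not the bare rates used here): compare the measured rate evolution against each candidate fuel type's prediction and keep the candidates with an acceptable chi-squared:

      import numpy as np

      def chi2_fuel_test(observed, predicted, sigma):
          """Chi-squared of a measured antineutrino-rate time series (one
          point per 90-d period) against a candidate fuel-type prediction."""
          observed, predicted, sigma = map(np.asarray, (observed, predicted, sigma))
          return float(np.sum(((observed - predicted) / sigma) ** 2))

      rates_leu = [1000, 985, 970, 957]     # LEU: rate drops with burnup
      rates_mox = [1000, 1004, 1008, 1013]  # full MOX: rate rises
      data      = [998, 983, 973, 955]      # measured, ~0.5% uncertainties
      err       = [5, 5, 5, 5]
      print(chi2_fuel_test(data, rates_leu, err))  # small -> LEU-like
      print(chi2_fuel_test(data, rates_mox, err))  # large -> disfavored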

  2. Neural simulations on multi-core architectures.

    PubMed

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever-increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology is emerging with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high-performance and standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique, and presents results of such an implementation, with a strong focus on multi-core architectures and automation, i.e., user-transparent load balancing.

  3. Neural Simulations on Multi-Core Architectures

    PubMed Central

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever-increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology is emerging with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high-performance and standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique, and presents results of such an implementation, with a strong focus on multi-core architectures and automation, i.e., user-transparent load balancing. PMID:19636393

  4. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase further. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting, and requiring, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
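
    A minimal shared-nothing analogue of the idea (the paper's implementation is in Java and leans on transactional-memory design principles; this Python sketch with invented names only parallelizes the expensive assignment step across processes):

      import numpy as np
      from multiprocessing import Pool

      def nearest_center(args):
          """Label each row of a data chunk with its closest center."""
          chunk, centers = args
          d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
          return d.argmin(axis=1)

      def parallel_kmeans(X, k=3, iters=10, procs=4, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          with Pool(procs) as pool:
              for _ in range(iters):
                  chunks = np.array_split(X, procs)
                  labels = np.concatenate(
                      pool.map(nearest_center, [(c, centers) for c in chunks]))
                  # recompute centers; keep the old one if a cluster empties
                  centers = np.array([X[labels == j].mean(axis=0)
                                      if np.any(labels == j) else centers[j]
                                      for j in range(k)])
          return labels, centers

    On platforms that spawn worker processes rather than fork, the call belongs under an if __name__ == "__main__": guard.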

  5. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    NASA Astrophysics Data System (ADS)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  6. Remarkable support effect on the reactivity of Pt/In2O3/MOx catalysts for methanol steam reforming

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Men, Yong; Wang, Jinguo; He, Rong; Wang, Yuanqiang

    2017-10-01

    Effects of supports on Pt/In2O3/MOx catalysts with an extremely low Pt loading (1 wt%) and In2O3 loading (3 wt%) are investigated for hydrogen production by methanol steam reforming (MSR) in the temperature range of 250-400 °C. Under practical conditions without pre-reduction, the 1Pt/3In2O3/CeO2 catalyst shows highly efficient catalytic performance, achieving almost complete methanol conversion (98.7%) and a very low CO selectivity of 2.6% at 325 °C. The supported Pt/In2O3 catalysts are characterized by means of Brunauer-Emmett-Teller (BET) surface area, X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), temperature-programmed reduction with hydrogen (H2-TPR), CO pulse chemisorption, and temperature-programmed desorption of methanol and water (CH3OH-TPD and H2O-TPD). These measurements demonstrate that the nature of the support in Pt/In2O3/MOx catalysts plays a crucial role in the Pt dispersion, through the strong interaction among Pt, In2O3 and the supporting material, and in the surface redox properties at low temperature; it thus affects the catalysts' capability to activate the reactants and determines their catalytic activity for methanol steam reforming. The superior 1Pt/3In2O3/CeO2 catalyst, exhibiting remarkable reactivity and stability over 32 h on stream, demonstrates its potential for efficient hydrogen production by methanol steam reforming in mobile and decentralized H2-fueled PEMFC systems.

  7. Reactors as a Source of Antineutrinos: Effects of Fuel Loading and Burnup for Mixed-Oxide Fuels

    NASA Astrophysics Data System (ADS)

    Bernstein, Adam; Bowden, Nathaniel S.; Erickson, Anna S.

    2018-01-01

    In a conventional light-water reactor loaded with a range of uranium and plutonium-based fuel mixtures, the variation in antineutrino production over the cycle reflects both the initial core fissile inventory and its evolution. Under an assumption of constant thermal power, we calculate the rate at which antineutrinos are emitted from variously fueled cores, and the evolution of that rate as measured by a representative ton-scale antineutrino detector. We find that the antineutrino flux decreases with burnup for low-enriched uranium cores, increases for full mixed-oxide (MOX) cores, and does not appreciably change for cores with a MOX fraction of approximately 75%. Accounting for uncertainties in the fission yields in the emitted antineutrino spectra and the detector response function, we show that differences in corewide MOX fraction at least as small as 8% can be distinguished using a hypothesis test. The test compares the evolution of the antineutrino rate relative to an initial value over part or all of the cycle. The use of relative rates reduces the sensitivity of the test to an independent thermal power measurement, making the result more robust against possible countermeasures. This rate-only approach also offers the potential advantage of reducing the cost and complexity of the antineutrino detectors used to verify the diversion, compared to methods that depend on the use of the antineutrino spectrum. A possible application is the verification of the disposition of surplus plutonium in nuclear reactors.

  8. From the molecular structure to spectroscopic and material properties: computational investigation of a bent-core nematic liquid crystal.

    PubMed

    Greco, Cristina; Marini, Alberto; Frezza, Elisa; Ferrarini, Alberta

    2014-05-19

    We present a computational investigation of the nematic phase of the bent-core liquid crystal A131. We use an integrated approach that bridges density functional theory calculations of molecular geometry and torsional potentials to elastic properties through the molecular conformational and orientational distribution function. This unique capability to simultaneously access different length scales enables us to consistently describe molecular and material properties. We can reassign (13)C NMR chemical shifts and analyze the dependence of phase properties on molecular shape. Focusing on the elastic constants, we can draw some general conclusions on the unconventional behavior of bent-core nematics and highlight the crucial role of a properly bent shape. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Cores Of Recurrent Events (CORE) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    CORE is a statistically supported computational method for finding recurrently targeted regions in massive collections of genomic intervals, such as those arising from DNA copy number analysis of single tumor cells or bulk tumor tissues.
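
    The record gives no algorithmic detail; as background, the counting problem CORE builds on (how often each genomic position is covered by intervals from a large collection) can be sketched with a generic sweep line. This is our illustration of the underlying bookkeeping, not CORE's statistical method:

    ```python
    from collections import defaultdict

    def recurrence_profile(intervals):
        """Sweep-line count of how many intervals cover each position.

        intervals: list of (start, end) half-open intervals on one
        chromosome, e.g. copy-number calls from many tumor samples.
        Returns (start, end, depth) segments with nonzero coverage.
        """
        events = defaultdict(int)
        for s, e in intervals:
            events[s] += 1      # an interval opens here
            events[e] -= 1      # and closes here
        profile, depth, prev = [], 0, None
        for pos in sorted(events):
            if prev is not None and depth > 0:
                profile.append((prev, pos, depth))
            depth += events[pos]
            prev = pos
        return profile

    print(recurrence_profile([(10, 50), (30, 70), (30, 40)]))
    # [(10, 30, 1), (30, 40, 3), (40, 50, 2), (50, 70, 1)]
    ```

    Regions of unusually high depth are the raw material from which a method like CORE would then extract statistically supported recurrent events.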

  10. Theoretical surface core-level shifts for Be(0001)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feibelman, P.J.

    1994-05-15

    Core-ionization potentials (CIP's) are computed for Be(0001). Three core features are observed in corresponding photoelectron spectra, with CIP's shifted relative to the bulk core level by -0.825, -0.570, and -0.265 eV. The computed CIP shifts for the outer and subsurface layers, -0.60 and -0.29 eV, respectively, agree with the latter two of these. It is surmised that the -0.825 eV shift is associated with a surface defect. The negative signs of the Be(0001) surface core-level shifts do not fit into the thermochemical picture widely used to explain CIP shifts. The reason is that a core-ionized Be atom is too small to bond effectively to the remainder of the unrelaxed Be lattice.

  11. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012

  12. Simulating an Exploding Fission-Bomb Core

    NASA Astrophysics Data System (ADS)

    Reed, Cameron

    2016-03-01

    A time-dependent desktop-computer simulation of the core of an exploding fission bomb (nuclear weapon) has been developed. The simulation models a core comprising a mixture of two isotopes: a fissile one (such as U-235) and an inert one (such as U-238) that captures neutrons and removes them from circulation. The user sets the enrichment percentage and scattering and fission cross-sections of the fissile isotope, the capture cross-section of the inert isotope, the number of neutrons liberated per fission, the number of ``initiator'' neutrons, the radius of the core, and the neutron-reflection efficiency of a surrounding tamper. The simulation, which is predicated on ordinary kinematics, follows the three-dimensional motions and fates of neutrons as they travel through the core. Limitations of time and computer memory render it impossible to model a real-life core, but results of numerous runs clearly demonstrate the existence of a critical mass for a given set of parameters and the dramatic effects of enrichment and tamper efficiency on the growth (or decay) of the neutron population. The logic of the simulation will be described and results of typical runs will be presented and discussed.
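
    The simulation itself is not published as code; the following is an independently written, minimal Monte Carlo sketch of the same idea: isotropic neutron random walks in a homogeneous sphere with per-collision probabilities of fission, capture or scattering. All parameter values are placeholders, and the tamper is not modeled.

    ```python
    import numpy as np

    def neutron_generation(radius_cm, mfp_cm, p_fission, p_capture, nu, n0, rng):
        """Follow one generation of n0 neutrons born at the core center.

        Each neutron repeatedly flies an exponentially distributed path; it
        escapes if it leaves the sphere, otherwise it fissions (spawning nu
        next-generation neutrons), is captured, or scatters and flies again.
        """
        pos = np.zeros((n0, 3))
        alive = np.ones(n0, dtype=bool)
        born = 0
        while alive.any():
            idx = np.where(alive)[0]
            u = rng.normal(size=(len(idx), 3))            # isotropic directions
            u /= np.linalg.norm(u, axis=1, keepdims=True)
            pos[idx] += u * rng.exponential(mfp_cm, len(idx))[:, None]
            escaped = np.linalg.norm(pos[idx], axis=1) > radius_cm
            alive[idx[escaped]] = False
            inside = idx[~escaped]
            r = rng.random(len(inside))                   # choose the interaction
            fissioned = inside[r < p_fission]
            captured = inside[(r >= p_fission) & (r < p_fission + p_capture)]
            born += nu * len(fissioned)
            alive[fissioned] = False
            alive[captured] = False                       # the rest scatter on
        return born

    rng = np.random.default_rng(0)
    k = np.mean([neutron_generation(8.0, 4.0, 0.25, 0.15, 2.5, 1000, rng) / 1000
                 for _ in range(20)])
    print("estimated multiplication factor k =", round(k, 2))
    ```

    Averaging born/n0 over many runs estimates a generation multiplication factor; values above 1 for a given radius and composition mark a supercritical configuration, the critical-mass behavior the record describes.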

  13. DIODE STEERED MAGNETIC-CORE MEMORY

    DOEpatents

    Melmed, A.S.; Shevlin, R.T.; Laupheimer, R.

    1962-09-18

    A word-arranged magnetic-core memory is designed for use in a digital computer utilizing the reverse or back current property of the semi-conductor diodes to restore the information in the memory after read-out. In order to obtain a read-out signal from a magnetic core storage unit, it is necessary to change the states of some of the magnetic cores. In order to retain the information in the memory after read-out it is then necessary to provide a means to return the switched cores to their states before read-out. A rewrite driver passes a pulse back through each row of cores in which some switching has taken place. This pulse combines with the reverse current pulses of diodes for each column in which a core is switched during read-out to cause the particular cores to be switched back into their states prior to read-out. (AEC)

  14. CMS Readiness for Multi-Core Workload Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  15. CMS readiness for multi-core workload scheduling

    NASA Astrophysics Data System (ADS)

    Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.

    2017-10-01

    In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.

  16. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    NASA Astrophysics Data System (ADS)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density-sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher-density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher- and lower-density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor, consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower-density sediment matrix disturbed by burrow tubes and the inclusion of a high-density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, reflecting sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU, and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
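
    For a core scan held as a NumPy array of Hounsfield Units, the per-slice variance statistic described above is a short computation; the air-exclusion cutoff below is our assumption, not part of the original analysis:

    ```python
    import numpy as np

    def slice_variance(volume_hu, axis=0, air_cutoff=-500):
        """Variance of HU values in each flat-lying slice of a core scan.

        volume_hu: 3-D array of Hounsfield Units, slices stacked along `axis`.
        Voxels below `air_cutoff` (assumed to be air around the core) are
        ignored; elevated variance flags slices reworked by burrowing.
        """
        slices = np.moveaxis(volume_hu, axis, 0)
        return np.array([s[s > air_cutoff].var() for s in slices])
    ```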

  17. Proliferation resistance of small modular reactors fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polidoro, F.; Parozzi, F.; Fassnacht, F.

    2013-07-01

    In this paper the proliferation resistance of different types of Small Modular Reactors (SMRs) has been examined and classified with criteria available in the literature. In the first part of the study, the level of proliferation attractiveness of traditional low-enriched UO{sub 2} and MOX fuels to be used in SMRs based on pressurized water technology has been analyzed. On the basis of numerical simulations both cores show significant proliferation risks. Although the MOX core is less proliferation prone in comparison to the UO{sub 2} core, it still can be highly attractive for diversion or undeclared production of nuclear material. In the second part of the paper, calculations to assess the proliferation attractiveness of fuel in a typical small sodium cooled fast reactor show that proliferation risks from spent fuel cannot be neglected. The core contains a highly attractive plutonium composition during the whole life cycle. Despite some aspects of the design, like the sealed core that enables easy detection of unauthorized withdrawal of fissile material and enhances proliferation resistance, in case of an open Non-Proliferation Treaty break-out, weapon-grade plutonium in sufficient quantities could be extracted from the reactor core.

  18. Blue straggler formation at core collapse

    NASA Astrophysics Data System (ADS)

    Banerjee, Sambaran

    Among the most striking features of blue straggler stars (BSS) in globular clusters is the presence of multiple sequences of BSSs in the colour-magnitude diagrams (CMDs) of several globular clusters. It is often envisaged that such a multiple BSS sequence would arise due to a recent core collapse of the host cluster, triggering a number of stellar collisions and binary mass transfers simultaneously over a brief episode of time. Here we examine this scenario using direct N-body computations of moderately massive star clusters (of order 10^4 M⊙). As a preliminary attempt, these models are initiated with ≈8-10 Gyr old stellar populations and King profiles of high concentration, being ``tuned'' to undergo core collapse quickly. BSSs are indeed found to form in a ``burst'' at the onset of the core collapse, and several such BS-bursts occur during the post-core-collapse phase. In those models that include a few percent primordial binaries, both collisional and binary BSSs form after the onset of the (near) core collapse. However, there is as such no clear discrimination between the two types of BSSs in the corresponding computed CMDs. We note that this may be due to the smaller number of BSSs formed in these less massive models than in actual globular clusters.

  19. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  20. CT Scans of Cores Metadata, Barrow, Alaska 2015

    DOE Data Explorer

    Katie McKnight; Tim Kneafsey; Craig Ulrich

    2015-03-11

    Individual ice cores were collected from the Barrow Environmental Observatory in Barrow, Alaska, throughout 2013 and 2014. Cores were drilled along different transects to sample polygonal features (i.e., the trough, center and rim of high, transitional and low center polygons). Most cores were drilled to around 1 meter in depth, and a few deep cores were drilled to around 3 meters in depth. Three-dimensional images of the frozen cores were constructed using a medical X-ray computed tomography (CT) scanner. TIFF files can be uploaded to ImageJ (open-source imaging software) to examine soil structure and densities within each core.
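
    Besides ImageJ, such TIFF stacks can be inspected programmatically; a minimal sketch, assuming the third-party tifffile package and a hypothetical file name:

    ```python
    import numpy as np
    import tifffile  # third-party package, assumed installed

    # Hypothetical file name; actual archive file names will differ.
    volume = tifffile.imread("core_transect1_high_center.tif")  # (slices, y, x)
    print(volume.shape, volume.dtype)

    # Rough density-with-depth profile: mean CT number per horizontal slice.
    profile = volume.reshape(volume.shape[0], -1).mean(axis=1)
    print(np.round(profile[:10], 1))
    ```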

  1. Multi-Group Formulation of the Temperature-Dependent Resonance Scattering Model and its Impact on Reactor Core Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed

    2014-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in the values of Doppler temperature feedback coefficients of up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.

  2. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.
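
    The abstract does not reproduce the adjustment equations. For orientation, the standard data-adjustment (generalized least-squares) statement of such an inverse problem, consistent with the description above but not necessarily the authors' exact notation, reads:

    ```latex
    \min_{\mathbf{x}}\;
      (\mathbf{x}-\mathbf{x}_0)^{\mathsf{T}}\,\mathbf{C}_x^{-1}\,(\mathbf{x}-\mathbf{x}_0)
      \;+\;
      \bigl(\mathbf{y}-\boldsymbol{\theta}(\mathbf{x})\bigr)^{\mathsf{T}}\,
      \mathbf{C}_y^{-1}\,
      \bigl(\mathbf{y}-\boldsymbol{\theta}(\mathbf{x})\bigr)
    ```

    Here x_0 collects the prior reactor-physics and thermal-hydraulic input data with covariance C_x, y the measured in-core observables with covariance C_y, and θ(x) the core-simulator prediction. The Jacobian ∂θ/∂x is the operator whose construction the abstract identifies as a prohibitive sensitivity-type calculation, and which the Efficient Subspace Methods avoid forming in full.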

  3. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-07-09

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: establishing, for each node, a plurality of logical rings, each ring including a different set of at least one core on that node, each ring including the cores on at least two of the nodes; iteratively for each node: assigning each core of that node to one of the rings established for that node to which the core has not previously been assigned, and performing, for each ring for that node, a global allreduce operation using contribution data for the cores assigned to that ring or any global allreduce results from previous global allreduce operations, yielding current global allreduce results for each core; and performing, for each node, a local allreduce operation using the global allreduce results.

  4. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
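
    Records 3 and 4 describe ring-based allreduce schemes in patent language; the serial sketch below shows the generic ring-allreduce pattern (reduce-scatter, then allgather) that such schemes build on, with ranks simulated as list entries. It illustrates the communication pattern only, not the patented assignment of cores to logical rings.

    ```python
    import numpy as np

    def ring_allreduce(contribs):
        """Serial simulation of a ring allreduce over p ranks (cores).

        contribs: list of p equal-length 1-D arrays, one per rank.
        Returns p arrays, each holding the elementwise total.
        """
        p = len(contribs)
        data = [np.asarray(c, dtype=float).copy() for c in contribs]
        segs = np.array_split(np.arange(len(data[0])), p)

        # Reduce-scatter: each step, every rank passes one partially reduced
        # segment to its right-hand neighbour, which adds its own contribution.
        for step in range(p - 1):
            recv = [data[(r - 1) % p][segs[(r - 1 - step) % p]].copy() for r in range(p)]
            for r in range(p):
                data[r][segs[(r - 1 - step) % p]] += recv[r]

        # Allgather: each rank forwards its freshest fully reduced segment.
        for step in range(p - 1):
            recv = [data[(r - 1) % p][segs[(r - step) % p]].copy() for r in range(p)]
            for r in range(p):
                data[r][segs[(r - step) % p]] = recv[r]
        return data

    results = ring_allreduce([np.arange(6), np.ones(6), 2 * np.ones(6)])
    print(results[0])   # every rank ends with the same elementwise sum
    ```

    The appeal of the ring topology is that each rank only ever talks to its neighbour and moves 2(p-1)/p of the data volume, independent of p.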

  5. A benchmarking tool to evaluate computer tomography perfusion infarct core predictions against a DWI standard.

    PubMed

    Cereda, Carlo W; Christensen, Søren; Campbell, Bruce Cv; Mishra, Nishant K; Mlynash, Michael; Levi, Christopher; Straka, Matus; Wintermark, Max; Bammer, Roland; Albers, Gregory W; Parsons, Mark W; Lansberg, Maarten G

    2016-10-01

    Differences in research methodology have hampered the optimization of Computer Tomography Perfusion (CTP) for identification of the ischemic core. We aim to optimize CTP core identification using a novel benchmarking tool. The benchmarking tool consists of an imaging library and a statistical analysis algorithm to evaluate the performance of CTP. The tool was used to optimize and evaluate an in-house developed CTP-software algorithm. Imaging data of 103 acute stroke patients were included in the benchmarking tool. Median time from stroke onset to CT was 185 min (IQR 180-238), and the median time between completion of CT and start of MRI was 36 min (IQR 25-79). Volumetric accuracy of the CTP-ROIs was optimal at an rCBF threshold of <38%; at this threshold, the mean difference was 0.3 ml (SD 19.8 ml), the mean absolute difference was 14.3 (SD 13.7) ml, and CTP was 67% sensitive and 87% specific for identification of DWI positive tissue voxels. The benchmarking tool can play an important role in optimizing CTP software as it provides investigators with a novel method to directly compare the performance of alternative CTP software packages. © The Author(s) 2015.
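
    The quoted performance figures come from voxelwise comparison of the CTP-predicted core (rCBF below the 38% threshold) against the DWI core. A generic sketch of that computation, assuming co-registered arrays and an assumed voxel volume:

    ```python
    import numpy as np

    def core_prediction_metrics(rcbf, dwi_core, threshold=0.38, voxel_ml=0.008):
        """Compare a CTP-predicted core (rCBF < threshold) with a DWI core.

        rcbf: array of relative CBF values (1.0 = normal tissue);
        dwi_core: boolean array, True where DWI marks infarcted tissue.
        Both arrays must be co-registered and equally shaped; voxel_ml is
        a placeholder voxel volume in milliliters.
        """
        ctp_core = rcbf < threshold
        tp = np.sum(ctp_core & dwi_core)
        sensitivity = tp / dwi_core.sum()
        specificity = np.sum(~ctp_core & ~dwi_core) / np.sum(~dwi_core)
        volume_diff_ml = (ctp_core.sum() - dwi_core.sum()) * voxel_ml
        return sensitivity, specificity, volume_diff_ml
    ```

    Sweeping `threshold` over a grid and repeating this computation across the imaging library is how an optimal rCBF cutoff such as the reported 38% would be selected.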

  6. Precision of computer-assisted core decompression drilling of the femoral head.

    PubMed

    Beckmann, J; Goetz, J; Baethis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-08-01

    Osteonecrosis of the femoral head is a local destructive disease that progresses into devastating stages. Left untreated, it mostly leads to severe secondary osteoarthrosis and early endoprosthetic joint replacement. Core decompression by exact drilling into the ischemic areas can be performed in early stages according to Ficat or ARCO. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time of both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany) and 10 sawbones under fluoroscopic control only. No gypsum sphere was missed. There was a statistically significant difference regarding the three-dimensional deviation (Euclidean norm) as well as the maximum deviation in the x-, y- or z-direction (maximum norm) from the desired mid-point of the lesion, with means of 0.51 and 0.4 mm in the navigated group and 1.1 and 0.88 mm in the control group, respectively. Furthermore, a significant difference was found in the number of drilling corrections as well as the radiation time needed: no second drilling or correction of drilling direction was necessary in the navigated group, compared to 1.4 in the control group. The radiation time needed was less than 1 s in the navigated group compared to 3.1 s in the control group. The fluoroscopy-based VectorVision navigation system shows a high feasibility of computer-guided drilling with a clear reduction of radiation exposure time and can therefore be integrated into clinical routine. The additional time needed is acceptable given the simultaneous reduction of radiation time.
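
    The two deviation measures reported above are the Euclidean norm and the maximum norm of the (x, y, z) offset between the planned and the achieved drill target; they can be computed directly (a trivial sketch, with made-up coordinates):

    ```python
    import numpy as np

    def drill_deviation(planned, achieved):
        """Euclidean and maximum-norm deviation of a drill tip from its target.

        planned, achieved: (x, y, z) coordinates in mm.
        """
        d = np.asarray(achieved, float) - np.asarray(planned, float)
        return np.linalg.norm(d), np.max(np.abs(d))

    # e.g. a uniform 0.3 mm offset in x, y and z gives ~0.52 mm Euclidean
    # deviation but only 0.3 mm maximum-norm deviation.
    print(drill_deviation((0, 0, 0), (0.3, 0.3, 0.3)))
    ```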

  7. Pyrazinamidase, CR-MOX agar, salicin fermentation-esculin hydrolysis, and D-xylose fermentation for identifying pathogenic serotypes of Yersinia enterocolitica.

    PubMed Central

    Farmer, J J; Carter, G P; Miller, V L; Falkow, S; Wachsmuth, I K

    1992-01-01

    We evaluated several simple laboratory tests that have been used to identify pathogenic serotypes of Yersinia enterocolitica or to indicate the pathogenic potential of individual strains. A total of 100 strains of Y. enterocolitica were studied, including 25 isolated during five outbreak investigations, 63 from sporadic cases, and 12 from stock cultures. The pyrazinamidase test, which does not depend on the Yersinia virulence plasmid, correctly identified 60 of 63 (95% sensitivity) strains of pathogenic serotypes and 34 of 37 (92% specificity) strains of nonpathogenic serotypes. Salicin fermentation-esculin hydrolysis (25 degrees C, 48 h) correctly identified all 63 (100% sensitivity) strains of the pathogenic serotypes and 34 of 37 (92% specificity) strains of the nonpathogenic serotypes. The results of the pyrazinamidase and salicin-esculin tests disagreed for only 7 of the 100 strains of Y. enterocolitica, and these would require additional testing. Congo red-magnesium oxalate (CR-MOX) agar determines Congo red dye uptake and calcium-dependent growth at 36 degrees C, and small red colonies are present only if the strain contains the Yersinia virulence plasmid. This test has proven to be extremely useful for freshly isolated cultures, but only 15 of 62 strains of pathogenic serotypes that had been stored for 1 to 10 years were CR-MOX positive. None of the 16 strains of Y. enterocolitica serotype O3 fermented D-xylose, so this test easily differentiated strains of this serotype, which now appears to be the most common in the United States. Although antisera that can actually be used to serotype strains of Y. enterocolitica are not readily available, the four simple tests described above can be used to screen for pathogenic serotypes. PMID:1400958

  8. Girls and Computing: Female Participation in Computing in Schools

    ERIC Educational Resources Information Center

    Zagami, Jason; Boden, Marie; Keane, Therese; Moreton, Bronwyn; Schulz, Karsten

    2015-01-01

    Computer education, with a focus on Computer Science, has become a core subject in the Australian Curriculum and the focus of national innovation initiatives. Equal participation by girls, however, remains unlikely based on their engagement with computing in recent decades. In seeking to understand why this may be the case, a Delphi consensus…

  9. SrFe1‑xMoxO2+δ: parasitic ferromagnetism in an infinite-layer iron oxide with defect structures induced by interlayer oxygen

    NASA Astrophysics Data System (ADS)

    Guo, Jianhui; Shi, Lei; Zhao, Jiyin; Wang, Yang; Yuan, Xueyou; Li, Yang; Wu, Liang

    2018-04-01

    The recently discovered compound SrFeO2 is an infinite-layer-structure iron oxide with an unusual square-planar coordination of Fe2+ ions. In this study, SrFe1‑xMoxO2+δ (x < 0.12) is obtained by crystal transformation from SrFe1‑xMoxO3‑δ perovskite via low-temperature (≤380 °C) topotactic reduction. The parasitic ferromagnetism of the compound and its relationship to the defect structures are investigated. It is found that substitution of high-valent Mo6+ for Fe2+ results in excess oxygen anions O2‑ inserted at the interlayer sites for charge compensation, which further causes large atomic displacements along the c-axis. Due to the robust but flexible Fe-O-Fe framework, the samples are well crystallized within the ab-plane, but significantly more poorly crystallized along the c-axis. Defect structures including local lattice distortions and edge dislocations responsible for the lowered crystallinity are observed by high-resolution transmission electron microscopy. Both the magnetic measurements and electron spin resonance spectra provide evidence of parasitic ferromagnetism (FM). The weak FM interaction originating from the imperfect antiferromagnetic (AFM) ordering can be ascribed to the introduction of uncompensated magnetic moments due to the substitution of Mo6+ (S = 0) for Fe2+ (S = 2) and the canted/frustrated spins resulting from the defect structures.

  10. Self-Aware Computing

    DTIC Science & Technology

    2009-06-01

    to floating point, to multi-level logic. 2 Overview Self-aware computation can be distinguished from existing computational models which are...systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self...servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores

  11. Thorium-based mixed oxide fuel in a pressurized water reactor: A feasibility analysis with MCNP

    NASA Astrophysics Data System (ADS)

    Tucker, Lucas Powelson

    This dissertation investigates techniques for spent fuel monitoring and assesses the feasibility of using a thorium-based mixed oxide fuel in a conventional pressurized water reactor for plutonium disposition. Both non-paralyzing and paralyzing dead-time calculations were performed for the Portable Spectroscopic Fast Neutron Probe (N-Probe), which can be used for spent fuel interrogation. Also, a Canberra 3He neutron detector's dead-time was estimated using a combination of subcritical assembly measurements and MCNP simulations. Next, a multitude of fission products were identified as candidates for burnup and spent fuel analysis of irradiated mixed oxide fuel. The best isotopes for these applications were identified by investigating half-life, photon energy, fission yield, branching ratios, production modes, thermal neutron absorption cross section and fuel matrix diffusivity. 132I and 97Nb were identified as good candidates for MOX fuel on-line burnup analysis. In the second, and most important, part of this work, the feasibility of utilizing ThMOX fuel in a pressurized water reactor (PWR) was first examined under steady-state, beginning-of-life conditions. Using a three-dimensional MCNP model of a Westinghouse-type 17x17 PWR, several fuel compositions and configurations of a one-third ThMOX core were compared to a 100% UO2 core. A blanket-type arrangement of 5.5 wt% PuO2 was determined to be the best candidate for further analysis. Next, the safety of the ThMOX configuration was evaluated through three cycles of burnup using the following metrics: axial and radial nuclear hot channel factors, moderator and fuel temperature coefficients, delayed neutron fraction, and shutdown margin. Additionally, the performance of the ThMOX configuration was assessed by tracking cycle length, plutonium destroyed, and fission product poison concentration.

  12. Laser Heating of the Core-Shell Nanowires

    NASA Astrophysics Data System (ADS)

    Astefanoaei, Iordana; Dumitru, Ioan; Stancu, Alexandru

    2016-12-01

    The thermal stress induced in a heating process is an important parameter that must be known and controlled in the magnetization process of core-shell nanowires. This paper analyses the stress produced by a laser heating source placed at one end of a core-shell type structure. The thermal field was computed with the non-Fourier heat transport equation using a finite element method (FEM) implemented in Comsol Multiphysics. The internal stresses are essentially due to thermal gradients and the different expansion characteristics of the core and shell materials. The stress values were computed using the thermoelastic formalism and depend on the laser beam parameters (spot size, power, etc.) and the system characteristics (dimensions, thermal characteristics). Stresses in the GPa range were estimated, and consequently we find that the magnetic state of the system can be influenced significantly. A shell material such as glass, which is a good thermal insulator, induces smaller stresses in the magnetic core and consequently a smaller magnetoelastic energy. These results lead to a better understanding of the switching process in magnetic materials.
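
    The record does not write out the non-Fourier equation it refers to; one common hyperbolic form, the Cattaneo-Vernotte equation with a volumetric heat source Q, is given here for orientation (an assumption about the general class of model, not necessarily the authors' exact formulation):

    ```latex
    \tau \frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t}
      = \alpha \nabla^{2} T
      + \frac{1}{\rho c_{p}} \left( Q + \tau \frac{\partial Q}{\partial t} \right)
    ```

    Here τ is the thermal relaxation time of the material and α its thermal diffusivity; setting τ = 0 recovers the classical Fourier heat equation, which is why non-Fourier effects matter only on the short time scales of pulsed laser heating.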

  13. MOX fuel assembly design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reese, A.P.; Crowther, R.L. Jr.

    1992-02-18

    This patent describes an improvement in a boiling water reactor core having a plurality of vertically upstanding fuel bundles; each fuel bundle containing longitudinally extending sealed rods with fissile material therein; the improvement comprises the fissile material including a mixture of uranium and recovered plutonium in rods of the fuel bundle at locations other than the corners of the fuel bundle; and, neutron absorbing material being located in rods of the fuel bundle at rod locations adjacent the corners of the fuel bundles whereby the neutron absorbing material has decreased shielding from the plutonium and maximum exposure to thermal neutrons for shaping the cold reactivity shutdown zone in the fuel bundle.

  14. Time cycle analysis and simulation of material flow in MOX process layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Saraswat, A.; Danny, K.M.

    The (U,Pu)O{sub 2} MOX fuel is the driver fuel for the upcoming PFBR (Prototype Fast Breeder Reactor). The fuel contains around 30% PuO{sub 2}. The presence of a high percentage of reprocessed PuO{sub 2} necessitates the design of an optimized fuel fabrication process line which addresses both production needs and regulatory norms regarding radiological safety criteria. The powder-pellet route has a highly unbalanced time cycle. This difficulty can be overcome by optimizing the process layout in terms of equipment redundancy and the scheduling of input powder batches. Different schemes are tested before implementation in the process line with the help of a software tool. This software simulates the material movement through the optimized process layout. Different material-processing schemes have been devised, and the validity of the schemes is tested with the software. Schemes in which production batches meet at any glove-box location are considered invalid. A valid scheme ensures adequate spacing between the production batches and at the same time meets the production target. This software can be further improved by accurately calculating the material movement time through the glove-box train. One important factor is accounting for material handling time with automation systems in place.

  15. Efficient provisioning for multi-core applications with LSF

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano

    2015-12-01

    Tier-1 sites providing computing power for HEP experiments are usually tightly designed for high-throughput performance. This is pursued by reducing the variety of supported use cases and tuning the remaining ones for performance; the most important of these has been single-core jobs. Moreover, the usual workload is saturation: each available core in the farm is in use, and there are queued jobs waiting for their turn to run. Enabling multi-core jobs thus requires dedicating a number of hosts on which to run them, and waiting for them to free the needed number of cores. This drain time introduces a loss of computing power driven by the number of unusable empty cores. As an increasing demand for multi-core capable resources has emerged, a Task Force has been constituted in WLCG, with the goal of defining a simple and efficient multi-core resource provisioning model. This paper details the work done at the INFN Tier-1 to enable multi-core support for the LSF batch system, with the intent of reducing the average number of unused cores to a minimum. The adopted strategy has been to dedicate to multi-core a dynamic set of nodes, whose size is mainly driven by the number of pending multi-core requests and the fair-share priority of the submitting user. The node status transition, from single- to multi-core and vice versa, is driven by a finite state machine implemented in a custom multi-core director script running in the cluster. After describing and motivating both the implementation and the details specific to the LSF batch system, results about performance are reported. Factors having positive and negative impacts on the overall efficiency are discussed, and solutions to reduce the negative ones as much as possible are proposed.
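
    As a toy illustration of the "multi-core director" logic described above, the finite state machine below switches one worker node between single-core and multi-core roles based on pending multi-core demand. The states, thresholds and draining rule are illustrative assumptions, not the INFN implementation:

    ```python
    from enum import Enum

    class Role(Enum):
        SINGLE = "single-core"
        DRAINING = "draining"   # accept no new single-core jobs; wait for cores to free
        MULTI = "multi-core"

    def next_role(role, pending_mcore, running_single_jobs, free_cores, job_size=8):
        """One transition of a director FSM for a single worker node."""
        if role is Role.SINGLE and pending_mcore > 0:
            return Role.DRAINING                  # start reserving the node
        if role is Role.DRAINING:
            if pending_mcore == 0:
                return Role.SINGLE                # demand vanished: give the node back
            if running_single_jobs == 0 and free_cores >= job_size:
                return Role.MULTI                 # fully drained: accept multi-core jobs
        if role is Role.MULTI and pending_mcore == 0:
            return Role.SINGLE                    # no more demand: revert
        return role

    print(next_role(Role.SINGLE, pending_mcore=3,
                    running_single_jobs=12, free_cores=4))   # -> Role.DRAINING
    ```

    Keeping the draining set small and demand-driven is what limits the number of cores idling between the two roles, which is the efficiency loss the paper sets out to minimize.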

  16. Diagnostic Yield of Computed Tomography-Guided Coaxial Core Biopsy of Undetermined Masses in the Free Retroperitoneal Space: Single-Center Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stattaus, Joerg, E-mail: joerg.stattaus@uni-due.de; Kalkmann, Janine, E-mail: janine.kalkmann@uk-essen.de; Kuehl, Hilmar, E-mail: hilmar.kuehl@uni-due.d

    2008-09-15

    The purpose of this study was to evaluate the diagnostic yield of core biopsy in coaxial technique under guidance of computed tomography (CT) for retroperitoneal masses. We performed a retrospective analysis of CT-guided coaxial core biopsies of undetermined masses in the non-organ-bound retroperitoneal space in 49 patients. In 37 cases a 15-G guidance needle with a 16-G semiautomated core biopsy system, and in 12 cases a 16-G guidance needle with an 18-G biopsy system, was used. All biopsies were technically successful. A small hematoma was seen in one case, but no relevant complication occurred. With the coaxial technique, up to 4 specimens were obtained from each lesion (mean, 2.8). Diagnostic accuracy in differentiation between malignant and benign diseases was 95.9%. A specific histological diagnosis could be established in 39 of 42 malignant lesions (92.9%). Correct subtyping of malignant lymphoma according to the WHO classification was possible in 87.0%. Benign lesions were correctly identified in seven cases, although a specific diagnosis could only be made in conjunction with clinical and radiological information. In conclusion, CT-guided coaxial core biopsy provides safe and accurate diagnosis of retroperitoneal masses. A specific histological diagnosis, which is essential for choosing the appropriate therapy, could be established in most cases of malignancy.

  17. Computational Thinking Concepts for Grade School

    ERIC Educational Resources Information Center

    Sanford, John F.; Naidu, Jaideep T.

    2016-01-01

    Early education has classically introduced reading, writing, and mathematics. Recent literature discusses the importance of adding "computational thinking" as a core ability that every child must learn. The goal is to develop students by making them equally comfortable with computational thinking as they are with other core areas of…

  18. The Transition to a Many-core World

    NASA Astrophysics Data System (ADS)

    Mattson, T. G.

    2012-12-01

    The need to increase performance within a fixed energy budget has pushed the computer industry to many core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many core world. Writing software for heterogeneous many core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high level frameworks to express the concurrency in their problems while avoiding any details for how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their

  19. Mission: Define Computer Literacy. The Illinois-Wisconsin ISACS Computer Coordinators' Committee on Computer Literacy Report (May 1985).

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Defines computer literacy and describes a computer literacy course which stresses ethics, hardware, and disk operating systems throughout. Core units on keyboarding, word processing, graphics, database management, problem solving, algorithmic thinking, and programing are outlined, together with additional units on spreadsheets, simulations,…

  20. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farbin, Amir

    2015-07-15

    This is the final report of for DoE Early Career Research Program Grant Titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

  1. Core-core and core-valence correlation

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be (1)S-(1)P, C (3)P-(5)S, and CH+ (1)Sigma(+)-(1)Pi separations, and the CH+ spectroscopic constants, dipole moment, and (1)Sigma(+)-(1)Pi transition dipole moment were studied. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated, and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.

  2. Ultrasound phase rotation beamforming on multi-core DSP.

    PubMed

    Ma, Jieming; Karadayi, Kerem; Ali, Murtaza; Kim, Yongmin

    2014-01-01

    Phase rotation beamforming (PRBF) is a commonly used digital receive beamforming technique. However, due to its high computational requirement, it has traditionally been supported by hardwired architectures, e.g., application-specific integrated circuits (ASICs) or more recently field-programmable gate arrays (FPGAs). In this study, we investigated the feasibility of supporting software-based PRBF on a multi-core DSP. To alleviate the high computing requirement, the analog front-end (AFE) chips integrating quadrature demodulation in addition to analog-to-digital conversion were defined and used. With these new AFE chips, only delay alignment and phase rotation need to be performed by the DSP, substantially reducing the computational load. We implemented the delay alignment and phase rotation modules on a Texas Instruments C6678 DSP with 8 cores. We found that it takes 200 μs to beamform 2048 samples from 64 channels using 2 cores. With 4 cores, 20 million samples can be beamformed in one second. Therefore, ADC frequencies up to 40 MHz with 2:1 decimation in AFE chips or up to 20 MHz with no decimation can be supported as long as the ADC-to-DSP I/O requirement can be met. The remaining 4 cores can work on back-end processing tasks and applications, e.g., color Doppler or ultrasound elastography. One DSP being able to handle both beamforming and back-end processing could lead to low-power and low-cost ultrasound machines, benefiting ultrasound imaging in general, particularly portable ultrasound machines. Copyright © 2013 Elsevier B.V. All rights reserved.
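
    The DSP-side computation described above reduces to two steps, coarse delay alignment and per-channel phase rotation. The NumPy sketch below shows the generic pattern; the function name and placeholder delays are our assumptions, the rounding of delays to whole samples stands in for the fractional-delay interpolation a real implementation would use, and the sign of the phase correction depends on the demodulation convention.

    ```python
    import numpy as np

    def phase_rotation_beamform(iq, delays_s, fs, f_demod):
        """Beamform baseband I/Q data by delay alignment plus phase rotation.

        iq: complex array (channels, samples) of quadrature-demodulated data;
        delays_s: per-channel focusing delays in seconds;
        fs: sampling rate; f_demod: demodulation (carrier) frequency.
        """
        n_ch, _ = iq.shape
        out = np.zeros(iq.shape[1], dtype=complex)
        for ch in range(n_ch):
            shift = int(round(delays_s[ch] * fs))   # coarse delay in whole samples
            aligned = np.roll(iq[ch], shift)
            # Fine alignment: rotate the baseband phase by the carrier phase
            # accumulated over the focusing delay (sign per demod convention).
            out += aligned * np.exp(-2j * np.pi * f_demod * delays_s[ch])
        return out

    rng = np.random.default_rng(0)
    iq = rng.normal(size=(64, 2048)) + 1j * rng.normal(size=(64, 2048))
    tau = np.linspace(0, 1e-6, 64)                  # placeholder focusing delays
    print(phase_rotation_beamform(iq, tau, fs=40e6, f_demod=5e6).shape)
    ```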

  3. CANOPEN Controller IP Core

    NASA Astrophysics Data System (ADS)

    Caramia, Maurizio; Montagna, Mario; Furano, Gianluca; Winton, Alistair

    2010-08-01

    This paper describes the activities performed by Thales Alenia Space Italia, supported by the European Space Agency, in the definition of a CAN bus interface to be used on Exomars. The final goal of this activity is the development of an IP core, to be used in a slave node, able to manage both the CAN bus Data Link and Application Layers entirely in hardware. The activity has been focused on the needs of the EXOMARS mission, where devices with different computational capabilities are all managed by the onboard computer through the CAN bus.

  4. High-temperature Mechanical Properties and Microstructure of ZrTiHfNbMox (x=0.5, 1.0, 1.5) Refractory High Entropy Alloys

    NASA Astrophysics Data System (ADS)

    Chen, Y. W.; Li, Y. K.; Cheng, X. W.; Wu, C.; Cheng, B.

    2018-05-01

    Refractory high entropy alloys (RHEAs), with excellent properties at high temperature, have several applications. In this work, the ZrTiHfNbMox (x=0.5, 1.0, 1.5) alloys were prepared by arc melting. All of these alloys form a body-centered cubic (BCC) structure without other intermediate phases. Mo contributes to the strength of the alloys at high temperature, but too much Mo enhances the strength while severely decreasing the plasticity. The ZrTiHfNbMo alloy, whose compressive stress is 1099 MPa at 800 °C, is a promising material for high-temperature applications.

  5. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  6. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
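
    Records 5 and 6 above describe, in patent language, a node-local reduction in which processing cores copy their input buffers into an interleaved shared buffer and then reduce alternating chunks in parallel. The serial sketch below mimics that chunk-ownership pattern only; the real hardware parallelism and the network read/write cores are not modeled.

    ```python
    import numpy as np

    def interleaved_local_reduce(buffers, chunk=4):
        """Chunkwise reduction of equal-length buffers with alternating ownership.

        'Core' 0 sums the even-numbered chunks and 'core' 1 the odd-numbered
        ones; here the two cores are simply sequential loop passes.
        """
        n = len(buffers[0])
        total = np.zeros(n)
        for core in (0, 1):
            for start in range(core * chunk, n, 2 * chunk):
                part = slice(start, min(start + chunk, n))
                for buf in buffers:
                    total[part] += np.asarray(buf)[part]
        return total

    print(interleaved_local_reduce([np.ones(10), np.arange(10)]))
    ```

    The point of the interleaving is cache-friendly, contention-free writes: each core touches disjoint chunks of the shared buffer, so no locking is needed during the reduction.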

  7. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented, followed by computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions; contour plots of the flow-field velocities are included, and the computed flow-field velocities are compared with experimentally measured velocities.
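
    For reference, the induced velocity of a single two-dimensional finite-core vortex can be sketched with a generic algebraic (Scully-type) regularization; this stands in for, but is not identical to, the paper's third-order-polynomial core model:

    ```python
    import numpy as np

    def finite_core_velocity(gamma, core_radius, dx, dy):
        """Velocity induced at offset (dx, dy) by a 2-D vortex with a finite core.

        Uses the algebraic (Scully) regularization: near-solid-body rotation
        inside the core, approaching the potential-vortex 1/r law outside,
        so the velocity stays bounded at the vortex center.
        """
        r2 = dx * dx + dy * dy
        factor = gamma / (2.0 * np.pi) / (r2 + core_radius ** 2)
        return -factor * dy, factor * dx    # (u, v) velocity components

    print(finite_core_velocity(1.0, 0.1, 0.0, 0.0))   # finite at the center
    ```

    Removing the singularity is what makes such core models insensitive to mesh spacing, the property the abstract highlights.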

  8. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
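
    TREFEX itself is a parameter-free convex program; the sketch below shows the closely related ℓ1 trend-filtering idea on the logarithm of the response (a piecewise exponential is piecewise linear in the log domain). The cvxpy dependency, the fixed smoothing weight lam, and the kink threshold are our assumptions and differ from TREFEX's parameter-free formulation.

    ```python
    import numpy as np
    import cvxpy as cp

    def log_trend_filter(response, lam=50.0):
        """Piecewise-linear fit to log(sensor response); kinks in the fit mark
        candidate change points between consecutive exponential segments."""
        y = np.log(np.asarray(response, dtype=float))
        x = cp.Variable(len(y))
        objective = cp.Minimize(cp.sum_squares(y - x)
                                + lam * cp.norm1(cp.diff(x, k=2)))
        cp.Problem(objective).solve()
        trend = np.asarray(x.value)
        kink = np.abs(np.diff(trend, 2))             # second difference of the fit
        return trend, np.where(kink > 1e-4)[0] + 1   # candidate change points

    # Two exponential segments with a change point at sample 100.
    t = np.arange(200)
    signal = np.where(t < 100, np.exp(-0.01 * t), np.exp(-1.0 - 0.03 * (t - 100)))
    trend, changes = log_trend_filter(signal)
    print(changes)
    ```

    The ℓ1 penalty on the second difference drives it to be exactly zero almost everywhere, so the fit is piecewise linear and its few surviving kinks are the detected junctions.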

  9. Site-Mutation of Hydrophobic Core Residues Synchronically Poise Super Interleukin 2 for Signaling: Identifying Distant Structural Effects through Affordable Computations.

    PubMed

    Mei, Longcan; Zhou, Yanping; Zhu, Lizhe; Liu, Changlin; Wu, Zhuo; Wang, Fangkui; Hao, Gefei; Yu, Di; Yuan, Hong; Cui, Yanfang

    2018-03-20

    A superkine variant of interleukin-2 with six site mutations away from the binding interface, developed using the yeast display technique, has previously been characterized as undergoing a distal structural alteration that is responsible for its super-potency; it provides an elegant case study for gaining insight into how allosteric effects can be utilized to achieve desirable protein functions. By examining the dynamic network and the allosteric pathways related to those mutated residues using various computational approaches, we found that nanosecond-time-scale all-atom molecular dynamics simulations can identify the dynamic network as efficiently as an ensemble algorithm. The differentiated pathways for the six core residues form a dynamic network that outlines the area of structural alteration. The results suggest the potential of using affordable computing power to predict the allosteric structures of mutants in knowledge-based mutagenesis.

  10. Tidal disruption of fuzzy dark matter subhalo cores

    NASA Astrophysics Data System (ADS)

    Du, Xiaolong; Schwabe, Bodo; Niemeyer, Jens C.; Bürger, David

    2018-03-01

    We study tidal stripping of fuzzy dark matter (FDM) subhalo cores using simulations of the Schrödinger-Poisson equations and analyze the dynamics of tidal disruption, highlighting the differences with standard cold dark matter. Mass loss outside of the tidal radius forces the core to relax into a less compact configuration, lowering the tidal radius. As the characteristic radius of a solitonic core scales inversely with its mass, tidal stripping results in a runaway effect and rapid tidal disruption of the core once its central density drops below 4.5 times the average density of the host within the orbital radius. Additionally, we find that the core is deformed into a tidally locked ellipsoid with increasing eccentricities until it is completely disrupted. Using the core mass loss rate, we compute the minimum mass of cores that can survive several orbits for different FDM particle masses and compare it with observed masses of satellite galaxies in the Milky Way.

  11. Single crystal structure analyses of scheelite-powellite CaW1-xMoxO4 solid solutions and unique occurrence in Jisyakuyama skarn deposits

    NASA Astrophysics Data System (ADS)

    Yamashita, K.; Yoshiasa, A.; Miyazaki, H.; Tokuda, M.; Tobase, T.; Isobe, H.; Nishiyama, T.; Sugiyama, K.; Miyawaki, R.

    2017-12-01

    The Jisyakuyama skarn deposit, Fukuchi, Fukuoka, Japan, shows a simple occurrence formed by penetration of hot water into limestone cracks. A unique occurrence of scheelite-powellite CaW1-xMoxO4 minerals is observed in the skarn deposit. Many synthetic experiments on scheelite-powellite solid solutions have been reported in research on fluorescent materials. In this system it is known that a complete continuous solid solution is formed even at room temperature. In this study, we have carried out chemical analyses, crystal structure refinements and a detailed description of the occurrence of scheelite-powellite minerals. We have also attempted synthesis of single crystals of the solid solution over a wide composition range. The chemical compositions were determined by a JEOL scanning electron microscope and EDS, INCA system. We have performed the crystal structure refinements of the scheelite-powellite CaW1-xMoxO4 solid solutions (x=0.0-1.0) with the RIGAKU single-crystal structure analysis system RAPID. The R and S values are around 0.05 and 1.03. As the result of structural refinements of natural products and many solid solutions, we confirm that most large natural single crystals have compositions at both endmembers, and large solid-solution crystals are rare. The lattice constants, interatomic distances and other crystallographic parameters for the solid solution change uniquely with composition, and it was confirmed to be a continuous solid solution. Single crystals of the scheelite endmember + powellite endmember + solid solution with various compositions form an aggregate in the deposit (Figure 1). Crystal shapes of powellite and scheelite are hypidiomorphic and allotriomorphic, respectively. Many solid-solution crystals are accompanied by the scheelite endmember, and a compositional gap is observed between powellite and solid-solution crystals. The presence of several penetration solutions with significantly different W and Mo contents may be assumed. This research can be expected to lead to giving restrictive

  12. Neutronics calculation of RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Zin, Muhammad Rawi B. Mohamed; Karim, Julia Bt. Abdul; Bayar, Abi Muttaqin B. Jalal; Usang, Mark Dennis Anak; Mustafa, Muhammad Khairul Ariff B.; Hamzah, Na'im Syauqi B.; Said, Norfarizan Bt. Mohd; Jalil, Muhammad Husamuddin B.

    2017-01-01

    Reactor calculation and simulation are significantly important to ensure the safety and better utilization of a research reactor. The Malaysian PUSPATI TRIGA Reactor (RTP) achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. Since the early 90s, neutronics modelling has been used as part of its routine in-core fuel management activities. Several computer codes have been used in RTP since then, based on 1D neutron diffusion, 2D neutron diffusion and 3D Monte Carlo neutron transport methods. This paper describes the current progress of, and gives an overview on, neutronics modelling development in RTP. Several important parameters were analysed, such as keff, reactivity, neutron flux, power distribution and fission product build-up, for the latest core configuration. The developed core neutronics model was validated by means of comparison with experimental and measurement data. Along with the RTP core model, a calculation procedure was also developed to establish better prediction capability of RTP's behaviour.
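
    The RTP codes themselves are not shown in the abstract; as a generic, hedged sketch of how keff is obtained in a diffusion-type core model, the following runs a power iteration on a one-group, one-dimensional slab with made-up constants (not RTP data).

      import numpy as np

      # One-group 1D slab diffusion: -D phi'' + Sig_a phi = (1/k) nu*Sig_f phi
      N, L = 50, 100.0                       # mesh cells, slab width (cm)
      h = L / N
      D, sig_a, nu_sig_f = 1.0, 0.01, 0.012  # illustrative constants, not RTP data

      A = np.zeros((N, N))                   # finite-difference loss operator,
      for i in range(N):                     # zero-flux boundary conditions
          A[i, i] = 2 * D / h**2 + sig_a
          if i > 0:
              A[i, i - 1] = -D / h**2
          if i < N - 1:
              A[i, i + 1] = -D / h**2

      phi, k = np.ones(N), 1.0
      for _ in range(200):                   # power iteration on the fission source
          phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
          k *= phi_new.sum() / phi.sum()     # update the eigenvalue estimate
          phi = phi_new / phi_new.max()      # renormalize the flux shape
      print(f"k_eff = {k:.5f}")              # ~1.09 for these made-up constants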

  13. Experimental and computational studies on the femoral fracture risk for advanced core decompression.

    PubMed

    Tran, T N; Warwas, S; Haversath, M; Classen, T; Hohn, H P; Jäger, M; Kowalczyk, W; Landgraeber, S

    2014-04-01

    Two questions are often raised by orthopedists in relation to the core decompression procedure: 1) Is core decompression associated with a considerable loss of structural support of the bone? and 2) Is there an optimal region for the surgical entrance point for which the fracture risk would be lowest? As bioresorbable bone substitutes become more and more common and core decompression has been described in combination with them, the current study takes this into account. A finite element model of a femur treated by core decompression with bone substitute was simulated and analyzed. In-vitro compression testing of femora was used to confirm the finite element results. The results showed that for core decompression with standard drilling in combination with artificial bone substitute refilling, daily activities (normal walking and walking downstairs) do not pose a femoral fracture risk. The femoral fracture risk increased progressively as the entrance point was located more distally. The critical deviation of the entrance point to a more distal position is about 20 mm. The study findings demonstrate that the optimal entrance point should be located in the proximal subtrochanteric region in order to reduce the subtrochanteric fracture risk. Furthermore, the consistent results of the finite element and in-vitro testing imply that the simulations are adequate.

  14. Transfluxor circuit amplifies sensing current for computer memories

    NASA Technical Reports Server (NTRS)

    Milligan, G. C.

    1964-01-01

    To transfer data from the magnetic memory core to an independent core, a reliable sensing amplifier has been developed. The data in the independent core are later transferred to the arithmetic section of the computer.

  15. Chemical Reduction of SIM MOX in Molten Lithium Chloride Using Lithium Metal Reductant

    NASA Astrophysics Data System (ADS)

    Kato, Tetsuya; Usami, Tsuyoshi; Kurata, Masaki; Inoue, Tadashi; Sims, Howard E.; Jenkins, Jan A.

    2007-09-01

    A simulated spent oxide fuel in sintered pellet form, which contained the twelve elements U, Pu, Am, Np, Cm, Ce, Nd, Sm, Ba, Zr, Mo, and Pd, was reduced with Li metal in a molten LiCl bath at 923 K. More than 90% of the U and Pu were reduced to metal to form a porous alloy without significant change in the Pu/U ratio. Small fractions of Pu also combined with Pd to form stable alloys. In the gaps of the porous U-Pu alloy, aggregation of rare-earth (RE) oxide was observed. Some amount of the RE elements and actinides leached from the pellet. The leaching ratio of Am relative to the initially loaded amount was only several percent, far below the roughly 80% obtained in previous experiments on simple MOX containing U, Pu, and Am. The difference suggests that a large part of the Am existed in the RE oxide rather than in the U-Pu alloy. The detection of RE elements and actinides in the molten LiCl bath seems to indicate that they dissolved into the bath containing the oxide ion, the by-product of the reduction, consistent with the solubility of RE elements previously measured in molten LiCl-Li2O.

  16. Oxidizing dissolution mechanism of an irradiated MOX fuel in underwater aerated conditions at slightly acidic pH

    NASA Astrophysics Data System (ADS)

    Magnin, M.; Jégou, C.; Caraballo, R.; Broudic, V.; Tribet, M.; Peuget, S.; Talip, Z.

    2015-07-01

    The (U,Pu)O2 matrix behavior of an irradiated MIMAS-type (MIcronized MASter blend) MOX fuel under radiolytic oxidation in aerated pure water at pH 5-5.5 was studied by combining chemical and radiochemical analyses of the alteration solution with Raman spectroscopy characterizations of the surface state. Two leaching experiments were performed on segments of irradiated fuel under different conditions, with or without an external gamma irradiation field, over long periods (222 and 604 days, respectively). The gamma irradiation field was intended to be representative of the irradiation conditions for a fuel assembly in an underwater interim storage situation. The data acquired enabled an alteration mechanism to be established, characterized by uranium (UO2^2+) release mainly controlled by the solubility of studtite over the long term. Massive precipitation of this phase was observed in both experiments, based on high uranium oversaturation indexes of the solution, and the kinetics involved depended on the irradiation conditions. External gamma irradiation accelerated the precipitation kinetics, and the uranium concentrations (2.9 × 10^-7 mol/L) were lower than for the non-irradiated reference experiment (1.4 × 10^-5 mol/L), as the quantity of hydrogen peroxide was higher. Under slightly acidic pH conditions, the formation of an oxidized UO2+x phase was not observed on the surface and did not occur in the radiolytic dissolution mechanism of the fuel matrix. Raman spectroscopy performed on the heterogeneous MOX fuel matrix surface showed that the fluorite structure of the mainly UO2 phase surrounding the Pu-enriched aggregates had not been particularly impacted by any major structural change compared to the data obtained prior to leaching. The plutonium behavior in solution involved a continuous release up to concentrations of approximately 3 × 10^-6 mol/L with negligible colloid formation. These data appear to support a predominance of the +V oxidation

  17. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
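
    As a minimal sketch of the "master-slave" pattern described (simple to implement, but with the master as a serial communication bottleneck), assuming mpi4py and a generic task farm rather than the MoSST code itself:

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TASK, STOP = 1, 2                    # message tags

      if rank == 0:                        # master: hand out work units, collect results
          tasks = list(range(20))
          results, active = [], 0
          for worker in range(1, size):    # prime every worker with one task
              if tasks:
                  comm.send(tasks.pop(), dest=worker, tag=TASK)
                  active += 1
              else:
                  comm.send(None, dest=worker, tag=STOP)
          while active:                    # refill workers as results come back
              status = MPI.Status()
              res = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
              results.append(res)
              w = status.Get_source()
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=TASK)
              else:
                  comm.send(None, dest=w, tag=STOP)
                  active -= 1
          print("master collected", len(results), "results")
      else:                                # worker: compute until told to stop
          while True:
              status = MPI.Status()
              task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == STOP:
                  break
              comm.send(task * task, dest=0, tag=TASK)  # stand-in for a real timestep

    Run with, e.g., mpiexec -n 4 python master_slave.py (the script name is hypothetical). The "divide-and-conquer" alternative mentioned above avoids the single master by exchanging data only between neighboring subdomains, which is what makes its communication scalable.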

  18. Generating unstructured nuclear reactor core meshes in parallel

    DOE PAGES

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during the reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples, including a very high temperature reactor, a full-core model of the Japanese Monju reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  19. Segregating the core computational faculty of human language from working memory.

    PubMed

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D

    2009-05-19

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.

  20. Segregating the core computational faculty of human language from working memory

    PubMed Central

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D.

    2009-01-01

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically. PMID:19416819

  1. Sputnik: ad hoc distributed computation.

    PubMed

    Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A

    2015-04-15

    In bioinformatic applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aims of this project were lightweight ad hoc setup and fault-tolerant computation, requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine and uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the status of current computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on GitHub (http://github.com/sysbio-bioinf/sputnik) under the Eclipse Public License. Contact: hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de. Supplementary data are available at Bioinformatics online.

  2. An evaluation of MPI message rate on hybrid-core processors

    DOE PAGES

    Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...

    2014-11-01

    Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.

  3. The importance of actions and the worth of an object: dissociable neural systems representing core value and economic value.

    PubMed

    Brosch, Tobias; Coppin, Géraldine; Schwartz, Sophie; Sander, David

    2012-06-01

    Neuroeconomic research has delineated neural regions involved in the computation of value, referring to a currency for concrete choices and decisions ('economic value'). Research in psychology and sociology, on the other hand, uses the term 'value' to describe motivational constructs that guide choices and behaviors across situations ('core value'). As a first step towards an integration of these literatures, we compared the neural regions computing economic value and core value. Replicating previous work, economic value computations activated a network centered on the medial orbitofrontal cortex. Core value computations activated the medial prefrontal cortex, a region involved in the processing of self-relevant information, and the dorsal striatum, involved in action selection. Core value ratings correlated with activity in the precuneus and anterior prefrontal cortex, potentially reflecting the degree to which a core value is perceived as an internalized part of one's self-concept. Distributed activation patterns in the insula and ACC allowed differentiation of individual core value types. These patterns may represent evaluation profiles reflecting prototypical fundamental concerns expressed in the core value types. Our findings suggest mechanisms by which core values, as motivationally important long-term goals anchored in the self-schema, may have the behavioral power to drive decisions and behaviors in the absence of immediately rewarding behavioral options.

  4. Development of an extensible dual-core wireless sensing node for cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Kane, Michael; Zhu, Dapeng; Hirose, Mitsuhito; Dong, Xinjun; Winter, Benjamin; Häckell, Moritz; Lynch, Jerome P.; Wang, Yang; Swartz, A.

    2014-04-01

    The introduction of wireless telemetry into the design of monitoring and control systems has been shown to reduce system costs while simplifying installations. To date, wireless nodes proposed for sensing and actuation in cyber-physical systems have been designed using microcontrollers with one computational pipeline (i.e., single-core microcontrollers). While concurrent code execution can be implemented on single-core microcontrollers, concurrency is emulated by splitting the pipeline's resources to support multiple threads of code execution. For many applications, this approach to multi-threading is acceptable in terms of speed and function. However, some applications such as feedback control demand deterministic timing of code execution and maximum computational throughput. For these applications, the adoption of multi-core processor architectures represents one effective solution. Multi-core microcontrollers have multiple computational pipelines that can execute embedded code in parallel and can be interrupted independently of one another. In this study, a new wireless platform named Martlet is introduced with a dual-core microcontroller adopted in its design. The dual-core design allows Martlet to dedicate one core to standard wireless sensor operations while the other core is reserved for embedded data processing and real-time feedback control law execution. Another distinct feature of Martlet is a standardized hardware interface that allows specialized daughter boards (termed wing boards) to be interfaced to the Martlet baseboard. This extensibility opens the opportunity to encapsulate specialized sensing and actuation functions in a wing board without altering the design of Martlet. In addition to describing the design of Martlet, a few example wings are detailed, along with experiments showing Martlet's ability to monitor and control physical systems such as wind turbines and buildings.

  5. Modelling the core magnetic field of the earth

    NASA Technical Reports Server (NTRS)

    Harrison, C. G. A.; Carle, H. M.

    1982-01-01

    It is suggested that radial off-center dipoles located within the core of the earth be used instead of spherical harmonics of the magnetic potential in modeling the core magnetic field. The off-center dipoles, in addition to more realistically modeling the physical current systems within the core, are, if located deep within the core, more effective at removing long-wavelength signals of either potential or field. Their disadvantage is that their positions and strengths are more difficult to compute, and effects such as upward and downward continuation are more difficult to handle. It is nevertheless agreed with Cox (1975) and Alldredge and Hurwitz (1964) that physical realism in models is more important than mathematical convenience. A radial dipole model is presented which agrees with observations of secular variation and excursions.
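
    As a hedged illustration of the representation advocated here, the sketch below evaluates the surface field of a single radial off-center dipole using the standard point-dipole formula; the dipole depth and moment are made-up values, not those of the paper's model.

      import numpy as np

      MU0 = 4e-7 * np.pi                     # vacuum permeability (T m / A)

      def dipole_field(m, r0, r):
          """Field at r of a point dipole with moment m (A m^2) located at r0 (m)."""
          d = r - r0
          dist = np.linalg.norm(d)
          dhat = d / dist
          return MU0 / (4 * np.pi) * (3 * np.dot(m, dhat) * dhat - m) / dist**3

      R_E = 6.371e6                          # Earth radius (m)
      r_c = 0.25 * R_E                       # dipole placed deep in the core (assumed depth)
      m = 8.0e22 * np.array([0.0, 0.0, 1.0]) # moment along the radial (z) axis, Earth-like size

      for colat in (0, 45, 90, 135, 180):    # sample the field along a surface meridian
          t = np.radians(colat)
          r = R_E * np.array([np.sin(t), 0.0, np.cos(t)])
          B = dipole_field(m, np.array([0.0, 0.0, r_c]), r)
          print(f"colatitude {colat:3d} deg: |B| = {np.linalg.norm(B) * 1e9:10.1f} nT")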

  6. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of the new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for the Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between the two NEM computations is demonstrated in all the important transient parameters of the two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by ~4% in transient peak power density, while the BCMTL results in >40% CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time by >20% in all six transient cases of the NEACRP PWR.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmack, Jon; Hayes, Steven; Walters, L. C.

    This document explores startup fuel options for a proposed test/demonstration fast reactor. The fuel options considered are the metallic fuels U-Zr and U-Pu-Zr and the ceramic fuels UO2 and UO2-PuO2 (MOX). Attributes of the candidate fuel choices considered were feedstock availability, fabrication feasibility, rough order-of-magnitude cost and schedule, and the existing irradiation performance database. The reactor-grade plutonium-bearing fuels (U-Pu-Zr and MOX) were eliminated from consideration as the initial startup fuels because the availability and isotopics of domestic plutonium feedstock are uncertain. There are international sources of reactor-grade plutonium feedstock, but their isotopics and availability are also uncertain. Weapons-grade plutonium is the only possible source of Pu feedstock in the quantities needed to fuel a startup core. Currently, the available U.S. source of (excess) weapons-grade plutonium is designated for irradiation in commercial light water reactors (LWR) to a level that would preclude diversion. Weapons-grade plutonium also contains a significant concentration of gallium. Gallium presents a potential issue for the fabrication of MOX fuel as well as possible performance issues for metallic fuel. Also, the construction of a fuel fabrication line for plutonium fuels, with or without a line to remove gallium, is expected to be considerably more expensive than for uranium fuels. In the case of U-Pu-Zr, a relatively small number of fuel pins have been irradiated to high burnup, and in no case has a full assembly been irradiated to high burnup without disassembly and re-constitution. For MOX fuel, the irradiation database from the Fast Flux Test Facility (FFTF) is extensive. If a significant source of either weapons-grade or reactor-grade Pu became available (i.e., from an international source), a startup core based on Pu could be reconsidered.

  8. Multi-core processing and scheduling performance in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, J. M.; Evans, D.; Foulkes, S.

    2012-01-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to standard single-core processing workflows.

  9. Multi-core processing and scheduling performance in CMS

    NASA Astrophysics Data System (ADS)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to standard single-core processing workflows.

  10. De novo design of the hydrophobic core of ubiquitin.

    PubMed Central

    Lazar, G. A.; Desjarlais, J. R.; Handel, T. M.

    1997-01-01

    We have previously reported the development and evaluation of a computational program to assist in the design of hydrophobic cores of proteins. In an effort to investigate the role of core packing in protein structure, we have used this program, referred to as Repacking of Cores (ROC), to design several variants of the protein ubiquitin. Nine ubiquitin variants containing from three to eight hydrophobic core mutations were constructed, purified, and characterized in terms of their stability and their ability to adopt a uniquely folded native-like conformation. In general, designed ubiquitin variants are more stable than control variants in which the hydrophobic core was chosen randomly. However, in contrast to previous results with 434 cro, all designs are destabilized relative to the wild-type (WT) protein. This raises the possibility that beta-sheet structures have more stringent packing requirements than alpha-helical proteins. A more striking observation is that all variants, including random controls, adopt fairly well-defined conformations, regardless of their stability. This result supports conclusions from the cro studies that non-core residues contribute significantly to the conformational uniqueness of these proteins while core packing largely affects protein stability and has less impact on the nature or uniqueness of the fold. Concurrent with the above work, we used stability data on the nine ubiquitin variants to evaluate and improve the predictive ability of our core packing algorithm. Additional versions of the program were generated that differ in potential function parameters and sampling of side chain conformers. Reasonable correlations between experimental and predicted stabilities suggest the program will be useful in future studies to design variants with stabilities closer to that of the native protein. Taken together, the present study provides further clarification of the role of specific packing interactions in protein structure and

  11. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE PAGES

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.; ...

    2018-03-01

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, the simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, compared to a non-error-correcting binary core.
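
    As an illustrative sketch (not the authors' microarchitecture) of the RRNS properties the abstract relies on (residue-wise, carry-free addition and multiplication, plus error detection via a redundant residue), here is a Python toy; the moduli are arbitrary small pairwise-coprime values:

      from math import prod

      MODULI = (3, 5, 7, 11)       # non-redundant base (pairwise coprime)
      REDUNDANT = (13,)            # extra residue carried only for checking
      ALL = MODULI + REDUNDANT

      def encode(x):
          return tuple(x % m for m in ALL)

      def add(a, b):               # closed under addition: no carries between digits
          return tuple((ai + bi) % m for ai, bi, m in zip(a, b, ALL))

      def mul(a, b):               # closed under multiplication, likewise digit-wise
          return tuple((ai * bi) % m for ai, bi, m in zip(a, b, ALL))

      def crt(residues, moduli):   # Chinese-remainder reconstruction
          M = prod(moduli)
          return sum(r * (M // m) * pow(M // m, -1, m)
                     for r, m in zip(residues, moduli)) % M

      def consistent(code):        # recompute the redundant residue from the base digits
          return code[-1] == crt(code[:len(MODULI)], MODULI) % REDUNDANT[0]

      a, b = encode(17), encode(23)
      s, p = add(a, b), mul(a, b)
      assert crt(s[:len(MODULI)], MODULI) == 40 and consistent(s)
      assert crt(p[:len(MODULI)], MODULI) == 391 and consistent(p)
      corrupted = (s[0] ^ 1,) + s[1:]   # flip a bit in one residue channel
      print("corruption detected:", not consistent(corrupted))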

  12. Extending Moore's Law via Computationally Error Tolerant Computing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Bobin; Srikanth, Sriseshan; Hein, Eric R.

    Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. Finally, the simulation results show that this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, compared to a non-error-correcting binary core.

  13. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphics processing unit (GPU) technology in hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd-order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment, in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
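
    The pipeline's GPU code is not reproduced here; as a minimal CPU sketch of the delta-basis kernel fit at the heart of OIS, the following solves for a spatially invariant convolution kernel by linear least squares (the paper's implementation uses a 2nd-order spatially varying kernel, and real data would add noise weighting and background terms):

      import numpy as np

      def fit_kernel(ref, sci, kw=2):
          # Solve ||ref (*) K - sci||^2 for a (2kw+1)x(2kw+1) delta-basis kernel K:
          # each kernel pixel is a free parameter, regressors are shifted copies of ref.
          cols = [np.roll(ref, (dy, dx), axis=(0, 1))[kw:-kw, kw:-kw].ravel()
                  for dy in range(-kw, kw + 1) for dx in range(-kw, kw + 1)]
          A = np.stack(cols, axis=1)
          b = sci[kw:-kw, kw:-kw].ravel()
          k, *_ = np.linalg.lstsq(A, b, rcond=None)
          return k.reshape(2 * kw + 1, 2 * kw + 1)

      rng = np.random.default_rng(1)
      ref = rng.random((64, 64))                        # reference image
      true_k = np.zeros((5, 5))
      true_k[2, 2], true_k[2, 3] = 0.6, 0.4             # known blur kernel for the test
      sci = sum(true_k[dy + 2, dx + 2] * np.roll(ref, (dy, dx), axis=(0, 1))
                for dy in range(-2, 3) for dx in range(-2, 3))
      print(np.round(fit_kernel(ref, sci), 2))          # recovers 0.6 and 0.4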

  14. Performance of the NASA Digitizing Core-Loss Instrumentation

    NASA Technical Reports Server (NTRS)

    Schwarze, Gene E. (Technical Monitor); Niedra, Janis M.

    2003-01-01

    The standard method of magnetic core loss measurement was implemented on a high frequency digitizing oscilloscope in order to explore the limits to accuracy when characterizing high-Q cores at frequencies up to 1 MHz. This method computes core loss from the cycle mean of the product of the exciting current in a primary winding and the induced voltage in a separate flux sensing winding. It is pointed out that even 20 percent accuracy for a core material with a Q of 100 requires a phase angle accuracy of 0.1 degree between the voltage and current measurements. Experiment shows that at 1 MHz, even high quality, high frequency current sensing transformers can introduce phase errors of a degree or more. Because the Q of some quasilinear core materials can exceed 300 at frequencies below 100 kHz, phase angle errors can be a problem even at 50 kHz. Hence great care is necessary with current sensing and ground loops when measuring high-Q cores. The best high frequency current sensing accuracy was obtained from a fabricated 0.1-ohm coaxial resistor, differentially sensed. Sample high frequency core loss data taken with the setup for a permeability-14 MPP core are presented.
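
    A short numeric check of the quoted sensitivity, computing the loss as the cycle mean of v(t)i(t) and perturbing the phase by 0.1 degree for a Q of 100 (idealized sinusoids; instrument effects are ignored):

      import numpy as np

      # Core loss as the cycle mean of v(t)*i(t), and its sensitivity to phase error.
      Q = 100.0
      delta = np.arctan(1.0 / Q)               # loss angle: v leads i by (90 deg - delta)
      t = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
      v = np.sin(t)                            # sense-winding voltage (normalized)
      i = np.sin(t - (np.pi / 2 - delta))      # exciting current, nearly 90 deg behind v
      p_true = np.mean(v * i)                  # true normalized core loss

      err = np.radians(0.1)                    # 0.1 degree instrumentation phase error
      i_meas = np.sin(t - (np.pi / 2 - delta) - err)
      p_meas = np.mean(v * i_meas)
      print(f"loss error from a 0.1 deg phase shift: {100 * (p_meas - p_true) / p_true:+.1f} %")

    The printed error is roughly -17%, which is why 20 percent loss accuracy at Q = 100 demands phase accuracy on the order of 0.1 degree.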

  15. Non-destructive Analysis of Oil-Contaminated Soil Core Samples by X-ray Computed Tomography and Low-Field Nuclear Magnetic Resonance Relaxometry: a Case Study

    PubMed Central

    Mitsuhata, Yuji; Nishiwaki, Junko; Kawabe, Yoshishige; Utsuzawa, Shin; Jinguuji, Motoharu

    2010-01-01

    Non-destructive measurements of contaminated soil core samples are desirable prior to destructive measurements because they allow gross information to be obtained from the core samples without handling harmful chemical species. Medical X-ray computed tomography (CT) and time-domain low-field nuclear magnetic resonance (NMR) relaxometry were applied to non-destructive measurements of sandy soil core samples from a real site contaminated with heavy oil. The medical CT visualized the spatial distribution of the bulk density averaged over a voxel of 0.31 × 0.31 × 2 mm^3. The obtained CT images clearly showed an increase in the bulk density with increasing depth. Coupled analysis with in situ time-domain reflectometry logging suggests that this increase derives from an increase in the water volume fraction of the soils with depth (i.e., an unsaturated-to-saturated transition). This was confirmed by supplementary analysis using high-resolution micro-focus X-ray CT at a resolution of ~10 μm, which directly imaged the increase in pore water with depth. NMR transverse relaxation waveforms of protons were acquired non-destructively at 2.7 MHz by the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence. The short transverse relaxation times (T2) of viscous petroleum molecules compared to water molecules enabled us to distinguish the water-saturated portion from the oil-contaminated portion of the core sample using an M0-T2 plot, where M0 is the initial amplitude of the CPMG signal. The present study demonstrates that non-destructive core measurements by medical X-ray CT and low-field NMR provide information on the groundwater saturation level and oil-contaminated intervals, which is useful for constructing an adequate plan for subsequent destructive laboratory measurements of cores. PMID:21258437
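
    As a hedged sketch of the M0-T2 analysis, the following fits the CPMG decay model M(t) = M0 exp(-t/T2) to synthetic echo trains; the T2 values and noise level are illustrative assumptions, chosen only so that the oil-like signal decays much faster than the water-like one:

      import numpy as np
      from scipy.optimize import curve_fit

      def cpmg(t, m0, t2):
          """CPMG transverse relaxation model M(t) = M0 * exp(-t / T2)."""
          return m0 * np.exp(-t / t2)

      t = np.arange(1, 200) * 1e-3                          # echo times (s)
      rng = np.random.default_rng(2)
      for label, t2_true in (("water-saturated", 0.150),    # long T2 (assumed)
                             ("oil-contaminated", 0.015)):  # short T2 (assumed)
          sig = cpmg(t, 1.0, t2_true) + 0.01 * rng.normal(size=t.size)
          (m0, t2), _ = curve_fit(cpmg, t, sig, p0=(1.0, 0.05))
          print(f"{label}: M0 = {m0:.2f}, T2 = {1e3 * t2:.1f} ms")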

  16. HB-LINE ANION EXCHANGE PURIFICATION OF AFS-2 PLUTONIUM FOR MOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyser, E. A.; King, W. D.

    2012-07-31

    Non-radioactive cerium (Ce) and radioactive plutonium (Pu) anion exchange column experiments using scaled HB-Line designs were performed to investigate the feasibility of using either gadolinium nitrate (Gd) or boric acid (B as H3BO3) as a neutron poison in the H-Canyon dissolution process. Expected typical concentrations of probable impurities were tested and the removal of these impurities by a decontamination wash was measured. Impurity concentrations are compared to two specifications - designated as Column A or Column B (most restrictive) - proposed for plutonium oxide (PuO2) product shipped to the Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF). Use of Gd as a neutron poison requires a larger volume of wash for the proposed Column A specification. Since boron (B) has a higher proposed specification and is more easily removed by washing, it appears to be the better candidate for use in the H-Canyon dissolution process. Some difficulty was observed in achieving the Column A specification due to the limited effectiveness of the wash step in removing residual B after ~4 BV's of wash. However, a combination of the experimental 10 BV's wash results and a calculated DF from the oxalate precipitation process yields an overall DF sufficient to meet the Column A specification. For those impurities (other than B) not removed by 10 BV's of wash, the impurity is either not expected to be present in the feedstock or process, or recommendations have been provided for improvement in the analytical detection/method or validation of calculated results. In summary, boron is recommended as the appropriate neutron poison for H-Canyon dissolution, and impurities are expected to meet the Column A specification limits for oxide production in HB-Line.

  17. HB-LINE ANION EXCHANGE PURIFICATION OF AFS-2 PLUTONIUM FOR MOX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyser, E.; King, W.

    2012-04-25

    Non-radioactive cerium (Ce) and radioactive plutonium (Pu) anion exchange column experiments using scaled HB-Line designs were performed to investigate the feasibility of using either gadolinium nitrate (Gd) or boric acid (B as H3BO3) as a neutron poison in the H-Canyon dissolution process. Expected typical concentrations of probable impurities were tested and the removal of these impurities by a decontamination wash was measured. Impurity concentrations are compared to two specifications - designated as Column A or Column B (most restrictive) - proposed for plutonium oxide (PuO2) product shipped to the Mixed Oxide (MOX) Fuel Fabrication Facility (MFFF). Use of Gd as a neutron poison requires a larger volume of wash for the proposed Column A specification. Since boron (B) has a higher proposed specification and is more easily removed by washing, it appears to be the better candidate for use in the H-Canyon dissolution process. Some difficulty was observed in achieving the Column A specification due to the limited effectiveness of the wash step in removing residual B after ~4 BV's of wash. However, a combination of the experimental 10 BV's wash results and a calculated DF from the oxalate precipitation process yields an overall DF sufficient to meet the Column A specification. For those impurities (other than B) not removed by 10 BV's of wash, the impurity is either not expected to be present in the feedstock or process, or recommendations have been provided for improvement in the analytical detection/method or validation of calculated results. In summary, boron is recommended as the appropriate neutron poison for H-Canyon dissolution, and impurities are expected to meet the Column A specification limits for oxide production in HB-Line.

  18. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
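
    Not the XSUSA code itself, but a toy illustration of the stochastic sampling step: cross sections are drawn from an assumed covariance matrix and pushed through a stand-in keff model, and the spread of the outputs is the propagated uncertainty; all numbers are made up.

      import numpy as np

      rng = np.random.default_rng(3)
      mean = np.array([0.010, 0.012])                  # [Sigma_a, nu*Sigma_f], illustrative
      cov = np.array([[1e-8, 2e-9],                    # assumed cross-section covariance
                      [2e-9, 2e-8]])

      def keff(sig_a, nu_sig_f, leakage=1e-3):         # stand-in for a core simulator run
          return nu_sig_f / (sig_a + leakage)

      samples = rng.multivariate_normal(mean, cov, size=1000)
      k = keff(samples[:, 0], samples[:, 1])
      print(f"k_eff = {k.mean():.4f} +/- {k.std():.4f} (1-sigma from sampling)")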

  19. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare the effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  20. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  1. Core Collapse: The Race Between Stellar Evolution and Binary Heating

    NASA Astrophysics Data System (ADS)

    Converse, Joseph M.; Chandar, R.

    2012-01-01

    The dynamical formation of binary stars can dramatically affect the evolution of their host star clusters. In relatively small clusters (M < 6000 Msun) the most massive stars rapidly form binaries, heating the cluster and preventing any significant contraction of the core. The situation in much larger globular clusters (M ~ 10^5 Msun) is quite different, with many showing collapsed cores, implying that binary formation did not affect them as severely as lower mass clusters. More massive clusters, however, should take longer to form their binaries, allowing stellar evolution more time to prevent the heating as the larger stars die off. Here, we simulate the evolution of clusters intermediate between open and globular clusters in order to find at what size a star cluster is able to experience true core collapse. Our simulations make use of a new GPU-based computing cluster recently purchased at the University of Toledo. We also present some benchmarks of this new computational resource.

  2. Initial results on computational performance of Intel Many Integrated Core (MIC) architecture: implementation of the Weather and Research Forecasting (WRF) Purdue-Lin microphysics scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Purdue-Lin scheme is a relatively sophisticated microphysics scheme in the Weather Research and Forecasting (WRF) model. The scheme includes six classes of hydrometeors: water vapor, cloud water, rain, cloud ice, snow and graupel. The scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. In this paper, we accelerate the Purdue-Lin scheme using Intel Many Integrated Core (MIC) architecture hardware. The Intel Xeon Phi is a high performance coprocessor consisting of up to 61 cores, connected to a CPU via the PCI Express (PCIe) bus. We discuss in detail the code optimization issues encountered while tuning the Purdue-Lin microphysics Fortran code for the Xeon Phi. In particular, achieving good performance required utilizing multiple cores and the wide vector operations, and making efficient use of memory. The results show that the optimizations improved performance of the original code on a Xeon Phi 5110P by a factor of 4.2x. Furthermore, the same optimizations improved performance on an Intel Xeon E5-2603 CPU by a factor of 1.2x compared to the original code.
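
    A hedged Python analogue of the column independence that makes the scheme parallelize and vectorize well: each horizontal grid point's update touches only its own column, so the point-by-point loop can be replaced by a whole-array operation (the "microphysics" update below is a made-up stand-in, not Purdue-Lin).

      import numpy as np
      from time import perf_counter

      nz, ny, nx = 40, 200, 200                          # levels, horizontal grid points
      q = np.random.default_rng(4).random((nz, ny, nx))  # a hydrometeor mixing-ratio field

      def update_column(col):                            # stand-in microphysics, one column
          return np.clip(col - 0.1 * col**2, 0.0, None)

      t0 = perf_counter()                                # explicit loop over grid points
      out_loop = np.empty_like(q)
      for j in range(ny):
          for i in range(nx):
              out_loop[:, j, i] = update_column(q[:, j, i])
      t1 = perf_counter()
      out_vec = update_column(q)                         # whole-array (vectorized) form
      t2 = perf_counter()

      assert np.allclose(out_loop, out_vec)              # identical answers, columns independent
      print(f"loop: {t1 - t0:.3f} s, vectorized: {t2 - t1:.3f} s")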

  3. Enhanced efficiency of luminescence with stoichiometry control in LiGd(W(1-x)MoxO4)2:Eu3+ red phosphors

    NASA Astrophysics Data System (ADS)

    Kavi Rasu, K.; Balaji, D.; Moorthy Babu, S.

    2017-06-01

    A series of LiGd(W(1-x)MoxO4)2 [hereafter LGWM]:Eu3+ (x=0.00 to 1.00) red-emitting phosphors was synthesized by the sol-gel method. Metal nitrates were used as starting materials, with citric acid as chelator and ethylene glycol as binder. The synthesized gel was pre-fired at 523 K and calcined at 1073 K in a resistive furnace in an air atmosphere. The crystallinity, surface morphology and luminescent properties of the phosphors were investigated using powder X-ray diffraction (XRD), scanning electron microscopy (SEM) and fluorescence spectrophotometry, respectively. The intensity of the red emission at 615 nm for the 5D0→7F2 electric dipole transition increased as the Mo6+ content was increased, reaching a maximum under 396 nm excitation when the relative W/Mo ratio is 1:1.

  4. The Core Avionics System for the DLR Compact-Satellite Series

    NASA Astrophysics Data System (ADS)

    Montenegro, S.; Dittrich, L.

    2008-08-01

    The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector mission (BIRD). The next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept were already implemented, simulated and tested in other ESA and industrial projects, so the basic concept can be considered proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing core avionics system of BIRD to meet current and future requirements regarding the flexibility, availability and reliability of small satellites and the continuously increasing demand for mass memory and computational power.

  5. Using Multi-Core Systems for Rover Autonomy

    NASA Technical Reports Server (NTRS)

    Clement, Brad; Estlin, Tara; Bornstein, Benjamin; Springer, Paul; Anderson, Robert C.

    2010-01-01

    The task objectives are: (1) develop and demonstrate key capabilities for rover long-range science operations using multi-core computing: (a) adapt three rover technologies to execute on a state-of-the-art multi-core processor, (b) illustrate the performance improvements achieved, and (c) demonstrate the adapted capabilities with rover hardware; (2) target three high-level autonomy technologies: (a) two for onboard data analysis and (b) one for onboard command sequencing/planning; (3) these technologies have been identified as enabling for future missions; and (4) benefits will be measured along several metrics: (a) execution time / power requirements, (b) number of data products processed per unit time, and (c) solution quality.

  6. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. The simulation results indicate that when the number of cores used is not a multiple of the number of cores per cluster node, some allocation strategies provide more efficient calculations than others.

  7. The new landscape of parallel computer architecture

    NASA Astrophysics Data System (ADS)

    Shalf, John

    2007-07-01

    The past few years have seen a sea change in computer architecture that will impact every facet of our society, as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  8. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which is highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper presents an optimized reconstruction algorithm using OpenMP parallel computing, which was further applied to the optimization of processing for the HyperSpectral Imager of the Chinese 'HJ-1' satellite. The results show that the method based on multi-core parallel computing can manage the multi-core CPU hardware resources competently and significantly enhance the efficiency of spectrum reconstruction processing. If the technique is applied to workstations with more cores, it will be possible to complete real-time processing of Fourier transform imaging spectrometer data with a single computer.
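
    Not the paper's OpenMP code; a Python sketch of the same idea, distributing per-pixel spectrum recovery (an FFT of each pixel's interferogram) across CPU cores with multiprocessing; the array sizes are arbitrary.

      import numpy as np
      from multiprocessing import Pool

      def reconstruct(interferogram):
          """Recover one pixel's spectrum as the magnitude of its interferogram's FFT."""
          return np.abs(np.fft.rfft(interferogram))

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          cube = rng.random((10000, 256))       # 10000 pixels x 256 OPD samples (synthetic)
          with Pool() as pool:                  # one worker per CPU core by default
              spectra = pool.map(reconstruct, cube, chunksize=500)
          print(np.stack(spectra).shape)        # (10000, 129) reconstructed spectra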

  9. The ab initio simulation of the Earth's core.

    PubMed

    Alfè, D; Gillan, M J; Vocadlo, L; Brodholt, J; Price, G D

    2002-06-15

    The Earth has a liquid outer and solid inner core. It is predominantly composed of Fe, alloyed with small amounts of light elements, such as S, O and Si. The detailed chemical and thermal structure of the core is poorly constrained, and it is difficult to perform experiments to establish the properties of core-forming phases at the pressures (ca. 300 GPa) and temperatures (ca. 5000-6000 K) to be found in the core. Here we present some major advances that have been made in using quantum mechanical methods to simulate the high-P/T properties of Fe alloys, which have been made possible by recent developments in high-performance computing. Specifically, we outline how we have calculated the Gibbs free energies of the crystalline and liquid forms of Fe alloys, and so conclude that the inner core of the Earth is composed of hexagonal close packed Fe containing ca. 8.5% S (or Si) and 0.2% O in equilibrium at 5600 K at the boundary between the inner and outer cores with a liquid Fe containing ca. 10% S (or Si) and 8% O.

  10. Replication of Space-Shuttle Computers in FPGAs and ASICs

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.

    2008-01-01

    A document discusses the replication of the functionality of the onboard space-shuttle general-purpose computers (GPCs) in field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The purpose of the replication effort is to enable utilization of proven space-shuttle flight software and software-development facilities to the extent possible during development of software for flight computers for a new generation of launch vehicles derived from the space shuttles. The replication involves specifying the instruction set of the central processing unit and the input/output processor (IOP) of the space-shuttle GPC in a hardware description language (HDL). The HDL is synthesized to form a "core" processor in an FPGA or, less preferably, in an ASIC. The core processor can be used to create a flight-control card to be inserted into a new avionics computer. The IOP of the GPC as implemented in the core processor could be designed to support data-bus protocols other than that of a multiplexer interface adapter (MIA) used in the space shuttle. Hence, a computer containing the core processor could be tailored to communicate via the space-shuttle GPC bus and/or one or more other buses.

  11. Has First-Grade Core Reading Program Text Complexity Changed across Six Decades?

    ERIC Educational Resources Information Center

    Fitzgerald, Jill; Elmore, Jeff; Relyea, Jackie Eunjung; Hiebert, Elfrieda H.; Stenner, A. Jackson

    2016-01-01

    The purpose of the study was to address possible text complexity shifts across the past six decades for a continually best-selling first-grade core reading program. The anthologies of one publisher's seven first-grade core reading programs were examined using computer-based analytics, dating from 1962 to 2013. Variables were Overall Text…

  12. Variable stiffness sandwich panels using electrostatic interlocking core

    NASA Astrophysics Data System (ADS)

    Heath, Callum J. C.; Bond, Ian P.; Potter, Kevin D.

    2016-04-01

    Structural topology has a large impact on the flexural stiffness of a beam structure. Reversible attachment between discrete substructures allows for control of shear stress transfer between structural elements, thus stiffness modulation. Electrostatic adhesion has shown promise for providing a reversible latching mechanism for controllable internal connectivity. Building on previous research, a thin film copper polyimide laminate has been used to incorporate high voltage electrodes to Fibre Reinforced Polymer (FRP) sandwich structures. The level of electrostatic holding force across the electrode interface is key to the achievable level of stiffness modulation. The use of non-flat interlocking core structures can allow for a significant increase in electrode contact area for a given core geometry, thus a greater electrostatic holding force. Interlocking core geometries based on cosine waves can be Computer Numerical Control (CNC) machined from Rohacell IGF 110 Foam core. These Interlocking Core structures could allow for enhanced variable stiffness functionality compared to basic planar electrodes. This novel concept could open up potential new applications for electrostatically induced variable stiffness structures.

  13. 2nd Generation QUATARA Flight Computer Project

    NASA Technical Reports Server (NTRS)

    Falker, Jay; Keys, Andrew; Fraticelli, Jose Molina; Capo-Iugo, Pedro; Peeples, Steven

    2015-01-01

    Single-core flight computer boards have been designed, developed, and tested (DD&T) to be flown in small satellites for the last few years. In this project, a prototype flight computer will be designed as a distributed multi-core system containing four microprocessors running code in parallel. This flight computer will be capable of performing multiple computationally intensive tasks such as processing digital and/or analog data, controlling actuator systems, managing cameras, operating robotic manipulators and transmitting/receiving from/to a ground station. In addition, this flight computer will be designed to be fault tolerant by creating both a robust physical hardware connection and by using a software voting scheme to determine the processors' performance. This voting scheme will leverage the work done for the Space Launch System (SLS) flight software. The prototype flight computer will be constructed with Commercial Off-The-Shelf (COTS) components which are estimated to survive for two years in a low-Earth orbit.
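
    The software voting scheme mentioned above reduces to a simple idea: compare the redundant processors' outputs and accept the majority value. The sketch below is a minimal illustration of that idea (hypothetical interface, not the SLS flight software):

      # Sketch of a majority-vote check over redundant processor outputs.
      from collections import Counter

      def vote(outputs):
          """Return (value, healthy): the majority output and a per-processor
          flag marking which processors agreed with the majority."""
          value, count = Counter(outputs).most_common(1)[0]
          if count * 2 <= len(outputs):
              raise RuntimeError("no majority: %r" % (outputs,))
          healthy = [out == value for out in outputs]
          return value, healthy

      # Four redundant processors, one faulty:
      print(vote([42, 42, 41, 42]))  # (42, [True, True, False, True])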

  14. Campus Computing, 1998. The Ninth National Survey of Desktop Computing and Information Technology in American Higher Education.

    ERIC Educational Resources Information Center

    Green, Kenneth C.

    This report presents findings of a June 1998 survey of computing officials at 1,623 two- and four-year U.S. colleges and universities concerning the use of computer technology. The survey found that computing and information technology (IT) are now core components of the campus environment and classroom experience. However, key aspects of IT…

  15. Computer Based Satellite Design

    DTIC Science & Technology

    1992-06-01

    (Abstract not recoverable: the scanned text consists of report-documentation-page residue and a fragment of Ada source defining cell dimensions, e.g. CELL_WIDTH := 2.0; -- cm, CELL_LENGTH := 2.0; -- cm, CELL_THICKNESS := 0.02; -- cm.)

  16. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture, and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and the consumed watts.
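
    One common starting point for such a parallelization strategy is a static tiling of each payload image across the available cores. The sketch below (invented image size, assuming a 6 x 6 grid for a 36-core device) computes the per-core tile bounds:

      # Sketch: static 6x6 tiling of an image across 36 cores.
      def tiles(height, width, grid=6):
          for i in range(grid):
              for j in range(grid):
                  r0, r1 = i * height // grid, (i + 1) * height // grid
                  c0, c1 = j * width // grid, (j + 1) * width // grid
                  yield (r0, r1, c0, c1)   # row/column bounds for one core

      for core_id, t in enumerate(tiles(1080, 1920)):
          if core_id < 2:
              print("core", core_id, "tile", t)  # core 0 tile (0, 180, 0, 320) ...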

  17. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can finish a great amount of computation in a few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  18. Featured Image: The Simulated Collapse of a Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-11-01

    This stunning snapshot (click for a closer look!) is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al 2016 ApJ 831 98. doi:10.3847/0004-637X/831/1/98

  19. Event Reconstruction for Many-core Architectures using Java

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Norman A.; /SLAC

    Although Moore's Law remains technically valid, the performance enhancements in computing which traditionally resulted from increased CPU speeds ended years ago. Chip manufacturers have chosen to increase the number of core CPUs per chip instead of increasing clock speed. Unfortunately, these extra CPUs do not automatically result in improvements in simulation or reconstruction times. To take advantage of this extra computing power requires changing how software is written. Event reconstruction is globally serial, in the sense that raw data has to be unpacked first, channels have to be clustered to produce hits before those hits are identified as belonging to a track or shower, tracks have to be found and fit before they are vertexed, etc. However, many of the individual procedures along the reconstruction chain are intrinsically independent and are perfect candidates for optimization using multi-core architecture. Threading is perhaps the simplest approach to parallelizing a program and Java includes a powerful threading facility built into the language. We have developed a fast and flexible reconstruction package (org.lcsim) written in Java that has been used for numerous physics and detector optimization studies. In this paper we present the results of our studies on optimizing the performance of this toolkit using multiple threads on many-core architectures.
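
    The package described is written in Java; as a hedged, language-neutral illustration of the same pattern, the Python sketch below runs an invented per-subdetector clustering step concurrently, mirroring how the intrinsically independent reconstruction procedures can be threaded before the globally serial stages (note that in CPython, CPU-bound work would need processes rather than threads to run truly in parallel):

      # Sketch: independent per-subdetector clustering steps run concurrently.
      from concurrent.futures import ThreadPoolExecutor

      def cluster_hits(channels):
          # Placeholder clustering: group runs of adjacent fired channels.
          clusters, current = [], []
          for ch in sorted(channels):
              if current and ch != current[-1] + 1:
                  clusters.append(current)
                  current = []
              current.append(ch)
          if current:
              clusters.append(current)
          return clusters

      subdetectors = [[1, 2, 7], [4, 5, 6, 10], [3, 8, 9]]  # invented raw data
      with ThreadPoolExecutor() as pool:
          hits = list(pool.map(cluster_hits, subdetectors))
      print(hits)  # [[[1, 2], [7]], [[4, 5, 6], [10]], [[3], [8, 9]]]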

  20. BNL program in support of LWR degraded-core accident analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginsberg, T.; Greene, G.A.

    1982-01-01

    Two major sources of loading on dry water reactor containments are steam generation from core debris-water thermal interactions and molten core-concrete interactions. Experiments are in progress at BNL in support of analytical model development related to aspects of the above containment loading mechanisms. The work supports development and evaluation of the CORCON (Muir, 1981) and MARCH (Wooton, 1980) computer codes. Progress in the two programs is described in this paper. 8 figures.

  1. Release and disposal of materials during decommissioning of Siemens MOX fuel fabrication plant at Hanau, Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koenig, Werner; Baumann, Roland

    2007-07-01

    In September 2006, decommissioning and dismantling of the Siemens MOX Fuel Fabrication Plant in Hanau were completed. The process equipment and the fabrication buildings were completely decommissioned and dismantled. The other buildings were emptied in whole or in part, although they were not demolished. Overall, the decommissioning process produced approximately 8500 Mg of radioactive waste (including inactive matrix material); clearance measurements were also performed for approximately 5400 Mg of material covering a wide range of types. All the equipment in which nuclear fuels had been handled was disposed of as radioactive waste. The radioactive waste was conditioned on the basis of the requirements specified for the projected German final disposal site 'Schachtanlage Konrad'. During the pre-conditioning, familiar processes such as incineration, compacting and melting were used. It has been shown that on account of consistently applied activity containment (barrier concept) during operation and dismantling, there has been no significant unexpected contamination of the plant. Therefore almost all the materials that were not a priori destined for radioactive waste were released without restriction on the basis of the applicable legal regulations (chap. 29 of the Radiation Protection Ordinance), along with the buildings and the plant site. (authors)

  2. Statistics Online Computational Resource for Education

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  3. A New Dynamical Core Based on the Prediction of the Curl of the Horizontal Vorticity

    NASA Astrophysics Data System (ADS)

    Konor, C. S.; Randall, D. A.; Heikes, R. P.

    2015-12-01

    The Vector-Vorticity dynamical core (VVM) developed by Jung and Arakawa (2008) has important advantages for use with the anelastic and unified systems of equations. The VVM predicts the horizontal vorticity vector (HVV) at each interface and the vertical vorticity at the top layer of the model. To guarantee that the three-dimensional vorticity is nondivergent, the vertical vorticity at the interior layers is diagnosed from the horizontal divergence of the HVV through a vertical integral from the top downward. To our knowledge, this is the only dynamical core that guarantees the nondivergence of the three-dimensional vorticity. The VVM uses a C-type horizontal grid, which allows a computational mode. While the computational mode does not seem to be serious in Cartesian grid applications, it may be serious in icosahedral grid applications because of the extra degree of freedom in such grids. Although there are special filters to minimize the effects of this computational mode, we prefer to eliminate it altogether. We have developed a new dynamical core, which uses a Z-grid to avoid the computational mode mentioned above. The dynamical core predicts the curl of the HVV and diagnoses the horizontal divergence of the HVV from the predicted vertical vorticity. The three-dimensional vorticity is guaranteed to be nondivergent as in the VVM. In this presentation, we will introduce the new dynamical core and show results obtained using Cartesian and hexagonal grids. We will also compare the solutions to those obtained by the VVM.
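
    The top-down vertical integral can be sketched in a few lines: since the three-dimensional vorticity must be nondivergent, the vertical derivative of the vertical vorticity equals minus the horizontal divergence of the HVV, so interior values follow by accumulating that divergence downward from the predicted top value. The following is an illustrative discretization with invented values, not the VVM code:

      # Sketch: diagnose interior vertical vorticity from the horizontal
      # divergence of the HVV, integrating downward from the model top so
      # that the three-dimensional vorticity stays nondivergent.
      import numpy as np

      def vertical_vorticity(zeta_top, div_hvv, dz):
          # d(zeta)/dz = -div_hvv, so stepping downward from the top adds
          # div_hvv * dz at each interior layer (top layer listed first).
          return zeta_top + np.cumsum(div_hvv * dz)

      # Invented values: top vorticity 1e-4 s^-1, three layers 100 m thick.
      print(vertical_vorticity(1e-4, np.array([0.0, 2e-8, -1e-8]), dz=100.0))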

  4. Stability Estimation of ABWR on the Basis of Noise Analysis

    NASA Astrophysics Data System (ADS)

    Furuya, Masahiro; Fukahori, Takanori; Mizokami, Shinya; Yokoya, Jun

    In order to investigate the stability of a nuclear reactor core loaded with an oxide mixture of uranium and plutonium (MOX) fuel, channel stability and regional stability tests were conducted with the SIRIUS-F facility. The SIRIUS-F facility was designed and constructed to provide a highly accurate simulation of thermal-hydraulic (channel) instabilities and coupled thermal-hydraulics-neutronics instabilities of Advanced Boiling Water Reactors (ABWRs). A real-time simulation was performed by modal point kinetics of reactor neutronics and fuel-rod thermal conduction on the basis of a measured void fraction in a reactor core section of the facility. A time-series analysis was performed to calculate the decay ratio and resonance frequency from the dominant pole of a transfer function by applying autoregressive (AR) methods to the time series of the core inlet flow rate. Experiments were conducted with the SIRIUS-F facility, which simulates an ABWR loaded with MOX fuel. The variations in the decay ratio and resonance frequency among the five common AR methods are within 0.03 and 0.01 Hz, respectively. In this system, the appropriate decay ratio and resonance frequency can be estimated on the basis of the Yule-Walker method with a model order of 30.
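
    As a hedged illustration of the dominant-pole analysis described above (synthetic data, not SIRIUS-F measurements), the sketch below fits Yule-Walker AR coefficients to a flow-rate-like series, takes the dominant oscillatory pole of the fitted model, and converts it to a decay ratio and resonance frequency:

      # Sketch: decay ratio and resonance frequency from the dominant AR pole.
      import numpy as np
      from scipy.linalg import solve_toeplitz

      def ar_decay_ratio(x, order, dt):
          x = x - x.mean()
          # Yule-Walker: solve the Toeplitz autocorrelation system for AR coefficients.
          r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)
          a = solve_toeplitz(r[:order], r[1:order + 1])
          # Poles of the fitted model are roots of the AR characteristic polynomial.
          poles = np.roots(np.r_[1.0, -a])
          z = max((p for p in poles if p.imag > 0), key=abs)  # dominant oscillatory pole
          theta = np.angle(z)                                 # radians per sample
          decay_ratio = abs(z) ** (2 * np.pi / theta)         # amplitude ratio per cycle
          frequency = theta / (2 * np.pi * dt)                # resonance frequency, Hz
          return decay_ratio, frequency

      t = np.arange(0, 60, 0.05)                # synthetic decaying oscillation
      x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.5 * t) + 0.01 * np.random.randn(len(t))
      print(ar_decay_ratio(x, order=30, dt=0.05))  # decay ratio ~0.82, frequency ~0.5 Hz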

  5. Computer-Assisted Exposure Treatment for Flight Phobia

    ERIC Educational Resources Information Center

    Tortella-Feliu, Miguel; Bornas, Xavier; Llabres, Jordi

    2008-01-01

    This review introduces the state of the art in computer-assisted treatment for behavioural disorders. The core of the paper is devoted to describing one of these interventions providing computer-assisted exposure for flight phobia treatment, the Computer-Assisted Fear of Flying Treatment (CAFFT). The rationale, contents and structure of the CAFFT…

  6. Investigation of the Performance of D 2O-Cooled High-Conversion Reactors for Fuel Cycle Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiruta, Hikaru; Youinou, Gilles

    2013-09-01

    This report presents FY13 activities for the analysis of D2O-cooled tight-pitch high-conversion PWRs (HCPWRs) with U-Pu and Th-U fueled cores, aiming at break-even or near-breeder conditions while retaining negative void reactivity. The analyses are carried out from several aspects that could not be covered in the FY12 activities. The SCALE 6.1 code system is utilized, and a series of simple 3D fuel pin-cell models are developed in order to perform Monte Carlo based criticality and burnup calculations. The performance of U-Pu fueled cores with axial and internal blankets is analyzed in terms of their impact on the relative fissile Pu mass balance, initial Pu enrichment, and void coefficient. In FY12, the Pu conversion performance of D2O-cooled HCPWRs fueled with MOX was evaluated with small axial/internal DU blankets (approximately 4 cm of axial length) in order to ensure negative void reactivity, which evidently limits the conversion performance of HCPWRs. In this fiscal year report, the axial sizes of the DU blankets are extended up to 30 cm in order to evaluate the amount of DU necessary to reach break-even and/or breeding conditions. Several attempts are made to attain the milestone of the HCPWR designs (i.e., break-even condition and negative void reactivity) by modeling HCPWRs under different conditions, such as boiling of the D2O coolant, MOX with different 235U enrichment, and different target burnups. A similar set of analyses is performed for Th-U fueled cores. Several promising characteristics of 233U over other fissile isotopes such as 239Pu and 235U, most notably its higher fission neutron yield per absorption in the thermal and epithermal ranges combined with a lower ___ in the fast range than 239Pu, allow Th-U cores to be taller than MOX ones. Such an advantage results in a 4% higher relative fissile mass balance than that of U-Pu fueled cores while retaining negative void reactivity until the target burnup of 51 GWd/t. Several other distinctions

  7. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    (Abstract not recoverable: the scanned text contains only figure-caption residue describing (a) a plot of observed versus estimated core temperature with the line of identity (dashed), the least-squares regression line (solid), and its equation, and (b) a corresponding plot of estimated PSI and a Bland-Altman comparison; the root mean squared error (RMSE) was also computed, as given by Equation 2.)

  8. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" on Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data and nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized, and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. The computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan is a Cray XK7 system, capable of a theoretical peak performance of over 27 PFlop/s; it consists of 18,688 compute nodes, with a NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU in every node, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  9. Computer Training for Staff and Patrons.

    ERIC Educational Resources Information Center

    Krissoff, Alan; Konrad, Lee

    1998-01-01

    Describes a pilot computer training program for library staff and patrons at the University of Wisconsin-Madison. Reviews components of effective training programs and highlights core computer competencies: operating systems, hardware and software basics and troubleshooting, and search concepts and techniques. Includes an instructional outline and…

  10. Defining Computational Thinking for Mathematics and Science Classrooms

    ERIC Educational Resources Information Center

    Weintrop, David; Beheshti, Elham; Horn, Michael; Orton, Kai; Jona, Kemi; Trouille, Laura; Wilensky, Uri

    2016-01-01

    Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include "computational thinking" as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new…

  11. Diametral and compressive strength of dental core materials.

    PubMed

    Cho, G C; Kaneko, L M; Donovan, T E; White, S N

    1999-09-01

    Strength greatly influences the selection of core materials. Many disparate material types are now recommended for use as cores. Cores must withstand forces due to mastication and parafunction for many years. This study compared the compressive and diametral tensile strengths of 8 core materials of various material classes and formulations (light-cured hybrid composite, autocured titanium containing composite, amalgam, glass ionomer, glass ionomer cermet, resin-modified glass ionomer, and polyurethane). Materials were manipulated according to manufacturers' instructions for use as cores. Mean compressive and diametral strengths with associated standard errors were calculated for each material (n = 10). Analyses of variance were computed (P <.0001) and multiple comparisons tests discerned many differences among materials. Compressive strengths varied widely from 61.1 MPa for a polyurethane to 250 MPa for a resin composite. Diametral tensile strengths ranged widely from 18.3 MPa for a glass ionomer cermet to 55.1 MPa for a resin composite. Some resin composites had compressive and tensile strengths equal to those of amalgam. Light-cured hybrid resin composites were stronger than autocured titanium containing composites. The strengths of glass ionomer-based materials and of a polyurethane material were considerably lower than for resin composites or amalgam.

  12. Computational Psychosomatics and Computational Psychiatry: Toward a Joint Framework for Differential Diagnosis.

    PubMed

    Petzschner, Frederike H; Weber, Lilian A E; Gard, Tim; Stephan, Klaas E

    2017-09-15

    This article outlines how a core concept from theories of homeostasis and cybernetics, the inference-control loop, may be used to guide differential diagnosis in computational psychiatry and computational psychosomatics. In particular, we discuss 1) how conceptualizing perception and action as inference-control loops yields a joint computational perspective on brain-world and brain-body interactions and 2) how the concrete formulation of this loop as a hierarchical Bayesian model points to key computational quantities that inform a taxonomy of potential disease mechanisms. We consider the utility of this perspective for differential diagnosis in concrete clinical applications. Copyright © 2017 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores are expected to be used as processing nodes of wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053

  14. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg; Devin A. Steuhm

    2011-09-01

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance and, to some extent, experiment management are obsolete, inconsistent with the state of modern nuclear engineering practice, and are becoming increasingly difficult to properly verify and validate (V&V). Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In 2009 the Idaho National Laboratory (INL) initiated a focused effort to address this situation through the introduction of modern high-fidelity computational software and protocols, with appropriate V&V, within the next 3-4 years via the ATR Core Modeling and Simulation and V&V Update (or 'Core Modeling Update') Project. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the anticipated ATR Core Internals Changeout (CIC) in the 2014 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its first full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (SCALE, KENO-6, HELIOS, NEWT, and ATTILA) have been installed at the INL under various permanent sitewide license agreements and corresponding baseline models of the ATR and ATRC are now operational, demonstrating the basic feasibility of these code packages for their intended purpose. Furthermore

  15. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds for utilizing a cluster of Intel's many-integrated-core (MIC) processors or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be the better solution when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs can be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  16. Physiological gas exchange mapping of hyperpolarized 129 Xe using spiral-IDEAL and MOXE in a model of regional radiation-induced lung injury.

    PubMed

    Zanette, Brandon; Stirrat, Elaine; Jelveh, Salomeh; Hope, Andrew; Santyr, Giles

    2018-02-01

    To map physiological gas exchange parameters using dissolved hyperpolarized (HP) 129 Xe in a rat model of regional radiation-induced lung injury (RILI) with spiral-IDEAL and the model of xenon exchange (MOXE). Results are compared to quantitative histology of pulmonary tissue and red blood cell (RBC) distribution. Two cohorts (n = 6 each) of age-matched rats were used. One was irradiated in the right-medial lung, producing regional injury. Gas exchange was mapped 4 weeks postirradiation by imaging dissolved-phase HP 129 Xe using spiral-IDEAL at five gas exchange timepoints using a clinical 1.5 T scanner. Physiological lung parameters were extracted regionally on a voxel-wise basis using MOXE. Mean gas exchange parameters, specifically air-capillary barrier thickness (δ) and hematocrit (HCT) in the right-medial lung were compared to the contralateral lung as well as nonirradiated control animals. Whole-lung spectroscopic analysis of gas exchange was also performed. δ was significantly increased (1.43 ± 0.12 μm from 1.07 ± 0.09 μm) and HCT was significantly decreased (17.2 ± 1.2% from 23.6 ± 1.9%) in the right-medial lung (i.e., irradiated region) compared to the contralateral lung of the irradiated rats. These changes were not observed in healthy controls. δ and HCT correlated with histologically measured increases in pulmonary tissue heterogeneity (r = 0.77) and decreases in RBC distribution (r = 0.91), respectively. No changes were observed using whole-lung analysis. This work demonstrates the feasibility of mapping gas exchange using HP 129 Xe in an animal model of RILI 4 weeks postirradiation. Spatially resolved gas exchange mapping is sensitive to regional injury between cohorts that was undetected with whole-lung gas exchange analysis, in agreement with histology. Gas exchange mapping holds promise for assessing regional lung function in RILI and other pulmonary diseases. © 2017 The Authors. Medical Physics published by Wiley

  17. Core Hunter 3: flexible core subset selection.

    PubMed

    De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle

    2018-05-31

    Core collections provide genebank curators and plant breeders a way to reduce the size of their collections and populations while minimizing the impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify the diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms than those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample cores as representative as those of GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse and either more representative or of higher allelic richness than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the
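
    The stochastic hill-climber mentioned above admits a compact sketch: start from a random core, repeatedly swap one selected accession for an unselected one, and keep the swap whenever the average pairwise distance improves. The following is a minimal illustration of the idea, not the Core Hunter 3 implementation (which supports several metrics and more advanced searches such as parallel tempering):

      # Sketch: hill-climbing selection of a diverse core subset.
      import random
      import numpy as np

      def average_distance(d, core):
          idx = sorted(core)
          sub = d[np.ix_(idx, idx)]
          n = len(idx)
          return sub.sum() / (n * (n - 1))  # mean off-diagonal pairwise distance

      def hill_climb(d, k, steps=2000, seed=0):
          rng = random.Random(seed)
          n = len(d)
          core = set(rng.sample(range(n), k))
          best = average_distance(d, core)
          for _ in range(steps):
              out = rng.choice(sorted(core))
              into = rng.choice([i for i in range(n) if i not in core])
              candidate = (core - {out}) | {into}
              score = average_distance(d, candidate)
              if score > best:                  # keep only improving swaps
                  core, best = candidate, score
          return core, best

      # Toy symmetric distance matrix for 20 accessions:
      rng = np.random.default_rng(1)
      m = rng.random((20, 20))
      d = (m + m.T) / 2
      np.fill_diagonal(d, 0.0)
      print(hill_climb(d, k=5))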

  18. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  19. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  20. PHASE EVOLUTION AND MICROWAVE DIELECTRIC PROPERTIES OF (Li0.5Bi0.5)(W1-xMox)O4(0.0 ≤ x ≤ 1.0) CERAMICS WITH ULTRA-LOW SINTERING TEMPERATURES

    NASA Astrophysics Data System (ADS)

    Zhou, Di; Guo, Jing; Yao, Xi; Pang, Li-Xia; Qi, Ze-Ming; Shao, Tao

    2012-11-01

    The (Li0.5Bi0.5)(W1-xMox)O4 (0.0 ≤ x ≤ 1.0) ceramics were prepared via the solid-state reaction method. The sintering temperature decreased almost linearly from 755°C for (Li0.5Bi0.5)WO4 to 560°C for (Li0.5Bi0.5)MoO4. When x ≤ 0.3, a wolframite solid solution is formed. For the x = 0.4 and x = 0.6 compositions, both wolframite and scheelite phases form according to X-ray diffraction analysis, while two different kinds of grains are revealed by the scanning electron microscopy and energy-dispersive X-ray spectrometry results. High-performance microwave dielectric properties were obtained in the (Li0.5Bi0.5)(W0.6Mo0.4)O4 ceramic sintered at 620°C, with a relative permittivity of 31.5, a Qf value of 8500 GHz (at 8.2 GHz), and a temperature coefficient of +20 ppm/°C. Complex dielectric spectra of the pure (Li0.5Bi0.5)WO4 ceramic obtained from the infrared spectra were extrapolated down to the microwave range, and they were in good agreement with the measured values. The (Li0.5Bi0.5)(W1-xMox)O4 (0.0 ≤ x ≤ 1.0) ceramics might be promising for low-temperature co-fired ceramic technology.

  1. Radionuclide inventories : ORIGEN2.2 isotopic depletion calculation for high burnup low-enriched uranium and weapons-grade mixed-oxide pressurized-water reactor fuel assemblies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Ross, Kyle W.; Smith, James Dean

    2010-04-01

    The Oak Ridge National Laboratory computer code ORIGEN2.2 (CCC-371, 2002) was used to obtain the elemental composition of irradiated low-enriched uranium (LEU)/mixed-oxide (MOX) pressurized-water reactor fuel assemblies. Described in this report are the input parameters for the ORIGEN2.2 calculations. The rationale for performing the ORIGEN2.2 calculation was to generate inventories to be used to populate MELCOR radionuclide classes; the ORIGEN2.2 output was therefore subsequently manipulated. The procedures performed in this data reduction process are also described herein. A listing of the ORIGEN2.2 input deck for two-cycle MOX is provided in the appendix. The final output from this data reduction process was three tables containing the radionuclide inventories for LEU/MOX in elemental form. Masses, thermal powers, and activities were reported for each category.
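
    The data-reduction step described, collapsing elemental ORIGEN2.2 output into radionuclide classes, is essentially a grouped sum. The sketch below illustrates the bookkeeping with an invented two-class mapping and invented masses; these are not the MELCOR class definitions or report values:

      # Sketch: fold elemental inventories into radionuclide classes.
      # The class mapping and masses below are invented for illustration.
      ELEMENT_TO_CLASS = {"Xe": "noble gases", "Kr": "noble gases",
                          "Cs": "alkali metals", "Rb": "alkali metals"}

      def reduce_inventory(element_masses):
          classes = {}
          for element, mass in element_masses.items():
              cls = ELEMENT_TO_CLASS.get(element, "other")
              classes[cls] = classes.get(cls, 0.0) + mass
          return classes

      print(reduce_inventory({"Xe": 412.0, "Kr": 28.5, "Cs": 215.7, "I": 17.3}))
      # {'noble gases': 440.5, 'alkali metals': 215.7, 'other': 17.3}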

  2. Computational and Experimental Investigations of the Coolant Flow in the Cassette Fissile Core of a KLT-40S Reactor

    NASA Astrophysics Data System (ADS)

    Dmitriev, S. M.; Varentsov, A. V.; Dobrov, A. A.; Doronkov, D. V.; Pronin, A. N.; Sorokin, V. D.; Khrobostov, A. E.

    2017-07-01

    Results of experimental investigations of the local hydrodynamic and mass-exchange characteristics of a coolant flowing through the cells in the characteristic zones of a fuel assembly of a KLT-40S reactor plant downstream of a plate-type spacer grid by the method of diffusion of a gas tracer in the coolant flow with measurement of its velocity by a five-channel pneumometric probe are presented. An analysis of the concentration distribution of the tracer in the coolant flow downstream of a plate-type spacer grid in the fuel assembly of the KLT-40S reactor plant and its velocity field made it possible to obtain a detailed pattern of this flow and to determine its main mechanisms and features. Results of measurement of the hydraulic-resistance coefficient of a plate-type spacer grid depending on the Reynolds number are presented. On the basis of the experimental data obtained, recommendations for improvement of the method of calculating the flow rate of a coolant in the cells of the fissile core of a KLT-40S reactor were developed. The results of investigations of the local hydrodynamic and mass-exchange characteristics of the coolant flow in the fuel assembly of the KLT-40S reactor plant were accepted for estimating the thermal and technical reliability of the fissile cores of KLT-40S reactors and were included in the database for verification of computational hydrodynamics programs (CFD codes).

  3. Computational Science at the Argonne Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  4. Magnetic core mesoporous silica nanoparticles doped with dacarbazine and labelled with 99mTc for early and differential detection of metastatic melanoma by single photon emission computed tomography.

    PubMed

    Portilho, Filipe Leal; Helal-Neto, Edward; Cabezas, Santiago Sánchez; Pinto, Suyene Rocha; Dos Santos, Sofia Nascimento; Pozzo, Lorena; Sancenón, Félix; Martínez-Máñez, Ramón; Santos-Oliveira, Ralph

    2018-02-27

    Cancer is responsible for more than 12% of all deaths in the world, with an annual death toll of more than 7 million people. In this scenario, melanoma is one of the most aggressive cancers, with serious limitations in early detection and therapy. In this direction, we developed, characterized, and tested in vivo a new drug delivery system based on a magnetic core-mesoporous silica nanoparticle that has been doped with dacarbazine and labelled with technetium-99m, to be used as a nano-imaging agent (nanoradiopharmaceutical) for early and differential diagnosis of melanoma by single photon emission computed tomography. The results demonstrated the ability of the magnetic core-mesoporous silica to be efficiently (>98%) doped with dacarbazine and also efficiently labelled with 99mTc (technetium-99m) (>99%). The in vivo test, using mice with induced melanoma, demonstrated the EPR effect of the magnetic core-mesoporous silica nanoparticles doped with dacarbazine and labelled with technetium-99m when injected intratumorally, as well as the possibility of systemic injection. In both cases, the magnetic core-mesoporous silica nanoparticles doped with dacarbazine and labelled with technetium-99m proved to be a reliable and efficient nano-imaging agent for melanoma.

  5. Computational multicore on two-layer 1D shallow water equations for erodible dambreak

    NASA Astrophysics Data System (ADS)

    Simanjuntak, C. A.; Bagustara, B. A. R. H.; Gunawan, P. H.

    2018-03-01

    The simulation of an erodible dambreak using the two-layer shallow water equations and the SCHR scheme is elaborated in this paper. The results show that the two-layer SWE model is in good agreement with experimental data obtained at the Université Catholique de Louvain in Louvain-la-Neuve. Moreover, results for the parallel algorithm on a multicore architecture are given. They show that Computer I, with an Intel(R) Core(TM) i5-2500 quad-core processor, has the best performance in accelerating the computational time, while Computer III, with an AMD A6-5200 APU quad-core processor, is observed to have higher speedup and efficiency. The speedup and efficiency of Computer III with 3200 grid cells are 3.716050530 and 92.9%, respectively.
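
    The quoted speedup and efficiency follow from the usual definitions S = T1/Tp and E = S/p; below is a one-line check with invented timings chosen to reproduce the quoted ratios:

      # Speedup and efficiency from serial and parallel wall-clock times.
      def speedup_efficiency(t_serial, t_parallel, n_cores):
          s = t_serial / t_parallel
          return s, s / n_cores

      s, e = speedup_efficiency(100.0, 26.9, 4)  # invented timings, 4 cores
      print(round(s, 2), f"{e:.1%}")             # 3.72 92.9%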

  6. The LPO Iron Pattern beneath the Earth's Inner Core Boundary

    NASA Astrophysics Data System (ADS)

    Mattesini, Maurizio; Belonoshko, Anatoly; Tkalčić, Hrvoje

    2017-04-01

    An Earth's inner core surface pattern for the iron Lattice Preferred Orientation (LPO) has been addressed for various iron crystal polymorphs. The geographical distribution of the amount of crystal alignment was obtained by bridging high-quality inner-core-probing seismic data [PKP(bc-df)] together with ab initio computed elastic constants. We show that the proposed topographic crystal alignment may be used as a boundary condition for dynamo simulations, providing an additional way to discriminate between different and often controversial geodynamical scenarios.

  7. The LPO Iron Pattern beneath the Earth's Inner Core Boundary

    NASA Astrophysics Data System (ADS)

    Mattesini, M.; Tkalcic, H.; Belonoshko, A. B.; Buforn, E.; Udias, A.

    2015-12-01

    An Earth's inner core surface pattern for the iron Lattice Preferred Orientation (LPO) has been addressed for various iron crystal polymorphs. The geographical distribution of the amount of crystal alignment was obtained by bridging high-quality inner-core-probing seismic data [PKP(bc-df)] together with ab initio computed elastic constants. We show that the proposed topographic crystal alignment may be used as a boundary condition for dynamo simulations, providing an additional way to discriminate between different and often controversial geodynamical scenarios.

  8. Cloud Computing: An Overview

    NASA Astrophysics Data System (ADS)

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

    In order to support the maximum number of users and elastic services with the minimum resources, Internet service providers invented cloud computing. Within a few years, the emerging cloud computing has become the hottest technology. From the publication of core papers by Google since 2003, to the commercialization of Amazon EC2 in 2006, and to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from internal IT systems to a public service, from a cost-saving tool to a revenue generator, and from ISPs to telecom. This paper introduces the concept, history, pros and cons of cloud computing, as well as the value chain and standardization efforts.

  9. A nanocomposite of Au-AgI core/shell dimer as a dual-modality contrast agent for x-ray computed tomography and photoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orza, Anamaria; Wu, Hui; Li, Yuancheng

    Purpose: To develop a core/shell nanodimer of gold (core) and silver iodide (shell) as a dual-modal contrast-enhancing agent for biomarker-targeted x-ray computed tomography (CT) and photoacoustic imaging (PAI) applications. Methods: The gold and silver iodide core/shell nanodimer (Au/AgICSD) was prepared by fusing together components of gold, silver, and iodine. The physicochemical properties of Au/AgICSD were then characterized using different optical and imaging techniques (e.g., HR-transmission electron microscopy, scanning transmission electron microscopy, x-ray photoelectron spectroscopy, energy-dispersive x-ray spectroscopy, zeta potential, and UV-vis). The CT and PAI contrast-enhancing effects were tested and then compared with a clinically used CT contrast agent and Au nanoparticles. To confer biocompatibility and the capability for efficient biomarker targeting, the surface of the Au/AgICSD nanodimer was modified with an amphiphilic diblock polymer and then functionalized with transferrin for targeting the transferrin receptor that is overexpressed in various cancer cells. Cytotoxicity of the prepared Au/AgICSD nanodimer was also tested with both normal and cancer cell lines. Results: The characterizations of the prepared Au/AgI core/shell nanostructure confirmed the formation of Au/AgICSD nanodimers. The Au/AgICSD nanodimer is stable in physiological conditions for in vivo applications. The Au/AgICSD nanodimer exhibited higher contrast enhancement in both CT and PAI for dual-modality imaging. Moreover, the transferrin-functionalized Au/AgICSD nanodimer showed specific binding to tumor cells that have a high level of expression of the transferrin receptor. Conclusions: The developed Au/AgICSD nanodimer can be used as a potential biomarker-targeted dual-modal contrast agent for both or combined CT and PAI molecular imaging.

  10. New World Vistas: New Models of Computation Lattice Based Quantum Computation

    DTIC Science & Technology

    1996-07-25

    (Abstract only partially recoverable from the scan: Figure 1 is a log-linear plot of the areal size of a bit over the last fifty years, from 18,000 bits in the 1946 ENIAC computer (18,000 vacuum tubes), through UNIVAC II core memory, magnetostrictive delay lines, and the Intel 1103 integrated circuit, to the IBM 3340 disk. Under "Planned Research", the author proposes to consider the feasibility of implementing lattice-based quantum computation technology.)

  11. MIC-SVM: Designing A Highly Efficient Support Vector Machine For Advanced Modern Multi-Core and Many-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Fu, Haohuan

    2014-08-16

    Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach an increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address the challenges above, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).

  12. Estimating the spatial distribution of soil organic matter density and geochemical properties in a polygonal shaped Arctic Tundra using core sample analysis and X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Soom, F.; Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.

    2016-12-01

    The Arctic tundra, with its permafrost-dominated soils, is one of the regions most affected by global climate change and, in turn, can also influence the changing climate through biogeochemical processes, including greenhouse gas release or storage. Characterization of shallow permafrost distribution and characteristics is required for predicting ecosystem feedbacks to a changing climate over decadal to century timescales, because permafrost dynamics can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. In this study, part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we use X-ray computed tomography (CT) to estimate the wet bulk density of cores extracted from a field site near Barrow, AK, which extend 2-3 m through the active layer into the permafrost. We use multi-dimensional relationships inferred from destructive core sample analysis to infer organic matter density, dry bulk density, and ice content, along with some geochemical properties, from nondestructive CT scans along the entire length of the cores, information that the spatially limited destructive laboratory analysis could not provide. Multi-parameter cross-correlations showed good agreement between soil properties estimated from CT scans and properties obtained through destructive sampling. Soil properties estimated from cores located in different types of polygons provide valuable information about the vertical distribution of soil and permafrost properties as a function of geomorphology.

  13. The change of radial power factor distribution due to RCCA insertion at the first cycle core of AP1000

    NASA Astrophysics Data System (ADS)

    Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The use of computer programs for the analysis of PWR-type core neutronic design parameters has been demonstrated in several previous studies. These studies included validation of the computer codes against neutronic parameter values from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first-cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through ¼-section modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code and the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without a Rod Cluster Control Assembly (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA insertions (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum power factor of the fuel rods in a fuel assembly was assumed to be approximately 1.406. The analysis of the calculation results showed that the 2-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertions, are still below the safety limit (less than about 1.798). In terms of the thermal power generated by the fuel assembly, the AP1000 core in its first operating cycle can therefore be considered safe.

  14. Cloud Computing as a Core Discipline in a Technology Entrepreneurship Program

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2012-01-01

    Education in entrepreneurship continues to be a developing area of curricula for computer science and information systems students. Entrepreneurship is enabled frequently by cloud computing methods that furnish benefits to especially medium and small-sized firms. Expanding upon an earlier foundation paper, the authors of this paper present an…

  15. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.
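
    A minimal sketch in the spirit of the adaption described above, assuming a Tikhonov-regularized least-squares update; the sensitivity matrix, regularization strength, and all data are synthetic illustrations, not the authors' formulation:

```python
# Hedged sketch: adjust input parameters x so that linearized predictions
# y = y0 + S (x - x0) better match measurements, keeping the update small
# via regularization. S stands in for the sensitivity information above.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 6, 12
S = rng.normal(size=(n_obs, n_par))        # d(observable)/d(parameter)
x0 = np.zeros(n_par)                       # nominal core parameters
y_calc = S @ x0                            # nominal predictions
y_meas = y_calc + rng.normal(scale=0.1, size=n_obs)  # "measured" observables

lam = 1.0                                  # regularization strength (assumed)
dx = S.T @ np.linalg.solve(S @ S.T + lam * np.eye(n_obs), y_meas - y_calc)
x_adapted = x0 + dx

# The adapted model should reproduce the measurements better than the nominal one
print(np.linalg.norm(y_meas - S @ x_adapted), "<", np.linalg.norm(y_meas - y_calc))
```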

  16. Core-to-core uniformity improvement in multi-core fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Lindley, Emma; Min, Seong-Sik; Leon-Saval, Sergio; Cvetojevic, Nick; Jovanovic, Nemanja; Bland-Hawthorn, Joss; Lawrence, Jon; Gris-Sanchez, Itandehui; Birks, Tim; Haynes, Roger; Haynes, Dionne

    2014-07-01

    Multi-core fiber Bragg gratings (MCFBGs) will be a valuable tool not only in communications but also in various astronomical, sensing, and industrial applications. In this paper we address some of the technical challenges of fabricating effective multi-core gratings by simulating improvements to the writing method. These methods allow a system designed for inscribing single-core fibers to cope with MCFBG fabrication with only minor, passive changes to the writing process. Using a capillary tube polished on one side, the field entering the fiber was flattened, which improved the coverage and uniformity of all cores.

  17. Investigation on the Core Bypass Flow in a Very High Temperature Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Yassin

    2013-10-22

    Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full-field temperature distribution using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information; these experimental data are urgently needed for validation of the CFD codes. The project tasks are as follows: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow; wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal hydraulics codes to account for and estimate the bypass flows; these computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. The actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are

  18. Thermal Hydraulics Design and Analysis Methodology for a Solid-Core Nuclear Thermal Rocket Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi

    2013-01-01

    Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, to predict the thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine, the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe the hydrogen flow channels inside the solid core. Design analyses of a single flow element and the entire solid-core thrust chamber of the Small Engine were performed, and the results are presented herein.

  19. Recommendations for an Undergraduate Program in Computational Mathematics.

    ERIC Educational Resources Information Center

    Committee on the Undergraduate Program in Mathematics, Berkeley, CA.

    This report describes an undergraduate program designed to produce mathematicians who will know how to use and to apply computers. There is a core of 12 one-semester courses: five in mathematics, four in computational mathematics and three in computer science, leaving the senior year for electives. The content and spirit of these courses are…

  20. Isolated core vs. superficial cooling effects on virtual maze navigation.

    PubMed

    Payne, Jennifer; Cheung, Stephen S

    2007-07-01

    Cold impairs cognitive performance and is a common occurrence in many survival situations. Altered behavior patterns due to impaired navigation abilities in cold environments are potential problems in lost-person situations. We investigated the separate effects of low core temperature and superficial cooling on a spatially demanding virtual navigation task. There were 12 healthy men who were passively cooled via 15 degrees C water immersion to a core temperature of 36.0 degrees C, then transferred to a warm (40 degrees C) water bath to eliminate superficial shivering while completing a series of 20 virtual computer mazes. In a control condition, subjects rested in a thermoneutral (approximately 35 degrees C) bath for a time-matched period before being transferred to a warm bath for testing. Superficial cooling and distraction were achieved by whole-body immersion in 35 degrees C water for a time-matched period, followed by lower leg immersion in 10 degrees C water for the duration of the navigational tests. Mean completion time and mean error scores for the mazes were not significantly different (p > 0.05) across the core cooling (16.59 +/- 11.54 s, 0.91 +/- 1.86 errors), control (15.40 +/- 8.85 s, 0.82 +/- 1.76 errors), and superficial cooling (15.19 +/- 7.80 s, 0.77 +/- 1.40 errors) conditions. Separately reducing core temperature or increasing cold sensation in the lower extremities did not influence performance on virtual computer mazes, suggesting that navigation is more resistant to cooling than other, simpler cognitive tasks. Further research is warranted to explore navigational ability at progressively lower core and skin temperatures, and in different populations.

  1. Core-core and core-valence correlation energy atomic and molecular benchmarks for Li through Ar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranasinghe, Duminda S.; Frisch, Michael J.; Petersson, George A., E-mail: gpetersson@wesleyan.edu

    2015-12-07

    We have established benchmark core-core, core-valence, and valence-valence absolute coupled-cluster singles and doubles with perturbative triples [CCSD(T)] correlation energies (±0.1%) for 210 species covering the first and second rows of the periodic table. These species provide 194 energy differences (±0.03 mE_h), including ionization potentials, electron affinities, and total atomization energies. These results can be used for calibration of less expensive methodologies for practical routine determination of core-core and core-valence correlation energies.

  2. Accelerating Astronomy & Astrophysics in the New Era of Parallel Computing: GPUs, Phi and Cloud Computing

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.; Dindar, Saleh; Peters, Jorg

    2015-08-01

    The realism of astrophysical simulations and statistical analyses of astronomical data is set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to the details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and the Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities, and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order-of-magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer

  3. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  4. Results of comparative RBMK neutron computation using VNIIEF codes (cell computation, 3D statics, 3D kinetics). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.

    1995-12-31

    In conformity with the protocol of the workshop under the contract "Assessment of RBMK Reactor Safety Using Modern Western Codes," VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for safety assessment computations for RBMK-type reactors. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), covering cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); these computations were performed in the geometry and with the neutron constants provided by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and simulates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with simulation of control and protection system (CPS) control rod movement in a core.

  5. Making Connections: Integrating Computer Applications with the Academic Core

    ERIC Educational Resources Information Center

    Harter, Christi

    2011-01-01

    In order to improve the quality of technology instruction, the Career and Technical Education (CTE) Business Department in the Spokane Public School district has aligned its Computer Applications (CA) course to the district's ninth-grade Springboard (Language Arts) curriculum, Algebra I curriculum, and the Culminating Project (senior project)…

  6. Seismic, side-scan survey, diving, and coring data analyzed by a Macintosh II sup TM computer and inexpensive software provide answers to a possible offshore extension of landslides at Palos Verdes Peninsula, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dill, R.F.; Slosson, J.E.; McEachen, D.B.

    1990-05-01

    A Macintosh II™ computer and commercially available software were used to analyze and depict the topography, construct an isopach sediment thickness map, plot core positions, and map the geology of an offshore area facing an active landslide on the southern side of the Palos Verdes Peninsula, California. Profile data from side-scan sonar, 3.5 kHz and Boomer subbottom high-resolution seismic, diving, and echo sounder traverses, and cores - all controlled with a mini Ranger II navigation system - were placed in the MacGridzo™ and WingZ™ software programs. The computer-plotted data from seven sources were used to construct maps with overlays for evaluating the possibility of a shoreside landslide extending offshore. The poster session describes the offshore survey system and demonstrates the development of the computer data base, its placement into the MacGridzo™ gridding program, and the transfer of gridded navigational locations to the WingZ™ data base and graphics program. Data will be manipulated to show how sea-floor features are enhanced and how isopach data were used to interpret the possibility of landslide displacement and Holocene sea level rise. The software permits rapid assessment of data using computerized overlays and provides a simple, inexpensive means of constructing and evaluating information in map form and preparing final written reports. This system could be useful in many other areas where seismic profiles, precision navigational locations, soundings, diver observations, and cores provide a great volume of information that must be compared on regional plots to develop field maps for geological evaluation and reports.

  7. Development INTERDATA 8/32 computer system

    NASA Technical Reports Server (NTRS)

    Sonett, C. P.

    1983-01-01

    The capabilities of the Interdata 8/32 minicomputer were examined regarding data and word processing, editing, retrieval, and budgeting as well as data management demands of the user groups in the network. Based on four projected needs: (1) a hands on (open shop) computer for data analysis with large core and disc capability; (2) the expected requirements of the NASA data networks; (3) the need for intermittent large core capacity for theoretical modeling; (4) the ability to access data rapidly either directly from tape or from core onto hard copy, the system proved useful and adequate for the planned requirements.

  8. Nurturing a growing field: Computers & Geosciences

    NASA Astrophysics Data System (ADS)

    Mariethoz, Gregoire; Pebesma, Edzer

    2017-10-01

    Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.

  9. Assessing Mathematics Automatically Using Computer Algebra and the Internet

    ERIC Educational Resources Information Center

    Sangwin, Chris

    2004-01-01

    This paper reports some recent developments in mathematical computer-aided assessment which employs computer algebra to evaluate students' work using the Internet. Technical and educational issues raised by this use of computer algebra are addressed. Working examples from core calculus and algebra which have been used with first year university…

  10. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    NASA Astrophysics Data System (ADS)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence, and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code, and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
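
    The kernel at the heart of such comparisons is the FDTD update loop itself. A minimal one-dimensional sketch (normalized Yee scheme, illustrative grid and source, not the authors' optimized HVG code) shows the stencil structure that SSE, OpenMP, and GPU implementations all accelerate:

```python
# Minimal 1D FDTD sketch in normalized units. The grid size, source, and
# update coefficient are illustrative assumptions for demonstration only.
import numpy as np

nx, nt = 400, 600
ez = np.zeros(nx)          # electric field
hy = np.zeros(nx - 1)      # magnetic field on the staggered grid

for n in range(nt):
    hy += 0.5 * (ez[1:] - ez[:-1])                  # update H from curl E
    ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])            # update E from curl H
    ez[nx // 2] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source

print(float(ez.max()))
```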

  11. Optimizing performance by improving core stability and core strength.

    PubMed

    Hibbs, Angela E; Thompson, Kevin G; French, Duncan; Wrigley, Allan; Spears, Iain

    2008-01-01

    Core stability and core strength have been subject to research since the early 1980s. Research has highlighted benefits of training these processes for people with back pain and for carrying out everyday activities. However, less research has been performed on the benefits of core training for elite athletes and how this training should be carried out to optimize sporting performance. Many elite athletes undertake core stability and core strength training as part of their training programme, despite contradictory findings and conclusions as to their efficacy. This is mainly due to the lack of a gold standard method for measuring core stability and strength when performing everyday tasks and sporting movements. A further confounding factor is that because of the differing demands on the core musculature during everyday activities (low load, slow movements) and sporting activities (high load, resisted, dynamic movements), research performed in the rehabilitation sector cannot be applied to the sporting environment and, subsequently, data regarding core training programmes and their effectiveness on sporting performance are lacking. There are many articles in the literature that promote core training programmes and exercises for performance enhancement without providing a strong scientific rationale of their effectiveness, especially in the sporting sector. In the rehabilitation sector, improvements in lower back injuries have been reported by improving core stability. Few studies have observed any performance enhancement in sporting activities despite observing improvements in core stability and core strength following a core training programme. A clearer understanding of the roles that specific muscles have during core stability and core strength exercises would enable more functional training programmes to be implemented, which may result in a more effective transfer of these skills to actual sporting activities.

  12. Accelerating Climate Simulations Through Hybrid Computing

    NASA Technical Reports Server (NTRS)

    Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark

    2009-01-01

    Unconventional multi-core processors (e.g., the IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamless offloading of compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.

  13. Measurement and simulation of thermal neutron flux distribution in the RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using Self-Powered Neutron Detectors (SPNDs) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards its outer side. The results show reasonably good agreement between calculation and measurement, with both showing the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurements. Since the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.
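
    The calculation-versus-measurement comparison reported above amounts to a pointwise relative discrepancy between the two flux profiles. A minimal sketch with synthetic values (not RTP data):

```python
# Hedged sketch: relative discrepancy between a calculated radial
# thermal-flux profile and detector readings at matching positions.
# All numbers below are synthetic illustrations.
import numpy as np

r = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # radial position (cm)
flux_mcnp = np.array([1.00, 0.92, 0.75, 0.51, 0.30])  # normalized calculated tally
flux_spnd = np.array([0.85, 0.78, 0.65, 0.45, 0.27])  # normalized measured flux

rel_diff = (flux_mcnp - flux_spnd) / flux_spnd
print(f"max discrepancy: {100 * rel_diff.max():.0f}%")  # compare with ~20% above
```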

  14. Rydberg atoms in hollow-core photonic crystal fibres.

    PubMed

    Epple, G; Kleinbach, K S; Euser, T G; Joly, N Y; Pfau, T; Russell, P St J; Löw, R

    2014-06-19

    The exceptionally large polarizability of highly excited Rydberg atoms (six orders of magnitude higher than that of ground-state atoms) makes them of great interest in fields such as quantum optics, quantum computing, quantum simulation, and metrology. However, if they are to be used routinely in applications, a major requirement is their integration into technically feasible, miniaturized devices. Here we show that a Rydberg medium based on room temperature caesium vapour can be confined in broadband-guiding kagome-style hollow-core photonic crystal fibres. Three-photon spectroscopy performed on a caesium-filled fibre detects Rydberg states up to a principal quantum number of n=40. Besides small energy-level shifts, we observe narrow lines confirming the coherence of the Rydberg excitation. Using different Rydberg states and core diameters, we study the influence of confinement within the fibre core after different exposure times. Understanding these effects is essential for the successful future development of novel applications based on integrated room temperature Rydberg systems.

  15. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
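
    The core of such an out-of-core scheme is on-demand loading of pre-partitioned blocks through a small cache. A minimal sketch, where the octree partitioning is assumed to have already written one file per block (the file layout and names below are hypothetical):

```python
# Hedged sketch of on-demand block loading with a small LRU cache, the idea
# behind the out-of-core streamline construction described above. The
# "blocks/<id>.npy" layout is an illustrative assumption.
from collections import OrderedDict
import numpy as np

class BlockCache:
    def __init__(self, max_blocks=8):
        self.max_blocks = max_blocks
        self.cache = OrderedDict()            # block_id -> field data

    def load(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            return self.cache[block_id]
        data = np.load(f"blocks/{block_id}.npy")   # fetch block from disk
        self.cache[block_id] = data
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# During tracing, each particle advection step asks the cache for the block
# containing the current position, so only a few blocks are ever in memory.
```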

  16. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  17. Research on OpenStack of open source cloud computing in colleges and universities’ computer room

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Zhang, Dandan

    2017-06-01

    In recent years, cloud computing technology has developed rapidly, especially open source cloud computing, which has attracted a large number of user groups through the advantages of open source and low cost and has now reached large-scale promotion and application. In this paper, we first briefly introduce the main functions and architecture of the open source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Building on this research, we describe the specific application and deployment of OpenStack in a university computer room. The experimental results show that OpenStack can efficiently and conveniently deploy a cloud for the university computer room, with stable performance and good functional value.

  18. Simulating the Dynamics of Earth's Core: Using NCCS Supercomputers Speeds Calculations

    NASA Technical Reports Server (NTRS)

    2002-01-01

    If one wanted to study Earth's core directly, one would have to drill through about 1,800 miles of solid rock to reach the liquid core, keeping the tunnel from collapsing under pressures that are more than 1 million atmospheres, and then sink an instrument package to the bottom that could operate at 8,000 degrees F with 10,000 tons of force crushing every square inch of its surface. Even then, several of these tunnels would probably be needed to obtain enough data. Faced with difficult or impossible tasks such as these, scientists use other available sources of information - such as seismology, mineralogy, geomagnetism, geodesy, and, above all, physical principles - to derive a model of the core and study it by running computer simulations. One NASA researcher is doing just that on NCCS computers. Physicist and applied mathematician Weijia Kuang, of the Space Geodesy Branch, and his collaborators at Goddard have what he calls the "second-ever" working, usable, self-consistent, fully dynamic, three-dimensional geodynamic model (see "The Geodynamic Theory"). Kuang runs his model simulations on the supercomputers at the NCCS. He and Jeremy Bloxham, of Harvard University, developed the original version, written in Fortran 77, in 1996.

  19. News on Seeking Gaia's Astrometric Core Solution with AGIS

    NASA Astrophysics Data System (ADS)

    Lammers, U.; Lindegren, L.

    We report on recent new developments around the Astrometric Global Iterative Solution (AGIS) system. These include the availability of an efficient conjugate gradient solver and the Generic Astrometric Calibration scheme that was proposed some time ago. The number of primary stars to be included in the core solution is now believed to be significantly higher than the 100 million that served as the baseline until now. Cloud computing services are being studied as a possible cost-effective alternative to running AGIS on dedicated computing hardware at ESAC during the operational phase.

  20. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01


  1. CORAL: aligning conserved core regions across domain families.

    PubMed

    Fong, Jessica H; Marchler-Bauer, Aron

    2009-08-01

    Homologous protein families share highly conserved sequence and structure regions that are frequent targets for comparative analysis of related proteins and families. Many protein families, such as the curated domain families in the Conserved Domain Database (CDD), exhibit similar structural cores. To improve accuracy in aligning such protein families, we propose a profile-profile method CORAL that aligns individual core regions as gap-free units. CORAL computes optimal local alignment of two profiles with heuristics to preserve continuity within core regions. We benchmarked its performance on curated domains in CDD, which have pre-defined core regions, against COMPASS, HHalign and PSI-BLAST, using structure superpositions and comprehensive curator-optimized alignments as standards of truth. CORAL improves alignment accuracy on core regions over general profile methods, returning a balanced score of 0.57 for over 80% of all domain families in CDD, compared with the highest balanced score of 0.45 from other methods. Further, CORAL provides E-values to aid in detecting homologous protein families and, by respecting block boundaries, produces alignments with improved 'readability' that facilitate manual refinement. CORAL will be included in future versions of the NCBI Cn3D/CDTree software, which can be downloaded at http://www.ncbi.nlm.nih.gov/Structure/cdtree/cdtree.shtml. Supplementary data are available at Bioinformatics online.

  2. Phase and crystallite size analysis of (Ti1-xMox)C-(Ni,Cr) cermet obtained by mechanical alloying

    NASA Astrophysics Data System (ADS)

    Suryana, Anis, Muhammad; Manaf, Azwar

    2018-04-01

    In this paper, we report the phase and crystallite size analysis of (Ti1-xMox)C-(Ni,Cr) cermet with x = 0-0.5 obtained by mechanical alloying of Ti, Mo, Ni, Cr, and C elemental powders using a high-energy shaker ball mill under wet conditions for 10 hours. The process used toluene as a process control agent, and the ball-to-powder mass ratio was 10:1. The mechanically milled powder was then consolidated and subsequently heated at a temperature of 850 °C for 2 hours under an argon flow to prevent oxidation. The product was characterized by X-ray diffraction (XRD) and scanning electron microscopy equipped with an energy dispersive analyzer. The results showed that, by selecting appropriate conditions during the mechanical alloying process, metastable Ti-Ni-Cr-C powders could be obtained. The powder then allowed the in situ synthesis of TiC-(Ni,Cr) cermet, which took place during exposure to the high temperature applied in the reactive sintering step. The addition of molybdenum shifted the TiC XRD peaks to slightly higher angles, indicating that molybdenum dissolved in the TiC phase. The crystallite size distribution of TiC is discussed in the report, showing that the mean size decreased with the addition of molybdenum.

  3. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware, including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing, and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic

  4. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
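
    A hash-tree (Merkle) mode replaces the sequential chain with a tree whose leaves can be hashed in parallel. A minimal sketch using SHA-256 for illustration (the paper's prototype targets Keccak/SHA-3, which this does not implement):

```python
# Hedged sketch of a parallel hash-tree mode: leaves are hashed concurrently,
# then parent nodes combine child digests pairwise up to a single root.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    with ThreadPoolExecutor() as pool:
        level = list(pool.map(_h, chunks))          # hash leaves in parallel
    while len(level) > 1:
        if len(level) % 2:                          # duplicate last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])        # combine pairs upward
                 for i in range(0, len(level), 2)]
    return level[0]

data = [bytes([i]) * 4096 for i in range(16)]       # 16 fixed-size chunks
print(merkle_root(data).hex())
```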

  5. Comparison between measured and computed magnetic flux density distribution of simulated transformer core joints assembled from grain-oriented and non-oriented electrical steel

    NASA Astrophysics Data System (ADS)

    Shahrouzi, Hamid; Moses, Anthony J.; Anderson, Philip I.; Li, Guobao; Hu, Zhuochao

    2018-04-01

    The flux distribution in an overlapped linear joint constructed in the central region of an Epstein Square was studied experimentally, and the results were compared with those obtained using a computational magnetic field solver. High-permeability grain-oriented (GO) and low-permeability non-oriented (NO) electrical steels were compared at a nominal core flux density of 1.60 T at 50 Hz. It was found that the experimental results agreed well only at flux densities at which the reluctance of the different flux paths is similar. It was also revealed that the flux becomes more uniform when the working point of the electrical steel is close to the knee point of the B-H curve of the steel.
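
    The reluctance argument above can be made concrete: the reluctance of a flux path is R = l / (μ0 μr A), so a high-permeability GO path and a low-permeability NO path carry similar flux only where their reluctances are comparable. A minimal sketch, with illustrative permeabilities and strip geometry rather than the measured values:

```python
# Hedged sketch of a magnetic-circuit reluctance comparison. The relative
# permeabilities and strip dimensions below are illustrative assumptions.
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability (H/m)

def reluctance(length_m, mu_r, area_m2):
    """Reluctance of a uniform flux path, R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

path = dict(length_m=0.28, area_m2=30e-3 * 0.3e-3)   # Epstein-strip-like path
print(f"GO path: {reluctance(mu_r=40000, **path):.3e} A/Wb")
print(f"NO path: {reluctance(mu_r=5000, **path):.3e} A/Wb")
```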

  6. Evaluating Multi-core Architectures through Accelerating the Three-Dimensional Lax–Wendroff Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2014-07-18

    Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits application performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.
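
    The performance problem lives in the iterative stencil loop. A minimal sketch of a vectorized 7-point stencil sweep on a 3D grid, representative of wave-propagation kernels in general rather than the authors' Lax-Wendroff code or its platform-specific tunings:

```python
# Hedged sketch of an iterative 7-point stencil sweep on a 3D grid; the grid
# size, coefficient, and step count are illustrative assumptions.
import numpy as np

n = 64
u = np.random.rand(n, n, n)
u_new = u.copy()                         # boundaries stay fixed at initial values

for _ in range(10):                      # time-step loop
    u_new[1:-1, 1:-1, 1:-1] = u[1:-1, 1:-1, 1:-1] + 0.1 * (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
        u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
        u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
        6.0 * u[1:-1, 1:-1, 1:-1]
    )
    u, u_new = u_new, u                  # swap buffers between steps

print(float(u.mean()))
```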

  7. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  8. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as

  9. An Assessment of the Attractiveness of Material Associated with a MOX Fuel Cycle from a Safeguards Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bathke, Charles G; Wallace, Richard K; Ireland, John R

    2009-01-01

    This paper is an extension to earlier studies that examined the attractiveness of materials mixtures containing special nuclear materials (SNM) and alternate nuclear materials (ANM) associated with the PUREX, UREX, coextraction, THOREX, and PYROX reprocessing schemes. This study extends the figure of merit (FOM) for evaluating attractiveness to cover a broad range of proliferant State and sub-national group capabilities. This study also considers those materials that will be recycled and burned, possibly multiple times, in LWRs [e.g., plutonium in the form of mixed oxide (MOX) fuel]. The primary conclusion of this study is that all fissile material needs to be rigorously safeguarded to detect diversion by a State and provided the highest levels of physical protection to prevent theft by sub-national groups; no 'silver bullet' has been found that will permit the relaxation of current international safeguards or national physical security protection levels. This series of studies has been performed at the request of the United States Department of Energy (DOE) and is based on the calculation of 'attractiveness levels' that are expressed in terms consistent with, but normally reserved for nuclear materials in DOE nuclear facilities. The expanded methodology and updated findings are presented. Additionally, how these attractiveness levels relate to proliferation resistance and physical security are discussed.

  10. AN ASSESSMENT OF THE ATTRACTIVENESS OF MATERIAL ASSOCIATED WITH A MOX FUEL CYCLE FROM A SAFEGUARDS PERSPECTIVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bathke, C. G.; Ebbinghaus, B. B.; Sleaford, Brad W.

    2009-07-09

    This paper is an extension to earlier studies [1,2] that examined the attractiveness of materials mixtures containing special nuclear materials (SNM) and alternate nuclear materials (ANM) associated with the PUREX, UREX, coextraction, THOREX, and PYROX reprocessing schemes. This study extends the figure of merit (FOM) for evaluating attractiveness to cover a broad range of proliferant State and sub-national group capabilities. This study also considers those materials that will be recycled and burned, possibly multiple times, in LWRs [e.g., plutonium in the form of mixed oxide (MOX) fuel]. The primary conclusion of this study is that all fissile material needs to be rigorously safeguarded to detect diversion by a State and provided the highest levels of physical protection to prevent theft by sub-national groups; no “silver bullet” has been found that will permit the relaxation of current international safeguards or national physical security protection levels. This series of studies has been performed at the request of the United States Department of Energy (DOE) and is based on the calculation of "attractiveness levels" that are expressed in terms consistent with, but normally reserved for nuclear materials in DOE nuclear facilities [3]. The expanded methodology and updated findings are presented. Additionally, how these attractiveness levels relate to proliferation resistance and physical security are discussed.

  11. CARMA observations of Galactic cold cores: searching for spinning dust emission

    NASA Astrophysics Data System (ADS)

    Tibbs, C. T.; Paladini, R.; Cleary, K.; Muchovej, S. J. C.; Scaife, A. M. M.; Stevenson, M. A.; Laureijs, R. J.; Ysard, N.; Grainge, K. J. B.; Perrott, Y. C.; Rumsey, C.; Villadsen, J.

    2015-11-01

    We present the first search for spinning dust emission from a sample of 34 Galactic cold cores, performed using the CARMA interferometer. For each of our cores, we use photometric data from the Herschel Space Observatory to constrain $\bar{N}_{\rm H}$, $\bar{T}_{\rm d}$, $\bar{n}_{\rm H}$, and $\bar{G}_{0}$. By computing the mass of the cores and comparing it to the Bonnor-Ebert mass, we determined that 29 of the 34 cores are gravitationally unstable and undergoing collapse. In fact, we found that six cores are associated with at least one young stellar object, suggestive of their protostellar nature. By investigating the physical conditions within each core, we can shed light on the cm emission revealed (or not) by our CARMA observations. Indeed, we find that only three of our cores have any significant detectable cm emission. Using a spinning dust model, we predict the expected level of spinning dust emission in each core and find that for all 34 cores, the predicted level of emission is larger than the observed cm emission constrained by the CARMA observations. Moreover, even in the cores for which we do detect cm emission, we cannot, at this stage, discriminate between free-free emission from young stellar objects and spinning dust emission. We emphasize that although the CARMA observations described in this analysis place important constraints on the presence of spinning dust in cold, dense environments, the source sample targeted by these observations is not statistically representative of the entire population of Galactic cores.
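
    The stability test used above compares each core's mass with its Bonnor-Ebert mass, which for an isothermal sphere with sound speed c_s and density ρ is approximately M_BE ≈ 1.18 c_s³ / (G^{3/2} ρ^{1/2}); cores with M > M_BE are gravitationally unstable. A minimal sketch with illustrative core parameters, not the Herschel-derived values:

```python
# Hedged sketch of a Bonnor-Ebert stability check; temperature, density, and
# mean molecular weight below are typical illustrative values, not the data.
import math

G = 6.674e-11                  # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30               # solar mass (kg)
k_B, m_H = 1.381e-23, 1.673e-27

T = 10.0                       # gas temperature (K), assumed
n_H = 1e10                     # number density (m^-3), ~1e4 cm^-3, assumed
mu = 2.33                      # mean molecular weight, assumed

c_s = math.sqrt(k_B * T / (mu * m_H))          # isothermal sound speed
rho = mu * m_H * n_H                           # mass density
M_BE = 1.18 * c_s**3 / (G**1.5 * math.sqrt(rho))

print(f"M_BE = {M_BE / M_SUN:.2f} M_sun")      # core is unstable if M > M_BE
```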

  12. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  13. EUV patterning using CAR or MOX photoresist at low dose exposure for sub 36nm pitch

    NASA Astrophysics Data System (ADS)

    Thibaut, Sophie; Raley, Angélique; Lazarrino, Frederic; Mao, Ming; De Simone, Danilo; Piumi, Daniele; Barla, Kathy; Ko, Akiteru; Metz, Andrew; Kumar, Kaushik; Biolsi, Peter

    2018-04-01

    The semiconductor industry has been pushing the limits of scalability by combining 193 nm immersion lithography with multi-patterning techniques for several years. These integrations have been implemented in a wide variety of options to lower their cost, but they retain their inherent variability and process complexity. EUV lithography offers a much desired path that allows direct printing of lines and spaces at 36 nm pitch and below, and effectively addresses issues like cycle time, intra-level overlay, and the mask count costs associated with multi-patterning. However, it also brings its own set of challenges. One of the major barriers to high-volume manufacturing implementation has been reaching the 250 W source power required for adequate throughput [1]. Enabling patterning using a lower-dose resist could help move us closer to the HVM throughput targets, assuming the required roughness and pattern transfer performance can be met. As plasma etching is known to reduce line edge roughness on features printed with 193 nm lithography [2], we investigate in this paper the level of roughness that can be achieved on EUV photoresist exposed at a lower dose, through etch process optimization into a typical back end of line film stack. We study 16 nm lines printed at 32 and 34 nm pitch. MOX and CAR photoresist performance will be compared. We review step-by-step etch chemistry development to reach adequate selectivity and roughness reduction to successfully pattern the target layer.

  14. Basis sets for the calculation of core-electron binding energies

    NASA Astrophysics Data System (ADS)

    Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.

    2018-05-01

    Core-electron binding energies (CEBEs) computed within a Δ self-consistent field approach require large basis sets to achieve convergence with respect to the basis set limit. It is shown that supplementing a basis set with basis functions from the corresponding basis set for the element with the next highest nuclear charge (Z + 1) provides basis sets that give CEBEs close to the basis set limit. This simple procedure provides relatively small basis sets that are well suited for calculations where the description of a core-ionised state is important, such as time-dependent density functional theory calculations of X-ray emission spectroscopy.
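
    The Z + 1 recipe is mechanically simple: take the element's own basis functions and append those of the next element in the periodic table. A minimal sketch with placeholder shell data (not a real basis set library, and not the authors' code):

```python
# Hedged sketch of Z+1 basis augmentation. The shell tuples below are
# placeholder (angular momentum, exponent) pairs, not real basis data.
PERIODIC = ["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne"]

basis = {
    "C": [("s", 7.8), ("s", 1.5), ("p", 0.6)],   # illustrative carbon shells
    "N": [("s", 11.0), ("s", 2.1), ("p", 0.9)],  # illustrative nitrogen shells
}

def z_plus_one_basis(element):
    """Return the element's basis augmented with the (Z+1) element's shells."""
    nxt = PERIODIC[PERIODIC.index(element) + 1]
    return basis[element] + basis[nxt]

print(z_plus_one_basis("C"))     # carbon shells plus nitrogen shells
```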

  15. Palaeomagnetic field intensity variations suggest Mesoproterozoic inner-core nucleation

    NASA Astrophysics Data System (ADS)

    Biggin, A. J.; Piispa, E. J.; Pesonen, L. J.; Holme, R.; Paterson, G. A.; Veikkolainen, T.; Tauxe, L.

    2015-10-01

    The Earth's inner core grows by the freezing of liquid iron at its surface. The point in history at which this process initiated marks a step-change in the thermal evolution of the planet. Recent computational and experimental studies have presented radically differing estimates of the thermal conductivity of the Earth's core, resulting in estimates of the timing of inner-core nucleation ranging from less than half a billion to nearly two billion years ago. Recent inner-core nucleation (high thermal conductivity) requires high outer-core temperatures in the early Earth that complicate models of thermal evolution. The nucleation of the core leads to a different convective regime and potentially different magnetic field structures that produce an observable signal in the palaeomagnetic record and allow the date of inner-core nucleation to be estimated directly. Previous studies searching for this signature have been hampered by the paucity of palaeomagnetic intensity measurements, by the lack of an effective means of assessing their reliability, and by shorter-timescale geomagnetic variations. Here we examine results from an expanded Precambrian database of palaeomagnetic intensity measurements selected using a new set of reliability criteria. Our analysis provides intensity-based support for the dominant dipolarity of the time-averaged Precambrian field, a crucial requirement for palaeomagnetic reconstructions of continents. We also present firm evidence for the existence of very long-term variations in geomagnetic strength. The most prominent and robust transition in the record is an increase in both average field strength and variability that is observed to occur between a billion and 1.5 billion years ago. This observation is most readily explained by the nucleation of the inner core occurring during this interval; the timing would tend to favour a modest value of core thermal conductivity and supports a simple thermal evolution model for the Earth.

  16. A computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors

    DOE PAGES

    Hu, Rui; Yu, Yiqi

    2016-09-08

    For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and near-wall coolant region are modeled separately to account for different temperatures and heat transfer between coolant flow and each side of the duct wall. The Jacobian-Free Newton-Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated by a verification test problem with 7 fuel assemblies in a hexagon lattice layout. In addition, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
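
    The JFNK idea can be illustrated on a toy problem: a 1-D coolant channel coupled to a heated duct wall, written as one nonlinear residual and handed to SciPy's newton_krylov, which approximates Jacobian-vector products by finite differences so that no Jacobian is ever formed. This is a minimal sketch of the solution style only, not the SAM implementation:

      # Toy fully coupled fluid/solid solve in the JFNK style (not SAM).
      import numpy as np
      from scipy.optimize import newton_krylov

      n = 50
      q = 1.0e-2        # volumetric heat source in the wall (arbitrary units)
      h = 0.5           # fluid/wall heat-transfer coefficient (arbitrary units)

      def residual(u):
          tf, tw = u[:n], u[n:]          # fluid and wall temperature fields
          r = np.empty_like(u)
          r[0] = tf[0] - 1.0             # inlet temperature
          # fluid: upwind advection heated by the wall
          r[1:n] = (tf[1:] - tf[:-1]) - h * (tw[1:] - tf[1:])
          # wall: balance between heat source and transfer to the fluid
          r[n:] = q - h * (tw - tf)
          return r

      sol = newton_krylov(residual, np.ones(2 * n), f_tol=1e-10)
      print("outlet coolant temperature:", sol[n - 1])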

  17. Mechanical Behavior of CFRP Lattice Core Sandwich Bolted Corner Joints

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaolei; Liu, Yang; Wang, Yana; Lu, Xiaofeng; Zhu, Lingxue

    2017-12-01

    The lattice core sandwich structures have drawn increasing attention for integrating load capacity with multifunctional applications. However, the connection of carbon fiber reinforced polymer composite (CFRP) lattice core sandwich structures hinders their application. In this paper, a typical connection of two lattice core sandwich panels, referred to as a corner joint or L-joint, was investigated by experiment and the finite element method (FEM). The mechanical behavior and failure modes of the corner joints were discussed. The results showed that the main deformation pattern and failure mode of the bolted corner joint structure were deformation of the metal connector and indentation of the face sheet around the bolt holes. The metal connectors played an important role in the bolted corner joint structure. To reduce the computational cost, a continuum model of the pyramid lattice core was used in place of the exact structure. The computed results were consistent with the experiments, with a maximum error of 19%. The FEM demonstrated the deflection process of the bolted corner joint structure visually, so the simplified FEM can be used for further engineering analysis of bolted corner joints.

  18. Real-time three-dimensional optical coherence tomography image-guided core-needle biopsy system.

    PubMed

    Kuo, Wei-Cheng; Kim, Jongsik; Shemonski, Nathan D; Chaney, Eric J; Spillman, Darold R; Boppart, Stephen A

    2012-06-01

    Advances in optical imaging modalities, such as optical coherence tomography (OCT), enable us to observe tissue microstructure at high resolution and in real time. Currently, core-needle biopsies are guided by external imaging modalities such as ultrasound imaging and x-ray computed tomography (CT) for breast and lung masses, respectively. These image-guided procedures are frequently limited by spatial resolution when using ultrasound imaging, or by temporal resolution (rapid real-time feedback capabilities) when using x-ray CT. One feasible approach is to perform OCT within small gauge needles to optically image tissue microstructure. However, to date, no system or core-needle device has been developed that incorporates both three-dimensional OCT imaging and tissue biopsy within the same needle for true OCT-guided core-needle biopsy. We have developed and demonstrate an integrated core-needle biopsy system that utilizes catheter-based 3-D OCT for real-time image-guidance for target tissue localization, imaging of tissue immediately prior to physical biopsy, and subsequent OCT imaging of the biopsied specimen for immediate assessment at the point-of-care. OCT images of biopsied ex vivo tumor specimens acquired during core-needle placement are correlated with corresponding histology, and computational visualization of arbitrary planes within the 3-D OCT volumes enables feedback on specimen tissue type and biopsy quality. These results demonstrate the potential for using real-time 3-D OCT for needle biopsy guidance by imaging within the needle and tissue during biopsy procedures.

  19. Occurrence of coring after needle insertion through a rubber stopper: study with prednisolone acetate.

    PubMed

    Campagna, Raphael; Pessis, Eric; Guerini, Henri; Feydy, Antoine; Drapé, Jean-Luc

    2013-02-01

    To evaluate the occurrence of coring after needle insertion through the rubber stopper of prednisolone acetate vials. Two-hundred vials of prednisolone acetate were randomly distributed to two radiologists. Prednisolone acetate was drawn up through the rubber bung of the vials with an 18-gauge cutting bevelled needle and aspirated with a 5-ml syringe. The presence of coring was noted visually. We systematically put each core in a syringe refilled with 3 ml prednisolone acetate, and injected the medication through a 20-gauge spine needle. Computed tomography was performed to measure the size of each coring. Coring occurred in 21 out of 200 samples (10.5 %), and was visually detected in the syringe filled up with prednisolone in 11 of the 21 cases. Ten more occult cores were detected only after the syringes and needles were taken apart and rinsed. The core size ranged from 0.6 to 1.1 mm, and 1 of the 21 (4.7 %) cores was ejected through the 20-gauge needle. Coring can occur after the insertion of a needle through the rubber stopper of a vial of prednisolone acetate, and the resultant core can then be aspirated into the syringe.

  20. Patient-specific core decompression surgery for early-stage ischemic necrosis of the femoral head

    PubMed Central

    Wang, Wei; Hu, Wei; Yang, Pei; Dang, Xiao Qian; Li, Xiao Hui; Wang, Kun Zheng

    2017-01-01

    Introduction Core decompression is an efficient treatment for early stage ischemic necrosis of the femoral head. In conventional procedures, the pre-operative X-ray only shows one plane of the ischemic area, which often results in inaccurate drilling. This paper introduces a new method that uses computer-assisted technology and rapid prototyping to enhance drilling accuracy during core decompression surgeries and presents a validation study using cadaveric tests. Methods Twelve cadaveric human femurs were used to simulate early-stage ischemic necrosis. The core decompression target at the anterolateral femoral head was simulated using an embedded glass ball (target). Three positioning Kirschner wires were drilled into the top and bottom of the greater trochanter. The specimen was then subjected to computed tomography (CT). A CT image of the specimen was imported into the Mimics software to construct a three-dimensional model including the target. The best core decompression channel was then designed using the 3D model. A navigational template for the specimen was designed using the Pro/E software and manufactured by rapid prototyping technology to guide the drilling channel. The specimen-specific navigation template was installed on the specimen using the positioning Kirschner wires. Drilling was performed using a guide needle through the guiding hole on the templates. The distance between the end point of the guide needle and the target was measured to validate the patient-specific surgical accuracy. Results The average distance between the tip of the guide needle drilled through the guiding template and the target was 1.92±0.071 mm. Conclusions Core decompression using a computer-rapid prototyping template is a reliable and accurate technique that could provide a new method of precision decompression for early-stage ischemic necrosis. PMID:28464029

  1. Patient-specific core decompression surgery for early-stage ischemic necrosis of the femoral head.

    PubMed

    Wang, Wei; Hu, Wei; Yang, Pei; Dang, Xiao Qian; Li, Xiao Hui; Wang, Kun Zheng

    2017-01-01

    Core decompression is an efficient treatment for early stage ischemic necrosis of the femoral head. In conventional procedures, the pre-operative X-ray only shows one plane of the ischemic area, which often results in inaccurate drilling. This paper introduces a new method that uses computer-assisted technology and rapid prototyping to enhance drilling accuracy during core decompression surgeries and presents a validation study using cadaveric tests. Twelve cadaveric human femurs were used to simulate early-stage ischemic necrosis. The core decompression target at the anterolateral femoral head was simulated using an embedded glass ball (target). Three positioning Kirschner wires were drilled into the top and bottom of the greater trochanter. The specimen was then subjected to computed tomography (CT). A CT image of the specimen was imported into the Mimics software to construct a three-dimensional model including the target. The best core decompression channel was then designed using the 3D model. A navigational template for the specimen was designed using the Pro/E software and manufactured by rapid prototyping technology to guide the drilling channel. The specimen-specific navigation template was installed on the specimen using the positioning Kirschner wires. Drilling was performed using a guide needle through the guiding hole on the templates. The distance between the end point of the guide needle and the target was measured to validate the patient-specific surgical accuracy. The average distance between the tip of the guide needle drilled through the guiding template and the target was 1.92±0.071 mm. Core decompression using a computer-rapid prototyping template is a reliable and accurate technique that could provide a new method of precision decompression for early-stage ischemic necrosis.

  2. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computing load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message passing is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
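
    The two-stage pipeline can be sketched with ordinary message passing between processes; here Python's multiprocessing stands in for the DSP runtime, and the stage names are hypothetical placeholders for the actual imaging steps:

      # Two-stage pipeline parallel model based on message passing (sketch).
      import multiprocessing as mp

      def stage1(inbox, outbox):
          for frame in iter(inbox.get, None):   # run until a None sentinel
              outbox.put(("rectified", frame))  # e.g. geometric rectification
          outbox.put(None)                      # forward the sentinel

      def stage2(inbox, results):
          for item in iter(inbox.get, None):
              results.put(("matched", item))    # e.g. dense stereo matching
          results.put(None)

      if __name__ == "__main__":
          q01, q12, out = mp.Queue(), mp.Queue(), mp.Queue()
          workers = [mp.Process(target=stage1, args=(q01, q12)),
                     mp.Process(target=stage2, args=(q12, out))]
          for w in workers:
              w.start()
          for frame in range(8):                # stages overlap in time
              q01.put(frame)
          q01.put(None)
          for item in iter(out.get, None):
              print(item)
          for w in workers:
              w.join()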

  3. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262

  4. Underwater Threat Source Localization: Processing Sensor Network TDOAs with a Terascale Optical Core Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Imam, Neena

    2007-01-01

    Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon™ processor.
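
    At its core, the TDOA computation reduces to locating the peak of a cross-correlation between two sensor signals. A minimal NumPy sketch, with an invented pulse and sampling rate:

      # Estimate the time difference of arrival from a cross-correlation peak.
      import numpy as np

      fs = 10_000                                  # sample rate (Hz), invented
      t = np.arange(0, 0.1, 1 / fs)
      pulse = np.exp(-((t - 0.02) / 0.002) ** 2)   # a short wavefront
      true_delay = 37                              # samples

      s1 = pulse + 0.05 * np.random.randn(t.size)
      s2 = np.roll(pulse, true_delay) + 0.05 * np.random.randn(t.size)

      xcorr = np.correlate(s2, s1, mode="full")
      lag = np.argmax(xcorr) - (s1.size - 1)       # lag of s2 relative to s1
      print("estimated TDOA: %.4f ms" % (1000 * lag / fs))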

  5. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to developments in computer technology. However, the recent gains are due to the emergence of multi-core high-performance computers, so parallel computing is key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
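
    The distributed-memory style can be sketched with mpi4py; PHITS itself is not Python, so this only illustrates the MPI pattern of independent histories followed by a tally reduction:

      # Distributed-memory Monte Carlo pattern (illustration, not PHITS).
      # Run with e.g.: mpiexec -n 4 python this_script.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      rng = np.random.default_rng(seed=rank)       # independent stream per rank
      histories = 100_000
      # toy "dose" tally: mean of an exponential path-length sample
      local_tally = rng.exponential(scale=1.0, size=histories).mean()

      total = comm.reduce(local_tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("combined tally:", total / size)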

  6. A fast CT reconstruction scheme for a general multi-core PC.

    PubMed

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved severalfold using the latest quad-core processors.

  7. A Fast CT Reconstruction Scheme for a General Multi-Core PC

    PubMed Central

    Zeng, Kai; Bai, Erwei; Wang, Ge

    2007-01-01

    Expensive computational cost is a severe limitation in CT reconstruction for clinical applications that need real-time feedback. A primary example is bolus-chasing computed tomography (CT) angiography (BCA) that we have been developing for the past several years. To accelerate the reconstruction process using the filtered backprojection (FBP) method, specialized hardware or graphics cards can be used. However, specialized hardware is expensive and not flexible. The graphics processing unit (GPU) in a current graphics card can only reconstruct images in a reduced precision and is not easy to program. In this paper, an acceleration scheme is proposed based on a multi-core PC. In the proposed scheme, several techniques are integrated, including utilization of geometric symmetry, optimization of data structures, single-instruction multiple-data (SIMD) processing, multithreaded computation, and an Intel C++ compiler. Our scheme maintains the original precision and involves no data exchange between the GPU and CPU. The merits of our scheme are demonstrated in numerical experiments against the traditional implementation. Our scheme achieves a speedup of about 40, which can be further improved severalfold using the latest quad-core processors. PMID:18256731

  8. Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena

    2010-09-30

    Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and leverage a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, the strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.

  9. 34. DESPATCH CORE OVENS, GREY IRON FOUNDRY CORE ROOM, BAKES ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. DESPATCH CORE OVENS, GREY IRON FOUNDRY CORE ROOM, BAKES CORES THAT ARE NOT MADE ON HEATED OR COLD BOX CORE MACHINES, TO SET BINDING AGENTS MIXED WITH THE SAND CREATING CORES HARD ENOUGH TO WITHSTAND THE FLOW OF MOLTEN IRON INSIDE A MOLD. - Stockham Pipe & Fittings Company, Grey Iron Foundry, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL

  10. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  11. Designing Computer-Based Assessments: Multidisciplinary Findings and Student Perspectives

    ERIC Educational Resources Information Center

    Dembitzer, Leah; Zelikovitz, Sarah; Kettler, Ryan J.

    2017-01-01

    A partnership was created between psychologists and computer programmers to develop a computer-based assessment program. Psychometric concerns of accessibility, reliability, and validity were juxtaposed with core development concepts of usability and user-centric design. Phases of development were iterative, with evaluation phases alternating with…

  12. Core Formation Process and Light Elements in the Planetary Core

    NASA Astrophysics Data System (ADS)

    Ohtani, E.; Sakairi, T.; Watanabe, K.; Kamada, S.; Sakamaki, T.; Hirao, N.

    2015-12-01

    Si, O, and S are major candidates for light elements in the planetary core. In the early stage of planetary formation, core formation started by percolation of the metallic liquid through the silicate matrix, because the Fe-S-O and Fe-S-Si eutectic temperatures are significantly lower than the solidus of the silicates. Therefore, in the early stage of accretion of the planets, the eutectic liquid with S enrichment was formed and separated into the core by percolation. The major light element in the core at this stage will be sulfur. The internal pressure and temperature increased with the growth of the planets, and the metal component depleted in S was molten. The metallic melt contained both Si and O at high pressure in the deep magma ocean in the later stage. Thus, the core contains S, Si, and O in this stage of core formation. Partitioning experiments between solid and liquid metals indicate that S is partitioned into the liquid metal, whereas O partitions only weakly into the liquid. Partitioning of Si changes with the metallic iron phase: fcc iron alloy coexisting with the metallic liquid below 30 GPa is depleted in Si, whereas hcp-Fe alloy coexisting with the liquid above 30 GPa favors Si. This contrast in Si partitioning produces remarkable differences in the compositions of the solid inner core and liquid outer core among the terrestrial planets. Our melting experiments on the Fe-S-Si and Fe-O-S systems at high pressure indicate that the core adiabats in the small planets, Mercury and Mars, are greater than the slopes of the solidus and liquidus curves of these systems. Thus, in these planets, the core crystallized at the top of the liquid core and 'snowing core' formation occurred during crystallization. The solid inner core is depleted in both Si and S whereas the liquid outer core is relatively enriched in Si and S in these planets. On the other hand, the core adiabats in the large planets, Earth and Venus, are smaller than the solidus and liquidus curves of the systems. The

  13. Influence of core flows on the decade variations of the polar motion

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Le Huy, M.; Le Mouël, J.-L.

    We address the possibility that the core flows that generate the geomagnetic field contribute significantly to the decade variations of the mean pole position (generally called the Markowitz wobble). This assumption is made plausible by the observation that the flow at the surface of the core, estimated from geomagnetic secular variation models, experiences important changes on this time scale. We discard the viscous and electromagnetic core-mantle couplings and consider only the pressure torque Γ_p^f resulting from the fluid flow overpressure acting on the non-spherical core-mantle boundary (CMB) at the bottom of the mantle, and the gravity torque Γ_g^f due to the density heterogeneity driving the core flow. We show that forces within the core balance each other on the time scale considered and, using global integrals over the core, the mantle and the whole Earth, we write Euler's equation for the mantle in terms of two more useful torques, Γ_geo and Γ'. The "geostrophic torque" Γ_geo incorporates Γ_p^f and part of Γ_g^f, while Γ' is another fraction of Γ_g^f. We recall how the geostrophic pressure p_geo, and thus Γ_geo for a given topography, can be derived from the flow at the CMB, and compute the motion of the mean pole from 1900 to 1990, assuming in a first approach that the unknown Γ' can be neglected. The amplitude of the computed pole motion is three to ten times less than the observed one and out of phase with it. In order to estimate the possible contribution of Γ', we then use a second approach and consider the case in which the reference state for the Earth is assumed to be the classical axisymmetric ellipsoidal figure with an almost constant ellipticity within the core. We show that (Γ_geo + Γ') is then equal to a pseudo-electromagnetic torque Γ_L3, the torque exerted on the core by the component of the Lorentz force along the axis of rotation (this torque exists even though the mantle is assumed insulating). This proves that, at least in this case and

  14. How cores grow by pebble accretion. I. Direct core growth

    NASA Astrophysics Data System (ADS)

    Brouwers, M. G.; Vazan, A.; Ormel, C. W.

    2018-03-01

    Context. Planet formation by pebble accretion is an alternative to planetesimal-driven core accretion. In this scenario, planets grow by the accretion of cm- to m-sized pebbles instead of km-sized planetesimals. One of the main differences with planetesimal-driven core accretion is the increased thermal ablation experienced by pebbles. This can provide early enrichment to the planet's envelope, which influences its subsequent evolution and changes the process of core growth. Aims: We aim to predict core masses and envelope compositions of planets that form by pebble accretion and compare mass deposition of pebbles to planetesimals. Specifically, we calculate the core mass where pebbles completely evaporate and are absorbed before reaching the core, which signifies the end of direct core growth. Methods: We model the early growth of a protoplanet by calculating the structure of its envelope, taking into account the fate of impacting pebbles or planetesimals. The region where high-Z material can exist in vapor form is determined by the temperature-dependent vapor pressure. We include enrichment effects by locally modifying the mean molecular weight of the envelope. Results: In the pebble case, three phases of core growth can be identified. In the first phase (Mcore < 0.23-0.39 M⊕), pebbles impact the core without significant ablation. During the second phase (Mcore < 0.5M⊕), ablation becomes increasingly severe. A layer of high-Z vapor starts to form around the core that absorbs a small fraction of the ablated mass. The rest of the material either rains out to the core or instead mixes outwards, slowing core growth. In the third phase (Mcore > 0.5M⊕), the high-Z inner region expands outwards, absorbing an increasing fraction of the ablated material as vapor. Rainout ends before the core mass reaches 0.6 M⊕, terminating direct core growth. In the case of icy H2O pebbles, this happens before 0.1 M⊕. Conclusions: Our results indicate that pebble accretion can

  15. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  16. Post impact behavior of mobile reactor core containment systems

    NASA Technical Reports Server (NTRS)

    Puthoff, R. L.; Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    The reactor core containment vessel temperatures after impact, and the design variables that affect the post-impact survival of the system, are analyzed. The heat transfer analysis includes conduction, radiation, and convection, in addition to the core material heats of fusion and vaporization, under partial burial conditions. Also included is the fact that fission products vaporize, transport radially outward, and condense on cooler surfaces, resulting in a moving heat source. A computer program entitled Executive Subroutines for Afterheat Temperature Analysis (ESATA) was written to perform this complex heat transfer analysis. Seven cases were calculated for a reactor power system capable of delivering up to 300 MW of thermal power to a nuclear airplane.

  17. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, F.; Bazin, L.; Capron, E.; Landais, A.; Lemieux-Dudon, B.; Masson-Delmotte, V.

    2015-05-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the lock-in depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice- and air-dated horizons, ice and air depth intervals with known durations, depth observations (depth shift between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links in between ice cores. The optimization is formulated as a least squares problem, implying that all densities of probabilities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono follows an approach similar to that of the Datice model which was recently used to produce the AICC2012 (Antarctic ice core chronology) for four Antarctic ice cores and one Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming point of views. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice
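
    The heart of such an inversion, a Gaussian least-squares problem solved with Levenberg-Marquardt, is easy to sketch with SciPy. The toy age model and every number below are invented stand-ins, not IceChrono1's physics:

      # Toy Levenberg-Marquardt inversion of dated horizons (not IceChrono1).
      import numpy as np
      from scipy.optimize import least_squares

      depth = np.linspace(10, 990, 25)             # observation depths (m)

      def model_age(p, z):
          accu, tau = p                            # accumulation rate, thinning scale
          return (np.exp(z / tau) - 1) * tau / accu

      true = (0.03, 2000.0)
      obs = model_age(true, depth) * (1 + 0.02 * np.random.randn(depth.size))
      sigma = 0.02 * obs                           # Gaussian observation errors

      residuals = lambda p: (model_age(p, depth) - obs) / sigma
      fit = least_squares(residuals, x0=(0.05, 1500.0), method="lm")
      print("estimated accumulation, thinning scale:", fit.x)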

  18. 23. CORE WORKER OPERATING A COREBLOWER THAT PNEUMATICALLY FILLED CORE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

23. CORE WORKER OPERATING A CORE-BLOWER THAT PNEUMATICALLY FILLED CORE BOXES WITH RESIN IMPREGNATED SAND AND CREATED A CORE THAT THEN REQUIRED BAKING, CA. 1950. - Stockham Pipe & Fittings Company, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL

  19. Determination of the neutron activation profile of core drill samples by gamma-ray spectrometry.

    PubMed

    Gurau, D; Boden, S; Sima, O; Stanga, D

    2018-04-01

    This paper provides guidance for determining the neutron activation profile of core drill samples taken from the biological shield of nuclear reactors using gamma-ray spectrometry measurements. In particular, it provides guidance on selecting a model of the right form to fit the data and on using least squares methods for model fitting. The activity profiles of two core samples taken from the biological shield of a nuclear reactor were determined. The effective activation depth and the total activity of the core samples, along with their uncertainties, were computed by Monte Carlo simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
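
    The Monte Carlo step can be illustrated in a few lines: sample the fitted profile parameters from their (here assumed Gaussian) uncertainties and propagate each draw to the integrated activity. All numbers below are invented:

      # Monte Carlo propagation of profile-fit uncertainties (illustration).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      A0 = rng.normal(50.0, 3.0, n)     # activity at the hot face +- 1 sigma
      L = rng.normal(12.0, 1.0, n)      # relaxation length (cm) +- 1 sigma
      depth = 80.0                      # core sample length (cm)

      # integral of A0*exp(-z/L) over the core, per parameter draw
      total = A0 * L * (1 - np.exp(-depth / L))
      print("total activity: %.0f +- %.0f (arbitrary units)"
            % (total.mean(), total.std()))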

  20. Analysing Student Performance Using Sparse Data of Core Bachelor Courses

    ERIC Educational Resources Information Center

    Saarela, Mirka; Karkkainen, Tommi

    2015-01-01

    Curricula for Computer Science (CS) degrees are characterized by the strong occupational orientation of the discipline. In the BSc degree structure, with clearly separate CS core studies, the learning skills for these and other required courses may vary a lot, which is shown in students' overall performance. To analyze this situation, we apply…

  1. NASA CORE (Central Operation of Resources for Educators) Educational Materials Catalog

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This educational materials catalog presents NASA CORE (Central Operation of Resources for Educators). The topics include: 1) Videocassettes (Aeronautics, Earth Resources, Weather, Space Exploration/Satellites, Life Sciences, Careers); 2) Slide Programs; 3) Computer Materials; 4) NASA Memorabilia/Miscellaneous; 5) NASA Educator Resource Centers; 6) and NASA Resources.

  2. Core-Cutoff Tool

    NASA Technical Reports Server (NTRS)

    Gheen, Darrell

    2007-01-01

    A tool makes a cut perpendicular to the cylindrical axis of a core hole at a predetermined depth to free the core at that depth. The tool does not damage the surrounding material from which the core was cut, and it operates within the core-hole kerf. Coring usually begins with use of a hole saw or a hollow cylindrical abrasive cutting tool to make an annular hole that leaves the core (sometimes called the plug) in place. In this approach to coring as practiced heretofore, the core is removed forcibly in a manner chosen to shear the core, preferably at or near the greatest depth of the core hole. Unfortunately, such forcible removal often damages both the core and the surrounding material (see Figure 1). In an alternative prior approach, especially applicable to toxic or fragile material, a core is formed and freed by means of milling operations that generate much material waste. In contrast, the present tool eliminates the damage associated with the hole-saw approach and reduces the extent of milling operations (and, hence, reduces the waste) associated with the milling approach. The present tool (see Figure 2) includes an inner sleeve and an outer sleeve and resembles the hollow cylindrical tool used to cut the core hole. The sleeves are thin enough that this tool fits within the kerf of the core hole. The inner sleeve is attached to a shaft that, in turn, can be attached to a drill motor or handle for turning the tool. This tool also includes a cutting wire attached to the distal ends of both sleeves. The cutting wire is long enough that with sufficient relative rotation of the inner and outer sleeves, the wire can cut all the way to the center of the core. The tool is inserted in the kerf until its distal end is seated at the full depth. The inner sleeve is then turned. During turning, frictional drag on the outer core pulls the cutting wire into contact with the core. The cutting force of the wire against the core increases with the tension in the wire and

  3. The Role of Visualization in Computer Science Education

    ERIC Educational Resources Information Center

    Fouh, Eric; Akbar, Monika; Shaffer, Clifford A.

    2012-01-01

    Computer science core instruction attempts to provide a detailed understanding of dynamic processes such as the working of an algorithm or the flow of information between computing entities. Such dynamic processes are not well explained by static media such as text and images, and are difficult to convey in lecture. The authors survey the history…

  4. Temporal Change of Seismic Earth's Inner Core Phases: Inner Core Differential Rotation Or Temporal Change of Inner Core Surface?

    NASA Astrophysics Data System (ADS)

    Yao, J.; Tian, D.; Sun, L.; Wen, L.

    2017-12-01

    Since Song and Richards [1996] first reported seismic evidence for temporal change of the PKIKP wave (a compressional wave refracted in the inner core) and proposed inner core differential rotation as its explanation, it has generated enormous interest in the scientific community and the public, and has motivated many studies on the implications of inner core differential rotation. However, since Wen [2006] reported seismic evidence for temporal change of the PKiKP wave (a compressional wave reflected from the inner core boundary) that requires temporal change of the inner core surface, both interpretations for the temporal change of inner core phases have existed, i.e., inner core rotation and temporal change of the inner core surface. In this study, we discuss the interpretation of the observed temporal changes of those inner core phases and conclude that inner core differential rotation is not only not required, but is also in contradiction with three lines of seismic evidence from global repeating earthquakes. Firstly, inner core differential rotation provides an implausible explanation for a disappearing inner core scatterer between a doublet in the South Sandwich Islands (SSI), which is located beneath northern Brazil based on the PKIKP and PKiKP coda waves of the earlier event of the doublet. Secondly, the temporal change of PKIKP and its coda waves among a cluster in the SSI is inconsistent with the interpretation of inner core differential rotation, with one set of the data requiring inner core rotation and the other requiring non-rotation. Thirdly, it is not reasonable to invoke inner core differential rotation to explain travel time changes of PKiKP waves on a very short time scale (several months), which is observed for repeating earthquakes in the Middle America subduction zone. On the other hand, temporal change of the inner core surface could provide a consistent explanation for all the observed temporal changes of PKIKP and PKiKP and their coda waves. We conclude that

  5. Computational Investigation of Shock-Mitigation Efficacy of Polyurea When Used in a Combat Helmet

    DTIC Science & Technology

    2012-01-01

    Emerald article: "Computational investigation of shock-mitigation efficacy of polyurea when used in a combat helmet: a core sample analysis", Multidiscipline Modeling in Materials and Structures, Vol. 8 (2012).

  6. Performance implications from sizing a VM on multi-core systems: A data analytic application's view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Horey, James L; Begoli, Edmon

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra for YCSB workloads does not depend heavily on the processing capacity of a system, while the size of the data set relative to allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running on cloud computing environments.

  7. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  8. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1982-01-01

    A vortex filament-vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separation was developed. Semi-empirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: (1) the present method is generally accurate in predicting the lift and induced drag coefficients but the predicted pitching moment is too positive; (2) the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; (3) the two vortex core system applied to the double delta and strake wings produce overall aerodynamic characteristics which have good agreement with data except for the pitching moment; and (4) the computer time for the present method is about two thirds of that of Mehrotra's method.

  9. Transport Properties of Earth's Core

    NASA Astrophysics Data System (ADS)

    Cohen, R. E.; Zhang, P.; Xu, J.

    2016-12-01

    One of the most important parameters governing the original heat that drives all processes in the Earth is the thermal conductivity of Earth's core. Heat is transferred through the core by convection and conduction, and the convective component provides energy to drive the geodynamo. Sha and Cohen (2011) found that the electrical conductivity of solid hcp-iron was much higher than had been assumed by geophysicists, based on electronic structure computations for electron-phonon (e-p) scattering within density functional theory [1]. Thermal conductivity is related to electrical conductivity through the empirical Wiedemann-Franz law of 1853 [2]. Pozzo et al. [3] found that the electrical conductivity of liquid iron alloys was too high for conventional dynamo models to work; there simply is not enough energy, so O'Rourke and Stevenson proposed a model driven by precipitation of Mg from the core [4], supported by recent experiments [5]. Recent measurements by Ohta et al. show even lower resistivities than predicted by DFT e-p scattering, and invoke a saturation model to account for this [6], whereas Konopkova et al. found thermal conductivities consistent with earlier geophysical estimates [7]. We are using first-principles methods, including dynamical mean field theory for electron-electron scattering and highly converged e-p computations, and find evidence for strong anisotropy in solid hcp-Fe that may help explain some experimental results. The current status of the field will be discussed along with our recent results. This work is supported by the ERC Advanced grant ToMCaT, the NSF, and the Carnegie Institution for Science. [1] X. Sha and R. E. Cohen, J. Phys.: Condens. Matter 23, 075401 (2011). [2] R. Franz and G. Wiedemann, Annalen der Physik 165, 497 (1853). [3] M. Pozzo, C. Davies, D. Gubbins, and D. Alfe, Nature 485, 355 (2012). [4] J. G. O'Rourke and D. J. Stevenson, Nature 529, 387 (2016). [5] J. Badro, J. Siebert, and F. Nimmo, Nature (2016). [6] K. Ohta, Y. Kuwayama, K

  10. Learning Motivation in E-Learning Facilitated Computer Programming Courses

    ERIC Educational Resources Information Center

    Law, Kris M. Y.; Lee, Victor C. S.; Yu, Y. T.

    2010-01-01

    Computer programming skills constitute one of the core competencies that graduates from many disciplines, such as engineering and computer science, are expected to possess. Developing good programming skills typically requires students to do a lot of practice, which cannot sustain unless they are adequately motivated. This paper reports a…

  11. Sulforaphane Inhibits Lipopolysaccharide-Induced Inflammation, Cytotoxicity, Oxidative Stress, and miR-155 Expression and Switches to Mox Phenotype through Activating Extracellular Signal-Regulated Kinase 1/2-Nuclear Factor Erythroid 2-Related Factor 2/Antioxidant Response Element Pathway in Murine Microglial Cells.

    PubMed

    Eren, Erden; Tufekci, Kemal Ugur; Isci, Kamer Burak; Tastan, Bora; Genc, Kursad; Genc, Sermin

    2018-01-01

    Sulforaphane (SFN) is a natural product with cytoprotective, anti-inflammatory, and antioxidant effects. In this study, we evaluated the mechanisms of its effects on lipopolysaccharide (LPS)-induced cell death, inflammation, oxidative stress, and polarization in murine microglia. We found that SFN protects N9 microglial cells against LPS-induced cell death and suppresses LPS-induced levels of the secreted pro-inflammatory cytokines tumor necrosis factor-alpha, interleukin-1 beta, and interleukin-6. SFN is also a potent inducer of the redox-sensitive transcription factor nuclear factor erythroid 2-related factor 2 (Nrf2), which is responsible for the transcription of antioxidant, cytoprotective, and anti-inflammatory genes. SFN induced translocation of Nrf2 to the nucleus via extracellular signal-regulated kinase 1/2 (ERK1/2) pathway activation. An siRNA-mediated knockdown study showed that the effects of SFN on LPS-induced reactive oxygen species, reactive nitrogen species, and pro-inflammatory cytokine production and cell death are partly Nrf2 dependent. The Mox phenotype is a novel microglial phenotype that has roles in oxidative stress responses. Our results suggest that SFN induces the Mox phenotype in murine microglia through the Nrf2 pathway. SFN also alleviated LPS-induced expression of the inflammatory microRNA miR-155. Finally, SFN inhibits microglia-mediated neurotoxicity, as demonstrated by conditioned medium and co-culture experiments. In conclusion, SFN exerts protective effects on microglia and modulates the microglial activation state.

  12. Developments in the Gung Ho dynamical core

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas

    2017-04-01

    Gung Ho is the new dynamical core being developed for the next-generation Met Office weather and climate model, suitable for meeting the exascale challenge on emerging computer architectures. It builds upon the earlier collaborative project of the same name between the Met Office, NERC and STFC Daresbury to investigate suitable numerical methods for dynamical cores. A mixed finite-element approach is used, where different finite-element spaces represent different fields. This method provides a number of improvements over the current model, such as compatibility and inherent conservation on quasi-uniform unstructured meshes, whilst maintaining the accuracy and good dispersion properties of the staggered grid currently used. Furthermore, the mixed finite-element approach allows a large degree of flexibility in the type of mesh, order of approximation and discretisation, providing a simple way to test alternative options to obtain the best model possible.
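
    The flavour of a mixed finite-element discretisation can be shown with Firedrake (a convenient stand-in here, not the Gung Ho code): a mixed Poisson problem pairing a Raviart-Thomas flux space with a discontinuous scalar space, the kind of compatible pairing the abstract describes:

      # Mixed Poisson with compatible finite-element spaces (Firedrake sketch).
      from firedrake import *

      mesh = UnitSquareMesh(32, 32)
      V = FunctionSpace(mesh, "RT", 1)       # flux lives in Raviart-Thomas
      Q = FunctionSpace(mesh, "DG", 0)       # scalar lives in discontinuous P0
      W = V * Q

      sigma, u = TrialFunctions(W)
      tau, v = TestFunctions(W)
      x, y = SpatialCoordinate(mesh)
      f = Function(Q).interpolate(10 * exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02))

      a = (dot(sigma, tau) + div(tau) * u + div(sigma) * v) * dx
      L = -f * v * dx

      w = Function(W)
      solve(a == L, w, solver_parameters={"mat_type": "aij",
                                          "ksp_type": "preonly",
                                          "pc_type": "lu"})
      sigma_h, u_h = w.subfunctions          # discrete flux and scalar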

  13. Computational Psychiatry

    PubMed Central

    Wang, Xiao-Jing; Krystal, John H.

    2014-01-01

    Psychiatric disorders such as autism and schizophrenia arise from abnormalities in brain systems that underlie cognitive, emotional and social functions. The brain is enormously complex and its abundant feedback loops on multiple scales preclude intuitive explication of circuit functions. In close interplay with experiments, theory and computational modeling are essential for understanding how, precisely, neural circuits generate flexible behaviors and how their impairments give rise to psychiatric symptoms. This Perspective highlights recent progress in applying computational neuroscience to the study of mental disorders. We outline basic approaches, including identification of core deficits that cut across disease categories, biologically realistic modeling bridging cellular and synaptic mechanisms with behavior, and model-aided diagnosis. The need for new research strategies in psychiatry is urgent. Computational psychiatry potentially provides powerful tools for elucidating pathophysiology that may inform both diagnosis and treatment. Achieving this promise will require investment in cross-disciplinary training and research in this nascent field. PMID:25442941

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Bays; W. Skerjanc; M. Pope

    A comparative analysis of results obtained from 2-D lattice calculations and from 3-D full-core nodal calculations, in the frame of MOX fuel design, was conducted. This study revealed a set of advantages and disadvantages of each method, which can be used to guide the level of accuracy desired for future fuel and fuel cycle calculations. For the purpose of isotopic generation for fuel cycle analyses, the approach of using a 2-D lattice code (i.e., a fuel assembly in an infinite lattice) gave reasonable predictions of uranium and plutonium isotope concentrations at the 3-D core simulation's predicted batch-average discharge burnup. However, it was found that the 2-D lattice calculation can under-predict the power of pins located along a shared edge between MOX and UO2 by as much as 20%. In this analysis, this error did not occur in the peak pin. However, this was a coincidence and does not rule out the possibility that the peak pin could occur in a lattice position with high calculation uncertainty in future un-optimized studies. Another important consideration in realistic fuel design is the prediction of the peak axial burnup and neutron fluence. The use of 3-D core simulation gave peak burnup conditions, at the pellet level, approximately 1.4 times greater than what can be predicted using back-of-the-envelope assumptions of average specific power and irradiation time.
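
    The back-of-the-envelope estimate referred to above is simple enough to spell out. The numbers below are generic PWR-like values, not the study's:

      # Batch-average discharge burnup from average specific power and time,
      # and the ~1.4x pellet peaking seen in the 3-D simulation. Illustrative
      # numbers only.
      specific_power = 38.0    # W/gHM core-average (= MW/tHM)
      efpd = 1200.0            # effective full-power days over assembly life
      batch_avg = specific_power * efpd / 1000.0   # GWd/tHM
      peak_pellet = 1.4 * batch_avg
      print("batch-average discharge burnup: %.1f GWd/tHM" % batch_avg)
      print("peak pellet burnup:             %.1f GWd/tHM" % peak_pellet)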

  15. Modal analysis and acoustic transmission through offset-core honeycomb sandwich panels

    NASA Astrophysics Data System (ADS)

    Mathias, Adam Dustin

    The work presented in this thesis is motivated by earlier research showing that double, offset-core honeycomb sandwich panels increase thermal resistance and, hence, decrease heat transfer through the panels. This result led to the hypothesis that these panels could be used for acoustic insulation. Using commercial finite element modeling software, COMSOL Multiphysics, the acoustical properties, specifically the transmission loss across a variety of offset-core honeycomb sandwich panels, are studied for the case of a plane acoustic wave impacting the panel at normal incidence. The transmission loss results are compared with those of single-core honeycomb panels with the same cell sizes. The fundamental frequencies of the panels are also computed in an attempt to better understand the vibrational modes of these particular sandwich-structured panels. To ensure that the finite element analysis software is adequate for the task at hand, two relevant benchmark problems are solved and compared with theory; results from these benchmarks compared well with theory. Transmission loss results from the offset-core honeycomb sandwich panels show increased transmission loss, especially for large-cell honeycombs, when compared to single-core honeycomb panels.
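
    A standard theory check for such transmission-loss computations is the normal-incidence mass law, TL = 10*log10(1 + (omega*m/(2*rho*c))^2); a quick evaluation with an invented panel surface density:

      # Normal-incidence mass-law transmission loss (generic benchmark).
      import numpy as np

      rho, c = 1.21, 343.0     # air density (kg/m^3) and sound speed (m/s)
      m = 6.0                  # panel surface density (kg/m^2), invented
      f = np.array([125., 250., 500., 1000., 2000., 4000.])   # Hz

      tl = 10 * np.log10(1 + (2 * np.pi * f * m / (2 * rho * c)) ** 2)
      for fi, ti in zip(f, tl):
          print("%6.0f Hz : TL = %5.1f dB" % (fi, ti))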

  16. Adaptive control method for core power control in TRIGA Mark II reactor

    NASA Astrophysics Data System (ADS)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1MWth Reactor TRIGA PUSPATI (RTP) Mark II type has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). It is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of RTP, due to the sensitivity of nuclear research reactor operation. Currently, the power-tracking performance of the system is unsatisfactory and can be improved. Therefore, a new core power controller is needed to improve tracking performance and to regulate reactor power by controlling the movement of the control rods. In this paper, adaptive controllers, specifically Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, adaptive controller models, and control rod selection programming. The mathematical models of the reactor core were based on a point kinetics model, thermal hydraulic models, and reactivity models. The adaptive control model was designed using the Lyapunov method to ensure a stable closed-loop system, and the STC Generalised Minimum Variance (GMV) controller does not require exact knowledge of the plant transfer function. The performance of the proposed adaptive controllers and the FCA is compared via computer simulation; the simulation results demonstrate the effectiveness and good performance of the proposed control method for core power control.
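
    The MRAC idea can be illustrated with the classic MIT rule on a first-order plant whose gain is unknown; this is the generic scheme only, not the RTP core model with its point kinetics, thermal hydraulics and rod logic:

      # Minimal MRAC with the MIT rule (generic illustration, not the RTP model).
      dt, T = 0.01, 200.0
      a, b = 2.0, 0.5          # plant dy/dt = -a*y + b*u, gain b unknown
      am, bm = 2.0, 2.0        # reference model dym/dt = -am*ym + bm*r
      gamma = 5.0              # adaptation gain

      y = ym = theta = 0.0
      for k in range(int(T / dt)):
          t = k * dt
          r = 1.0 if (t // 5) % 2 == 0 else 0.2    # square-wave demand
          u = theta * r                            # adjustable gain controller
          e = y - ym                               # tracking error
          y += dt * (-a * y + b * u)
          ym += dt * (-am * ym + bm * r)
          theta += dt * (-gamma * e * ym)          # MIT rule adaptation
      print("adapted gain theta = %.3f (ideal bm/b = %.3f)" % (theta, bm / b))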

  17. Deformation Behavior of Al/a-Si Core-shell Nanostructures

    NASA Astrophysics Data System (ADS)

    Fleming, Robert

    Al/a-Si core-shell nanostructures (CSNs), consisting of a hemispherical Al core surrounded by a hard shell of a-Si, have been shown to display unusual mechanical behavior in response to compression loading. Most notably, these nanostructures exhibit substantial deformation recovery, even when loaded well beyond the elastic limit. Nanoindentation measurements revealed a unique mechanical response characterized by discontinuous signatures in the load-displacement data. In conjunction with the indentation signatures, nearly complete deformation recovery is observed. This behavior is attributed to dislocation nucleation and annihilation events enabled by the three-dimensional confinement of the Al core. As the core confinement is reduced, either through an increase in confined core volume or a change in the geometrical confinement, the indentation signatures and deformation resistance are significantly reduced. Complementary molecular dynamics simulations show that a substantial amount of dislocation egression occurs in the core of CSNs during unloading as dislocations annihilate at the core/shell interface. Smaller core diameters correlate with the development of a larger back-stress within the core during unloading, which further correlates with improved dislocation annihilation after unloading. Furthermore, dislocations nucleated in the core of core-shell nanorods are not as effectively removed as in CSNs. Nanostructure-textured surfaces (NSTSs) composed of Al/a-Si CSNs have improved tribological properties compared to surfaces patterned with Al nanodots and to a flat (100) Si surface. NSTSs have a coefficient of friction (COF) as low as 0.015, exhibit low adhesion with adhesion forces of less than 1 microN, and are highly deformation resistant, with no apparent surface deformation after nanoscratch testing, even at contact forces up to 8000 microN. In comparison, (100) Si has substantially higher adhesion and COF (approximately 10 microN and 0.062, respectively).

  18. Graphite grain-size spectrum and molecules from core-collapse supernovae

    NASA Astrophysics Data System (ADS)

    Clayton, Donald D.; Meyer, Bradley S.

    2018-01-01

    Our goal is to compute the abundances of carbon atomic complexes that emerge from the C + O cores of core-collapse supernovae. We utilize our chemical reaction network in which every atomic step of growth employs a quantum-mechanically guided reaction rate. This tool follows step-by-step the growth of linear carbon chain molecules from C atoms in the oxygen-rich C + O cores. We postulate that once linear chain molecules reach a sufficiently large size, they isomerize to ringed molecules, which serve as seeds for graphite grain growth. We demonstrate our technique for merging the molecular reaction network with a parallel program that can follow 10^17 steps of C addition onto the rare seed species. Due to radioactivity within the C + O core, abundant ambient oxygen is unable to convert C to CO, except to a limited degree that actually facilitates carbon molecular ejecta. But oxygen severely minimizes the linear-carbon-chain abundances. Despite the tiny abundances of these linear-carbon-chain molecules, they can give rise to a small abundance of ringed-carbon molecules that serve as the nucleations on which graphite grain growth builds. We expand the C + O-core gas adiabatically from 6000 K for 10^9 s, by which point reactions have essentially stopped. These adiabatic tracks emulate the actual expansions of the supernova cores. Using a standard model of 10^56 atoms of C + O core ejecta having O/C = 3, we calculate standard ejection yields of graphite grains of all sizes produced, of the CO molecular abundance, of the abundances of linear-carbon molecules, and of Buckminsterfullerene. Until just a few years ago, none of these except CO was expected from the C + O cores.

  19. High Productivity Computing Systems and Competitiveness Initiative

    DTIC Science & Technology

    2007-07-01

    ...planning committee for the annual, international Supercomputing Conference in 2004 and 2005. This is the leading HPC industry conference in the world. It...sector partnerships. Partnerships will form a key part of discussions at the 2nd High Performance Computing Users Conference, planned for July 13, 2005...other things an interagency roadmap for high-end computing core technologies and an accessibility improvement plan. Improving HPC Education and...

  20. Feasibility of computed tomography-guided core needle biopsy in producing state-of-the-art clinical management in Chinese lung cancer.

    PubMed

    Chen, Hua-Jun; Yang, Jin-Ji; Fang, Liang-Yi; Huang, Min-Min; Yan, Hong-Hong; Zhang, Xu-Chao; Xu, Chong-Rui; Wu, Yi-Long

    2014-03-01

    A satisfactory biopsy determines the state-of-the-art management of lung cancer in this era of personalized medicine. This study aimed to investigate the suitability and efficacy of computed tomography (CT)-guided core needle biopsy in clinical management. A cohort of 353 patients with clinically suspected lung cancer was enrolled in the study. Patient factors and biopsy variables were recorded. Epidermal growth factor receptor (EGFR) gene mutations and echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) rearrangement were detected in tumor specimens. The adequacy of biopsy specimens for clinical trial screening and tissue bank establishment was reviewed. The overall diagnostic accuracy for malignancy was 98.5%. The median biopsy time of the cohort was 20 minutes. Among patients with non-small cell lung cancer (NSCLC), 99.3% (287/289) were diagnosed with specific histologic subtypes, and two patients (0.7%) were classified as NSCLC not otherwise specified (NOS). EGFR mutations were analyzed in 81.7% (236/289) of patients with NSCLC, and 98.7% (233/236) showed conclusive results. EML4-ALK gene fusion was tested in 43.9% (127/289) of NSCLC patients, and 98.4% (125/127) showed conclusive results; 6.4% (8/125) of those had gene fusion. Ninety-six NSCLC patients participated in clinical trial screening and provided the mandatory tumor slides for molecular profiling. Pathological evaluation was fulfilled in 90 patients (93.8%), and 99.4% (320/322) of patients with malignancy provided extra tissue for the establishment of a tumor bank. CT-guided core needle biopsy provided optimal clinical management in this era of translational medicine. This biopsy modality should be prioritized in selected lung cancer patients.

  1. (Extreme) Core-collapse Supernova Simulations

    NASA Astrophysics Data System (ADS)

    Mösta, Philipp

    2017-01-01

    In this talk I will present recent progress on modeling core-collapse supernovae with massively parallel simulations on the largest supercomputers available. I will discuss the unique challenges in both input physics and computational modeling that come with a problem involving all four fundamental forces and relativistic effects and will highlight recent breakthroughs overcoming these challenges in full 3D simulations. I will pay particular attention to how these simulations can be used to reveal the engines driving some of the most extreme explosions and conclude by discussing what remains to be done in simulation work to maximize what we can learn from current and future time-domain astronomy transient surveys.

  2. Multi-level Hierarchical Poly Tree computer architectures

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug

    1990-01-01

    Based on the concept of hierarchical substructuring, this paper develops an optimal multi-level Hierarchical Poly Tree (HPT) parallel computer architecture scheme which is applicable to the solution of finite element and difference simulations. Emphasis is given to minimizing computational effort, in-core/out-of-core memory requirements, and the data transfer between processors. In addition, a simplified communications network that reduces the number of I/O channels between processors is presented. HPT configurations that yield optimal superlinearities are also demonstrated. Moreover, to generalize the scope of applicability, special attention is given to developing: (1) multi-level reduction trees which provide an orderly/optimal procedure by which model densification/simplification can be achieved, as well as (2) methodologies enabling processor grading that yields architectures with varying types of multi-level granularity.

  3. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

    Sparse matrix-vector multiplication (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
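
    The key idea of the symmetry-exploiting kernel can be shown compactly: store only the upper triangle and let each stored off-diagonal entry contribute to both y[i] and y[j]. The serial sketch below illustrates this on a toy CSR-like structure; the paper's distributed MPI+OpenMP implementation, with its communication overlap, is far more involved.

      # Serial sketch of a symmetric SpMVM that stores only the upper triangle
      # (the paper's distributed MPI+OpenMP implementation is far more involved).
      def sym_spmv(n, rowptr, colidx, vals, x):
          """y = A x for symmetric A given in upper-triangular CSR form."""
          y = [0.0] * n
          for i in range(n):
              for k in range(rowptr[i], rowptr[i + 1]):
                  j = colidx[k]
                  y[i] += vals[k] * x[j]
                  if j != i:                  # mirror the off-diagonal entry
                      y[j] += vals[k] * x[i]
          return y

      # 3x3 example: A = [[2,1,0],[1,3,4],[0,4,5]], stored upper triangle only.
      rowptr, colidx = [0, 2, 4, 5], [0, 1, 1, 2, 2]
      vals = [2.0, 1.0, 3.0, 4.0, 5.0]
      print(sym_spmv(3, rowptr, colidx, vals, [1.0, 1.0, 1.0]))  # -> [3.0, 8.0, 9.0]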

  4. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    PubMed

    Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori

    2018-01-01

    To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, P_FDR < 0.05) with less variation in Neutral expression (d = 1.08, P = 0.003, P_FDR < 0.05). Their expressions were also less "Happy" (d = -0.78, P = 0.038, P_FDR > 0.05) with lower variability in Happy expression (d = 1.10, P = 0.003, P_FDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.

  5. Clogging evaluation of porous asphalt concrete cores in conjunction with medical x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Su, Yu-Min; Hsu, Chen-Yu; Lin, Jyh-Dong

    2014-03-01

    This study assessed the porosity of Porous Asphalt Concrete (PAC) in conjunction with a medical X-ray computed tomography (CT) facility. The PAC was designed as the surface course to achieve a target porosity of 18%. Graded aggregates, soils blended with 50% coarse sand, and crushed gravel wrapped with geotextile were compacted to serve as the base, subbase, and infiltration layers underneath the PAC. The test site, constructed in 2004, is located in northern Taiwan, where daily traffic has been light and limited. The porosity of the test track was investigated. The permeability coefficient of the PAC was found to have severely degraded, from 2.2×10^-1 to 1.2×10^-3 cm/sec after nine years of service, while the permeability below the surface course remained intact. Several field PAC cores were drilled and brought to the laboratory to evaluate the distribution of air voids nondestructively with a medical X-ray CT. The X-ray CT scan was administered in helical mode, and two cross-sectional virtual slices were exported in seconds for analyzing the air void distribution. The results show that clogging of voids occurred only within about 20 mm of the surface and can reduce the porosity by about 3%. It was also found that roller compaction can decrease the porosity by 4%. The permeability reduction at this test site can be attributed to the voids of the PAC being compacted by the roller during construction and filled by dust on the surface during service.

  6. Microhydration of LiOH: Insight from electronic decays of core-ionized states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kryzhevoi, Nikolai V., E-mail: nikolai.kryzhevoi@pci.uni-heidelberg.de

    2016-06-28

    We compute and compare the autoionization spectra of a core-ionized LiOH molecule in both its isolated and microhydrated states. Stepwise microhydration of LiOH leads to gradual elongation of the Li-OH bond length and finally to molecular dissociation. The accompanying changes in the local environment of the OH⁻ and Li⁺ counterions are reflected in the computed O 1s and Li 1s spectra. The role of solvent water molecules and the counterion in the spectral shape formation is assessed. Electronic decays of the microhydrated LiOH are found to be mostly intermolecular, since the majority of the populated final states have at least one outer-valence vacancy outside the initially core-ionized ion, mainly on a neighboring water molecule. The charge delocalization occurs through the intermolecular Coulombic and electron transfer mediated decays. Both mechanisms are highly efficient, which is partly attributed to hybridization of molecular orbitals. The computed spectral shapes are sensitive to the counterion separation as well as to the number and arrangement of solvent molecules. These sensitivities can be used for studying the local hydration structure of solvated ions in aqueous solutions.

  7. A first step to compare geodynamical models and seismic observations of the inner core

    NASA Astrophysics Data System (ADS)

    Lasbleis, M.; Waszek, L.; Day, E. A.

    2016-12-01

    Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes or influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013, Lincot 2014, 2016). Here, we propose a common framework to make such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models of the inner core, computing a synthetic seismic data set that can be compared to seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. For this, the seismic velocity is proposed to be a function of the age or initial growth rate of the material (since there is no deformation included in our models); the assumption is reasonable when considering translation, growth and super rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model which best matches the seismic observations. Preliminary results show that super rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity are well constrained. Consequently, our method does not directly compute the seismic travel times. Instead, we use age, growth rate and other parameters as proxies for the seismic properties, which represents a good first step to compare geodynamical and seismic observations. Ultimately we aim...

  8. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU in computation as a massively parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information has flooded into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment with dramatic growth in size. Although the inverted index is a useful data structure for full text searches or document retrieval, a large number of documents requires a tremendous amount of time to index. The performance of document inversion can be improved by a multi-threaded or multi-core GPU. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
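
    The core data structure is easy to sketch: a hash table mapping each term to the set of documents containing it. The serial Python version below illustrates the idea; the paper's SPMD implementation partitions documents across GPU threads on the CUDA platform, which this sketch does not attempt.

      # Minimal hash-based inverted index (a dict is a hash table). The paper's
      # GPU version assigns document chunks to CUDA threads (SPMD); this serial
      # sketch only illustrates the underlying data structure.
      from collections import defaultdict

      def build_inverted_index(docs):
          index = defaultdict(set)            # term -> set of doc ids
          for doc_id, text in enumerate(docs):
              for term in text.lower().split():
                  index[term].add(doc_id)
          return index

      docs = ["GPU computing with CUDA",
              "document inversion on GPU",
              "hash based index"]
      index = build_inverted_index(docs)
      print(sorted(index["gpu"]))             # -> [0, 1]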

  9. Evolution dynamics modeling and simulation of logistics enterprise's core competence based on service innovation

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Tong, Yuting

    2017-04-01

    With the rapid development of the economy, logistics enterprises in China face a huge challenge: they generally lack core competitiveness, and their awareness of service innovation is not strong. Scholars studying the core competence of logistics enterprises have mainly taken a static perspective rather than exploring it from the perspective of dynamic evolution. The authors therefore analyze the influencing factors and the evolution process of the core competence of logistics enterprises, use the method of system dynamics to study the causes and effects of this evolution, and construct a system dynamics model of the evolution of logistics enterprises' core competence, which can be simulated with Vensim PLE. Analysis of the effectiveness and sensitivity of the simulation model indicates that it can fit the evolution process of the core competence of logistics enterprises, reveal the process and mechanism of that evolution, and provide management strategies for improving core competence. The construction and operation of a computer simulation model offers an effective method for studying the evolution of logistics enterprises' core competence.
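
    A system dynamics model of this kind reduces to stocks and flows integrated through time. The toy sketch below evolves a single "core competence" stock with an innovation-driven inflow and an erosion outflow; the structure and coefficients are illustrative assumptions, not the authors' Vensim PLE model.

      # Toy stock-and-flow model integrated with Euler steps. All coefficients
      # are illustrative assumptions, not the authors' Vensim PLE model.
      competence = 1.0         # stock: core competence level (arbitrary units)
      innovation_rate = 0.15   # inflow coefficient from service innovation (assumed)
      decay_rate = 0.05        # outflow coefficient: competence erosion (assumed)
      capacity = 5.0           # saturation level of competence growth (assumed)
      dt, years = 0.1, 10
      for _ in range(int(years / dt)):
          inflow = innovation_rate * competence * (1 - competence / capacity)
          outflow = decay_rate * competence
          competence += dt * (inflow - outflow)
      print(f"competence after {years} years: {competence:.2f}")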

  10. Selection of core animals in the Algorithm for Proven and Young using a simulation model.

    PubMed

    Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I

    2017-12-01

    The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large, genotyped populations by separating genotyped animals into core and non-core subsets and creating a computationally efficient inverse for the genomic relationship matrix (G). As APY became the choice for large-scale genomic evaluations in BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNP distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined based on individual generations, equal representation across generations, and at random. For a sufficiently large core size, core definitions had the same accuracies and biases, even if the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for random and across generation core definitions. © 2017 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
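
    The compared core definitions amount to different ways of picking a subset of genotyped-animal IDs. The sketch below illustrates two of them, random sampling and equal representation across generations, with made-up ID ranges; constructing the actual APY inverse of G from the chosen core is not shown.

      # Two core-definition strategies from the study, sketched as ID selection.
      # ID ranges are invented; the APY G-inverse construction is omitted.
      import random

      def random_core(animal_ids, core_size, seed=0):
          """Random core definition: sample IDs uniformly."""
          return random.Random(seed).sample(animal_ids, core_size)

      def across_generation_core(ids_by_generation, core_size, seed=0):
          """Equal representation: sample the same number from each generation."""
          rng = random.Random(seed)
          per_gen = core_size // len(ids_by_generation)
          core = []
          for gen_ids in ids_by_generation:
              core.extend(rng.sample(gen_ids, per_gen))
          return core

      # Five generations of 10,000 made-up animal IDs each.
      gens = [list(range(g * 10000, (g + 1) * 10000)) for g in range(5)]
      print(len(random_core([i for g in gens for i in g], 5000)))   # -> 5000
      print(len(across_generation_core(gens, 5000)))                # -> 5000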

  11. A solid reactor core thermal model for nuclear thermal rockets

    NASA Astrophysics Data System (ADS)

    Rider, William J.; Cappiello, Michael W.; Liles, Dennis R.

    1991-01-01

    A Helium/Hydrogen Cooled Reactor Analysis (HERA) computer code has been developed. HERA has the ability to model arbitrary geometries in three dimensions, which allows the user to easily analyze reactor cores constructed of prismatic graphite elements. The code accounts for heat generation in the fuel, control rods, and other structures; conduction and radiation across gaps; convection to the coolant; and a variety of boundary conditions. The numerical solution scheme has been optimized for vector computers, making long transient analyses economical. Time integration is either explicit or implicit, which allows the use of the model to accurately calculate both short- or long-term transients with an efficient use of computer time. Both the basic spatial and temporal integration schemes have been benchmarked against analytical solutions.

  12. Core formation and core composition from coupled geochemical and geophysical constraints

    DOE PAGES

    Badro, James; Brodholt, John P.; Piet, Helene; ...

    2015-09-21

    The formation of Earth’s core left behind geophysical and geochemical signatures in both the core and mantle that remain to this day. Seismology requires that the core be lighter than pure iron and therefore must contain light elements, and the geochemistry of mantle-derived rocks reveals extensive siderophile element depletion and fractionation. Both features are inherited from metal–silicate differentiation in primitive Earth and depend upon the nature of physicochemical conditions that prevailed during core formation. To date, core formation models have only attempted to address the evolution of core and mantle compositional signatures separately, rather than seeking a joint solution. Here we combine experimental petrology, geochemistry, mineral physics and seismology to constrain a range of core formation conditions that satisfy both constraints. We find that core formation occurred in a hot (liquidus) yet moderately deep magma ocean not exceeding 1,800 km depth, under redox conditions more oxidized than present-day Earth. This new scenario, at odds with the current belief that core formation occurred under reducing conditions, proposes that Earth’s magma ocean started oxidized and has become reduced through time, by oxygen incorporation into the core. As a result, this core formation model produces a core that contains 2.7–5% oxygen along with 2–3.6% silicon, with densities and velocities in accord with radial seismic models, and leaves behind a silicate mantle that matches the observed mantle abundances of nickel, cobalt, chromium, and vanadium.

  13. Core formation and core composition from coupled geochemical and geophysical constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badro, James; Brodholt, John P.; Piet, Helene

    The formation of Earth’s core left behind geophysical and geochemical signatures in both the core and mantle that remain to this day. Seismology requires that the core be lighter than pure iron and therefore must contain light elements, and the geochemistry of mantle-derived rocks reveals extensive siderophile element depletion and fractionation. Both features are inherited from metal–silicate differentiation in primitive Earth and depend upon the nature of physicochemical conditions that prevailed during core formation. To date, core formation models have only attempted to address the evolution of core and mantle compositional signatures separately, rather than seeking a joint solution. Here we combine experimental petrology, geochemistry, mineral physics and seismology to constrain a range of core formation conditions that satisfy both constraints. We find that core formation occurred in a hot (liquidus) yet moderately deep magma ocean not exceeding 1,800 km depth, under redox conditions more oxidized than present-day Earth. This new scenario, at odds with the current belief that core formation occurred under reducing conditions, proposes that Earth’s magma ocean started oxidized and has become reduced through time, by oxygen incorporation into the core. As a result, this core formation model produces a core that contains 2.7–5% oxygen along with 2–3.6% silicon, with densities and velocities in accord with radial seismic models, and leaves behind a silicate mantle that matches the observed mantle abundances of nickel, cobalt, chromium, and vanadium.

  14. Evaluation of out-of-core computer programs for the solution of symmetric banded linear equations. [simultaneous equations

    NASA Technical Reports Server (NTRS)

    Dunham, R. S.

    1976-01-01

    FORTRAN-coded out-of-core equation solvers that use direct methods to solve symmetric banded systems of simultaneous algebraic equations are evaluated. Banded, frontal and column (skyline) solvers were studied, as well as solvers that can partition the working area and thus fit into any available core. Comparison timings are presented for several typical two- and three-dimensional continuum-type grids of elements with and without midside nodes. Extensive conclusions are also given.
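
    For readers who want the flavor of the problem class these solvers address, the snippet below solves a small symmetric banded (tridiagonal) system with SciPy's in-core solveh_banded routine. The codes evaluated in the report were out-of-core FORTRAN programs, so this is only a modern in-core stand-in, not a reconstruction of them.

      # Solve A x = b for a symmetric positive definite banded A using SciPy.
      import numpy as np
      from scipy.linalg import solveh_banded

      # Symmetric tridiagonal A with 2 on the diagonal and -1 off-diagonal,
      # stored in upper banded form (row 0 = superdiagonal, row 1 = diagonal).
      n = 5
      ab = np.zeros((2, n))
      ab[0, 1:] = -1.0
      ab[1, :] = 2.0
      b = np.ones(n)
      x = solveh_banded(ab, b)
      print(np.round(x, 3))    # -> [2.5, 4.0, 4.5, 4.0, 2.5]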

  15. A Computational and Experimental Investigation of a Delta Wing with Vertical Tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    2004-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  16. A computational and experimental investigation of a delta wing with vertical tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    1993-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  17. Computational design of a homotrimeric metalloprotein with a trisbipyridyl core

    DOE PAGES

    Mills, Jeremy H.; Sheffler, William; Ener, Maraia E.; ...

    2016-12-08

    Metal-chelating heteroaryl small molecules have found widespread use as building blocks for coordination-driven, self-assembling nanostructures. The metal-chelating noncanonical amino acid (2,2'-bipyridin-5yl)alanine (Bpy-ala) could, in principle, be used to nucleate specific metalloprotein assemblies if introduced into proteins such that one assembly had much lower free energy than all alternatives. Here we describe the use of the Rosetta computational methodology to design a self-assembling homotrimeric protein with [Fe(Bpy-ala)3]2+ complexes at the interface between monomers. X-ray crystallographic analysis of the homotrimer showed that the design process had near-atomic-level accuracy: the all-atom rmsd between the design model and crystal structure for the residues at the protein interface is ~1.4 Å. These results demonstrate that computational protein design together with genetically encoded noncanonical amino acids can be used to drive formation of precisely specified metal-mediated protein assemblies that could find use in a wide range of photophysical applications.

  18. Computational design of a homotrimeric metalloprotein with a trisbipyridyl core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Jeremy H.; Sheffler, William; Ener, Maraia E.

    Metal-chelating heteroaryl small molecules have found widespread use as building blocks for coordination-driven, self-assembling nanostructures. The metal-chelating noncanonical amino acid (2,2'-bipyridin-5yl)alanine (Bpy-ala) could, in principle, be used to nucleate specific metalloprotein assemblies if introduced into proteins such that one assembly had much lower free energy than all alternatives. Here we describe the use of the Rosetta computational methodology to design a self-assembling homotrimeric protein with [Fe(Bpy-ala)3]2+ complexes at the interface between monomers. X-ray crystallographic analysis of the homotrimer showed that the design process had near-atomic-level accuracy: the all-atom rmsd between the design model and crystal structure for the residues at the protein interface is ~1.4 Å. These results demonstrate that computational protein design together with genetically encoded noncanonical amino acids can be used to drive formation of precisely specified metal-mediated protein assemblies that could find use in a wide range of photophysical applications.

  19. Analysis of perfusion, microcirculation and drug transport in tumors. A computational study.

    NASA Astrophysics Data System (ADS)

    Zunino, Paolo; Cattaneo, Laura

    2013-11-01

    We address blood flow through a network of capillaries surrounded by a porous interstitium. We develop a computational model based on the Immersed Boundary method [C. S. Peskin. Acta Numer. 2002]. The advantage of such an approach lies in its efficiency: it does not need a full description of the real geometry, allowing large savings in memory and CPU time, and it facilitates handling fully realistic vascular networks [L. Cattaneo and P. Zunino. Technical report, MOX, Department of Mathematics, Politecnico di Milano, 2013]. The analysis of perfusion and drug release in vascularized tumors is a relevant application of such techniques. Blood vessels in tumors are substantially leakier and more tortuous than in healthy tissue. These vascular abnormalities lead to an impaired blood supply and an abnormal tumor microenvironment characterized by hypoxia and elevated interstitial fluid pressure, which reduces the distribution of drugs through advection [L.T. Baxter and R.K. Jain. Microvascular Research, 1989]. Finally, we discuss the application of the model to the delivery of nanoparticles. In particular, the transport of nanoparticles in the vessel network, their adhesion to the vessel wall, and the drug release in the surrounding tissue will be addressed.

  20. Elastic and plastic buckling of simply supported solid-core sandwich plates in compression

    NASA Technical Reports Server (NTRS)

    Seide, Paul; Stowell, Elbridge Z

    1950-01-01

    A solution is presented for the problem of the compressive buckling of simply supported, flat, rectangular, solid-core sandwich plates stressed either in the elastic range or in the plastic range. Charts for the analysis of long sandwich plates are presented for plates having face materials of 24s-t3 aluminum alloy, 76s-t6 alclad aluminum alloy, and stainless steel. A comparison of computed and experimental buckling stresses of square solid-core sandwich plates indicates fair agreement between theory and experiment.

  1. GPU-computing in econophysics and statistical physics

    NASA Astrophysics Data System (ADS)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in the financial market context are coded on a graphics card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics, the Ising model, is ported to a graphics card architecture as well, resulting in large speedup values.
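
    As a reference point for the Ising example mentioned above, the sketch below is a plain serial CPU implementation of the 2-D Ising model with Metropolis updates; the article's GPU port parallelizes the spin updates (typically via a checkerboard decomposition), which this version does not attempt.

      # Serial CPU reference for the 2-D Ising model with Metropolis updates.
      import numpy as np

      rng = np.random.default_rng(0)
      L, T, sweeps = 32, 2.27, 200          # lattice size, temperature, sweeps
      spins = rng.choice([-1, 1], size=(L, L))

      for _ in range(sweeps * L * L):
          i, j = rng.integers(0, L, size=2)
          nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
          dE = 2.0 * spins[i, j] * nb       # energy change if this spin flips
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              spins[i, j] *= -1             # accept the flip

      print("magnetization per spin:", abs(spins.mean()))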

  2. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
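
    The cost-effectiveness argument is ultimately simple arithmetic: price per instance-hour times wall time, versus the amortized hourly cost of an in-house machine. The sketch below works through that comparison with entirely hypothetical prices, runtimes and utilization; none of the figures are taken from the study.

      # Hypothetical cost-per-job arithmetic; every number here is an assumption.
      cloud_price_per_hour = 0.50   # on-demand instance price, USD/h (assumed)
      job_hours = 6.0               # wall time of one computation (assumed)
      cloud_cost = cloud_price_per_hour * job_hours

      workstation_price = 5000.0        # purchase price, USD (assumed)
      lifetime_hours = 4 * 365 * 24     # four-year service life (assumed)
      utilization = 0.10                # fraction of time computing (assumed)
      inhouse_cost = workstation_price / (lifetime_hours * utilization) * job_hours

      # At low utilization the amortized in-house cost exceeds the cloud cost,
      # which is the regime where the paper finds CCC attractive.
      print(f"cloud ${cloud_cost:.2f}/job vs in-house ${inhouse_cost:.2f}/job")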

  3. Advanced core-analyses for subsurface characterization

    NASA Astrophysics Data System (ADS)

    Pini, R.

    2017-12-01

    The heterogeneity of geological formations varies over a wide range of length scales and represents a major challenge for predicting the movement of fluids in the subsurface. Although they are inherently limited in the accessible length scale, laboratory measurements on reservoir core samples still represent the only way to make direct observations of key transport properties. Yet, properties derived from these samples are of limited use and should be regarded as sample-specific (or 'pseudos') if the presence of sub-core scale heterogeneities is not accounted for in data processing and interpretation. The advent of imaging technology has significantly reshaped the landscape of so-called Special Core Analysis (SCAL) by providing unprecedented insight into rock structure and processes down to the scale of a single pore throat (i.e., the scale at which all reservoir processes operate). Accordingly, improved laboratory workflows are needed that make use of this wealth of information by, e.g., referring to the internal structure of the sample and in-situ observations, to obtain accurate parameterisation of both rock and flow properties that can be used to populate numerical models. We report here on the development of such a workflow for the study of solute mixing and dispersion during single- and multi-phase flows in heterogeneous porous systems through a unique combination of two complementary imaging techniques, namely X-ray Computed Tomography (CT) and Positron Emission Tomography (PET). The experimental protocol is applied to both synthetic and natural porous media, and it integrates (i) macroscopic observations (tracer effluent curves), (ii) sub-core scale parameterisation of rock heterogeneities (e.g., porosity, permeability and capillary pressure), and direct 3D observation of (iii) fluid saturation distribution and (iv) the dynamic spreading of the solute plumes. Suitable mathematical models are applied to reproduce experimental observations, including both 1D and 3D...

  4. Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.

  5. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and in the mathematical modelling of bioreactors. The process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not been modelled so far by means of cellular automata. Each process in the model proposed, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms, and biofilm detachment, is simulated in a discrete manner. It was shown that for a flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however, a considerable amount of focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using the OpenMP Application Programming Interface for the C++ programming language. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of the computer programs were compared. Both algorithms enabled a significant reduction of computation time, which is important, inter alia, in the modelling and simulation of bioreactor dynamics.
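
    A heavily simplified, one-dimensional rendition of the discrete update loop is sketched below: an explicit substrate diffusion step followed by biomass growth under Haldane (substrate-inhibition) kinetics, matching the double-substrate-with-inhibition flavor described above. Grid, boundary conditions and parameters are invented for illustration, and the parallel OpenMP decomposition is omitted.

      # Toy 1-D cellular-automaton-style biofilm sketch with Haldane kinetics.
      W = 50
      biomass = [1.0 if i < 5 else 0.0 for i in range(W)]  # biofilm near the wall
      substrate = [1.0] * W                                # initial concentration

      mu_max, Ks, Ki, D, dt = 0.5, 0.2, 2.0, 0.1, 0.1      # assumed parameters
      for _ in range(200):
          # Explicit diffusion step (zero-flux at wall, fixed bulk at far end).
          new_s = substrate[:]
          for i in range(1, W - 1):
              new_s[i] += D * dt * (substrate[i-1] - 2*substrate[i] + substrate[i+1])
          new_s[0] = new_s[1]
          new_s[-1] = 1.0
          substrate = new_s
          # Growth with Haldane kinetics; uptake proportional to growth.
          for i in range(W):
              mu = mu_max * substrate[i] / (Ks + substrate[i] + substrate[i]**2 / Ki)
              growth = mu * biomass[i] * dt
              biomass[i] += growth
              substrate[i] = max(0.0, substrate[i] - growth)

      print(f"total biomass after 200 steps: {sum(biomass):.2f}")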

  6. IceChrono1: a probabilistic model to compute a common and optimal chronology for several ice cores

    NASA Astrophysics Data System (ADS)

    Parrenin, Frédéric; Bazin, Lucie; Capron, Emilie; Landais, Amaëlle; Lemieux-Dudon, Bénédicte; Masson-Delmotte, Valérie

    2016-04-01

    Polar ice cores provide exceptional archives of past environmental conditions. The dating of ice cores and the estimation of the age-scale uncertainty are essential to interpret the climate and environmental records that they contain. It is, however, a complex problem which involves different methods. Here, we present IceChrono1, a new probabilistic model integrating various sources of chronological information to produce a common and optimized chronology for several ice cores, as well as its uncertainty. IceChrono1 is based on the inversion of three quantities: the surface accumulation rate, the Lock-In Depth (LID) of air bubbles and the thinning function. The chronological information integrated into the model are: models of the sedimentation process (accumulation of snow, densification of snow into ice and air trapping, ice flow), ice and air dated horizons, ice and air depth intervals with known durations, Δdepth observations (depth shifts between synchronous events recorded in the ice and in the air) and finally air and ice stratigraphic links between ice cores. The optimization is formulated as a least squares problem, implying that all probability densities are assumed to be Gaussian. It is numerically solved using the Levenberg-Marquardt algorithm and a numerical evaluation of the model's Jacobian. IceChrono1 follows an approach similar to that of the Datice model, which was recently used to produce the AICC2012 chronology for 4 Antarctic ice cores and 1 Greenland ice core. IceChrono1 provides improvements and simplifications with respect to Datice from the mathematical, numerical and programming points of view. The capabilities of IceChrono1 are demonstrated on a case study similar to the AICC2012 dating experiment. We find results similar to those of Datice, within a few centuries, which is a confirmation of both the IceChrono1 and Datice codes. We also test new functionalities with respect to the original version of Datice: observations as ice intervals...
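
    The least-squares/Levenberg-Marquardt formulation can be illustrated in a few lines. The sketch below fits a single constant accumulation rate to a handful of dated depth horizons with Gaussian uncertainties; the data are invented for illustration, and IceChrono1 itself inverts full depth-dependent accumulation, LID and thinning fields.

      # Minimal least-squares dating sketch with SciPy's Levenberg-Marquardt.
      import numpy as np
      from scipy.optimize import least_squares

      depths = np.array([10.0, 50.0, 120.0, 300.0])     # m (illustrative)
      ages = np.array([55.0, 270.0, 650.0, 1630.0])     # years (illustrative)
      sigma = np.array([10.0, 20.0, 40.0, 80.0])        # age uncertainties

      def residuals(p):
          accumulation = p[0]                           # m of ice per year
          return (depths / accumulation - ages) / sigma # Gaussian misfit terms

      fit = least_squares(residuals, x0=[0.1], method="lm")
      print(f"accumulation rate: {fit.x[0]:.3f} m/yr")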

  7. Accounting for a fine neutronics-thermal coupling in assembly calculations for pressurized water reactors

    NASA Astrophysics Data System (ADS)

    Greiner, Nathan

    Core simulation for Pressurized Water Reactors (PWR) is performed by a set of computer codes which allow, under certain assumptions, the physical quantities of interest, such as the effective multiplication factor or the power or temperature distributions, to be approximated. The neutronics calculation scheme relies on three main steps: the production of an isotopic cross-section library; the production of a reactor database through the lattice calculation; and the full-core calculation. In the lattice calculation, in which Boltzmann's transport equation is solved over an assembly geometry, the temperature distribution is uniform and constant during irradiation. This represents a set of approximations since, on the one hand, the temperature distribution in the assembly is not uniform (strong temperature gradients in the fuel pins, discrepancies between fuel pins) and, on the other hand, irradiation causes the thermal properties of the pins to change, which modifies the temperature distribution. Our work aims at implementing and introducing a neutronics-thermomechanics coupling into the lattice calculation to finely discretize the temperature distribution and to study its effects. To perform the study, the CEA (Commissariat a l'Energie Atomique et aux Energies Alternatives) lattice code APOLLO2 was used for neutronics and the EDF (Electricite De France) code C3THER was used for the thermal calculations. We show very small effects of the pin-scale coupling when comparing the use of a temperature profile with the use of a uniform temperature over UOX-type and MOX-type fuels. We next investigate the thermal feedback using an assembly-scale coupling taking into account the presence of large water gaps in a UOX-type assembly at burnup 0. We show the very small impact on the calculation of the hot spot factor. Finally, the coupling is introduced into the isotopic depletion calculation and we show that reactivity and isotopic number density deviations remain small.

  8. Enhancement of the Accretion of Jupiter's Core by a Voluminous Low-Mass Envelope

    NASA Technical Reports Server (NTRS)

    Lissauer, Jack J.; D'angelo, Gennaro; Weidenschilling, Stuart John; Bodenheimer, Peter; Hubickyj, Olenka

    2013-01-01

    We present calculations of the early stages of the formation of Jupiter via core nucleated accretion and gas capture. The core begins as a seed body of about 350 kilometers in radius and orbits in a swarm of planetesimals whose initial radii range from 15 meters to 100 kilometers. We follow the evolution of the swarm by accounting for growth and fragmentation, viscous and gravitational stirring, and for drag-induced migration and velocity damping. Gas capture by the core substantially enhances the cross-section of the planet for accretion of small planetesimals. The dust opacity within the atmosphere surrounding the planetary core is computed self-consistently, accounting for coagulation and sedimentation of dust particles released in the envelope as passing planetesimals are ablated. The calculation is carried out at an orbital semi-major axis of 5.2 AU with an initial solids surface density of 10 g/cm^2 at that distance. The results give a core mass of 7 Earth masses and an envelope mass of approximately 0.1 Earth mass after 500,000 years, at which point the envelope growth rate surpasses that of the core. The same calculation without the envelope gives a core mass of only 4 Earth masses.

  9. Extending fullwave core ICRF simulation to SOL and antenna regions using FEM solver

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.; Wright, J. C.

    2016-10-01

    A full wave simulation approach to solving a driven RF wave problem including the hot core, SOL plasmas and possibly the antenna is presented. This approach exploits the advantages of two different ways of representing the wave field, namely treating the spatially dispersive hot conductivity in a spectral solver and handling complicated geometry in the SOL/antenna region using an unstructured mesh. Here, we compute a mode set in each region with RF electric field excitation on the connecting boundary between the core and edge regions. A mode corresponding to the antenna excitation is also computed. By requiring the continuity of the tangential RF electric and magnetic fields, the solution is obtained as a unique superposition of these modes. In this work, the TORIC core spectral solver is modified to allow for mode excitation, and the edge region of a diverted Alcator C-Mod plasma is modeled using the COMSOL FEM package. The reconstructed RF field is similar in the core region to the TORIC stand-alone simulation. However, it contains higher poloidal modes near the edge and captures a wave bounced and propagating in the poloidal direction near the vacuum-plasma boundary. These features could play an important role when the single-pass power absorption is modest. This new capability will enable antenna coupling calculations with a realistic load plasma, including collisional damping in a realistic SOL plasma and other loss mechanisms such as RF sheath rectification. USDoE Awards DE-FC02-99ER54512, DE-FC02-01ER54648.

  10. Inner core structure behind the PKP core phase triplication

    NASA Astrophysics Data System (ADS)

    Blom, Nienke A.; Deuss, Arwen; Paulssen, Hanneke; Waszek, Lauren

    2015-06-01

    The structure of the Earth's inner core is not well known between depths of ˜100-200 km beneath the inner core boundary. This is a result of the PKP core phase triplication and the existence of strong precursors to PKP phases, which hinder the measurement of inner core compressional PKIKP waves at epicentral distances between roughly 143 and 148°. Consequently, interpretation of the detailed structure of deeper regions also remains difficult. To overcome these issues we stack seismograms in slowness and time, separating the PKP and PKIKP phases which arrive simultaneously but with different slowness. We apply this method to study the inner core's Western hemisphere beneath South and Central America using paths travelling in the quasi-polar direction between 140 and 150° epicentral distance, which enables us to measure PKiKP-PKIKP differential traveltimes up to greater epicentral distance than has previously been done. The resulting PKiKP-PKIKP differential traveltime residuals increase with epicentral distance, which indicates a marked increase in seismic velocity for polar paths at depths greater than 100 km compared to reference model AK135. Assuming a homogeneous outer core, these findings can be explained by either (i) inner core heterogeneity due to an increase in isotropic velocity or (ii) increase in anisotropy over the studied depth range. Although this study only samples a small region of the inner core and the current data cannot distinguish between the two alternatives, we prefer the latter interpretation in the light of previous work.

  11. Core-Noise

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2010-01-01

    This presentation is a technical progress report and near-term outlook for NASA-internal and NASA-sponsored external work on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system level noise metrics for the 2015, 2020, and 2025 timeframes; the emerging importance of core noise and its relevance to the SFW Reduced-Noise-Aircraft Technical Challenge; the current research activities in the core-noise area, with some additional details given about the development of a high-fidelity combustion-noise prediction capability; the need for a core-noise diagnostic capability to generate benchmark data for validation of both high-fidelity work and improved models, as well as testing of future noise-reduction technologies; relevant existing core-noise tests using real engines and auxiliary power units; and examples of possible scenarios for a future diagnostic facility. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Noise-Aircraft Technical Challenge aims to enable concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical for enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. In addition, future low-emission combustor designs could increase

  12. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one of the physics processes in the Weather Research and Forecasting (WRF) model. The LSM includes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land points and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core processor design well suited to efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, respectively.
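
    The speedups reported rest largely on vectorization and multithreading. The snippet below demonstrates just the vectorization principle, timing a generic bulk-flux-style update written first as a Python loop and then as a single NumPy expression; the formula is a stand-in for illustration, not the actual Pleim-Xiu equations.

      # Vectorization demo: a flux-like update as a loop vs. a NumPy sweep.
      import time
      import numpy as np

      n = 1_000_000
      rng = np.random.default_rng(1)
      t_surf, t_air, coeff = rng.random(n), rng.random(n), rng.random(n)

      start = time.perf_counter()
      flux_loop = [coeff[i] * (t_surf[i] - t_air[i]) for i in range(n)]
      loop_s = time.perf_counter() - start

      start = time.perf_counter()
      flux_vec = coeff * (t_surf - t_air)       # one vectorized sweep
      vec_s = time.perf_counter() - start

      assert np.allclose(flux_loop, flux_vec)   # identical results
      print(f"loop: {loop_s:.3f}s, vectorized: {vec_s:.4f}s, "
            f"speedup ~{loop_s / vec_s:.0f}x")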

  13. Core Research Center

    USGS Publications Warehouse

    Hicks, Joshua; Adrian, Betty

    2009-01-01

    The Core Research Center (CRC) of the U.S. Geological Survey (USGS), located at the Denver Federal Center in Lakewood, Colo., currently houses rock core from more than 8,500 boreholes representing about 1.7 million feet of rock core from 35 States, and cuttings from 54,000 boreholes representing 238 million feet of drilling in 28 States. Although most of the boreholes are located in the Rocky Mountain region, the geologic and geographic diversity of the samples has helped the CRC become one of the largest and most heavily used public core repositories in the United States. Many of the boreholes represented in the collection were drilled for energy and mineral exploration, and many of the cores and cuttings were donated to the CRC by private companies in these industries. Some cores and cuttings were collected by the USGS along with other government agencies. Approximately one-half of the cores are slabbed and photographed. More than 18,000 thin sections and a large volume of analytical data from the cores and cuttings are also accessible. A growing collection of digital images of the cores is also becoming available on the CRC Web site at http://geology.cr.usgs.gov/crc/.

  14. The Effect of Core and Veneering Design on the Optical Properties of Polyether Ether Ketone.

    PubMed

    Zeighami, S; Mirmohammadrezaei, S; Safi, M; Falahchai, S M

    2017-12-01

    This study aimed to evaluate the effect of core shade and core and veneering thickness on the color parameters and translucency of polyether ether ketone (PEEK). Sixty PEEK discs (0.5 and 1 mm in thickness) with white and dentine shades were veneered with A2-shade indirect composite resin of 0.5, 1 and 1.5 mm thickness (n=5). Cores without the veneering material served as controls for the translucency evaluation. Color parameters were measured with a spectroradiometer. Color difference (ΔE₀₀) and translucency parameter (TP) values were computed. Data were analyzed using one-way ANOVA and Tukey's test (for veneering thickness) and independent t-tests (for core shade and thickness) in SPSS 20.0 (p<0.05). Regarding veneering thickness, white cores of 0.5 mm thickness showed significant differences in all color parameters. In white cores of 1 mm thickness and dentine cores of 0.5 and 1 mm thickness, there were statistically significant differences only in L*, a* and h*. The mean TP was significantly higher in all white cores of 1 mm thickness than in dentine cores of 1 mm. Considering ΔE₀₀ = 3.7 as clinically unacceptable, only three groups had higher mean ΔE₀₀ values. Core shade, core thickness, and veneering thickness affected the color and translucency of PEEK restorations. Copyright © 2017 Dennis Barber Ltd.
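
    The translucency parameter referred to above is conventionally computed as the CIELAB color difference of the same specimen measured over black and white backings. A minimal sketch follows with invented L*, a*, b* readings; the CIEDE2000 formula behind ΔE₀₀ involves additional weighting functions and is not reproduced here.

      # Translucency parameter (TP) from CIELAB readings over black (B) and
      # white (W) backings; the readings below are invented for illustration.
      import math

      def translucency_parameter(lab_black, lab_white):
          return math.sqrt(sum((b - w) ** 2 for b, w in zip(lab_black, lab_white)))

      lab_b = (68.0, 0.5, 14.2)   # L*, a*, b* over black backing (illustrative)
      lab_w = (72.5, 0.2, 17.8)   # L*, a*, b* over white backing (illustrative)
      print(f"TP = {translucency_parameter(lab_b, lab_w):.2f}")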

  15. Flexural wave attenuation in a sandwich beam with viscoelastic periodic cores

    NASA Astrophysics Data System (ADS)

    Guo, Zhiwei; Sheng, Meiping; Pan, Jie

    2017-07-01

    The flexural-wave attenuation performance of traditional constrained-layer damping in a sandwich beam is improved by using periodic constrained-layer damping (PCLD), where the monolithic viscoelastic core is replaced with two periodically alternating viscoelastic cores. Closed-form solutions for the wave propagation constants of the infinite periodic sandwich beam and for the forced response of the corresponding finite sandwich structure are derived theoretically, providing computational support for the analysis of attenuation characteristics. In a sandwich beam with PCLD, flexural waves are attenuated by both Bragg scattering and damping, with the attenuation level dominated by Bragg scattering in the band gaps and by damping in the pass bands. Owing to these two effects, when the parameters of the periodic cores are properly selected, a sandwich beam with PCLD can effectively reduce vibrations at much lower frequencies than one with traditional constrained-layer damping. The effects of the periodic-core parameters on band-gap properties are also discussed, showing that the average attenuation in the desired frequency band can be maximized by tuning the length ratio and core thickness to proper values. The research in this paper may provide useful information for researchers and engineers designing damping structures.
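
    To make the band-gap mechanism concrete, here is a minimal lumped-parameter analogue: an infinite chain of equal masses joined by springs that alternate between two stiffnesses, each carrying a complex (viscoelastic) loss factor. The Bloch attenuation per cell is large inside the band gaps (Bragg effect) and small but nonzero in the pass bands (damping effect). All parameter values are illustrative; this is not the paper's sandwich-beam formulation.

      import numpy as np

      def attenuation_per_cell(omega, m=1.0, k1=1.0, k2=4.0, eta=0.05):
          """Bloch attenuation constant (per unit cell) for a chain of equal
          masses with alternating complex spring stiffnesses."""
          ka, kb = k1 * (1 + 1j * eta), k2 * (1 + 1j * eta)
          def step(kl, kr):
              # Maps (u_n, u_{n-1}) -> (u_{n+1}, u_n) for a site with left
              # spring kl and right spring kr, at angular frequency omega.
              return np.array([[(kl + kr - m * omega**2) / kr, -kl / kr],
                               [1.0, 0.0]], dtype=complex)
          T = step(kb, ka) @ step(ka, kb)      # transfer matrix over one cell
          lam = np.linalg.eigvals(T)           # eigenvalues exp(+-i q a)
          return -np.log(np.abs(lam).min())    # > 0 in gaps and with damping

      for w in (0.5, 1.0, 1.5, 2.0):           # 1.5, 2.0 fall inside the gap
          print(f"omega={w:.1f}  mu={attenuation_per_cell(w):.3f}")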

  16. DCMIP2016: a review of non-hydrostatic dynamical core design and intercomparison of participating models

    NASA Astrophysics Data System (ADS)

    Ullrich, Paul A.; Jablonowski, Christiane; Kent, James; Lauritzen, Peter H.; Nair, Ramachandran; Reed, Kevin A.; Zarzycki, Colin M.; Hall, David M.; Dazlich, Don; Heikes, Ross; Konor, Celal; Randall, David; Dubos, Thomas; Meurdesoif, Yann; Chen, Xi; Harris, Lucas; Kühnlein, Christian; Lee, Vivian; Qaddouri, Abdessamad; Girard, Claude; Giorgetta, Marco; Reinert, Daniel; Klemp, Joseph; Park, Sang-Hun; Skamarock, William; Miura, Hiroaki; Ohno, Tomoki; Yoshida, Ryuji; Walko, Robert; Reinecke, Alex; Viner, Kevin

    2017-12-01

    Atmospheric dynamical cores are a fundamental component of global atmospheric modeling systems and are responsible for capturing the dynamical behavior of the Earth's atmosphere via numerical integration of the Navier-Stokes equations. These systems have existed in one form or another for over half a century, with the earliest discretizations having now evolved into a complex ecosystem of algorithms and computational strategies. In essence, no two dynamical cores are alike, and their individual successes suggest that no perfect model exists. To better understand modern dynamical cores, this paper provides a comprehensive review of 11 non-hydrostatic dynamical cores, drawn from modeling centers and groups that participated in the 2016 Dynamical Core Model Intercomparison Project (DCMIP) workshop and summer school. The review covers each system's choice of model grid, variable placement, vertical coordinate, prognostic equations, and temporal discretization, along with the diffusion, stabilization, filters, and fixers it employs.

  17. Effect of superconducting solenoid model cores on spanwise iron magnet roll control

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1985-01-01

    Compared with conventional ferromagnetic fuselage cores, superconducting solenoid cores appear to offer significant reductions in the projected cost of a large wind tunnel magnetic suspension and balance system. The provision of sufficient magnetic roll torque has been a long-standing problem with all magnetic suspension and balance systems, and the spanwise iron magnet scheme, which utilizes iron cores installed in the wings of the model, appears to be the most powerful system available. It was anticipated that the magnetization of these cores, and hence the roll torque generated, would be affected by the powerful external magnetic field of the superconducting solenoid. A preliminary study has been made of the effect of the superconducting solenoid fuselage core concept on spanwise iron magnet roll torque generation. Computed data for one representative configuration indicate that reductions in available roll torque occur over a range of applied magnetic field levels. These results indicate that a 30-percent increase in roll electromagnet capacity over that previously determined will be required for a representative 8-foot wind tunnel magnetic suspension and balance system design.

  18. 43 CFR 3593.1 - Core or test hole cores, samples, cuttings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    43 CFR § 3593.1, Core or test hole cores, samples, cuttings (Public Lands: Interior, vol. 2; Exploration and Mining Operations, Bore Holes and Samples). Excerpt: "(d) When drilling on lands with potential for encountering high pressure oil, gas or geothermal..."

  19. 43 CFR 3593.1 - Core or test hole cores, samples, cuttings.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    43 CFR § 3593.1, Core or test hole cores, samples, cuttings (Public Lands: Interior, vol. 2; Exploration and Mining Operations, Bore Holes and Samples). Excerpt: "(d) When drilling on lands with potential for encountering high pressure oil, gas or geothermal..."

  20. 43 CFR 3593.1 - Core or test hole cores, samples, cuttings.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    43 CFR § 3593.1, Core or test hole cores, samples, cuttings (Public Lands: Interior, vol. 2; Exploration and Mining Operations, Bore Holes and Samples). Excerpt: "(d) When drilling on lands with potential for encountering high pressure oil, gas or geothermal..."

  1. 43 CFR 3593.1 - Core or test hole cores, samples, cuttings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    43 CFR § 3593.1, Core or test hole cores, samples, cuttings (Public Lands: Interior, vol. 2; Exploration and Mining Operations, Bore Holes and Samples). Excerpt: "(d) When drilling on lands with potential for encountering high pressure oil, gas or geothermal..."

  2. Feasibility of computed tomography-guided core needle biopsy in producing state-of-the-art clinical management in Chinese lung cancer

    PubMed Central

    Chen, Hua-Jun; Yang, Jin-Ji; Fang, Liang-Yi; Huang, Min-Min; Yan, Hong-Hong; Zhang, Xu-Chao; Xu, Chong-Rui; Wu, Yi-Long

    2014-01-01

    Background: A satisfactory biopsy determines the state-of-the-art management of lung cancer in this era of personalized medicine. This study aimed to investigate the suitability and efficacy of computed tomography (CT)-guided core needle biopsy in clinical management. Methods: A cohort of 353 patients with clinically suspected lung cancer was enrolled in the study. Patient factors and biopsy variables were recorded. Epidermal growth factor receptor (EGFR) gene mutations and echinoderm microtubule-associated protein-like 4 (EML4)-anaplastic lymphoma kinase (ALK) rearrangement were detected in tumor specimens. The adequacy of the biopsy specimens for clinical trial screening and tissue bank establishment was reviewed. Results: Overall diagnostic accuracy for malignancy reached 98.5%. The median biopsy time for the cohort was 20 minutes. Among patients with non-small cell lung cancer (NSCLC), 99.3% (287/289) were diagnosed with specific histologic subtypes, and two patients (0.7%) were classified as NSCLC not otherwise specified (NOS). EGFR mutations were analyzed in 81.7% (236/289) of patients with NSCLC, and 98.7% (233/236) showed conclusive results. EML4-ALK gene fusion was tested in 43.9% (127/289) of NSCLC patients, and 98.4% (125/127) showed conclusive results; 6.4% (8/125) of those had gene fusion. Ninety-six NSCLC patients participated in clinical trial screening and provided the mandatory tumor slides for molecular profiling; pathological evaluation was fulfilled in 90 patients (93.8%). In addition, 99.4% (320/322) of patients with malignancy provided extra tissue for the establishment of a tumor bank. Conclusions: CT-guided core needle biopsy provided optimal clinical management in this era of translational medicine. This biopsy modality should be prioritized in selected lung cancer patients. PMID:26766993

  3. Effective delayed neutron fraction and prompt neutron lifetime of Tehran research reactor mixed-core.

    PubMed

    Lashkari, A; Khalafi, H; Kazeminejad, H

    2013-05-01

    In this work, kinetic parameters of Tehran research reactor (TRR) mixed cores have been calculated. The mixed-core configurations are made by replacing low-enriched uranium control fuel elements with highly enriched uranium control fuel elements in the reference core. The MTR_PC package, a nuclear reactor analysis tool, is used to perform the analysis. Simulations were carried out to compute the effective delayed neutron fraction and prompt neutron lifetime; calculation of these kinetic parameters is necessary for reactivity and power excursion transient analysis. The results of this research show that the effective delayed neutron fraction decreases and the prompt neutron lifetime increases with fuel burn-up. Also, as the number of highly enriched uranium control fuel elements in the reference core increases, the prompt neutron lifetime increases, but the effective delayed neutron fraction does not show any considerable change.
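
    To illustrate why these two parameters govern transient response, the following point-kinetics sketch (one delayed-neutron group, illustrative parameter values rather than the TRR results) integrates the power response to a step reactivity insertion of half a dollar.

      import numpy as np
      from scipy.integrate import solve_ivp

      # One-delayed-group point kinetics; values are illustrative only.
      BETA, LAMBDA, DECAY = 0.0070, 40e-6, 0.08   # beta_eff, gen. time (s), 1/s
      RHO = 0.5 * BETA                            # step insertion of 0.5 $

      def rhs(t, y):
          n, c = y
          dn = (RHO - BETA) / LAMBDA * n + DECAY * c
          dc = BETA / LAMBDA * n - DECAY * c
          return [dn, dc]

      y0 = [1.0, BETA / (LAMBDA * DECAY)]         # steady state at n = 1
      sol = solve_ivp(rhs, (0.0, 5.0), y0, method="LSODA", rtol=1e-8)
      print(f"relative power after 5 s: {sol.y[0, -1]:.2f}")
      # Prompt-jump estimate for comparison: n ~ beta/(beta-rho) = 2 initially.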

  4. Effective delayed neutron fraction and prompt neutron lifetime of Tehran research reactor mixed-core

    PubMed Central

    Lashkari, A.; Khalafi, H.; Kazeminejad, H.

    2013-01-01

    In this work, kinetic parameters of Tehran research reactor (TRR) mixed cores have been calculated. The mixed-core configurations are made by replacing low-enriched uranium control fuel elements with highly enriched uranium control fuel elements in the reference core. The MTR_PC package, a nuclear reactor analysis tool, is used to perform the analysis. Simulations were carried out to compute the effective delayed neutron fraction and prompt neutron lifetime; calculation of these kinetic parameters is necessary for reactivity and power excursion transient analysis. The results of this research show that the effective delayed neutron fraction decreases and the prompt neutron lifetime increases with fuel burn-up. Also, as the number of highly enriched uranium control fuel elements in the reference core increases, the prompt neutron lifetime increases, but the effective delayed neutron fraction does not show any considerable change. PMID:24976672

  5. A problem in representing the core magnetic field of the earth using spherical harmonics

    NASA Technical Reports Server (NTRS)

    Carle, H. M.; Harrison, C. G. A.

    1982-01-01

    Although there are computational advantages to the representation of the earth's magnetic field by spherical harmonic coefficients of the magnetic potential, up to the thirteenth degree and order, the following disadvantages emerge: (1) the use of spherical harmonics of up to a certain degree does not remove wavelengths greater than a certain value from the surface fields, and (2) the total field magnitudes represented by spherical harmonics up to a certain degree have minimum wavelengths equal to the circumference of the earth divided by twice the maximum degree of the harmonic used. The implications of the ways in which surface fields are separated into core and crustal components are discussed, and it is concluded that since field signals are generated in the core, the representation of the core field by spherical harmonics of potential does not adequately represent all core field components.
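
    The quoted rule implies a concrete resolution limit. For degree and order 13:

      # Shortest wavelength representable by harmonics up to degree 13, using
      # the rule quoted in the abstract (circumference over twice the degree).
      EARTH_CIRCUMFERENCE_KM = 40_075
      n_max = 13
      print(EARTH_CIRCUMFERENCE_KM / (2 * n_max))   # ~1541 km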

  6. 33. BENCH CORE STATION, GREY IRON FOUNDRY CORE ROOM WHERE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    33. BENCH CORE STATION, GREY IRON FOUNDRY CORE ROOM WHERE CORE MOLDS WERE HAND FILLED AND OFTEN PNEUMATICALLY COMPRESSED WITH A HAND-HELD RAMMER BEFORE THEY WERE BAKED. - Stockham Pipe & Fittings Company, Grey Iron Foundry, 4000 Tenth Avenue North, Birmingham, Jefferson County, AL

  7. Experimental Simulations of Methane Gas Migration through Water-Saturated Sediment Cores

    NASA Astrophysics Data System (ADS)

    Choi, J.; Seol, Y.; Rosenbaum, E. J.

    2010-12-01

    Previous numerical simulations (Jain and Juanes, 2009) showed that the mode of gas migration is mainly determined by grain size, with capillary invasion preferred in coarse-grained sediments and fracturing dominant in fine-grained sediments. This study was intended to experimentally simulate the preferential modes of gas migration in various water-saturated sediment cores. The cores compacted in the laboratory include a silica sand core (mean grain size of 180 μm), a silica silt core (1.7 μm), and a kaolin clay core (1.0 μm). Methane gas was injected into each core placed within an x-ray-transparent pressure vessel under continuous x-ray computed tomography (CT) scanning with controlled radial (σr), axial (σa), and pore pressures (P). The CT image analysis reveals that, under a radial effective stress (σr') of 0.69 MPa and an axial effective stress (σa') of 1.31 MPa, fracturing by methane gas injection occurs in both the silt and clay cores. Fracturing initiates at capillary pressures (Pc) of ~0.41 MPa and ~2.41 MPa for the silt and clay cores, respectively. Fracturing appears as irregular fracture networks consisting of nearly invisibly fine multiple fractures, longitudinally oriented round tube-shaped conduits, or fine fractures branching off from the large conduits. For the sand core, however, only capillary invasion was observed, at or above 0.034 MPa of capillary pressure under the confining pressure condition of σr' = 1.38 MPa and σa' = 2.62 MPa. Compared to the numerical predictions under similar confining pressure conditions, fracturing occurs at relatively larger grain sizes, which may result from lower grain-contact compression and friction caused by the loose compaction and flexible lateral boundary employed in the experiment.
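
    The grain-size dependence of capillary invasion follows from the Young-Laplace relation Pc = 2γ/r for a fully wetting pore throat of radius r. The sketch below assumes a crude throat radius of one-fifth the mean grain size; the interfacial tension and throat fraction are assumptions for illustration, not values from the experiments.

      # Young-Laplace estimate of capillary entry pressure, illustrating why
      # invasion is easy in coarse sediments and fracturing wins in fine ones.
      GAMMA = 0.072          # N/m, gas-water interfacial tension (approx.)
      for name, grain_um in [("sand", 180.0), ("silt", 1.7), ("clay", 1.0)]:
          r_throat = 0.2 * grain_um * 1e-6      # crude throat-size assumption, m
          pc_entry = 2 * GAMMA / r_throat       # Pa, fully wetting (cos(theta)=1)
          print(f"{name}: entry Pc ~ {pc_entry/1e6:.3f} MPa")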

  8. Core drilling apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusman, M.T.; Konstantinov, L.P.; Malkin, B.D.

    1974-04-16

    Mounted on the exterior of a nonrotatable core barrel is one end of a resilient tape, the other end of which extends inward into the barrel and is connected to a device for pulling the tape inward. The apparatus is also provided with an arrangement that forms a sleeve from the tape as it is being pulled into the core barrel. During the coring operation, the tape is pulled inward into the barrel and a sleeve is formed from it, with the aid of the arrangement, to encase and protect the core from disturbance. The coring apparatus is intended for core drilling in soft, unconsolidated, and fractured formations. (3 claims)

  9. Chapter 13. Exploring Use of the Reserved Core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmen, John; Humphrey, Alan; Berzins, Martin

    2015-07-29

    In this chapter, we illustrate the benefits of thinking in terms of thread management techniques when using a centralized scheduler model along with interoperable MPI and PThreads. This is facilitated through an exploration of thread placement strategies for an algorithm modeling radiative heat transfer, with special attention to the 61st core. This algorithm plays a key role within the Uintah Computational Framework (UCF) and current efforts at the University of Utah to model next-generation, large-scale clean coal boilers; in such simulations it models the dominant form of heat transfer and consumes a large portion of compute time. Exemplified by a real-world example, this chapter presents our early efforts in porting a key portion of a scalability-centric codebase to the Intel Xeon Phi coprocessor. Specifically, it presents results from our experiments profiling the native execution of a reverse Monte-Carlo ray-tracing-based radiation model on a single coprocessor. These results demonstrate that our fastest run configurations utilized the 61st core and that performance was not profoundly impacted when explicitly oversubscribing the core running the coprocessor operating system thread. Additionally, this chapter presents a portion of the radiation model source code, a MIC-centric UCF cross-compilation example, and a less conventional thread management technique for developers utilizing the PThreads threading model.
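
    A portable taste of explicit placement, assuming a Linux system: the snippet below pins the current process to the highest-numbered core visible to the scheduler, loosely analogous to targeting the 61st core of a 61-core Xeon Phi 7120P. The chapter itself does this with PThreads affinity calls, not Python.

      import os

      # Linux-only sketch: restrict this process to one chosen core.
      last_core = max(os.sched_getaffinity(0))   # highest-numbered visible CPU
      os.sched_setaffinity(0, {last_core})       # pin to that single core
      print(f"now restricted to core {last_core}: {os.sched_getaffinity(0)}")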

  10. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving toward multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. The GPU can serve as a massively parallel coprocessor because it consists of multiple cores; it is also an affordable, attractive, and user-programmable commodity. A vast amount of information is now flooding into the digital domain: huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment and are growing dramatically in size. Although the inverted index is a useful data structure for full-text search and document retrieval, indexing a large number of documents takes a tremendous amount of time. The performance of document inversion can be improved by multi-threaded, multi-core GPU computing. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, exploiting the huge computational power of the GPU to develop a high-performance solution for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets drawn from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems ➝ Information retrieval; Computing methodologies ➝ Massively parallel and high-performance simulations. PMID:29861701
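
    For readers unfamiliar with the data structure, a sequential hash-based inversion fits in a few lines; the paper's contribution is executing the equivalent loop as SPMD work-items on the GPU. The toy corpus below is invented for illustration.

      from collections import defaultdict

      def invert(docs):
          """Sequential sketch of hash-based document inversion: map each
          term to the sorted list of document ids containing it."""
          index = defaultdict(set)
          for doc_id, text in enumerate(docs):
              for term in text.lower().split():
                  index[term].add(doc_id)
          return {t: sorted(ids) for t, ids in index.items()}

      docs = ["GPU computing with CUDA", "inverted index on GPU", "CPU baseline"]
      print(invert(docs)["gpu"])   # -> [0, 1]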

  11. Heteropolyhedral silver compounds containing the polydentate ligand N,N,O-E-[6-(hydroxyimino)ethyl]-1,3,7-trimethyllumazine. Preparation, spectral and XRD structural study and AIM calculations.

    PubMed

    Jiménez-Pulido, Sonia B; Hueso-Ureña, Francisco; Fernández-Liencres, M Paz; Fernández-Gómez, Manuel; Moreno-Carretero, Miguel N

    2013-01-14

    The oxime derived from 6-acetyl-1,3,7-trimethyllumazine (1) ((E-6-(hydroxyimino)ethyl)-1,3,7-trimethylpteridine-2,4(1H,3H)-dione, DLMAceMox) has been prepared and its molecular and crystal structure determined from spectral and XRD data. The oxime ligand was reacted with silver nitrate, perchlorate, thiocyanate, trifluoromethylsulfonate and tetrafluoroborate to give complexes with formulas [Ag(2)(DLMAceMox)(2)(NO(3))(2)](n) (2), [Ag(2)(DLMAceMox)(2)(ClO(4))(2)](n) (3), [Ag(2)(DLMAceMox)(2)(SCN)(2)] (4), [Ag(2)(DLMAceMox)(2)(CF(3)SO(3))(2)(CH(3)CH(2)OH)]·CH(3)CH(2)OH (5) and [Ag(DLMAceMox)(2)]BF(4) (6). Single-crystal XRD studies show that the asymmetrical residual unit of complexes 2, 3 and 5 contains two quite different but connected silver centers (Ag1-Ag2, 2.9-3.2 Å). In addition to this, the Ag1 ion displays coordination with the N5 and O4 atoms from both lumazine moieties and a ligand (nitrato, perchlorato or ethanol) bridging to another disilver unit. The Ag2 ion is coordinated to the N61 oxime nitrogens, a monodentate and a (O,O)-bridging nitrato/perchlorato or two monodentate O-trifluoromethylsulfonato anions. The coordination polyhedra can be best described as a strongly distorted octahedron (around Ag1) and a square-based pyramid (around Ag2). The Ag-N and Ag-O bond lengths range between 2.22-2.41 and 2.40-2.67 Å, respectively. Although the structure of 4 cannot be resolved by XRD, it is likely to be similar to those described for 2, 3 and 5, containing Ag-Ag units with S-thiocyanato terminal ligands. Finally, the structure of the tetrafluoroborate compound 6 is mononuclear with a strongly distorted tetrahedral AgN(4) core (Ag-N, 2.27-2.43 Å). Always, the different Ag-N distances found clearly point to the more basic character of the oxime N61 nitrogen atom when compared with the pyrazine N5 one. A topological analysis of the electron density within the framework provided by the quantum theory of atoms in molecules (QTAIM) using DFT(M06L) levels of

  12. Mercury's core evolution

    NASA Astrophysics Data System (ADS)

    Deproost, Marie-Hélène; Rivoldini, Attilio; Van Hoolst, Tim

    2016-10-01

    Remote sensing data of Mercury's surface by MESSENGER indicate that Mercury formed under reducing conditions. As a consequence, silicon is likely the main light element in the core, together with a possible small fraction of sulfur. Compared to sulfur, which hardly partitions into solid iron at Mercury's core conditions and strongly decreases the melting temperature, silicon partitions almost equally between solid and liquid iron and is not very effective at reducing the melting temperature of iron. Silicon as the major light-element constituent instead of sulfur therefore implies a significantly higher core liquidus temperature and a decrease in the vigor of compositional convection generated by the release of light elements upon inner core formation. Due to the immiscibility of liquid Fe-Si-S at low pressure (below 15 GPa), the core might also not be homogeneous, consisting instead of an inner S-poor Fe-Si core below a thinner Si-poor Fe-S layer. Here, we study the consequences of a silicon-rich core and the effect of the blanketing Fe-S layer on the thermal evolution of Mercury's core and on the generation of a magnetic field.

  13. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    PubMed

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidth compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20
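
    An Amdahl's-law reading of the quoted profile: if the Jacobian stage (~70% of runtime) is accelerated by the roughly 20x factor quoted for that stage and nothing else changes, the whole reconstruction speeds up by only about 3x, which is why the forward solve matters too. A quick check:

      # Amdahl's law applied to the profile quoted above: ~70% Jacobian,
      # ~25% forward solve, ~5% elsewhere.
      def overall_speedup(f_jac=0.70, s_jac=20.0, f_fwd=0.25, s_fwd=1.0):
          remaining = 1.0 - f_jac - f_fwd
          return 1.0 / (f_jac / s_jac + f_fwd / s_fwd + remaining)

      print(f"{overall_speedup():.1f}x")   # ~3.0x if only the Jacobian speeds up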

  14. Multi-GPU Jacobian Accelerated Computing for Soft Field Tomography

    PubMed Central

    Borsic, A.; Attardo, E. A.; Halter, R. J.

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use Finite Element Models to represent the volume of interest and to solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are three-dimensional. Though the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in Electrical Impedance Tomography applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15 to 20 minutes with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Further, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In the present work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidth than CPUs and better parallelism. We are able to obtain acceleration factors of

  15. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores.

    PubMed

    Chikkagoudar, Satish; Wang, Kai; Li, Mingyao

    2011-05-26

    Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.

  16. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores

    PubMed Central

    2011-01-01

    Background Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/. PMID:21615923

  17. Metallic nanoshells with semiconductor cores: optical characteristics modified by core medium properties.

    PubMed

    Bardhan, Rizia; Grady, Nathaniel K; Ali, Tamer; Halas, Naomi J

    2010-10-26

    It is well known that the geometry of a nanoshell controls the resonance frequencies of its plasmon modes; however, the properties of the core material also strongly influence its optical properties. Here we report the synthesis of Au nanoshells with semiconductor cores of cuprous oxide and examine their optical characteristics. This material system allows us to systematically examine the role of the core material in nanoshell optical properties, comparing Cu₂O-core nanoshells (εc ∼ 7) with lower-dielectric-constant SiO₂-core nanoshells (εc = 2) and higher-dielectric-constant mixed-valency iron oxide nanoshells (εc = 12). Increasing the core dielectric constant increases nanoparticle absorption efficiency, reduces plasmon line width, and modifies plasmon energies. Modifying the core medium thus provides an additional means of tailoring both the near- and far-field optical properties of this unique nanoparticle system.

  18. Academic Rigor: The Core of the Core

    ERIC Educational Resources Information Center

    Brunner, Judy

    2013-01-01

    Although some educators see the Common Core State Standards as a reason for stress, most recognize the positive possibilities associated with them and are willing to make the professional commitment to implementing them so that academic rigor for all students will increase. But business leaders, parents, and the authors of the Common Core are not the only…

  19. A vortex-filament and core model for wings with edge vortex separation

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Lan, C. E.

    1981-01-01

    A method for predicting the aerodynamic characteristics of slender wings with edge vortex separation was developed. Semiempirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: the present method is generally accurate in predicting the lift and induced-drag coefficients, but the predicted pitching moment is too positive; the spanwise lifting pressure distributions estimated by the one-vortex-core solution of the present method are significantly better than the results of Mehrotra's method with respect to the pressure peak values for the flat delta; the two-vortex-core system applied to the double delta and strake wings produces overall aerodynamic characteristics in good agreement with data except for the pitching moment; and the computer time for the present method is about two-thirds that of Mehrotra's method.

  20. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. This paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected for the first task assignment, so that task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity, feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
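
    For orientation, the greedy EFT rule that such schedulers build on can be stated in a few lines. The sketch below schedules a four-task DAG on two processors with invented per-processor costs, ignoring communication delays; the CQPSO algorithm searches over and improves on assignments of this kind.

      # Greedy EFT list scheduling for a heterogeneous DAG (toy example).
      costs = {"A": [2, 3], "B": [3, 1], "C": [4, 2], "D": [2, 2]}  # per processor
      preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
      order = ["A", "B", "C", "D"]        # a valid priority (topological) list

      proc_free = [0.0, 0.0]
      finish = {}
      for task in order:
          ready = max((finish[p] for p in preds[task]), default=0.0)
          # Earliest finish time of this task on each processor:
          efts = [max(ready, proc_free[p]) + costs[task][p] for p in range(2)]
          p_best = min(range(2), key=lambda p: efts[p])
          finish[task] = efts[p_best]
          proc_free[p_best] = efts[p_best]
          print(f"{task} -> P{p_best}, finishes at {finish[task]}")
      print("makespan:", max(finish.values()))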

  1. Multi-Resolution Indexing for Hierarchical Out-of-Core Traversal of Rectilinear Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V.

    2000-07-10

    The real-time processing of very large volumetric meshes introduces specific algorithmic challenges due to the impossibility of fitting the input data in the main memory of a computer. The basic assumption of the RAM computational model (uniform, constant-time access to each memory location) is not valid because part of the data is stored out of core or in external memory. The performance of most algorithms does not scale well in the transition from in-core to out-of-core processing conditions; the degradation is due to the high frequency of I/O operations that may start dominating the overall running time. Out-of-core computing [28] specifically addresses the issues of algorithm redesign and data layout restructuring to enable data access patterns with minimal performance degradation in out-of-core processing. Results in this area are also valuable in parallel and distributed computing, where one has to deal with the similar issue of balancing processing time with data migration time. The solution of the out-of-core processing problem is typically divided into two parts: (i) analysis of a specific algorithm to understand its data access patterns and, when possible, redesign of the algorithm to maximize their locality; and (ii) storage of the data in secondary memory with a layout consistent with the access patterns of the algorithm, to amortize the cost of each I/O operation over several memory accesses. In the case of hierarchical visualization algorithms for volumetric data, the 3D input hierarchy is traversed to build derived geometric models with adaptive levels of detail. The shape of the output models is then modified dynamically with incremental updates of their level of detail. The parameters that govern this continuous modification of the output geometry depend on runtime user interaction, making it impossible to determine a priori what levels of detail are going to be constructed. For example, they can be dependent

  2. Inner Core Structure Behind the PKP Core Phase Triplication

    NASA Astrophysics Data System (ADS)

    Blom, N.; Paulssen, H.; Deuss, A. F.; Waszek, L.

    2015-12-01

    Despite its small size, the Earth's inner core plays an important role in the Earth's dynamics. Because it is slowly growing, its structure - and the variation thereof with depth - may reveal important clues about the history of the core, its convection and the resulting geodynamo. Learning more about this structure has been a prime effort in the past decades, leading to discoveries about anisotropy, hemispheres and heterogeneity in the inner core in general. In terms of detailed structure, mainly seismic body waves have contributed to these advances. However, at depths between ~100-200 km, the seismic structure is relatively poorly known. This is a result of the PKP core phase triplication and the existence of strong precursors to PKP phases, whose simultaneous arrival hinders the measurement of inner core waves PKIKP at epicentral distances between roughly 143-148°. As a consequence, the interpretation of deeper structure also remains difficult. To overcome these issues, we stack seismograms in slowness and time, separating PKP and PKIKP phases which arrive simultaneously, but with different slowness. We apply this method to study the inner core's Western hemisphere between South and Central America using paths travelling in the quasi-polar direction between epicentral distances of 140-150°. This enables us to measure PKiKP-PKIKP differential travel times up to greater epicentral distance than has previously been done. The resulting differential travel time residuals increase with epicentral distance, indicating a marked increase in seismic velocity with depth compared to reference model AK135 for the studied polar paths. Assuming a homogeneous outer core, these findings can be explained by either (i) inner core heterogeneity due to an increase in isotropic velocity, or (ii) increase in anisotropy over the studied depth range. Our current data set cannot distinguish between the two hypotheses, but in light of previous work we prefer the latter interpretation.

  3. Moxidectin and the avermectins: Consanguinity but not identity

    PubMed Central

    Prichard, Roger; Ménez, Cécile; Lespine, Anne

    2012-01-01

    The avermectins and milbemycins contain a common macrocyclic lactone (ML) ring but are fermentation products of different organisms. The principal structural difference is that avermectins have sugar groups at C13 of the macrocyclic ring, whereas the milbemycins are protonated at C13. Moxidectin (MOX), belonging to the milbemycin family, has other differences, including a methoxime at C23. The avermectins and MOX have broad-spectrum activity against nematodes and arthropods. They have similar, but not identical, spectral ranges of activity, and some avermectins and MOX are available in diverse formulations that give users great flexibility. The longer half-life of MOX and its safety profile allow MOX to be used in long-acting formulations. Some important differences between MOX and the avermectins in their interactions with various invertebrate ligand-gated ion channels are known and could be the basis of different efficacy and safety profiles. Modelling of the IVM interaction with glutamate-gated ion channels suggests that different interactions will occur with MOX. Similarly, profound differences between MOX and the avermectins are seen in interactions with ABC transporters in mammals and nematodes. These differences are important for pharmacokinetics, toxicity in animals with defective transporter expression, and probable mechanisms of resistance. Resistance to the avermectins has become widespread in parasites of some hosts, and MOX resistance also exists and is increasing. There is some degree of cross-resistance between the avermectins and MOX, but avermectin resistance and MOX resistance are not identical. In many cases where resistance to avermectins is noticed, MOX produces a higher efficacy and is quite often fully effective at recommended dose rates. These similarities and differences should be appreciated for optimal decisions about parasite control and about delaying, managing, or reversing resistance, as well as for appropriate anthelmintic combinations. PMID:24533275

  4. An approach to model reactor core nodalization for deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for research reactors from small (e.g., 250 kW) to larger (e.g., 30 MW) designs. This paper discusses the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel cladding, and graphite reflector), have been collected, analyzed, and consolidated in the RTP Reference Database using a standardized methodology, mainly derived from the available technical documentation. Based on the information in the database, the assumptions made in the nodalization approach and the calculations performed are discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented in the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted EARTH-M.

  5. An approach to model reactor core nodalization for deterministic safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my; Samsudin, Mohd Rafie, E-mail: rafies@tnb.com.my; Mamat Ibrahim, Mohd Rizal, E-mail: m-rizal@nuclearmalaysia.gov.my

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for research reactors from small (e.g., 250 kW) to larger (e.g., 30 MW) designs. This paper discusses the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel cladding, and graphite reflector), have been collected, analyzed, and consolidated in the RTP Reference Database using a standardized methodology, mainly derived from the available technical documentation. Based on the information in the database, the assumptions made in the nodalization approach and the calculations performed are discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented in the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted EARTH-M.

  6. cFE/CFS (Core Flight Executive/Core Flight System)

    NASA Technical Reports Server (NTRS)

    Wildermann, Charles P.

    2008-01-01

    This viewgraph presentation describes in detail the requirements and goals of the Core Flight Executive (cFE) and the Core Flight System (CFS). The Core Flight Software System is a mission-independent, platform-independent flight software (FSW) environment integrating a reusable core flight executive (cFE). The CFS goals include: 1) Reduce time to deploy high-quality flight software; 2) Reduce project schedule and cost uncertainty; 3) Directly facilitate formalized software reuse; 4) Enable collaboration across organizations; 5) Simplify sustaining engineering (a.k.a. FSW maintenance); 6) Scale from small instruments to systems of systems; 7) Provide a platform for advanced concepts and prototyping; and 8) Provide common standards and tools across the branch and NASA-wide.

  7. Parallel computation of GA search for the artery shape determinants with CFD

    NASA Astrophysics Data System (ADS)

    Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.

    2010-06-01

    We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and the genetic algorithm (GA). The most difficult problem in doing so is reducing the turn-around time of GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1, parallel CFD computation with an appropriate number of cores; level 2, parallel jobs generated by a "master" that quickly finds available job queues and dispatches jobs to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days with one core, was reduced to less than two days on the RIKEN supercomputer system, RICC, with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and the sum of circumference for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb has the feature of minimum local mean WSS and minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.

  8. [Strengthening innovation in clinical research methodology of acupuncture and moxibustion to promote internationalization process of acupuncture-moxibustion].

    PubMed

    Wang, Long; Zou, Wei; Chi, Qing-bin

    2009-06-01

    In order to explore the problems in, and countermeasures for, the current methodology of acupuncture and moxibustion clinical research, clinical research literature on acupuncture and moxibustion (Acup-Mox) published in China in recent years was reviewed. Given the urgent needs of the current internationalization of Acup-Mox, the authors propose a model of clinical research on Acup-Mox that strictly adheres to international standards while fully embodying the characteristics of traditional Chinese medicine in the acupuncture interventions. Innovation in the methodology of clinical research on Acup-Mox is of great significance for improving the quality of clinical research on Acup-Mox in China and promoting the internationalization of Acup-Mox.

  9. Application Performance Analysis and Efficient Execution on Systems with multi-core CPUs, GPUs and MICs: A Case Study with Microscopy Image Analysis

    PubMed Central

    Teodoro, George; Kurc, Tahsin; Andrade, Guilherme; Kong, Jun; Ferreira, Renato; Saltz, Joel

    2015-01-01

    We carry out a comparative performance study of multi-core CPUs, GPUs, and the Intel Xeon Phi (Many Integrated Core, MIC) with a microscopy image analysis application. We experimentally evaluate the performance of these computing devices on the core operations of the application and correlate the observed performance with the characteristics of the devices and with the data access patterns, computational complexity, and parallelization form of each operation. The results show significant variability in the performance of operations with respect to the device used. The performance of operations with regular data access on a MIC is comparable to, and sometimes better than, that on a GPU, whereas GPUs are more efficient than MICs for operations that access data irregularly, because of the lower bandwidth of the MIC for random data accesses. We propose new performance-aware scheduling strategies that consider variability in operation speedups; our scheduling strategies significantly improve application performance compared with classic strategies in hybrid configurations. PMID:28239253

  10. Oil-shale data, cores, and samples collected by the U.S. geological survey through 1989

    USGS Publications Warehouse

    Dyni, John R.; Gay, Frances; Michalski, Thomas C.; ,

    1990-01-01

    The U.S. Geological Survey has acquired a large collection of geotechnical data, drill cores, and crushed samples of oil shale from the Eocene Green River Formation in Colorado, Wyoming, and Utah. The data include about 250,000 shale-oil analyses from about 600 core holes. Most of the data is from Colorado where the thickest and highest-grade oil shales of the Green River Formation are found in the Piceance Creek basin. Other data on file but not yet in the computer database include hundreds of lithologic core descriptions, geophysical well logs, and mineralogical and geochemical analyses. The shale-oil analyses are being prepared for release on floppy disks for use on microcomputers. About 173,000 lineal feet of drill core of oil shale and associated rocks, as well as 100,000 crushed samples of oil shale, are stored at the Core Research Center, U.S. Geological Survey, Lakewood, Colo. These materials are available to the public for research.

  11. Bypass flow computations on the LOFA transient in a VHTR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Yu-Hsin; Johnson, Richard W.; Ferng, Yuh-Ming

    2014-01-01

    Bypass flow in the prismatic gas-cooled very high temperature reactor (VHTR) is not intentionally designed to occur but is present in the gaps between graphite blocks. Previous studies of bypass flow in the core indicated that the cooling provided by flow in the bypass gaps had a significant effect on temperature and flow distributions under normal operating conditions. However, the flow and heat transport in the core change significantly after a Loss of Flow Accident (LOFA). This study examines the effect and role of the bypass flow after a LOFA, in terms of the temperature and flow distributions and the heat transport out of the core by natural convection of the coolant, for a 1/12 symmetric section of the active core composed of images and mirror images of two sub-region models. The two sub-region models, 9 x 1/12 and 15 x 1/12 symmetric sectors of the active core, are employed as CFD flow models using computational grid systems of 70.2 million and 117 million nodes, respectively. It is concluded that the effect of bypass flow is significant for the initial conditions and the beginning of the LOFA, but that the bypass flow has little effect after a long period of time in the transient computation of natural circulation.

  12. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    USDA-ARS?s Scientific Manuscript database

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  13. Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.

    1976-01-01

    Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.
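
    A first-cut version of the sizing problem can be written down from textbook gapped-core magnetics: choose the turns so the dc bias brings the core just to an allowed flux density, then set the gap to meet the target inductance. The sketch below ignores fringing and the incremental-permeability data the paper's algorithm uses, and all numbers are illustrative.

      from math import pi

      MU0 = 4e-7 * pi

      def design_gapped_inductor(L, i_dc, a_e, l_core, mu_r, b_max):
          """First-cut turns/air-gap sizing for a gapped core with dc bias:
          N from L*I = N*A_e*B at B = b_max, then the gap from the magnetic
          path reluctance (simplified textbook model, no fringing)."""
          n_turns = L * i_dc / (a_e * b_max)
          total_reluct_len = MU0 * n_turns * i_dc / b_max   # = gap + l_core/mu_r
          gap = total_reluct_len - l_core / mu_r
          return n_turns, gap

      # Illustrative: 1 mH at 2 A dc, 1 cm^2 core area, 6 cm path, mu_r = 2000
      n, g = design_gapped_inductor(1e-3, 2.0, 1e-4, 0.06, 2000.0, 0.3)
      print(f"N ~ {n:.0f} turns, air gap ~ {g*1e3:.2f} mm")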

  14. Atomically informed nonlocal semi-discrete variational Peierls-Nabarro model for planar core dislocations

    PubMed Central

    Liu, Guisen; Cheng, Xi; Wang, Jian; Chen, Kaiguo; Shen, Yao

    2017-01-01

    Prediction of the Peierls stress associated with dislocation glide is of fundamental concern in understanding and designing the plasticity and mechanical properties of crystalline materials. Here, we develop a nonlocal semi-discrete variational Peierls-Nabarro (SVPN) model by incorporating nonlocal atomic interactions into the semi-discrete variational Peierls framework. The nonlocal kernel is simplified by limiting the nonlocal atomic interaction to the nearest-neighbor region, and the nonlocal coefficient is computed directly from the dislocation core structure. Our model is capable of accurately predicting the displacement profile and the Peierls stress of planar-extended core dislocations in face-centered cubic structures, and it could be extended to study more complicated planar-extended core dislocations, such as <110> {111} dislocations in Al-based and Ti-based intermetallic compounds. PMID:28252102

  15. The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere

    NASA Astrophysics Data System (ADS)

    Chen, X.; Lin, S. J.; Harris, L.

    2017-12-01

    Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation to constituent species. Many CFD codes use unstaggered discretizations for finite volume methods with an approximate Riemann solver. However, this approach is inefficient for geophysical flows due to the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS) simplified using assumptions appropriate for atmospheric flows: the wind speed is much slower than the sound speed, weak discontinuities, and locally uniform sound wave velocity. LMARS makes possible a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in a cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
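
    For context, the flavor of such a solver can be shown in a few lines: average the left/right states, correct by a locally uniform acoustic impedance, then upwind advected quantities on the resulting interface velocity. The averaging and sign conventions below are a plausible reading of a low-Mach linearization, not a verified reproduction of the FV3 implementation.

      # Sketch of a LMARS-style interface state (illustrative only).
      def lmars_interface(rho_l, u_l, p_l, rho_r, u_r, p_r, c_bar):
          rho_bar = 0.5 * (rho_l + rho_r)
          # Pressure corrected by the acoustic response to the velocity jump:
          p_star = 0.5 * (p_l + p_r) - 0.5 * rho_bar * c_bar * (u_r - u_l)
          # Velocity corrected by the pressure jump over the impedance:
          u_star = 0.5 * (u_l + u_r) - (p_r - p_l) / (2.0 * rho_bar * c_bar)
          upwind = rho_l if u_star >= 0.0 else rho_r   # upwind advected state
          return p_star, u_star, upwind

      print(lmars_interface(1.2, 5.0, 1.0e5, 1.1, 4.0, 0.99e5, 340.0))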

  16. CERN Computing in Commercial Clouds

    NASA Astrophysics Data System (ADS)

    Cordeiro, C.; Field, L.; Garrido Bear, B.; Giordano, D.; Jones, B.; Keeble, O.; Manzi, A.; Martelli, E.; McCance, G.; Moreno-García, D.; Traylen, S.

    2017-10-01

    By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming, and benchmarking are discussed, as well as the involvement of the LHC collaborations in managing the workflows of the experiments within a multicloud environment.

  17. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Basis Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.

  18. The statistical analysis of circadian phase and amplitude in constant-routine core-temperature data

    NASA Technical Reports Server (NTRS)

    Brown, E. N.; Czeisler, C. A.

    1992-01-01

    Accurate estimation of the phases and amplitude of the endogenous circadian pacemaker from constant-routine core-temperature series is crucial for making inferences about the properties of the human biological clock from data collected under this protocol. This paper presents a set of statistical methods based on a harmonic-regression-plus-correlated-noise model for estimating the phases and the amplitude of the endogenous circadian pacemaker from constant-routine core-temperature data. The methods include a Bayesian Monte Carlo procedure for computing the uncertainty in these circadian functions. We illustrate the techniques with a detailed study of a single subject's core-temperature series and describe their relationship to other statistical methods for circadian data analysis. In our laboratory, these methods have been successfully used to analyze more than 300 constant routines and provide a highly reliable means of extracting phase and amplitude information from core-temperature data.
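
    A simplified sketch of the harmonic-regression part of such a model, fitting a mean, the fundamental, and one overtone by ordinary least squares; the correlated-noise term and the Bayesian Monte Carlo uncertainty computation described in the abstract are deliberately omitted, and the function signature is invented.

    ```python
    import numpy as np

    def fit_circadian(t_hours, temp, period=24.0, n_harmonics=2):
        """Harmonic regression for core-temperature data (noise model omitted)."""
        w = 2.0 * np.pi / period
        cols = [np.ones_like(t_hours)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * w * t_hours), np.sin(k * w * t_hours)]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
        a1, b1 = beta[1], beta[2]
        amplitude = np.hypot(a1, b1)            # fundamental amplitude
        phase_hours = np.arctan2(b1, a1) / w    # time of the fundamental peak
        return amplitude, phase_hours % period
    ```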

  19. Relative Neurotoxicity of Ivermectin and Moxidectin in Mdr1ab (−/−) Mice and Effects on Mammalian GABA(A) Channel Activity

    PubMed Central

    Ménez, Cécile; Sutra, Jean-François; Prichard, Roger; Lespine, Anne

    2012-01-01

    The anthelmintics ivermectin (IVM) and moxidectin (MOX) display differences in toxicity in several host species. Entrance into the brain is restricted by the P-glycoprotein (P-gp) efflux transporter, while toxicity is mediated through the brain GABA(A) receptors. This study compared the toxicity of IVM and MOX in vivo and their interaction with GABA(A) receptors in vitro. Drug toxicity was assessed after subcutaneous administration of increasing doses in P-gp-deficient Mdr1ab(−/−) mice (0.11–2.0 µmol/kg for IVM and 0.23–12.9 µmol/kg for MOX) and of doses near the median lethal dose (LD50) in wild-type mice. Survival was evaluated over 14 days. In Mdr1ab(−/−) mice, the LD50 was 0.46 and 2.3 µmol/kg for IVM and MOX, respectively, demonstrating that MOX was less toxic than IVM. In P-gp-deficient mice, MOX had a lower brain-to-plasma concentration ratio and entered the brain more slowly than IVM. The brain sublethal drug concentrations determined after administration of doses close to the LD50 were, in Mdr1ab(−/−) and wild-type mice, respectively, 270 and 210 pmol/g for IVM and 830 and 740–1380 pmol/g for MOX, indicating that higher brain concentrations are required for MOX toxicity than for IVM. In rat α1β2γ2 GABA channels expressed in Xenopus oocytes, IVM and MOX were both allosteric activators of the GABA-induced response. The Hill coefficient was 1.52±0.45 for IVM and 0.34±0.56 for MOX (p<0.001), while the maximum potentiation caused by IVM and MOX relative to GABA alone was 413.7±66.1 and 257.4±40.6%, respectively (p<0.05), showing that IVM causes a greater potentiation of GABA action on this receptor. Differences in the accumulation of IVM and MOX in the brain and in the interaction of IVM and MOX with GABA(A) receptors account for the differences in neurotoxicity seen in intact and Mdr1-deficient animals. These differences in the neurotoxicity of IVM and MOX are important in considering their use in humans. PMID:23133688
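
    For readers unfamiliar with the reported Hill coefficients, a sketch of fitting a Hill (concentration-response) curve; the data points below are hypothetical illustrations, not values from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, emax, ec50, n):
        """Hill equation: response as a function of drug concentration."""
        return emax * conc**n / (ec50**n + conc**n)

    # Hypothetical potentiation data (% of GABA-alone response).
    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])   # µM
    resp = np.array([12.0, 45.0, 130.0, 260.0, 370.0, 405.0])
    (emax, ec50, n_hill), _ = curve_fit(hill, conc, resp, p0=(400.0, 0.1, 1.0))
    print(emax, ec50, n_hill)
    ```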

  20. InSAR Scientific Computing Environment

    NASA Astrophysics Data System (ADS)

    Gurrola, E. M.; Rosen, P. A.; Sacco, G.; Zebker, H. A.; Simons, M.; Sandwell, D. T.

    2010-12-01

    The InSAR Scientific Computing Environment (ISCE) is a software development effort in its second year within the NASA Advanced Information Systems and Technology program. The ISCE will provide a new computing environment for geodetic image processing for InSAR sensors that will enable scientists to reduce measurements directly from radar satellites and aircraft to new geophysical products without first requiring them to develop detailed expertise in radar processing methods. The environment can serve as the core of a centralized processing center to bring Level-0 raw radar data up to Level-3 data products, but is adaptable to alternative processing approaches for science users interested in new and different ways to exploit mission data. The NRC Decadal Survey-recommended DESDynI mission will deliver data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. The InSAR Scientific Computing Environment is planned to become a key element in processing DESDynI data into higher-level data products, and it is expected to enable a new class of analyses that take greater advantage of the long time and large spatial scales of these new data than current approaches do. At the core of ISCE are both legacy processing software from the JPL/Caltech ROI_PAC repeat-pass interferometry package and a new InSAR processing package, containing more efficient and more accurate processing algorithms, being developed at Stanford for this project based on experience gained in developing processors for missions such as SRTM and UAVSAR. Around the core InSAR processing programs we are building object-oriented wrappers to enable their incorporation into a more modern, flexible, extensible software package that is informed by modern programming methods, including rigorous componentization of processing codes, abstraction and generalization of data models, and a robust, intuitive user interface with

  1. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is itself conceptualized as another SA at the bit level. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging from radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped into an efficient high-performance embedded computing (HPEC) architecture on reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was integrated into a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964

  2. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core, developed by Selex ES in Southampton, UK, to address these needs. Firefly represents a fundamental redesign of the infrared signal chain, reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose-built to minimize power consumption in the proximity electronics, GPU-based image processing of infrared video, and a software-customisable infrared core which can communicate wirelessly with other Battlespace systems.

  3. Determination of the core temperature of a Li-ion cell during thermal runaway

    NASA Astrophysics Data System (ADS)

    Parhizi, M.; Ahmed, M. B.; Jain, A.

    2017-12-01

    Safety and performance of Li-ion cells are severely affected by thermal runaway, where exothermic processes within the cell cause uncontrolled temperature rise, eventually leading to catastrophic failure. Most past experimental papers on thermal runaway report only surface temperature measurements, while the core temperature of the cell remains largely unknown. This paper presents an experimentally validated method based on thermal conduction analysis to determine the core temperature of a Li-ion cell during thermal runaway using surface temperature and chemical kinetics data. Experiments conducted on a thermal test cell show that the core temperature computed using this method is in good agreement with independent thermocouple-based measurements over a wide range of experimental conditions. The validated method is used to predict core temperature as a function of time for several previously reported thermal runaway tests. In each case, the predicted peak core temperature is found to be several hundred degrees Celsius higher than the measured surface temperature. This shows that surface temperature alone is not sufficient for thermally characterizing the cell during thermal runaway. Besides providing key insights into the fundamental nature of thermal runaway, the ability to determine the core temperature shown here may lead to practical tools for characterizing and mitigating thermal runaway.
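
    As a rough illustration of why core and surface temperatures can differ so strongly, the steady-state limit of radial conduction in a solid cylinder with uniform volumetric heat generation gives T_core = T_surf + qR²/(4k). This is only a back-of-envelope bound, not the transient conduction-plus-kinetics method validated in the paper; all parameter values below are assumed.

    ```python
    def core_temp_steady(t_surface_c, q_gen_w_per_m3, radius_m, k_w_per_mk):
        """Centerline temperature of a solid cylinder with uniform heat
        generation, in the steady-state limit: T_core = T_surf + q R^2 / (4 k)."""
        return t_surface_c + q_gen_w_per_m3 * radius_m**2 / (4.0 * k_w_per_mk)

    # Example: 18650-size cell (R = 9 mm), radial conductivity ~1 W/m-K,
    # 5e7 W/m^3 generation -- all illustrative assumptions.
    print(core_temp_steady(150.0, 5.0e7, 0.009, 1.0))
    ```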

  4. Landfills potential source for cores -- computer model analyzes landfills for on-site recycling operations

    Treesearch

    Philip A. Araman; R.J. Bush; E.B. Hager; A.L. Hammett

    1999-01-01

    Are you having trouble finding enough used pallet cores? Do you have trouble finding more than one reliable source of used pallet parts? Have you ever considered your local landfill as a "source?" In 1995, more pallets ended up in landfills than at pallet recovery-repair companies. Virginia Tech and the U.S. Forest Service have developed a business plan...

  5. Integrating Computational Science Tools into a Thermodynamics Course

    NASA Astrophysics Data System (ADS)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that pairing worked examples with computer simulations to implement these modules has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experiences with computer programming.

  6. Visualizing Earth's Core-Mantle Interactions using Nanoscale X-ray Tomography

    NASA Astrophysics Data System (ADS)

    Mao, W. L.; Wang, J.; Yang, W.; Hayter, J.; Pianetta, P.; Zhang, L.; Fei, Y.; Mao, H.; Hustoft, J. W.; Kohlstedt, D. L.

    2010-12-01

    Early-stage core-mantle differentiation and core formation represent a pivotal geological event that defined the major geochemical signatures. However, current hypotheses about the potential mechanisms of core-mantle separation and interaction need more experimental input, which has been awaiting technological breakthroughs. Nanoscale x-ray computed tomography (nanoXCT) within a laser-heated diamond anvil cell has exciting potential as a powerful 3D petrographic probe for non-destructive, nanoscale (<40nm) resolution of multiple minerals and amorphous phases (including melts) which are synthesized under the high pressure-temperature conditions found deep within the Earth and planetary interiors. Results from high pressure-temperature experiments which illustrate the potential for this technique will be presented. By extending measurements of the texture, shape, porosity, tortuosity, dihedral angle, and other characteristics of molten Fe-rich alloys in relation to silicates and oxides, along with the fracture systems of rocks under deformation by high pressure-temperature conditions, potential mechanisms of core formation can be tested. NanoXCT can also be used to investigate grain shape, intergrowth, orientation, and foliation -- as well as mineral chemistry and crystallography at core-mantle boundary conditions -- to understand whether shape-preferred orientation is a primary source of the observed seismic anisotropy in Earth’s D” layer and to determine the textures and shapes of the melt pockets and channels which would form the putative partial melt that may exist in ultralow velocity zones.

  7. Core-Noise Research

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2012-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015 (N+1), 2020 (N+2), and 2025 (N+3) timeframes; SFW strategic thrusts and technical challenges; SFW advanced subsystems that are broadly applicable to N+3 vehicle concepts, with an indication where further noise research is needed; the components of core noise (compressor, combustor and turbine noise) and a rationale for NASA's current emphasis on the combustor-noise component; the increase in the relative importance of core noise due to turbofan design trends; the need to understand and mitigate core-noise sources for high-efficiency small gas generators; and the current research activities in the core-noise area, with additional details given about forthcoming updates to NASA's Aircraft Noise Prediction Program (ANOPP) core-noise prediction capabilities, two NRA efforts (Honeywell International, Phoenix, AZ and University of Illinois at Urbana-Champaign, respectively) to improve the understanding of core-noise sources and noise propagation through the engine core, and an effort to develop oxide/oxide ceramic-matrix-composite (CMC) liners for broadband noise attenuation suitable for turbofan-core application. Core noise must be addressed to ensure that the N+3 noise goals are met. Focused, but long-term, core-noise research is carried out to enable the advanced high-efficiency small gas-generator subsystem, common to several N+3 conceptual designs, needed to meet NASA's technical challenges. Intermediate updates to prediction tools are implemented as the understanding of the source structure and engine-internal propagation effects is improved. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The

  8. Core sample extractor

    NASA Technical Reports Server (NTRS)

    Akins, James; Cobb, Billy; Hart, Steve; Leaptrotte, Jeff; Milhollin, James; Pernik, Mark

    1989-01-01

    The problem of retrieving and storing core samples from a hole drilled on the lunar surface is addressed. The total depth of the hole in question is 50 meters with a maximum diameter of 100 millimeters. The core sample itself has a diameter of 60 millimeters and will be two meters in length. It is therefore necessary to retrieve and store 25 core samples per hole. The design utilizes a control system that will stop the mechanism at a certain depth, a cam-linkage system that will fracture the core, and a storage system that will save and catalogue the cores to be extracted. The Rod Changer and Storage Design Group will provide the necessary tooling to get into the hole as well as to the core. The mechanical design for the cam-linkage system as well as the conceptual design of the storage device are described.

  9. Electromagnetically driven westward drift and inner-core superrotation in Earth's core.

    PubMed

    Livermore, Philip W; Hollerbach, Rainer; Jackson, Andrew

    2013-10-01

    A 3D numerical model of the earth's core with a viscosity two orders of magnitude lower than the state of the art suggests a link between the observed westward drift of the magnetic field and superrotation of the inner core. In our model, the axial electromagnetic torque has a dominant influence only at the surface and in the deepest reaches of the core, where it respectively drives a broad westward flow rising to an axisymmetric equatorial jet and imparts an eastward-directed torque on the solid inner core. Subtle changes in the structure of the internal magnetic field may alter not just the magnitude but the direction of these torques. This not only suggests that the quasi-oscillatory nature of inner-core superrotation [Tkalčić H, Young M, Bodin T, Ngo S, Sambridge M (2013) The shuffling rotation of the earth's inner core revealed by earthquake doublets. Nat Geosci 6:497-502.] may be driven by decadal changes in the magnetic field, but further that historical periods in which the field exhibited eastward drift were contemporaneous with a westward inner-core rotation. The model further indicates a strong internal shear layer on the tangent cylinder that may be a source of torsional waves inside the core.

  10. Multi-core and GPU accelerated simulation of a radial star target imaged with equivalent t-number circular and Gaussian pupils

    NASA Astrophysics Data System (ADS)

    Greynolds, Alan W.

    2013-09-01

    Results from the GelOE optical engineering software are presented for the through-focus, monochromatic coherent and polychromatic incoherent imaging of a radial "star" target for equivalent t-number circular and Gaussian pupils. The FFT-based simulations are carried out using OpenMP threading on a multi-core desktop computer, with and without the aid of a many-core NVIDIA GPU accessing its cuFFT library. It is found that a custom FFT optimized for the 12-core host has similar performance to a simply implemented 256-core GPU FFT. A more sophisticated version of the latter but tuned to reduce overhead on a 448-core GPU is 20 to 28 times faster than a basic FFT implementation running on one CPU core.
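
    A sketch of this kind of CPU-versus-GPU FFT benchmark using numpy and, when available, cupy (whose FFTs are backed by cuFFT); the array size, precision, and repetition count are arbitrary choices, and this does not reproduce the GelOE imaging pipeline itself.

    ```python
    import time
    import numpy as np

    def time_fft(xp, n=4096, reps=10):
        """Average wall time of a 2-D FFT with a numpy-compatible module."""
        a = xp.random.random((n, n)).astype(xp.complex64)
        t0 = time.perf_counter()
        for _ in range(reps):
            xp.fft.fft2(a)
        # cupy launches kernels asynchronously; sync before stopping the clock.
        if hasattr(xp, "cuda"):
            xp.cuda.Stream.null.synchronize()
        return (time.perf_counter() - t0) / reps

    print("numpy:", time_fft(np))
    try:
        import cupy as cp  # GPU path; assumes CUDA and cupy are installed
        print("cupy :", time_fft(cp))
    except ImportError:
        pass
    ```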

  11. Toroidal converter core

    NASA Technical Reports Server (NTRS)

    Mclyman, W. T.

    1977-01-01

    Improved approach consists of cut and uncut cores nested in concentric configuration. Cores are made by winding steel ribbon on mandrel and impregnating with epoxy to bond layers together. Gap is made by cutting across wound and bonded core. Rough ends are ground or lapped.

  12. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and to calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core was fully saturated with gas, oil, and water in turn to calculate the corresponding velocities. The velocities increase with decreasing wave frequency within the simulation frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from the widely used linear elastic statics simulation (LESS), although their absolute values differ. The results of this paper show that the transmitted ultrasonic simulation has high relative precision.
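
    A minimal sketch of the velocity pick described above: the equivalent velocity is the distance between the two receiver arrays divided by the peak-time difference of the recorded waveforms. Picking peaks with argmax is a simplification, and all names and units are illustrative.

    ```python
    import numpy as np

    def core_velocity(incident, transmitted, dt, path_length):
        """Velocity from the peak-time difference of two recorded waveforms.

        dt: sampling interval (s); path_length: distance between the two
        receiver arrays (m).
        """
        t_inc = np.argmax(np.abs(incident)) * dt
        t_trans = np.argmax(np.abs(transmitted)) * dt
        return path_length / (t_trans - t_inc)
    ```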

  13. Approaches and Tools Used to Teach the Computer Input/Output Subsystem: A Survey

    ERIC Educational Resources Information Center

    Larraza-Mendiluze, Edurne; Garay-Vitoria, Nestor

    2015-01-01

    This paper surveys how the computer input/output (I/O) subsystem is taught in introductory undergraduate courses. It is important to study the educational process of the computer I/O subsystem because, in the curricula recommendations, it is considered a core topic in the area of knowledge of computer architecture and organization (CAO). It is…

  14. Formation of Cool Cores in Galaxy Clusters via Hierarchical Mergers

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Burns, Jack O.; Loken, Chris; Norman, Michael L.; Bryan, Greg

    2004-05-01

    We present a new scenario for the formation of cool cores in rich galaxy clusters, based on results from recent high spatial dynamic range, adaptive mesh Eulerian hydrodynamic simulations of large-scale structure formation. We find that cores of cool gas, material that would be identified as a classical cooling flow on the basis of its X-ray luminosity excess and temperature profile, are built from the accretion of discrete stable subclusters. Any "cooling flow" present is overwhelmed by the velocity field within the cluster; the bulk flow of gas through the cluster typically has speeds up to about 2000 km s-1, and significant rotation is frequently present in the cluster core. The inclusion of consistent initial cosmological conditions for the cluster within its surrounding supercluster environment is crucial when the evolution of cool cores in rich galaxy clusters is simulated. This new model for the hierarchical assembly of cool gas naturally explains the high frequency of cool cores in rich galaxy clusters, despite the fact that a majority of these clusters show evidence of substructure that is believed to arise from recent merger activity. Furthermore, our simulations generate complex cluster cores in concordance with recent X-ray observations of cool fronts, cool "bullets," and filaments in a number of galaxy clusters. Our simulations were computed with a coupled N-body, Eulerian, adaptive mesh refinement, hydrodynamics cosmology code that properly treats the effects of shocks and radiative cooling by the gas. We employ up to seven levels of refinement to attain a peak resolution of 15.6 kpc within a volume 256 Mpc on a side and assume a standard ΛCDM cosmology.

  15. [caCORE: core architecture of bioinformation on cancer research in America].

    PubMed

    Gao, Qin; Zhang, Yan-lei; Xie, Zhi-yun; Zhang, Qi-peng; Hu, Zhang-zhi

    2006-04-18

    A critical factor in the advancement of biomedical research is the ease with which data can be integrated, redistributed and analyzed both within and across domains. This paper summarizes the Biomedical Information Core Infrastructure built by the National Cancer Institute Center for Bioinformatics in America (NCICB). The main product of the Core Infrastructure is caCORE, the cancer Common Ontologic Reference Environment, which is the infrastructure backbone supporting data management and application development at NCICB. The paper explains the structure and function of caCORE: (1) Enterprise Vocabulary Services (EVS), which provide controlled vocabulary, dictionary and thesaurus services and produce the NCI Thesaurus and the NCI Metathesaurus; (2) the Cancer Data Standards Repository (caDSR), which provides a metadata registry for common data elements; and (3) Cancer Bioinformatics Infrastructure Objects (caBIO), which provide Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. The vision for caCORE is to provide a common data management framework that will support the consistency, clarity, and comparability of biomedical research data and information. In addition to providing facilities for data management and redistribution, caCORE helps solve problems of data integration. All NCICB-developed caCORE components are distributed under open-source licenses that support unrestricted usage by both non-profit and commercial entities, and caCORE has laid the foundation for a number of scientific and clinical applications. On this basis, the paper briefly describes caCORE-based applications in several NCI projects, of which one is CMAP (Cancer Molecular Analysis Project) and the other is caBIG (Cancer Biomedical Informatics Grid). In the end, the paper also gives good prospects for caCORE; while caCORE was born out of the needs of the cancer research community, it is intended to serve as a general resource. Cancer research has historically

  16. Bifurcated helical core equilibrium states in tokamaks

    NASA Astrophysics Data System (ADS)

    Cooper, W. A.; Chapman, I. T.; Schmitz, O.; Turnbull, A. D.; Tobias, B. J.; Lazarus, E. A.; Turco, F.; Lanctot, M. J.; Evans, T. E.; Graves, J. P.; Brunetti, D.; Pfefferlé, D.; Reimerdes, H.; Sauter, O.; Halpern, F. D.; Tran, T. M.; Coda, S.; Duval, B. P.; Labit, B.; Pochelon, A.; Turnyanskiy, M. R.; Lao, L.; Luce, T. C.; Buttery, R.; Ferron, J. R.; Hollmann, E. M.; Petty, C. C.; van Zeeland, M.; Fenstermacher, M. E.; Hanson, J. M.; Lütjens, H.

    2013-07-01

    Tokamaks with weak to moderate reversed central shear in which the minimum inverse rotational transform (safety factor) qmin is in the neighbourhood of unity can trigger bifurcated magnetohydrodynamic equilibrium states, one of which is similar to a saturated ideal internal kink mode. Peaked prescribed pressure profiles reproduce the ‘snake’ structures observed in many tokamaks, which has led to a novel explanation of the snake as a bifurcated equilibrium state. Snake equilibrium structures are computed in simulations of the tokamak à configuration variable (TCV), DIII-D and mega amp spherical torus (MAST) tokamaks. The internal helical deformations only weakly modulate the plasma-vacuum interface, which is more sensitive to ripple and resonant magnetic perturbations. On the other hand, the external perturbations do not alter the helical core deformation in a significant manner. The confinement of fast particles in MAST simulations deteriorates with the amplitude of the helical core distortion. These three-dimensional bifurcated solutions constitute a paradigm shift that motivates the application of tools developed for stellarator research in tokamak physics investigations.

  17. Local configurations and atomic intermixing in as-quenched and annealed Fe1-xCrx and Fe1-xMox ribbons

    NASA Astrophysics Data System (ADS)

    Stanciu, A. E.; Greculeasa, S. G.; Bartha, C.; Schinteie, G.; Palade, P.; Kuncser, A.; Leca, A.; Filoti, G.; Birsan, A.; Crisan, O.; Kuncser, V.

    2018-04-01

    Local atomic configuration, phase composition and atomic intermixing in Fe-rich Fe1-xCrx and Fe1-xMox ribbons (x = 0.05, 0.10, 0.15), of potential interest for high-temperature applications and nuclear devices, are investigated in this study in relation to specific processing and annealing routes. The Fe-based thin ribbons have been prepared by induction melting, followed by melt spinning, and further annealed in He at temperatures up to 1250 °C. The complex structural, compositional and atomic-configuration characterisation has been performed by means of X-ray diffraction (XRD), transmission Mössbauer spectroscopy and differential scanning calorimetry (TG-DSC). The XRD analysis indicates the formation of the desired solid solutions with body-centred cubic (bcc) structure in the as-quenched state. The Mössbauer spectroscopy results have been analysed in terms of the two-shell model. The distribution of Cr/Mo atoms in the first two coordination spheres is not homogeneous, especially after annealing, as supported by the short-range order parameters. In addition, high-temperature annealing treatments give rise to oxidation of Fe (to haematite, maghemite and magnetite) at the surface of the ribbons. Fe1-xCrx alloys are structurally more stable under annealing at 700 °C than their Mo counterparts. Annealing at 1250 °C in He drastically enhances Cr clustering around Fe nuclei.

  18. FP core carrier technique: thermoplasticized gutta-percha root canal obturation technique using polypropylene core.

    PubMed

    Kato, Hiroshi; Nakagawa, Kan-Ichi

    2010-01-01

    Core carrier techniques are unique among the various root canal filling techniques for delivering and compacting gutta-percha in the prepared root canal system. Thermafil (TF), considered the major core carrier device, is provided as an obturator consisting of a master core coated with thermoplasticized gutta-percha. We have devised a thermoplasticized gutta-percha filling technique using a polypropylene core, FlexPoint® NEO (FP), which was developed as a canal filling material that can be sterilized in an autoclave. FP can therefore be coated with thermoplasticized gutta-percha and inserted into the prepared canal as a core carrier. The FP core carrier technique offers many advantages over the TF system: the core can be tested in the root canal and verified radiographically; the core can be adjusted to fit and surplus material easily removed; furthermore, the core can be easily removed for retreatment. The clinical procedure of the FP core carrier technique is simple and similar to that of the TF system. Thermoplasticized gutta-percha in a syringe is heated in an oven and extruded onto the FP core carrier after a trial insertion. The FP core carrier is inserted into the root canal to the working length. Excess FP is then removed with a red-hot plastic instrument at the orifice of the root canal. The FP core carrier technique incorporates the clinical advantages of the existing TF system while minimizing the disadvantages. Hence the FP core carrier technique is very useful in clinical practice. This paper describes the FP core carrier technique as a new core-based method.

  19. Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor

    NASA Technical Reports Server (NTRS)

    Butler, C.; Albright, D.

    2007-01-01

    Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code. However, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model and an improvement of the first-order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A Code for FPCR Kinetics Model Feasibility Analysis," dated June 1, 2002.
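
    A generic sketch of a one-delayed-group point reactor kinetics model, the kind of point-core model referenced for QCALC1; the kinetics parameters and the step reactivity insertion below are illustrative assumptions, not FPCR data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # One-delayed-group parameters (illustrative): delayed-neutron fraction,
    # neutron generation time (s), and precursor decay constant (1/s).
    BETA, LAMBDA_GEN, DECAY = 0.0065, 1.0e-5, 0.08

    def point_kinetics(t, y, rho):
        n, c = y  # neutron density and delayed-precursor concentration
        dn = (rho(t) - BETA) / LAMBDA_GEN * n + DECAY * c
        dc = BETA / LAMBDA_GEN * n - DECAY * c
        return [dn, dc]

    rho = lambda t: 0.001 if t > 1.0 else 0.0        # small step insertion
    y0 = [1.0, BETA / (LAMBDA_GEN * DECAY)]          # equilibrium start
    # LSODA handles the stiffness of the prompt time scale.
    sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, args=(rho,), method="LSODA")
    print(sol.y[0, -1])  # neutron density at t = 10 s
    ```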

  20. Sulforaphane Inhibits Lipopolysaccharide-Induced Inflammation, Cytotoxicity, Oxidative Stress, and miR-155 Expression and Switches to Mox Phenotype through Activating Extracellular Signal-Regulated Kinase 1/2–Nuclear Factor Erythroid 2-Related Factor 2/Antioxidant Response Element Pathway in Murine Microglial Cells

    PubMed Central

    Eren, Erden; Tufekci, Kemal Ugur; Isci, Kamer Burak; Tastan, Bora; Genc, Kursad; Genc, Sermin

    2018-01-01

    Sulforaphane (SFN) is a natural product with cytoprotective, anti-inflammatory, and antioxidant effects. In this study, we evaluated the mechanisms of its effects on lipopolysaccharide (LPS)-induced cell death, inflammation, oxidative stress, and polarization in murine microglia. We found that SFN protects N9 microglial cells upon LPS-induced cell death and suppresses LPS-induced levels of secreted pro-inflammatory cytokines, tumor necrosis factor-alpha, interleukin-1 beta, and interleukin-6. SFN is also a potent inducer of redox sensitive transcription factor, nuclear factor erythroid 2-related factor 2 (Nrf2), which is responsible for the transcription of antioxidant, cytoprotective, and anti-inflammatory genes. SFN induced translocation of Nrf2 to the nucleus via extracellular signal-regulated kinase 1/2 (ERK1/2) pathway activation. siRNA-mediated knockdown study showed that the effects of SFN on LPS-induced reactive oxygen species, reactive nitrogen species, and pro-inflammatory cytokine production and cell death are partly Nrf2 dependent. Mox phenotype is a novel microglial phenotype that has roles in oxidative stress responses. Our results suggested that SFN induced the Mox phenotype in murine microglia through Nrf2 pathway. SFN also alleviated LPS-induced expression of inflammatory microRNA, miR-155. Finally, SFN inhibits microglia-mediated neurotoxicity as demonstrated by conditioned medium and co-culture experiments. In conclusion, SFN exerts protective effects on microglia and modulates the microglial activation state. PMID:29410668

  1. Incorporating Computer-Aided Software in the Undergraduate Chemical Engineering Core Courses

    ERIC Educational Resources Information Center

    Alnaizy, Raafat; Abdel-Jabbar, Nabil; Ibrahim, Taleb H.; Husseini, Ghaleb A.

    2014-01-01

    Introductions of computer-aided software and simulators are implemented during the sophomore-year of the chemical engineering (ChE) curriculum at the American University of Sharjah (AUS). Our faculty concurs that software integration within the curriculum is beneficial to our students, as evidenced by the positive feedback received from industry…

  2. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.

  3. Computer Science (CS) Education in Indian Schools: Situation Analysis Using Darmstadt Model

    ERIC Educational Resources Information Center

    Raman, Raghu; Venkatasubramanian, Smrithi; Achuthan, Krishnashree; Nedungadi, Prema

    2015-01-01

    Computer science (CS) and its enabling technologies are at the heart of this information age, yet its adoption as a core subject by senior secondary students in Indian schools is low and has not reached critical mass. Though there have been efforts to create core curriculum standards for subjects like Physics, Chemistry, Biology, and Math, CS…

  4. DUBLIN CORE

    EPA Science Inventory

    The Dublin Core is a metadata element set intended to facilitate discovery of electronic resources. It was originally conceived for author-generated descriptions of Web resources, and the Dublin Core has attracted broad ranging international and interdisciplinary support. The cha...
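
    A short sketch of a Dublin Core record built with the DCMI element set and serialized with Python's standard library; the record content is invented for illustration.

    ```python
    import xml.etree.ElementTree as ET

    DC_NS = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("dc", DC_NS)

    # A handful of the fifteen Dublin Core elements; values are made up.
    record = {
        "title": "Sample records for mox core computational",
        "creator": "Example Aggregator",
        "subject": "MOX fuel; reactor cores; scientific computing",
        "type": "Text",
        "format": "text/plain",
        "language": "en",
    }

    root = ET.Element("metadata")
    for element, value in record.items():
        ET.SubElement(root, f"{{{DC_NS}}}{element}").text = value
    print(ET.tostring(root, encoding="unicode"))
    ```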

  5. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  6. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock-speed improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts per processor. The multi-core evolution has been expressed both in symmetric multiprocessor (SMP) architectures and in CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a molecular dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Effect of PEG and water-soluble chitosan coating on moxifloxacin-loaded PLGA long-circulating nanoparticles.

    PubMed

    Mustafa, Sanaul; Devi, V Kusum; Pai, Roopa S

    2017-02-01

    Moxifloxacin (MOX) is a Mycobacterium tuberculosis DNA gyrase inhibitor. Due to its intense hydrophilicity, MOX is cleared from the body within 24 h, so repetitive doses are required, which may result in hepatotoxicity and in the acquisition of MOX-resistant TB associated with its use. To overcome the aforementioned limitations, the current study aimed to develop PLGA nanoparticles (PLGA NPs) to act as an efficient carrier for controlled delivery of MOX. To achieve a substantial extension in blood circulation, a combined design, affixation of polyethylene glycol (PEG) to MOX-PLGA NPs and adsorption of water-soluble chitosan (WSC) (cationic deacetylated chitin) to the particle surface, was chosen for surface modification of the NPs. Surface-modified NPs (MOX-PEG-WSC NPs) were prepared to provide controlled delivery and to circulate in the bloodstream for an extended period of time, thus minimizing dosing frequency. In vivo pharmacokinetics and biodistribution following oral administration were investigated. NP surface charge was close to neutral (+4.76 mV) and significantly affected by the WSC coating. MOX-PEG-WSC NPs presented striking prolongation of blood circulation, reduced protein binding, and a prolonged blood-circulation half-life with resultant reduced liver sequestration vis-à-vis MOX-PLGA NPs. The studies therefore indicate the successful development of a MOX-PEG-WSC NP formulation with sustained-release behavior, which permits a low dosing frequency.

  8. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism and extensive use of vendor supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using the Intel 2013 compiler and ~ 1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.

  9. Core-melt source reduction system

    DOEpatents

    Forsberg, Charles W.; Beahm, Edward C.; Parker, George W.

    1995-01-01

    A core-melt source reduction system for ending the progression of a molten core during a core-melt accident, resulting in a stable, solid, cool matrix. The system includes alternating layers of a core debris absorbing material and a barrier material. The core debris absorbing material serves to react with and absorb the molten core such that containment overpressurization and/or failure does not occur. The barrier material slows the progression of the molten core debris through the system such that the molten core has sufficient time to react with the core-absorbing material. The system includes a provision for cooling the glass/molten core mass after the reaction such that a stable, solid, cool matrix results.

  10. Computer-assisted learning in anatomy at the international medical school in Debrecen, Hungary: a preliminary report.

    PubMed

    Kish, Gary; Cook, Samuel A; Kis, Gréta

    2013-01-01

    The University of Debrecen's Faculty of Medicine has an international, multilingual student population with anatomy courses taught in English to all but Hungarian students. An elective computer-assisted gross anatomy course, the Computer Human Anatomy (CHA), has been taught in English at the Anatomy Department since 2008. This course focuses on an introduction to anatomical digital images along with clinical cases. This low-budget course has a large visual component using images from magnetic resonance imaging and computer axial tomogram scans, ultrasound clinical studies, and readily available anatomy software that presents topics which run in parallel to the university's core anatomy curriculum. From the combined computer images and CHA lecture information, students are asked to solve computer-based clinical anatomy problems in the CHA computer laboratory. A statistical comparison was undertaken of core anatomy oral examination performances of English program first-year medical students who took the elective CHA course and those who did not in the three academic years 2007-2008, 2008-2009, and 2009-2010. The results of this study indicate that the CHA-enrolled students improved their performance on required anatomy core curriculum oral examinations (P < 0.001), suggesting that computer-assisted learning may play an active role in anatomy curriculum improvement. These preliminary results have prompted ongoing evaluation of what specific aspects of CHA are valuable and which students benefit from computer-assisted learning in a multilingual and diverse cultural environment. Copyright © 2012 American Association of Anatomists.

  11. Internal core tightener

    DOEpatents

    Brynsvold, Glen V.; Snyder, Jr., Harold J.

    1976-06-22

    An internal core tightener which is a linear actuated (vertical actuation motion) expanding device utilizing a minimum of moving parts to perform the lateral tightening function. The key features are: (1) large contact areas to transmit loads during reactor operation; (2) actuation cam surfaces loaded only during clamping and unclamping operation; (3) separation of the parts and internal operation involved in the holding function from those involved in the actuation function; and (4) preloaded pads with compliant travel at each face of the hexagonal assembly at the two clamping planes to accommodate thermal expansion and irradiation induced swelling. The latter feature enables use of a "fixed" outer core boundary, and thus eliminates the uncertainty in gross core dimensions, and potential for rapid core reactivity changes as a result of core dimensional change.

  12. The Interior Angular Momentum of Core Hydrogen Burning Stars from Gravity-mode Oscillations

    NASA Astrophysics Data System (ADS)

    Aerts, C.; Van Reeth, T.; Tkachenko, A.

    2017-09-01

    A major uncertainty in the theory of stellar evolution is the angular momentum distribution inside stars and its change during stellar life. We compose a sample of 67 stars in the core hydrogen burning phase with a log g value from high-resolution spectroscopy, as well as an asteroseismic estimate of the near-core rotation rate derived from gravity-mode oscillations detected in space photometry. This assembly includes 8 B-type stars and 59 AF-type stars, covering a mass range from 1.4 to 5 M⊙, i.e., it concerns intermediate-mass stars born with a well-developed convective core. The sample covers projected surface rotation velocities v sin i ∈ [9, 242] km s⁻¹ and core rotation rates up to 26 μHz, which corresponds to 50% of the critical rotation frequency. We find deviations from rigid rotation to be moderate in the single stars of this sample. We place the near-core rotation rates in an evolutionary context and find that the core rotation must drop drastically before or during the short phase between the end of the core hydrogen burning and the onset of core helium burning. We compute the spin parameter, which is the ratio of twice the rotation rate to the mode frequency (also known as the inverse Rossby number), for 1682 gravity modes and find the majority (95%) to occur in the sub-inertial regime. The 10 stars with Rossby modes have spin parameters between 14 and 30, while the gravito-inertial modes cover the range from 1 to 15.
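
    For reference, a sketch of the spin parameter as the abstract defines it, with ω the mode frequency in the co-rotating frame and Ω the near-core rotation rate (the symbols are assumptions of this sketch):

    ```latex
    s = \frac{2\Omega}{\omega}, \qquad s > 1 \;\Longleftrightarrow\; \omega < 2\Omega \quad \text{(sub-inertial regime)}
    ```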

  13. Effects of anisotropic turbulent thermal diffusion on spherical magnetoconvection in the Earth's core

    NASA Astrophysics Data System (ADS)

    Ivers, D. J.; Phillips, C. G.

    2018-03-01

    We re-consider the plate-like model of turbulence in the Earth's core proposed by Braginsky and Meytlis (1990) and show that it is plausible for core parameters not only in polar regions but also at mid- and low-latitudes where rotation and gravity are not parallel, except in a very thin equatorial layer. In this model the turbulence is highly anisotropic, with preferred directions imposed by the Earth's rotation and the magnetic field. Current geodynamo computations effectively model sub-grid-scale turbulence by using isotropic viscous and thermal diffusion values significantly greater than the molecular values of the Earth's core. We consider a local turbulent dynamo model for the Earth's core in which the mean magnetic field, velocity and temperature satisfy the Boussinesq induction, momentum and heat equations with an isotropic turbulent Ekman number and Roberts number. The anisotropy is modelled only in the thermal diffusion tensor, with the Earth's rotation and magnetic field as preferred directions. Nonlocal organising effects of gravity and rotation (but not aspect ratio in the Earth's core), such as an inverse cascade and nonlocal transport, are assumed to occur at longer length scales, which computations may accurately capture with sufficient resolution. To investigate the implications of this anisotropy for the proposed turbulent dynamo model, we examine the linear instability of turbulent magnetoconvection on length scales longer than the background turbulence in a rotating sphere with electrically insulating exterior for no-slip and isothermal boundary conditions. The equations are linearised about an axisymmetric basic state with a conductive temperature, azimuthal magnetic field and differential rotation. The basic state temperature is a function of the anisotropy and the spherical radius. Elsasser numbers in the range 1-20 and turbulent Roberts numbers 0.01-1 are considered for both equatorial symmetries of the magnetic basic state. It is found

  14. Computer-assisted self interviewing in sexual health clinics.

    PubMed

    Fairley, Christopher K; Sze, Jun Kit; Vodstrcil, Lenka A; Chen, Marcus Y

    2010-11-01

    This review describes the published information on what constitutes the elements of a core sexual history and the use of computer-assisted self interviewing (CASI) within sexually transmitted disease clinics. We searched OVID Medline from 1990 to February 2010 using the terms "computer assisted interviewing" and "sex," and to identify published articles on a core sexual history, we used the term "core sexual history." Since 1990, 3 published articles used a combination of expert consensus, formal clinician surveys, and the Delphi technique to decide on what questions form a core sexual health history. Sexual health histories from 4 countries mostly ask about the sex of the partners, the number of partners (although the time period varies), the types of sex (oral, anal, and vaginal) and condom use, pregnancy intent, and contraceptive methods. Five published studies in the United States, Australia, and the United Kingdom compared CASI with in person interviews in sexually transmitted disease clinics. In general, CASI identified higher risk behavior more commonly than clinician interviews, although there were substantial differences between studies. CASI was found to be highly acceptable and individuals felt it allowed more honest reporting. Currently, there are insufficient data to determine whether CASI results in differences in sexually transmitted infection testing, diagnosis, or treatment or if CASI improves the quality of sexual health care or its efficiency. The potential public health advantages of the widespread use of CASI are discussed.

  15. Core Values | NREL

    Science.gov Websites

    NREL's core values are rooted in a safe and supportive work environment and guide our everyday actions and efforts: a safe and supportive work environment; respect for the rights of individuals and for the physical and social environment; and integrity, maintaining the highest standard of ethics, honesty, and integrity.

  16. Coring device with an improved core sleeve and anti-gripping collar with a collective core catcher

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Story, A.L.; Filshtinsky, M.

    1986-01-28

    This patent describes an improved coring apparatus used in combination with a coring bit and drill string. This device consists of: an outer driving structure adapted to be connected at one end to the coring bit for cutting a core in a borehole, and at the other end to the lower end of the drill string in telescoping and co-rotatable manner therewith; an inner barrel disposed within the outer driving structure and including a lower end portion adjacent to the bit; first means supporting the inner barrel in spaced relationship to the outer driving structure while permitting rotation of the driving structure with respect to the inner barrel; a woven metal mesh sleeve mounted in surrounding relation on at least a portion of the exterior surface of the inner barrel; second means, connected to a free end of the sleeve opposite the leading portion of the sleeve, for maintaining the portion of the sleeve which surrounds the inner barrel in compression and to maintain an inside diameter greater than the outside diameter of the inner barrel of the portion of the sleeve surrounding the inner barrel while the portion of the sleeve positioned inside the inner barrel being in tension to grip and compress a core received within the sleeve and having an outside diameter less than the inside diameter of the inner barrel when in tension, wherein the second means is also for engaging the core when the means is drawn into the inner barrel, and third means positioned within the inner barrel and connected to the leading portion of the sleeve to draw the sleeve within the inner barrel and to apply tension to the portion of the sleeve within the barrel to encase and grip the core as it is cut.

  17. Parallel Agent-Based Simulations on Clusters of GPUs and Multi-Core Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaby, Brandon G; Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    An effective latency-hiding mechanism is presented in the parallelization of agent-based model simulations (ABMS) with millions of agents. The mechanism is designed to accommodate the hierarchical organization as well as heterogeneity of current state-of-the-art parallel computing platforms. We use it to explore the computation vs. communication trade-off continuum available with the deep computational and memory hierarchies of extant platforms and present a novel analytical model of the tradeoff. We describe our implementation and report preliminary performance results on two distinct parallel platforms suitable for ABMS: CUDA threads on multiple, networked graphical processing units (GPUs), and pthreads on multi-core processors. Message Passing Interface (MPI) is used for inter-GPU as well as inter-socket communication on a cluster of multiple GPUs and multi-core processors. Results indicate the benefits of our latency-hiding scheme, delivering over 100-fold improvement in runtime for certain benchmark ABMS application scenarios with several million agents. This speed improvement is obtained on our system that is already two to three orders of magnitude faster on one GPU than an equivalent CPU-based execution in a popular simulator in Java. Thus, the overall execution of our current work is over four orders of magnitude faster when executed on multiple GPUs.

  18. pyPaSWAS: Python-based multi-core CPU and GPU sequence alignment.

    PubMed

    Warris, Sven; Timal, N Roshan N; Kempenaar, Marcel; Poortinga, Arne M; van de Geest, Henri; Varbanescu, Ana L; Nap, Jan-Peter

    2018-01-01

    Our previously published CUDA-only application PaSWAS for Smith-Waterman (SW) sequence alignment of any type of sequence on NVIDIA-based GPUs is platform-specific and has therefore been adopted less widely than it could have been. The OpenCL language is supported more widely and allows use on a variety of hardware platforms. Moreover, there is a need to promote the adoption of parallel computing in bioinformatics by making its use and extension simpler through more and better application of high-level languages commonly used in bioinformatics, such as Python. The novel application pyPaSWAS presents the parallel SW sequence alignment code fully packaged in Python. It is a generic SW implementation running on several hardware platforms with multi-core systems and/or GPUs that provides accurate sequence alignments that can also be inspected for alignment details. Additionally, pyPaSWAS supports the affine gap penalty. Python libraries are used for automated system configuration, I/O and logging. This way, the Python environment will stimulate further extension and use of pyPaSWAS. pyPaSWAS presents an easy Python-based environment for accurate and retrievable parallel SW sequence alignments on GPUs and multi-core systems. The strategy of integrating Python with high-performance parallel compute languages to create a developer- and user-friendly environment should be considered for other computationally intensive bioinformatics algorithms.
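
    For orientation, a compact pure-Python Smith-Waterman with affine gap penalties (the Gotoh recurrences) is sketched below; pyPaSWAS runs this kind of recurrence on GPUs and multi-core CPUs. The scoring values are arbitrary examples, not pyPaSWAS defaults.

        # Local alignment with affine gaps: H = best score, E/F = gap states.
        def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-2, gap_extend=-1):
            n, m = len(a), len(b)
            H = [[0.0] * (m + 1) for _ in range(n + 1)]  # best local score
            E = [[0.0] * (m + 1) for _ in range(n + 1)]  # gap consuming b[j-1]
            F = [[0.0] * (m + 1) for _ in range(n + 1)]  # gap consuming a[i-1]
            best = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    E[i][j] = max(H[i][j - 1] + gap_open, E[i][j - 1] + gap_extend)
                    F[i][j] = max(H[i - 1][j] + gap_open, F[i - 1][j] + gap_extend)
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                    best = max(best, H[i][j])
            return best

        print(smith_waterman_affine("ACACACTA", "AGCACACA"))  # best local score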

  19. Analytical methods in the high conversion reactor core design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeggel, W.; Oldekop, W.; Axmann, J.K.

    High conversion reactor (HCR) design methods have been used at the Technical University of Braunschweig (TUBS) with the technological support of Kraftwerk Union (KWU). The present state and objectives of this cooperation between KWU and TUBS in the field of HCRs have been described using existing design models and current activities aimed at further development and validation of the codes. The hard physical and thermal-hydraulic boundary conditions of pressurized water reactor (PWR) cores with a high degree of fuel utilization result from the tight packing of the HCR fuel rods and the high fissionable plutonium content of the fuel. In terms of design, the problem will be solved with rod bundles whose fuel rods are adjusted by helical spacers to the proposed small rod pitches. These HCR properties require novel computational models for neutron physics, thermal hydraulics, and fuel rod design. By means of a survey of the codes, the analytical procedure for present-day HCR core design is presented. The design programs are currently under intensive development, as design tools with a solid, scientific foundation and with essential parameters that are widely valid and are required for a promising optimization of the HCR core. Design results and a survey of future HCR development are given. In this connection, the reoptimization of the PWR core in the direction of an HCR is considered a fascinating scientific task, with respect to both economic and safety aspects.

  20. Muscle oxygenation as an early predictor of shock severity in trauma patients

    PubMed Central

    Arakaki, Lorilee S. L.; Bulger, Eileen M.; Ciesielski, Wayne A.; Carlbom, David J.; Fisk, Dana M.; Sheehan, Kellie L.; Asplund, Karin M.; Schenkman, Kenneth A.

    2016-01-01

    Introduction: We evaluated the potential utility of a new prototype noninvasive muscle oxygenation (MOx) measurement for the identification of shock severity in a population of patients admitted to the trauma resuscitation rooms of a Level I regional trauma center. The goal of this project was to correlate MOx with shock severity as defined by standard measures of shock: systolic blood pressure, heart rate, and lactate. Methods: Optical spectra were collected from subjects by placement of a custom-designed optical probe over the first dorsal interosseous muscles on the back of the hand. Spectra were acquired from trauma patients as soon as possible upon admission to the trauma resuscitation room. Patients with any injury were eligible for study. MOx was determined from the collected optical spectra with a multi-wavelength analysis that used both visible and near-infrared regions of light. Shock severity was determined in each patient by a scoring system based on combined degrees of hypotension, tachycardia, and lactate. MOx values of patients in each shock severity group (mild, moderate, and severe) were compared using two-sample t-tests. Results: In 17 healthy control patients, the mean MOx value was 91.0 ± 5.5%. A total of 69 trauma patients were studied. Patients classified as having mild shock had a mean MOx of 62.5 ± 26.2% (n = 33), those classified as in moderate shock had a mean MOx of 56.9 ± 26.9% (n = 25) and those classified as in severe shock had a mean MOx of 31.0 ± 17.1% (n = 11). Mean MOx for each of these groups was statistically different from the healthy control group (p<0.05). Receiver operating characteristic (ROC) analyses show that MOx and shock index (heart rate/systolic blood pressure) identified shock similarly well (areas under the curve (AUC) = 0.857 and 0.828, respectively). However, MOx identified mild shock better than shock index in the same group of patients (AUC = 0.782 and 0.671, respectively). Conclusions: The results obtained from this

  1. Core-melt source reduction system

    DOEpatents

    Forsberg, C.W.; Beahm, E.C.; Parker, G.W.

    1995-04-25

    A core-melt source reduction system for ending the progression of a molten core during a core-melt accident and resulting in a stable solid cool matrix. The system includes alternating layers of a core debris absorbing material and a barrier material. The core debris absorbing material serves to react with and absorb the molten core such that containment overpressurization and/or failure does not occur. The barrier material slows the progression of the molten core debris through the system such that the molten core has sufficient time to react with the core absorbing material. The system includes a provision for cooling the glass/molten core mass after the reaction such that a stable solid cool matrix results. 4 figs.

  2. Characterizing core-periphery structure of complex network by h-core and fingerprint curve

    NASA Astrophysics Data System (ADS)

    Li, Simon S.; Ye, Adam Y.; Qi, Eric P.; Stanley, H. Eugene; Ye, Fred Y.

    2018-02-01

    It is proposed that the core-periphery structure of complex networks can be simulated by h-cores and fingerprint curves. While the features of the core structure are characterized by the h-core, the features of the periphery structure are visualized by a rose or spiral curve, a fingerprint curve linked to entire-network parameters. It is suggested that a complex network can be approached by h-core and rose curves as a first-order Fourier approach, where the core-periphery structure is characterized by five parameters: network h-index, network radius, degree power, network density and average clustering coefficient. The simulation resembles a Fourier-like analysis.
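
    As a concrete anchor for the h-core notion, here is a minimal Python sketch of a network h-index under its common definition (the largest h such that h nodes each have degree >= h; those nodes form the h-core). The graph is a made-up example, not data from the paper.

        def network_h_index(adj):
            # adj: dict mapping node -> set of neighbours.
            degrees = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
            h = 0
            for i, d in enumerate(degrees, start=1):
                if d >= i:
                    h = i
            return h

        toy_graph = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
        print(network_h_index(toy_graph))  # 2: nodes 1, 2 and 3 each have degree >= 2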

  3. Chamber-core structures for fairing acoustic mitigation

    NASA Astrophysics Data System (ADS)

    Ardelean, Emil; Williams, Andrew; Korshin, Nicholas; Henderson, Kyle; Lane, Steven; Richard, Robert

    2005-05-01

    Extreme noise and vibration levels at lift-off and during ascent can damage sensitive payload components. Recently, the Air Force Research Laboratory, Space Vehicles Directorate has investigated a composite structure fabrication approach, called chamber-core, for building payload fairings. Chamber-core offers a strong, lightweight structure with inherent noise attenuation characteristics. It uses one-inch square axial tubes that are sandwiched between inner and outer face-sheets to form a cylindrical fairing structure. These hollow tubes can be used as acoustic dampers to attenuate the amplitude response of low frequency acoustic resonances within the fairing's volume. A cylindrical, graphite-epoxy chamber-core structure was built to study noise transmission characteristics and to quantify the achievable performance improvement. The cylinder was tested in a semi-reverberant acoustics laboratory using bandlimited random noise at sound pressure levels up to 110 dB. The performance was measured using external and internal microphones. The noise reduction was computed as the ratio of the spatially averaged external response to the spatially averaged interior response. The noise reduction provided by the chamber-core cylinder was measured over three bandwidths, 20 Hz to 500 Hz, 20 Hz to 2000 Hz, and 20 Hz to 5000 Hz. For the bare cylinder with no acoustic resonators, the structure provided approximately 13 dB of attenuation over the 20 Hz to 500 Hz bandwidth. With the axial tubes acting as acoustic resonators at various frequencies over the bandwidth, the noise reduction provided by the cylinder increased to 18.2 dB, an overall increase of 4.8 dB over the bandwidth. Narrow-band reductions greater than 10 dB were observed at specific low frequency acoustic resonances. This was accomplished with virtually no added mass to the composite cylinder.
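
    The noise-reduction metric described above reduces to simple arithmetic; a hedged Python sketch, with invented microphone values, is given below.

        import math

        def noise_reduction_db(external_pa, internal_pa):
            # NR = 10*log10(spatially averaged external / internal mean-square pressure).
            ext = sum(p * p for p in external_pa) / len(external_pa)
            interior = sum(p * p for p in internal_pa) / len(internal_pa)
            return 10.0 * math.log10(ext / interior)

        # Invented sound-pressure samples (Pa), not the paper's measurements:
        print(noise_reduction_db([2.0, 2.2, 1.9], [0.4, 0.5, 0.45]))  # ~13 dB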

  4. Commentary: Ubiquitous Computing Revisited--A New Perspective

    ERIC Educational Resources Information Center

    Bull, Glen; Garofalo, Joe

    2006-01-01

    In 2002, representatives from the teacher educator associations representing the core content areas (science, mathematics, language arts, and social studies) and educational technology met at the National Technology Leadership Retreat (NTLR) to discuss potential implications of ubiquitous computing for K-12 schools. This paper re-examines some of…

  5. Design, synthesis and applications of core-shell, hollow core, and nanorattle multifunctional nanostructures.

    PubMed

    El-Toni, Ahmed Mohamed; Habila, Mohamed A; Labis, Joselito Puzon; ALOthman, Zeid A; Alhoshan, Mansour; Elzatahry, Ahmed A; Zhang, Fan

    2016-02-07

    With the evolution of nanoscience and nanotechnology, studies have been focused on manipulating nanoparticle properties through the control of their size, composition, and morphology. As nanomaterial research has progressed, the foremost focus has gradually shifted from synthesis, morphology control, and characterization of properties to the investigation of function and the utility of integrating these materials and chemical sciences with the physical, biological, and medical fields, which therefore necessitates the development of novel materials that are capable of performing multiple tasks and functions. The construction of multifunctional nanomaterials that integrate two or more functions into a single geometry has been achieved through the surface-coating technique, which created a new class of substances designated as core-shell nanoparticles. Core-shell materials have growing and expanding applications due to the multifunctionality that is achieved through the formation of multiple shells as well as the manipulation of core/shell materials. Moreover, core removal from core-shell-based structures offers excellent opportunities to construct multifunctional hollow core architectures that possess huge storage capacities, low densities, and tunable optical properties. Furthermore, the fabrication of nanomaterials that have the combined properties of a core-shell structure with that of a hollow one has resulted in the creation of a new and important class of substances, known as the rattle core-shell nanoparticles, or nanorattles. The design strategies of these new multifunctional nanostructures (core-shell, hollow core, and nanorattle) are discussed in the first part of this review. In the second part, different synthesis and fabrication approaches for multifunctional core-shell, hollow core-shell and rattle core-shell architectures are highlighted. Finally, in the last part of the article, the versatile and diverse applications of these nanoarchitectures in

  6. The Interplay of Opacities and Rotation in Promoting the Explosion of Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Vartanyan, David; Burrows, Adam; Radice, David

    2018-01-01

    For over five decades, the mechanism of explosion in core-collapse supernovae has been a central unsolved problem in astrophysics, challenging both our computational capabilities and our understanding of relevant physics. Current simulations often produce explosions, but they are at times underenergetic. The neutrino mechanism, wherein a fraction of emitted neutrinos is absorbed in the mantle of the star to reignite the stalled shock, remains the dominant model for reviving explosions in massive stars undergoing core collapse. We present here a diverse suite of 2D axisymmetric simulations produced by FORNAX, a highly parallelizable multidimensional supernova simulation code. We explore the effects of various corrections, including the many-body correction, to neutrino-matter opacities and the possible role of rotation in promoting explosion amongst various core-collapse progenitors.

  7. MUTILS - a set of efficient modeling tools for multi-core CPUs implemented in MEX

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2013-04-01

    The need for computational performance is common in scientific applications, and in particular in numerical simulations, where high resolution models require efficient processing of large amounts of data. Especially in the context of geological problems, the need to increase the model resolution to resolve physical and geometrical complexities seems to have no limits. Alas, the performance of new generations of CPUs no longer improves simply through increased clock speeds. Current industrial trends are to increase the number of computational cores. As a result, parallel implementations are required in order to fully utilize the potential of new processors and to study more complex models. We target simulations on small to medium scale shared memory computers: from laptops and desktop PCs with ~8 CPU cores and up to tens of GB of memory, to high-end servers with ~50 CPU cores and hundreds of GB of memory. In this setting MATLAB is often the environment of choice for scientists who want to implement their own models with little effort. It is a useful general purpose mathematical software package, but due to its versatility some of its functionality is not as efficient as it could be. In particular, the challenges of modern multi-core architectures are not fully addressed. We have developed MILAMIN 2 - an efficient FEM modeling environment written in native MATLAB. Amongst others, MILAMIN provides functions to define model geometry, generate and convert structured and unstructured meshes (also through interfaces to external mesh generators), compute element and system matrices, apply boundary conditions, solve the system of linear equations, address non-linear and transient problems, and perform post-processing. MILAMIN strives to combine ease of code development with computational efficiency. Where possible, the code is optimized and/or parallelized within the MATLAB framework. Native MATLAB is augmented with the MUTILS library - a set of MEX functions that

  8. Seismic velocities at the core-mantle boundary inferred from P waves diffracted around the core

    NASA Astrophysics Data System (ADS)

    Sylvander, Matthieu; Ponce, Bruno; Souriau, Annie

    1997-05-01

    The very base of the mantle is investigated with core-diffracted P-wave (P diff) travel times published by the International Seismological Centre (ISC) for the period 1964-1987. Apparent slownesses are computed for two-station profiles using a difference method. As the short-period P diff mostly sample a very thin layer above the core-mantle boundary (CMB), a good approximation of the true velocity structure at the CMB can be derived from the apparent slownesses. More than 27000 profiles are built, and this provides an unprecedented P diff sampling of the CMB. The overall slowness distribution has an average value of 4.62 s/deg, which corresponds to a velocity more than 4% lower than that of most mean radial models. An analysis of the residuals of absolute ISC P and P diff travel times is independently carried out and confirms this result. It also shows that the degree of heterogeneities is significantly higher at the CMB than in the lower mantle. A search for lateral velocity variations is then undertaken; a first large-scale investigation reveals the presence of coherent slowness anomalies of very large dimensions of the order of 3000 km at the CMB. A tomographic inversion is then performed, which confirms the existence of pronounced (±8-10%) lateral velocity variations and provides a reliable map of the heterogeneities in the northern hemisphere. The influence of heterogeneity in the overlying mantle, of noise in the data and of CMB topography is evaluated; it seemingly proves minor compared with the contribution of heterogeneities at the CMB. Our results support the rising idea of a thin, low-velocity laterally varying boundary layer at the base of the D″ layer. The two principal candidate interpretations are the occurrence of partial melting, or the presence of a chemically distinct layer, featuring infiltrated core material.
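
    The two-station difference method amounts to dividing a differential travel time by a differential epicentral distance; a small Python sketch with invented numbers (chosen to land on the 4.62 s/deg average quoted above) follows.

        def apparent_slowness(t1_s, t2_s, delta1_deg, delta2_deg):
            # Differential travel time over differential distance, in s/deg.
            return (t2_s - t1_s) / (delta2_deg - delta1_deg)

        # Two hypothetical stations recording the same diffracted P wave:
        print(apparent_slowness(t1_s=812.0, t2_s=835.1, delta1_deg=100.0, delta2_deg=105.0))
        # -> 4.62 s/deg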

  9. Toward Microscopic Equations of State for Core-Collapse Supernovae from Chiral Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Aboona, Bassam; Holt, Jeremy

    2017-09-01

    Chiral effective field theory provides a modern framework for understanding the structure and dynamics of nuclear many-body systems. Recent works have had much success in applying the theory to describe the ground- and excited-state properties of light and medium-mass atomic nuclei when combined with ab initio numerical techniques. Our aim is to extend the application of chiral effective field theory to describe the nuclear equation of state required for supercomputer simulations of core-collapse supernovae. Given the large range of densities, temperatures, and proton fractions probed during stellar core collapse, microscopic calculations of the equation of state require large computational resources on the order of one million CPU hours. We investigate the use of graphics processing units (GPUs) to significantly reduce the computational cost of these calculations, which will enable a more accurate and precise description of this important input to numerical astrophysical simulations. Cyclotron Institute at Texas A&M, NSF Grant: PHY 1659847, DOE Grant: DE-FG02-93ER40773.

  10. Complex Inner Core of the Earth

    NASA Astrophysics Data System (ADS)

    Tkalcic, H.; Pachhai, S.; Tanaka, S.; Mattesini, M.; Stephenson, J.

    2015-12-01

    Recent studies have revealed an increasingly complex structure of the Earth's inner core (IC) in properties such as seismic velocity, attenuation, anisotropy, and differential rotation. In addition, the inner core boundary (ICB) has proven to be more complex than just a dividing boundary between the liquid outer core and the solid IC. On one hand, these advancements have been achieved due to the availability of new data. On the other hand, this is due to better computational facilities, the introduction of new mathematical techniques to this field of study, and a multidisciplinary approach. Through first principles treatment of global seismological differential travel time data, it is possible to infer a complex mineralogical structure of the IC, consisting of at least three different phases of iron. This has the potential to unify seismological observations and interpretation of IC anisotropy with mineral physics and recent geodynamical scenarios suggesting a predominant degree 1 structure in the IC, although a new complexity emerges from recent attenuation and isotropic velocity studies. A number of studies have recently shown lateral variability of these properties in the uppermost IC, to an increasingly more complex extent than a simple harmonic degree 1. While large earthquakes recorded on individual stations constrain established ray-path corridors through the IC, large arrays provide an unprecedented and overwhelming number of deep Earth-sensitive data. For example, the most complete collection of empirical travel time curves of core phases, from simultaneous recordings of a distant individual earthquake on hundreds of stations, is now within reach. Similarly, we can recover hundreds of simultaneous observations of PKiKP and PcP waves from more proximate earthquakes. Traditionally, these have been used to study the sharpness of the ICB with a far more modest number of data points in the time domain. A new study of these observations in the frequency domain

  11. A diurnal resonance in the ocean tide and in the earth's load response due to the resonant free 'core nutation'

    NASA Technical Reports Server (NTRS)

    Wahr, J. M.; Sasao, T.

    1981-01-01

    The effects of the oceans, which are subject to a resonance due to a free rotational eigenmode of an elliptical, rotating earth with a fluid outer core having an eigenfrequency of (1 + 1/460) cycle/day, on the body tide and nutational response of the earth to the diurnal luni-solar tidal force are computed. The response of an elastic, rotating, elliptical, oceanless earth with a fluid outer core to a given load distribution on its surface is first considered, and the tidal sea level height for equilibrium and nonequilibrium oceans is examined. Computations of the effects of equilibrium and nonequilibrium oceans on the nutational and deformational responses of the earth are then presented which show small but significant perturbations to the retrograde 18.6-year and prograde six-month nutations, and more important effects on the earth body tide, which is also resonant at the free core nutation eigenfrequency.

  12. Interactions between core and matrix thalamocortical projections in human sleep spindle synchronization

    PubMed Central

    Bonjean, Maxime; Baker, Tanya; Bazhenov, Maxim; Cash, Sydney; Halgren, Eric; Sejnowski, Terrence

    2012-01-01

    Sleep spindles, bursts of 11–15 Hz activity that occur during non-REM sleep, are highly synchronous across the scalp when measured with EEG, but spindles measured simultaneously with MEG in humans have low spatial coherence and exhibit low correlation with the EEG signals. We developed a computational model to explore the hypothesis that the spatial coherence of the EEG spindle is a consequence of the diffuse matrix projections of the thalamus to layer 1, compared to the focal projections of the core pathway to layer 4 recorded by the MEG. Increasing the fanout of thalamocortical connectivity in the matrix pathway while keeping the core pathway fixed led to increased synchrony of the spindle activity in the superficial cortical layers in the model. In agreement with cortical recordings, the latency for spindles to spread from the core to the matrix was independent of the thalamocortical fanout but highly dependent on the probability of connections between cortical areas. PMID:22496571

  13. Electromagnetically driven westward drift and inner-core superrotation in Earth’s core

    PubMed Central

    Livermore, Philip W.; Hollerbach, Rainer; Jackson, Andrew

    2013-01-01

    A 3D numerical model of the earth’s core with a viscosity two orders of magnitude lower than the state of the art suggests a link between the observed westward drift of the magnetic field and superrotation of the inner core. In our model, the axial electromagnetic torque has a dominant influence only at the surface and in the deepest reaches of the core, where it respectively drives a broad westward flow rising to an axisymmetric equatorial jet and imparts an eastward-directed torque on the solid inner core. Subtle changes in the structure of the internal magnetic field may alter not just the magnitude but the direction of these torques. This not only suggests that the quasi-oscillatory nature of inner-core superrotation [Tkalčić H, Young M, Bodin T, Ngo S, Sambridge M (2013) The shuffling rotation of the earth’s inner core revealed by earthquake doublets. Nat Geosci 6:497–502.] may be driven by decadal changes in the magnetic field, but further that historical periods in which the field exhibited eastward drift were contemporaneous with a westward inner-core rotation. The model further indicates a strong internal shear layer on the tangent cylinder that may be a source of torsional waves inside the core. PMID:24043841

  14. Ab Initio Computations and Active Thermochemical Tables Hand in Hand: Heats of Formation of Core Combustion Species.

    PubMed

    Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko

    2017-09-07

    The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H2, CH4, H2O, and NH3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.
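
    The additive structure of such composite schemes can be sketched in a few lines of Python; every number below is a placeholder, not a value from the paper.

        # Base CCSD(T) result plus ordered corrections (illustrative magnitudes only).
        corrections_kj_mol = {
            "complete-basis-set limit": -4.1,
            "higher-order excitations": 1.3,
            "anharmonic zero-point energy": -0.6,
            "core-valence": 0.9,
            "relativistic": -0.3,
            "diagonal Born-Oppenheimer": 0.05,
        }

        def composite_heat_of_formation(base_kj_mol, corrections):
            return base_kj_mol + sum(corrections.values())

        print(composite_heat_of_formation(120.0, corrections_kj_mol))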

  15. Gravitational torque on the inner core and decadal polar motion

    NASA Astrophysics Data System (ADS)

    Dumberry, Mathieu

    2008-03-01

    A decadal polar motion with an amplitude of approximately 25 milliarcsecs (mas) is observed over the last century, a motion known as the Markowitz wobble. The origin of this motion remains unknown. In this paper, we investigate the possibility that a time-dependent axial misalignment between the density structures of the inner core and mantle can explain this signal. The longitudinal displacement of the inner core density structure leads to a change in the global moment of inertia of the Earth. In addition, as a result of the density misalignment, a gravitational equatorial torque leads to a tilt of the oblate geometric figure of the inner core, causing a further change in the global moment of inertia. To conserve angular momentum, an adjustment of the rotation vector must occur, leading to a polar motion. We develop theoretical expressions for the change in the moment of inertia and the gravitational torque in terms of the angle of longitudinal misalignment and the density structure of the mantle. A model to compute the polar motion in response to time-dependent axial inner core rotations is also presented. We show that the polar motion produced by this mechanism can be polarized about a longitudinal axis and is expected to have decadal periodicities, two general characteristics of the Markowitz wobble. The amplitude of the polar motion depends primarily on the Y12 spherical harmonic component of mantle density, on the longitudinal misalignment between the inner core and mantle, and on the bulk viscosity of the inner core. We establish constraints on the first two of these quantities from considerations of the axial component of this gravitational torque and from observed changes in length of day. These constraints suggest that the maximum polar motion from this mechanism is smaller than 1 mas, and too small to explain the Markowitz wobble.

  16. Inner Core Rotation from Geomagnetic Westward Drift and a Stationary Spherical Vortex in Earth's Core

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1998-01-01

    The idea that geomagnetic westward drift indicates convective leveling of the planetary momentum gradient within Earth's core is pursued in search of a differentially rotating mean state, upon which various oscillations and secular effects might be superimposed. The desired state conforms to roughly spherical boundary conditions, minimizes dissipative interference with convective cooling in the bulk of the core, yet may aid core cooling by depositing heat in the uppermost core and lower mantle. The variational calculus of stationary dissipation applied to a spherical vortex within the core yields an interesting differential rotation profile, akin to spherical Couette flow bounded by thin Hartmann layers. Four boundary conditions are required. To concentrate shear induced dissipation near the core-mantle boundary, these are taken to be: (i) no-slip at the core-mantle interface; (ii) geomagnetically estimated bulk westward flow at the base of the core-mantle boundary layer; (iii) no-slip at the inner-outer core interface; and, to describe magnetic locking of the inner core to the deep outer core; (iv) hydrodynamically stress-free at the inner-outer core boundary. By boldly assuming the axial core angular momentum anomaly to be zero, the super-rotation of the inner core relative to the mantle is calculated to be at most 1.5 deg./yr.

  17. Inner Core Rotation from Geomagnetic Westward Drift and a Stationary Spherical Vortex in Earth's Core

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.

    1999-01-01

    The idea that geomagnetic westward drift indicates convective leveling of the planetary momentum gradient within Earth's core is pursued in search of a differentially rotating mean state, upon which various oscillations and secular effects might be superimposed. The desired state conforms to roughly spherical boundary conditions, minimizes dissipative interference with convective cooling in the bulk of the core, yet may aid core cooling by depositing heat in the uppermost core and lower mantle. The variational calculus of stationary dissipation applied to a spherical vortex within the core yields an interesting differential rotation profile akin to spherical Couette flow bounded by thin Hartmann layers. Four boundary conditions are required. To concentrate shear induced dissipation near the core-mantle boundary, these are taken to be: (i) no-slip at the core-mantle interface; (ii) geomagnetically estimated bulk westward flow at the base of the core-mantle boundary layer; (iii) no-slip at the inner-outer core interface; and, to describe magnetic locking of the inner core to the deep outer core, (iv) hydrodynamically stress-free at the inner-outer core boundary. By boldly assuming the axial core angular momentum anomaly to be zero, the super-rotation of the inner core is calculated to be at most 1.5 degrees per year.

  18. A K-6 Computational Thinking Curriculum Framework: Implications for Teacher Knowledge

    ERIC Educational Resources Information Center

    Angeli, Charoula; Voogt, Joke; Fluck, Andrew; Webb, Mary; Cox, Margaret; Malyn-Smith, Joyce; Zagami, Jason

    2016-01-01

    Adding computer science as a separate school subject to the core K-6 curriculum is a complex issue with educational challenges. The authors herein address two of these challenges: (1) the design of the curriculum based on a generic computational thinking framework, and (2) the knowledge teachers need to teach the curriculum. The first issue is…

  19. Computer Science Concept Inventories: Past and Future

    ERIC Educational Resources Information Center

    Taylor, C.; Zingaro, D.; Porter, L.; Webb, K. C.; Lee, C. B.; Clancy, M.

    2014-01-01

    Concept Inventories (CIs) are assessments designed to measure student learning of core concepts. CIs have become well known for their major impact on pedagogical techniques in other sciences, especially physics. Presently, there are no widely used, validated CIs for computer science. However, considerable groundwork has been performed in the form…

  20. Enhancements to the Image Analysis Tool for Core Punch Experiments and Simulations (vs. 2014)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John Edward; Unal, Cetin

    A previous paper (Hogden & Unal, 2012, Image Analysis Tool for Core Punch Experiments and Simulations) described an image processing computer program developed at Los Alamos National Laboratory. This program has proven useful, so development has continued. In this paper we describe enhancements to the program as of 2014.

  1. Computational Models of Relational Processes in Cognitive Development

    ERIC Educational Resources Information Center

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…

  2. Time-efficient simulations of tight-binding electronic structures with Intel Xeon Phi™ many-core processors

    NASA Astrophysics Data System (ADS)

    Ryu, Hoon; Jeong, Yosang; Kang, Ji-Hoon; Cho, Kyu Nam

    2016-12-01

    Modelling of multi-million atomic semiconductor structures is important as it not only predicts properties of physically realizable novel materials, but can accelerate advanced device designs. This work describes a new Technology Computer-Aided Design (TCAD) tool for nanoelectronics modelling, which uses a sp3d5s* tight-binding approach to describe multi-million atomic structures, and simulates electronic structures with high performance computing (HPC), including atomic effects such as alloy and dopant disorders. Named the Quantum simulation tool for Advanced Nanoscale Devices (Q-AND), the tool shows good scalability on traditional multi-core HPC clusters, implying a strong capability for large-scale electronic structure simulations, with particularly remarkable performance enhancement on the latest clusters of Intel Xeon Phi™ coprocessors. A review of a recent modelling study conducted to understand an experimental work on highly phosphorus-doped silicon nanowires is presented to demonstrate the utility of Q-AND. Having been developed via an Intel Parallel Computing Center project, Q-AND will be opened to the public to establish a sound framework for nanoelectronics modelling with advanced HPC clusters of a many-core base. With details of the development methodology and an exemplary study of dopant electronics, this work presents a practical guideline for TCAD development for researchers in the field of computational nanoelectronics.
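
    As a toy analogue of the tight-binding step that such a tool parallelizes, the Python/NumPy sketch below builds and diagonalizes a single-orbital nearest-neighbour chain Hamiltonian (far simpler than the sp3d5s* basis named above).

        import numpy as np

        def chain_hamiltonian(n_atoms, onsite=0.0, hopping=-1.0):
            # One s-like orbital per atom, nearest-neighbour hopping only.
            H = np.diag([onsite] * n_atoms).astype(float)
            for i in range(n_atoms - 1):
                H[i, i + 1] = H[i + 1, i] = hopping
            return H

        energies = np.linalg.eigvalsh(chain_hamiltonian(8))
        print(energies)  # approaches the cosine-shaped band as the chain grows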

  3. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  4. Decay Heat Removal from a GFR Core by Natural Convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Wesley C.; Hejzlar, Pavel; Driscoll, Michael J.

    2004-07-01

    One of the primary challenges for Gas-cooled Fast Reactors (GFR) is decay heat removal after a loss of coolant accident (LOCA). Due to the fact that thermal gas cooled reactors currently under design rely on passive mechanisms to dissipate decay heat, there is a strong motivation to accomplish GFR core cooling through natural phenomena. This work investigates the potential of post-LOCA decay heat removal from a GFR core to a heat sink using an external convection loop. A model was developed in the form of the LOCA-COLA (Loss of Coolant Accident - Convection Loop Analysis) computer code as a means for 1D steady state convective heat transfer loop analysis. The results show that decay heat removal by means of gas cooled natural circulation is feasible under elevated post-LOCA containment pressure conditions. (authors)

  5. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE PAGES

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...

    2017-07-12

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
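
    A minimal sketch of the surrogate idea, in Python with scikit-learn: fit a small multilayer perceptron to samples of an expensive model, then evaluate the fit orders of magnitude faster. The "expensive model" below is a stand-in function, not TGLF or EPED1.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(2000, 3))           # stand-in input parameters
        y = np.sin(X[:, 0]) * np.exp(X[:, 1]) + X[:, 2]  # stand-in "flux" response

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        surrogate.fit(X[:1500], y[:1500])                # train on sampled model runs
        print("held-out R^2:", surrogate.score(X[1500:], y[1500:]))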

  6. On the uncertain nature of the core of α Cen A

    NASA Astrophysics Data System (ADS)

    Bazot, M.; Christensen-Dalsgaard, J.; Gizon, L.; Benomar, O.

    2016-08-01

    High-quality astrometric, spectroscopic, interferometric and, importantly, asteroseismic observations are available for α Cen A, which is the closest binary star system to Earth. Taking all these constraints into account, we study the internal structure of the star by means of theoretical modelling. Using the Aarhus STellar Evolution Code (ASTEC) and the tools of Computational Bayesian Statistics, in particular a Markov chain Monte Carlo algorithm, we perform statistical inferences for the physical characteristics of the star. We find that α Cen A has a probability of approximately 40 per cent of having a convective core. This probability drops to a few per cent if one considers reduced rates for the 14N(p,γ)15O reaction. These convective cores have fractional radii less than 8 per cent when overshoot is neglected. Including overshooting also leads to the possibility of a convective core mostly sustained by the ppII chain energy output. We finally show that roughly 30 per cent of the stellar models describing α Cen A are in the subgiant regime.
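
    For orientation, a minimal Metropolis sampler in Python, sketching the MCMC machinery behind such Bayesian inference; the toy Gaussian log-posterior stands in for the ASTEC-based likelihood.

        import math, random

        def log_posterior(theta):
            return -0.5 * (theta - 1.0) ** 2  # toy target: a normal centred on 1

        def metropolis(n_steps, step=0.5, theta=0.0):
            samples = []
            lp = log_posterior(theta)
            for _ in range(n_steps):
                proposal = theta + random.gauss(0.0, step)
                lp_prop = log_posterior(proposal)
                # Accept with probability min(1, posterior ratio).
                if random.random() < math.exp(min(0.0, lp_prop - lp)):
                    theta, lp = proposal, lp_prop
                samples.append(theta)
            return samples

        draws = metropolis(20000)
        print(sum(draws) / len(draws))  # posterior mean estimate, close to 1.0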

  7. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  8. Self-consistent core-pedestal transport simulations with neural network accelerated models

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  9. Field tests demonstrating reduced activity of ivermectin and moxidectin against small strongyles in horses on 14 farms in Central Kentucky in 2007-2009.

    PubMed

    Lyons, Eugene T; Tolliver, Sharon C; Collins, Sandra S; Ionita, Mariana; Kuzmina, Tetiana A; Rossano, Mary

    2011-02-01

    Efficacy of ivermectin (IVM) and moxidectin (MOX) against small strongyles was evaluated in horses (n=363) in field tests on 14 farms in Central Kentucky between 2007 and 2009. Most of the horses were yearlings but a few were weanlings and mares. The number of horses treated with IVM was 255 and the number treated with MOX was 108. Horses on six farms were allotted into two groups, one treated with each of the two drugs, whereas horses on the other eight farms were treated with only one of the two drugs--IVM on six farms and MOX on two farms. Compared with initial use of IVM and MOX, strongyle eggs per gram of feces (EPGs) returned almost twice as quickly after treatment of horses on all of the farms. IVM has been used much more extensively in this geographical area than MOX. Reduced activity of MOX was evident even on farms with rare or no apparent previous use of MOX, but with probable extensive use of IVM.
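
    Efficacy evaluations of this kind typically rest on fecal egg count reduction arithmetic; the Python sketch below uses the standard group-mean reduction formula with invented counts (an assumed formulation, not taken from the record).

        def fecr_percent(pre_epg, post_epg):
            # FECR% = 100 * (1 - mean post-treatment EPG / mean pre-treatment EPG).
            pre = sum(pre_epg) / len(pre_epg)
            post = sum(post_epg) / len(post_epg)
            return 100.0 * (1.0 - post / pre)

        print(fecr_percent(pre_epg=[850, 1200, 640], post_epg=[10, 45, 5]))  # ~97.8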

  10. Barrel maturation, oak alternatives and micro-oxygenation: influence on red wine aging and quality.

    PubMed

    Oberholster, A; Elmendorf, B L; Lerno, L A; King, E S; Heymann, H; Brenneman, C E; Boulton, R B

    2015-04-15

    The impact of micro-oxygenation (MOX) in conjunction with a variety of oak alternatives on phenolic composition and red wine aging was investigated and compared with traditional barrel aging. Although several studies have concluded that MOX gives similar results to barrel aging, few have compared them directly and none have directly compared MOX with and without wood alternatives against barrel aging. Results confirmed that MOX had a positive effect on colour density, even after 5 months of bottle aging. This is supported by an increase in polymeric phenol and pigment content, not only with aging but also in the MOX treatments compared to the barrel-matured wine. Descriptive analysis showed that MOX in combination with wood alternatives such as oak chips and staves can mimic short-term (six-month) barrel aging in new American and French oak barrels with regard to sensory characteristics.

  11. Palaeointensity, core thermal conductivity and the unknown age of the inner core

    NASA Astrophysics Data System (ADS)

    Smirnov, Aleksey V.; Tarduno, John A.; Kulakov, Evgeniy V.; McEnroe, Suzanne A.; Bono, Richard K.

    2016-05-01

    Data on the evolution of Earth's magnetic field intensity are important for understanding the geodynamo and planetary evolution. However, the paleomagnetic record in rocks may be adversely affected by many physical processes, which must be taken into account when analysing the palaeointensity database. This is especially important in the light of an ongoing debate regarding core thermal conductivity values, and how these relate to the Precambrian geodynamo. Here, we demonstrate that several data sets in the Precambrian palaeointensity database overestimate the true paleofield strength due to the presence of non-ideal carriers of palaeointensity signals and/or viscous re-magnetizations. When the palaeointensity overestimates are removed, the Precambrian database does not indicate a robust change in geomagnetic field intensity during the Mesoproterozoic. These findings call into question the recent claim that the solid inner core formed in the Mesoproterozoic, hence constraining the thermal conductivity in the core to `moderate' values. Instead, our analyses indicate that the presently available palaeointensity data are insufficient in number and quality to constrain the timing of solid inner core formation, or the outstanding problem of core thermal conductivity. Very young or very old inner core ages (and attendant high or low core thermal conductivity values) are consistent with the presently known history of Earth's field strength. More promising available data sets that reflect long-term core structure are geomagnetic reversal rate and field morphology. The latter suggests changes that may reflect differences in Archean to Proterozoic core stratification, whereas the former suggest an interval of geodynamo hyperactivity at ca. 550 Ma.

  12. An improved heat transfer configuration for a solid-core nuclear thermal rocket engine

    NASA Technical Reports Server (NTRS)

    Clark, John S.; Walton, James T.; Mcguire, Melissa L.

    1992-01-01

    Interrupted flow, impingement cooling, and axial power distribution are employed to enhance the heat-transfer configuration of a solid-core nuclear thermal rocket engine. Impingement cooling is introduced to increase the local heat-transfer coefficients between the reactor material and the coolants. Increased fuel loading is used at the inlet end of the reactor to enhance heat-transfer capability where the temperature differences are the greatest. A thermal-hydraulics computer program for an unfueled NERVA reactor core is employed to analyze the proposed configuration with attention given to uniform fuel loading, number of channels through the impingement wafers, fuel-element length, mass-flow rate, and wafer gap. The impingement wafer concept (IWC) is shown to have heat-transfer characteristics that are better than those of the NERVA-derived reactor at 2500 K. The IWC concept is argued to be an effective heat-transfer configuration for solid-core nuclear thermal rocket engines.

  13. Multiple Core Galaxies

    NASA Technical Reports Server (NTRS)

    Miller, R.H.; Morrison, David (Technical Monitor)

    1994-01-01

    Nuclei of galaxies often show complicated density structures and perplexing kinematic signatures. In the past we have reported numerical experiments indicating a natural tendency for galaxies to show nuclei offset with respect to nearby isophotes and for the nucleus to have a radial velocity different from the galaxy's systemic velocity. Other experiments show normal mode oscillations in galaxies with large amplitudes. These oscillations do not damp appreciably over a Hubble time. The common thread running through all these is that galaxies often show evidence of ringing, bouncing, or sloshing around in unexpected ways, even though they have not been disturbed by any external event. Recent observational evidence shows yet another phenomenon indicating the dynamical complexity of central regions of galaxies: multiple cores (M31, Markarian 315 and 463 for example). These systems can hardly be static. We noted long-lived multiple core systems in galaxies in numerical experiments some years ago, and we have more recently followed up with a series of experiments on multiple core galaxies, starting with two cores. The relevant parameters are the energy in the orbiting clumps, their relative masses, the (local) strength of the potential well representing the parent galaxy, and the number of cores. We have studied the dependence of the merger rates and the nature of the final merger product on these parameters. Individual cores survive much longer in stronger background potentials. Cores can survive for a substantial fraction of a Hubble time if they travel on reasonable orbits.

  14. Waveguide to Core: A New Approach to RF Modelling

    NASA Astrophysics Data System (ADS)

    Wright, John; Shiraiwa, Syunichi; Rf-Scidac Team

    2017-10-01

    A new technique for the calculation of RF waves in toroidal geometry enables the simultaneous incorporation of antenna geometry, plasma facing components (PFCs), the scrape off-layer (SOL) and core propagation [Shiraiwa, NF 2017]. Calculations with this technique naturally capture wave propagation in the SOL and its interactions with non-conforming PFCs permitting self-consistent calculation of core absorption and edge power loss. The main motivating insight is that the core plasma region having closed flux surfaces requires a hot plasma dielectric while the open field line region in the scrape-off layer needs only a cold plasma dielectric. Spectral approaches work well for the former and finite elements work well for the latter. The validity of this process follows directly from the superposition principle of Maxwell's equations making this technique exact. The method is independent of the codes or representations used and works for any frequency regime. Applications to minority heating in Alcator C-Mod and ITER and high harmonic heating in NSTX-U will be presented in single pass and multi-pass regimes. Support from DoE Grant Number DE-FG02-91-ER54109 (theory and computer resources) and DE-FC02-01ER54648 (RF SciDAC).

  15. Toward Connecting Core-Collapse Supernova Theory with Observations: Nucleosynthetic Yields and Distribution of Elements in a 15 M⊙ Blue Supergiant Progenitor with SN 1987A Energetics

    NASA Astrophysics Data System (ADS)

    Plewa, Tomasz; Handy, Timothy; Odrzywolek, Andrzej

    2014-09-01

    We compute and discuss the process of nucleosynthesis in a series of core-collapse explosion models of a 15 solar mass, blue supergiant progenitor. We obtain nucleosynthetic yields and study the evolution of the chemical element distribution from the moment of core bounce until the young supernova remnant phase. Our models show how the process of energy deposition due to radioactive decay modifies the dynamics and the core ejecta structure on small and intermediate scales. The results are compared against observations of young supernova remnants including Cas A and the recent data obtained for SN 1987A. The work has been supported by the NSF grant AST-1109113 and DOE grant DE-FG52-09NA29548. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the U.S. DoE under Contract No. DE-AC02-05CH11231.

  16. Long Valley Coring Project, Inyo County, California, 1998, preliminary stratigraphy and images of recovered core

    USGS Publications Warehouse

    Sackett, Penelope C.; McConnell, Vicki S.; Roach, Angela L.; Priest, Susan S.; Sass, John H.

    1999-01-01

    Phase III of the Long Valley Exploratory Well, the Long Valley Coring Project, obtained continuous core between the depths of 7,180 and 9,831 ft (2,188 to 2,996 meters) during the summer of 1998. This report contains a compendium of information designed to facilitate post-drilling research focussed on the study of the core. Included are a preliminary stratigraphic column compiled primarily from field observations and a general description of well lithology for the Phase III drilling interval. Also included are high-resolution digital photographs of every core box (10 feet per box) as well as scanned images of pieces of recovered core. The user can easily move from the stratigraphic column to corresponding core box photographs for any depth. From there, compressed, "unrolled" images of the individual core pieces (core scans) can be accessed. Those interested in higher-resolution core scans can go to archive CD-ROMs stored at a number of locations specified herein. All core is stored at the USGS Core Research Center in Denver, Colorado where it is available to researchers following the protocol described in this report. Preliminary examination of core provided by this report and the archive CD-ROMs should assist researchers in narrowing their choices when requesting core splits.

  17. [Core muscle chains activation during core exercises determined by EMG-a systematic review].

    PubMed

    Rogan, Slavko; Riesen, Jan; Taeymans, Jan

    2014-10-15

    Good core muscle strength is essential for daily life and sports activities. However, the mechanism by which core muscles are effectively activated by exercises is not yet precisely described in the literature. The aim of this systematic review was to evaluate the rate of activation, as measured by electromyography, of the ventral, lateral and dorsal core muscle chains during core (trunk) muscle exercises. A total of 16 studies were included. Exercises with a vertical starting position, such as the deadlift or squat, activated significantly more core muscles than exercises in a horizontal initial position.

  18. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid at the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through the novel solution we present in this work. In the traditional Grid computing paradigm, computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a concept neighbouring that of traditional Control Systems. As a further extension, we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces with a Control System that manages a series of computationally demanding scientific imaging tasks in an online manner. Instrument control at Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not easily be satisfied by our Grid's middleware (gLite) because of the latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not be done on a standard gLite Worker Node. Although certain core Grid components are avoided, the Grid Security Infrastructure is still utilised in the final solution.
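
    The steering pattern described above, where a computation is driven through a control-system device rather than submitted as a job to a Computing Element, can be sketched with the PyTango client library roughly as follows. The device name and command name are hypothetical placeholders, not Elettra's actual TANGO interface.

      import time
      import tango  # PyTango client library

      # Hypothetical device name; the real Elettra device names differ.
      recon = tango.DeviceProxy("ct/reconstruction/1")

      # Ask the control system to start a tomographic reconstruction task
      # (hypothetical command and argument).
      recon.command_inout("StartReconstruction", "/data/scan_0042")

      # Poll the device state instead of a grid job queue; this is the
      # near-real-time loop that gLite submission latencies prevented.
      while recon.state() == tango.DevState.RUNNING:
          time.sleep(1.0)

      print("final state:", recon.state())

    The design point is that the latency between "start" and "running" is a control-system round trip rather than a batch-queue wait, which is what makes the online imaging workflow feasible.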

  19. Core-core and core-valence correlation

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    The effect of 1s core correlation on properties and energy separations is analyzed using full configuration-interaction (FCI) calculations. The Be 1S-1P, C 3P-5S, and CH+ 1Sigma(+)-1Pi separations, as well as the CH+ spectroscopic constants, dipole moment, and 1Sigma(+)-1Pi transition dipole moment, have been studied. The results of the FCI calculations are compared to those obtained using approximate methods.
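
    The quantity being probed, the difference between correlating and freezing the 1s core in an FCI calculation, can be reproduced in miniature with PySCF, where a CASCI calculation with every orbital and electron active is equivalent to FCI. The CH+ geometry and minimal basis below are illustrative choices, not those of the original study.

      from pyscf import gto, scf, mcscf

      # CH+ in a minimal basis: 6 electrons, 6 orbitals.
      mol = gto.M(atom="C 0 0 0; H 0 0 1.13", charge=1,
                  basis="sto-3g", unit="Angstrom")
      mf = scf.RHF(mol).run()

      # All-electron "FCI": every orbital and electron active.
      fci_all = mcscf.CASCI(mf, 6, 6).run()

      # Frozen-core analogue: the carbon 1s orbital and its 2 electrons
      # are kept doubly occupied and excluded from the CI expansion.
      fci_frozen = mcscf.CASCI(mf, 5, 4).run()

      print("1s correlation contribution (Hartree):",
            fci_all.e_tot - fci_frozen.e_tot)

    Comparing such pairs of energies for two states gives the shift in an energy separation attributable to core and core-valence correlation, which is the kind of effect the paper quantifies.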

  20. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space and software services according to demand. It concentrates all the computing resources and manages them automatically through software, without human intervention. This frees application providers from tedious operational details and lets them concentrate on their business, which encourages innovation and reduces cost. The ultimate goal of cloud computing is to provide calculation, services and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas and the telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.
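
    Of the key technologies listed, the programming model is the easiest to illustrate compactly: cloud platforms of this era popularized MapReduce-style computation. The self-contained Python sketch below mimics the map, shuffle and reduce phases of a word count on one machine; it stands in for, and is far simpler than, a real distributed implementation such as Hadoop.

      from collections import defaultdict
      from itertools import chain

      def map_phase(document):
          """Map: emit (word, 1) pairs for one input document."""
          return [(word.lower(), 1) for word in document.split()]

      def shuffle_phase(pairs):
          """Shuffle: group values by key, as the framework would across nodes."""
          groups = defaultdict(list)
          for key, value in pairs:
              groups[key].append(value)
          return groups

      def reduce_phase(key, values):
          """Reduce: combine all values emitted for one key."""
          return key, sum(values)

      documents = ["cloud computing scales", "cloud storage and cloud services"]
      pairs = chain.from_iterable(map_phase(d) for d in documents)
      counts = dict(reduce_phase(k, v) for k, v in shuffle_phase(pairs).items())
      print(counts)   # {'cloud': 3, 'computing': 1, ...}

    The appeal of the model is that the user writes only the map and reduce functions; partitioning, scheduling and fault tolerance are handled by the platform, which is precisely the "obtain computing power according to demand" property the abstract describes.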