Sample records for lattice code karma

  1. KARMA4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Salloum, Maher; Lee, Jina

    2017-07-10

KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data that incorporates both process and observation error. It is designed for fitting time-series models for predictive purposes.
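The process/observation-error setup described in this record can be sketched in a few lines (a minimal illustration only, not KARMA4's actual API; the AR(1) model and all constants here are made up):

```python
import random

# Minimal sketch (not KARMA4's API): an AR(1) latent process with both
# process noise (on the hidden state) and observation noise, plus a
# naive one-step-ahead forecast from a least-squares AR coefficient.
random.seed(0)
phi, proc_sd, obs_sd, n = 0.8, 0.5, 0.3, 500

x, ys = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0, proc_sd)   # process error
    ys.append(x + random.gauss(0, obs_sd))   # observation error

# Least-squares estimate of phi fitted directly to the noisy observations.
num = sum(ys[t] * ys[t - 1] for t in range(1, n))
den = sum(ys[t - 1] ** 2 for t in range(1, n))
phi_hat = num / den

forecast = phi_hat * ys[-1]   # one-step-ahead point forecast
print(round(phi_hat, 3), round(forecast, 3))
```

Note that fitting directly to the noisy observations biases the AR coefficient toward zero (attenuation), which is exactly why jointly modeling process and observation error, as the record describes, matters.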

  2. A comparative study of Agni karma with Lauha, Tamra and PanchadhatuShalakas in Gridhrasi (Sciatica).

    PubMed

    Bakhashi, Babita; Gupta, S K; Rajagopala, Manjusha; Bhuyan, C

    2010-04-01

Sushruta has mentioned different methods of management of diseases, such as Bheshaja karma, Kshara karma, Agni karma, Shastra karma and Raktamokshana. The approach of Agni karma has been mentioned in the context of diseases like Arsha, Arbuda, Bhagandar, Sira, Snayu, Asthi, Sandhigata Vata Vikaras and Gridhrasi. Gridhrasi is regarded with alarm in society, as it is a pressing problem, especially in the lives of daily laborers. It is characterized by distinct pain starting from the Sphik Pradesha (gluteal region) and radiating down toward the Parshni Pratyanguli (foot region) of the affected leg. On the basis of its symptomatology, Gridhrasi may be equated with sciatica in modern parlance. In modern medicine, sciatica is managed only with potent analgesics or surgical interventions, which have their own limitations and adverse effects, whereas in Ayurveda various treatment modalities, like Siravedha, Agni karma, Basti Chikitsa and palliative medicines, are used successfully. Among these, the Agni karma procedure seems to be more effective, providing timely relief. Shalakas for Agni karma, made of different Dhatus like gold, silver, copper and iron, have been proposed for different stages of the disease. In the present work, a comparative study of Agni karma using iron, copper and the previously studied Panchadhatu Shalaka in Gridhrasi was conducted. A total of 22 patients were treated in three groups. Results of the study showed that Agni karma with the Panchadhatu Shalaka gave better results in combating the symptoms, especially Ruka and Tandra, while the Lauhadhatu Shalaka gave better results against Spandana and Gaurava. The Tamradhatu Shalaka, in turn, provided better effect in controlling symptoms like Toda, Stambha and Aruchi. Fifty percent of patients in the Panchadhatu Shalaka group (Group A) were completely relieved; in the Lauhadhatu Shalaka group (Group B) the success rate was 0%, and in the Tamradhatu Shalaka group (Group C) it was 14.28%. After analyzing the data, the Tamradhatu Shalaka was found to be more effective than the Lauha and Panchadhatu Shalakas.

  3. Diabetes Destiny in our Hands: Achieving Metabolic Karma.

    PubMed

    Kalra, Sanjay; Ved, Jignesh; Baruah, Manash P

    2017-01-01

Karma is the ancient Indian philosophy of cause and effect, which holds that an individual's intentions and actions both have consequences; none can escape the consequences of one's actions. Applying the principle of karma to medicine and healthcare highlights the significance of optimal and timely interventions at various stages of disease. A holistic approach to metabolic control in diabetes translates into improved clinical outcomes, as evident from the results of the STENO-2, EMPA-REG OUTCOME and LEADER trials. The principle of karma in the management of diabetes may have implications at the transgenerational level during pregnancy and nursing, at the individual patient level based on phenotype, and at the community level in preventive medicine. The concept of metabolic karma can be used as an effective motivational tool to encourage better health-care-seeking behavior and adherence to prescribed interventions.

  4. Synthetic Infrared Scene: Improving the KARMA IRSG Module and Signature Modelling Tool SMAT

    DTIC Science & Technology

    2011-03-01

…of engagements involving infrared seekers in the KARMA simulation environment. The work was carried out from November 2008 to March 2011. This contract report focuses on…

  5. Semi-automatic Data Integration using Karma

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standards, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-intensive process, and state-of-the-art artificial intelligence systems are unable to fully automate it using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user both to correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and it has been applied in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows users to import their own ontologies and datasets using widely used formats such as RDF, XML, CSV and JSON; it can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. 
We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.
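The column-to-ontology mapping idea behind this record can be sketched as follows (a toy illustration only: the `geo:` terms, row data, and model format are invented; Karma's real semantic-model representation is different):

```python
# Sketch of the idea behind a semantic model: each source column is
# mapped to a property of a target-ontology class, and rows are emitted
# as subject-property-value triples. All names here are hypothetical.
rows = [
    {"station": "A-101", "temp_c": "21.4", "lat": "34.02"},
    {"station": "B-202", "temp_c": "19.8", "lat": "34.10"},
]

semantic_model = {
    "class": "geo:Station",       # invented target-ontology class
    "key": "station",             # column used to mint the subject URI
    "properties": {"temp_c": "geo:temperature", "lat": "geo:latitude"},
}

def apply_model(rows, model):
    triples = []
    for row in rows:
        subject = f'{model["class"]}/{row[model["key"]]}'
        for col, prop in model["properties"].items():
            triples.append((subject, prop, row[col]))
    return triples

triples = apply_model(rows, semantic_model)
print(len(triples))  # 2 rows x 2 mapped properties = 4 triples
```

A tool like Karma interactively recommends and refines the `semantic_model` part; once it exists, applying it to the sources is mechanical, as above.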

  6. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.

We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.

  7. Lattice surgery on the Raussendorf lattice

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Paler, Alexandru; Devitt, Simon J.; Nori, Franco

    2018-07-01

    Lattice surgery is a method to perform quantum computation fault-tolerantly by using operations on boundary qubits between different patches of the planar code. This technique allows for universal planar code computation without eliminating the intrinsic two-dimensional nearest-neighbor properties of the surface code that eases physical hardware implementations. Lattice surgery approaches to algorithmic compilation and optimization have been demonstrated to be more resource efficient for resource-intensive components of a fault-tolerant algorithm, and consequently may be preferable over braid-based logic. Lattice surgery can be extended to the Raussendorf lattice, providing a measurement-based approach to the surface code. In this paper we describe how lattice surgery can be performed on the Raussendorf lattice and therefore give a viable alternative to computation using braiding in measurement-based implementations of topological codes.

  8. It is our destiny to die: the effects of mortality salience and culture-priming on fatalism and karma belief.

    PubMed

    Yen, Chih-Long

    2013-01-01

    The current study explores whether Asians use culture-specific belief systems to defend against their death anxiety. The effects of mortality salience (MS) and cultural priming on Taiwanese beliefs in fatalism and karma were investigated. Study 1 showed that people believe in fatalism and karma more following MS compared with the control condition. Study 2 found that the effect of MS on fatalism belief was stronger when Taiwanese were exposed to an Eastern cultural context than to a Western cultural context. However, a matched sample of Western participants did not show increased fatalism belief after either a West- or East-prime task. The present research provides evidence that Asians may use some culture-specific beliefs, particularly fatalism belief, to cope with their death awareness.

  9. KARMA: the observation preparation tool for KMOS

    NASA Astrophysics Data System (ADS)

    Wegner, Michael; Muschielok, Bernard

    2008-08-01

KMOS is a multi-object integral field spectrometer working in the near infrared which is currently being built for the ESO VLT by a consortium of UK and German institutes. It is capable of selecting up to 24 target fields for integral field spectroscopy simultaneously by means of 24 robotic pick-off arms. For the preparation of observations with KMOS, a dedicated preparation tool, KARMA ("KMOS Arm Allocator"), will be provided, which automatically optimizes the assignment of targets to these arms while taking target priorities and several mechanical and optical constraints into account. For this purpose two efficient algorithms, each coping with the underlying optimization problem in a different way, were developed. We present the concept and architecture of KARMA in general and the optimization algorithms in detail.
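The underlying allocation problem can be sketched as a constrained assignment (a toy brute-force version with invented targets, priorities and reachability sets; KARMA's actual algorithms and constraints are far more elaborate):

```python
from itertools import permutations

# Toy version of the arm-allocation problem: assign pick-off arms to
# targets so that total priority is maximized, subject to a per-arm
# reachability constraint. All data below is hypothetical.
targets = {"T1": 3, "T2": 2, "T3": 5, "T4": 1}   # target -> priority
reachable = {                                     # arm -> reachable targets
    "arm1": {"T1", "T3"},
    "arm2": {"T2", "T3", "T4"},
    "arm3": {"T1", "T2"},
}

def best_assignment(targets, reachable):
    arms = list(reachable)
    names = list(targets)
    best, best_score = None, -1
    # Brute force over injective target choices (fine at toy scale; a
    # real allocator needs dedicated optimization algorithms).
    for perm in permutations(names, len(arms)):
        if all(t in reachable[a] for a, t in zip(arms, perm)):
            score = sum(targets[t] for t in perm)
            if score > best_score:
                best, best_score = dict(zip(arms, perm)), score
    return best, best_score

assignment, score = best_assignment(targets, reachable)
print(assignment, score)
```

Brute force is exponential in the number of arms, which is why, at 24 arms, efficient dedicated algorithms of the kind the record mentions are required.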

  10. Applying the Karma Provenance tool to NASA's AMSR-E Data Production Stream

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Conover, H.; Regner, K.; Movva, S.; Goodman, H. M.; Pale, B.; Purohit, P.; Sun, Y.

    2010-12-01

Current procedures for capturing and disseminating provenance, or data product lineage, are limited in both what is captured and how it is disseminated to the science community. For example, the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) Science Investigator-led Processing System (SIPS) generates Level 2 and Level 3 data products for a variety of geophysical parameters. Data provenance and quality information for these data sets is either very general (e.g., user guides, a list of anomalous data receipt and processing conditions over the life of the missions) or difficult to access or interpret (e.g., quality flags embedded in the data, production history files not easily available to users). Karma is a provenance collection and representation tool designed and developed for data-driven workflows such as the production streams used to produce EOS standard products. Karma records uniform and usable provenance metadata independent of the processing system while minimizing both the modification burden on the processing system and the overall performance overhead. Karma collects both process and data provenance. The process provenance contains information about the workflow execution and the associated algorithm invocations. The data provenance captures metadata about the derivation history of the data product, including the algorithms used and the input data sources transformed to generate it. As part of an ongoing NASA-funded project, Karma is being integrated into the AMSR-E SIPS data production streams. Metadata gathered by the tool will be presented to data consumers as provenance graphs, which are useful in validating the workflows and determining the quality of the data product. This presentation will discuss design and implementation issues faced while incorporating a provenance tool into a structured data production flow. Prototype results will also be presented.
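The process-and-data provenance distinction described above can be sketched with a simple tracing wrapper (illustrative only; the Karma tool's real instrumentation differs, and the step names and data here are invented):

```python
import functools
import time

# Sketch of provenance capture: a decorator records each algorithm
# invocation (process provenance) together with its inputs and output
# (data provenance), without changing the workflow code itself.
provenance = []

def traced(step):
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            out = fn(*args, **kwargs)
            provenance.append({
                "step": step,                      # which algorithm ran
                "inputs": [repr(a) for a in args], # what it consumed
                "output": repr(out),               # what it produced
                "time": time.time(),
            })
            return out
        return wrapper
    return deco

@traced("calibrate")          # hypothetical Level-1 -> Level-2 step
def calibrate(raw):
    return [v * 0.5 for v in raw]

@traced("grid")               # hypothetical Level-2 -> Level-3 step
def grid(cal):
    return sum(cal) / len(cal)

level2 = grid(calibrate([2.0, 4.0, 6.0]))
print(level2, [p["step"] for p in provenance])
```

Chaining the recorded input/output identities is what turns this flat log into the provenance graph mentioned in the record.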

  11. Criticism and Study of the Astrology of the Eckankar Based on the Teachings of Islam

    ERIC Educational Resources Information Center

    Mahmoudi, Abdolreza; Shamsaie, Maryam; Kakaei, Hashem

    2017-01-01

The subject of astrology in the School of Eckankar has two main bases: Karma and reincarnation. Karma, or the law of action and reaction, can be called the moral basis of the Eckankar. The totality of this law is accepted by reason and tradition. But what casts doubt upon, and therefore seriously damages, this law would be a tight…

  12. Karma yoga: A path towards work in positive psychology

    PubMed Central

    Kumar, Arun; Kumar, Sanjay

    2013-01-01

Karma yoga is the path that leads to salvation through action. Salvation is the ultimate state of consciousness. Work is the central and defining characteristic of life. It may have intrinsic value, instrumental value, or both. Instrumental value includes incentive, dignity, power, etc., the results expected from the work. The Gita teaches us to do work without thinking of the result (work with intrinsic value). Attachment to the result leads to stress, competition and aggression. Stress in turn gives rise to heart ailments, depression and suicide. Positive psychology studies the factors and conditions leading to a pleasurable and satisfying life. Understanding and practising Karma yoga plays a similar role, leading an individual towards work and a satisfied life. It may thus play a unique role in the practical aspects of positive psychology, improving one's lifestyle and aiding in the treatment of stress disorders. PMID:23858246

  13. Parallel software for lattice N = 4 supersymmetric Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Schaich, David; DeGrand, Thomas

    2015-05-01

    We present new parallel software, SUSY LATTICE, for lattice studies of four-dimensional N = 4 supersymmetric Yang-Mills theory with gauge group SU(N). The lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing, up to additional potential terms included to stabilize numerical simulations. The software evolved from the MILC code for lattice QCD, and retains a similar large-scale framework despite the different target theory. Many routines are adapted from an existing serial code (Catterall and Joseph, 2012), which SUSY LATTICE supersedes. This paper provides an overview of the new parallel software, summarizing the lattice system, describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory. We discuss the parallel performance of the code, and highlight some notable aspects of the documentation for those interested in contributing to its future development.
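As an illustration of the basic data structure such lattice gauge codes share with their MILC ancestry, here is a plaquette measurement for random SU(2) link variables on a tiny two-dimensional lattice (a sketch only; SUSY LATTICE itself is parallel C code treating four-dimensional N = 4 SYM with SU(N) links):

```python
import numpy as np

# Lattice gauge fields live on links: links[x][y][mu] is the group
# element on the link leaving site (x, y) in direction mu (0 = x,
# 1 = y), with periodic boundaries. We measure the average plaquette
# of random ("hot start") SU(2) links on a 4x4 lattice.
rng = np.random.default_rng(1)
L = 4

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    # U = a0*I + i*(a1*sx + a2*sy + a3*sz), a unit quaternion -> SU(2)
    return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

links = [[[random_su2() for mu in range(2)]
          for y in range(L)] for x in range(L)]

def plaquette(x, y):
    # Ordered product of links around the elementary square at (x, y).
    u1 = links[x][y][0]
    u2 = links[(x + 1) % L][y][1]
    u3 = links[x][(y + 1) % L][0]
    u4 = links[x][y][1]
    return (u1 @ u2 @ u3.conj().T @ u4.conj().T).trace().real / 2

avg = sum(plaquette(x, y) for x in range(L) for y in range(L)) / L**2
print(round(avg, 4))  # near 0 for random links, 1 for a "cold" start
```

The normalized plaquette lies in [-1, 1]; production codes compute exactly this kind of ordered link product, just for larger groups in four dimensions and in parallel.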

  14. Self, other, and astrology: esoteric therapy in Sri Lanka.

    PubMed

    Perinbanayagam, R S

    1981-02-01

    HARRY STACK SULLIVAN'S argument that anxiety as a fundamental human experience is alleviated by the use of various procedures that he called "security operations" is used in this paper to examine the meaning of astrology in Sri Lanka. Astrology and the doctrine of karma provide the relevant framework in which various forms of misfortune are understood and handled. An examination of cases in Sri Lanka reveals that astrology and the doctrine of karma enable a person of that culture to create a number of structures which have a therapeutic effect.

  15. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

This paper presents a description of the methodologies and preliminary verification results of ROBIN, a new lattice physics code being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis integrate historical methods with methods that emerged very recently. Established methods such as equivalence theory for resonance treatment and the method of characteristics for neutron transport calculation are adopted, as in many of today's production-level LWR lattice codes; in addition, very useful new methods, such as the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log-linear rate constant power depletion method for Gd-bearing fuel, are implemented in the code. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)
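The method-of-characteristics transport step mentioned above can be sketched as a single-segment update (the textbook flat-source MOC relation, not ROBIN's implementation; all cross sections, sources and lengths here are invented):

```python
import math

# Along a characteristic ray through a flat-source region with total
# cross section sig_t and isotropic source q, the angular flux obeys
#   psi_out = psi_in * exp(-sig_t * s) + (q / sig_t) * (1 - exp(-sig_t * s))
def moc_segment(psi_in, sig_t, q, s):
    att = math.exp(-sig_t * s)
    return psi_in * att + (q / sig_t) * (1.0 - att)

# Track one characteristic through three regions (made-up data):
# each tuple is (sig_t [1/cm], source q, segment length s [cm]).
psi = 1.0
for sig_t, q, s in [(0.5, 0.2, 1.0), (1.2, 0.05, 0.4), (0.5, 0.2, 2.0)]:
    psi = moc_segment(psi, sig_t, q, s)
print(round(psi, 4))
```

A lattice code sweeps many such rays over all angles and regions and accumulates segment-averaged fluxes; the single-segment relation above is the kernel of that sweep.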

  16. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
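The local syndrome extraction that makes toric codes attractive can be sketched classically (bit-flip errors only; the lattice size and error location below are arbitrary):

```python
# Toy toric-code syndrome extraction: qubits live on the edges of an
# L x L periodic lattice, and each vertex check reports the parity of
# errors on its four incident edges. A single flipped qubit lights up
# exactly two checks, the endpoints of a length-1 error chain.
L = 4
# err[mu][x][y] = 1 if the qubit on the mu-directed edge at (x, y)
# flipped (mu = 0: edge toward +x, mu = 1: edge toward +y).
err = [[[0] * L for _ in range(L)] for mu in range(2)]
err[0][1][2] = 1   # flip one horizontal-edge qubit

def syndrome():
    lit = []
    for x in range(L):
        for y in range(L):
            # Parity of the four edges meeting at vertex (x, y),
            # with periodic (toric) boundary conditions.
            p = (err[0][x][y] ^ err[0][(x - 1) % L][y]
                 ^ err[1][x][y] ^ err[1][x][(y - 1) % L])
            if p:
                lit.append((x, y))
    return lit

print(syndrome())  # the two defects at the ends of the error chain
```

A decoder's job is then to pair up such defects with short correction chains, which is the purely local processing the thesis refers to.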

  17. Computational tools and lattice design for the PEP-II B-Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Irwin, J.; Nosochkov, Y.

    1997-02-01

Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
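The one-turn-map tracking idea can be sketched with a linear map (the nPB maps described above are nonlinear polynomial maps; here a plain rotation with an assumed tune stands in for them):

```python
import numpy as np

# Track a particle turn by turn with a one-turn map, then recover the
# tune from the turn-by-turn data with an FFT. In normalized coordinates
# a linear lattice's one-turn map is a rotation by 2*pi*nu.
nu = 0.31                                   # assumed fractional tune
mu = 2 * np.pi * nu
M = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])   # one-turn map

z = np.array([1e-3, 0.0])                   # initial (x, x')
turns = 256
xs = np.empty(turns)
for t in range(turns):
    xs[t] = z[0]
    z = M @ z

spectrum = np.abs(np.fft.rfft(xs))
k = 1 + int(np.argmax(spectrum[1:]))        # skip the DC bin
tune_est = k / turns                        # peak bin -> tune estimate
print(round(tune_est, 3))
```

Because the map is applied once per turn instead of tracking element by element, many working points can be scanned cheaply, which is what makes tune-plane surveys like those described above practical.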

  19. Implementing a strand of a scalable fault-tolerant quantum computing fabric.

    PubMed

    Chow, Jerry M; Gambetta, Jay M; Magesan, Easwar; Abraham, David W; Cross, Andrew W; Johnson, B R; Masluk, Nicholas A; Ryan, Colm A; Smolin, John A; Srinivasan, Srikanth J; Steffen, M

    2014-06-24

    With favourable error thresholds and requiring only nearest-neighbour interactions on a lattice, the surface code is an error-correcting code that has garnered considerable attention. At the heart of this code is the ability to perform a low-weight parity measurement of local code qubits. Here we demonstrate high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. With high-fidelity gates, we generate entanglement distributed across three superconducting qubits in a lattice where each code qubit is coupled to two bus resonators. Via high-fidelity measurement of the syndrome qubit, we deterministically entangle the code qubits in either an even or odd parity Bell state, conditioned on the syndrome qubit state. Finally, to fully characterize this parity readout, we develop a measurement tomography protocol. The lattice presented naturally extends to larger networks of qubits, outlining a path towards fault-tolerant quantum computing.
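The parity measurement at the heart of this experiment can be sketched with a tiny statevector simulation (three qubits, ideal gates; a conceptual sketch only, not the superconducting-qubit implementation):

```python
import numpy as np

# Two CNOTs copy the ZZ parity of the code qubits onto a syndrome
# qubit, so measuring the syndrome projects the code pair into a
# fixed-parity (even or odd) subspace. Qubit b is bit b of the index.
def cnot(state, control, target, n=3):
    out = state.copy()
    for i in range(2 ** n):
        if (i >> control) & 1:
            out[i ^ (1 << target)] = state[i]  # flip target amplitude
    return out

# Qubits 0, 1 = code qubits, qubit 2 = syndrome. Prepare |+>|+>|0>.
state = np.zeros(8, dtype=complex)
for i in (0, 1, 2, 3):            # qubit 2 stays |0>
    state[i] = 0.5

state = cnot(state, 0, 2)
state = cnot(state, 1, 2)

# Probability that the syndrome qubit reads 1 (odd parity); every
# surviving amplitude has its syndrome bit equal to the code parity.
p_odd = sum(abs(state[i]) ** 2 for i in range(8) if (i >> 2) & 1)
print(p_odd)
```

Conditioning on the syndrome outcome leaves the code qubits in the corresponding even- or odd-parity Bell state, which is the deterministic entanglement-by-measurement effect the record describes.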

  20. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200

  2. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge to apply both MIMO and ECC into wireless systems is on... illustrates the performance of coded LR-aided detectors.

  3. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

This note presents the chromaticity calculations and code-comparison results for the XLS (x-ray lithography source; Chasman-Green, XUV Cosy lattice) and SXLS (2-magnet, 4 T) lattices, obtained with standard beam-optics codes including SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.

  4. A positive view on road safety: Can 'car karma' contribute to safe driving styles?

    PubMed

    Kleisen, Lucienne M B

    2013-01-01

Many studies in the field of road safety are in effect studies of road unsafety, since the field generally concentrates on traffic crashes, crash risk, and aberrant driving behaviour, especially in relation to young drivers. However, this study shows there is scope for thinking about driving and driver training from a different vantage point, that is, in terms of safe or normal driving. The findings are reported from four group interviews with young drivers (18-25 years of age); the young drivers discussed their ideas of safe driving and their reasons for using (or not using) safe driving styles. The data show a type of optimistic thinking among young drivers, which they call 'car karma'. This finding offers an opportunity to reconceptualise driving in a way that is focused on normal, safe driving styles, a topic that has received less attention in the past. The paper argues that a greater focus on safe driving styles could be more conducive to young drivers actually driving safely than focusing on, for instance, crashes, which at an individual level are relatively rare (Elander et al., 1993, p. 277). Based on this empirical research, a first positively stated definition of road safety is proposed, grounded in the notion of 'car karma'. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Zebra: An advanced PWR lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the subgroup method. The transport solver is the Auto-MOC code, a self-developed code based on the method of characteristics and customization of the AutoCAD software. The whole code is well organized in a modular software structure. Numerical results from the validation of the code demonstrate that it has good precision and high efficiency. (authors)

  6. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Kim, Isaac H.

    2011-05-01

We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits equals the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  7. Interkingdom Cross-Feeding of Ammonium from Marine Methylamine-Degrading Bacteria to the Diatom Phaeodactylum tricornutum.

    PubMed

    Suleiman, Marcel; Zecher, Karsten; Yücel, Onur; Jagmann, Nina; Philipp, Bodo

    2016-12-15

Methylamines occur ubiquitously in the oceans and can serve as carbon, nitrogen, and energy sources for heterotrophic bacteria from different phylogenetic groups within the marine bacterioplankton. Diatoms, which constitute a large part of the marine phytoplankton, are believed to be incapable of using methylamines as a nitrogen source. As diatoms are typically associated with heterotrophic bacteria, the hypothesis arose that methylotrophic bacteria may provide ammonium to diatoms by degrading methylamines. This hypothesis was investigated with the diatom Phaeodactylum tricornutum and monomethylamine (MMA) as the substrate. Bacteria supporting photoautotrophic growth of P. tricornutum with MMA as the sole nitrogen source could readily be isolated from seawater. Two strains, Donghicola sp. strain KarMa, which harbored genes for both monomethylamine dehydrogenase and the N-methylglutamate pathway, and Methylophaga sp. strain M1, which catalyzed MMA oxidation by MMA dehydrogenase, were selected for further characterization. While strain M1 grew with MMA as the sole substrate, strain KarMa could utilize MMA as a nitrogen source only when, e.g., glucose was provided as a carbon source. With both strains, release of ammonium was detected during MMA utilization. In coculture with P. tricornutum, strain KarMa supported photoautotrophic growth with 2 mM MMA to the same extent as with the equimolar amount of NH4Cl. In coculture with strain M1, photoautotrophic growth of P. tricornutum was also supported, but to a much lower degree than by strain KarMa. This proof-of-principle study with a synthetic microbial community suggests that interkingdom cross-feeding of ammonium from methylamine-degrading bacteria is a contribution to phytoplankton growth that has been overlooked so far. Interactions between diatoms and heterotrophic bacteria are important for marine carbon cycling. In this study, a novel interaction is described. 
Bacteria able to degrade monomethylamine, which is a ubiquitous organic nitrogen compound in marine environments, can provide ammonium to diatoms. This interkingdom metabolite transfer enables growth under photoautotrophic conditions in coculture, which would not be possible in the respective monocultures. This proof-of-principle study calls attention to a so far overlooked contribution to phytoplankton growth. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  8. Interkingdom Cross-Feeding of Ammonium from Marine Methylamine-Degrading Bacteria to the Diatom Phaeodactylum tricornutum

    PubMed Central

    Suleiman, Marcel; Zecher, Karsten; Yücel, Onur; Jagmann, Nina

    2016-01-01

    ABSTRACT Methylamines occur ubiquitously in the oceans and can serve as carbon, nitrogen, and energy sources for heterotrophic bacteria from different phylogenetic groups within the marine bacterioplankton. Diatoms, which constitute a large part of the marine phytoplankton, are believed to be incapable of using methylamines as a nitrogen source. As diatoms are typically associated with heterotrophic bacteria, the hypothesis arose that methylotrophic bacteria may provide ammonium to diatoms by degradation of methylamines. This hypothesis was investigated with the diatom Phaeodactylum tricornutum and monomethylamine (MMA) as the substrate. Bacteria supporting photoautotrophic growth of P. tricornutum with MMA as the sole nitrogen source could readily be isolated from seawater. Two strains, Donghicola sp. strain KarMa, which harbored genes for both monomethylamine dehydrogenase and the N-methylglutamate pathway, and Methylophaga sp. strain M1, which catalyzed MMA oxidation by MMA dehydrogenase, were selected for further characterization. While strain M1 grew with MMA as the sole substrate, strain KarMa could utilize MMA as a nitrogen source only when, e.g., glucose was provided as a carbon source. With both strains, release of ammonium was detected during MMA utilization. In coculture with P. tricornutum, strain KarMa supported photoautotrophic growth with 2 mM MMA to the same extent as with the equimolar amount of NH4Cl. In coculture with strain M1, photoautotrophic growth of P. tricornutum was also supported, but to a much lower degree than by strain KarMa. This proof-of-principle study with a synthetic microbial community suggests that interkingdom cross-feeding of ammonium from methylamine-degrading bacteria is a contribution to phytoplankton growth which has been overlooked so far. IMPORTANCE Interactions between diatoms and heterotrophic bacteria are important for marine carbon cycling. In this study, a novel interaction is described. 
Bacteria able to degrade monomethylamine, which is a ubiquitous organic nitrogen compound in marine environments, can provide ammonium to diatoms. This interkingdom metabolite transfer enables growth under photoautotrophic conditions in coculture, which would not be possible in the respective monocultures. This proof-of-principle study calls attention to a so far overlooked contribution to phytoplankton growth. PMID:27694241

  9. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
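    The KARMA mechanics described above can be sketched in a few lines. The simulation below is a hedged illustration, not the authors' specification: it assumes the unit interval (0, 1), a logit link, a KARMA(1,1)-style predictor, and a Kumaraswamy shape b solved from a fixed shape a so that the conditional median matches the link; all parameter values and function names are illustrative.

```python
import math
import random

def kumaraswamy_sample(rng, median, a=2.0):
    """Inverse-CDF draw of a Kumaraswamy(a, b) variate on (0, 1), with the
    shape b solved so that the distribution's median equals `median`."""
    b = math.log(0.5) / math.log(1.0 - median ** a)
    u = rng.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def simulate_karma(n, alpha=0.1, phi=0.5, theta=0.2, a=2.0, seed=42):
    """Simulate a KARMA(1,1)-style series: the conditional median follows a
    logit-linear recursion with one autoregressive and one moving average term."""
    rng = random.Random(seed)
    logit = lambda y: math.log(y / (1.0 - y))
    inv_logit = lambda e: 1.0 / (1.0 + math.exp(-e))
    y_prev, r_prev = 0.5, 0.0
    series = []
    for _ in range(n):
        eta = alpha + phi * logit(y_prev) + theta * r_prev    # linear predictor
        median = min(max(inv_logit(eta), 1e-6), 1.0 - 1e-6)   # link, clamped
        y = kumaraswamy_sample(rng, median, a)
        y = min(max(y, 1e-9), 1.0 - 1e-9)    # keep strictly inside (0, 1)
        r_prev = logit(y) - eta              # moving average error on link scale
        y_prev = y
        series.append(y)
    return series

series = simulate_karma(200)
print(len(series), 0.0 < min(series) and max(series) < 1.0)  # 200 True
```

    In the paper, the fixed parameters used here would instead be estimated by conditional maximum likelihood.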

  10. Management of internal hemorrhoids by Kshara karma: An educational case report.

    PubMed

    Mahapatra, Anita; Srinivasan, A; Sujithra, R; Bhat, Ramesh P

    2012-07-01

    A 66-year-old male patient came to the anorectal clinic, Outpatient department, AVT Institute for Advanced Research, Coimbatore, Tamil Nadu, with complaints of a prolapsing pile mass during defecation and bleeding while passing stool. The case was diagnosed as "Raktarsha" - II degree internal hemorrhoids at the 11 and 7 o'clock positions, deeply situated, projecting, caused by pitta and rakta, with a bleeding tendency. Kshara karma (application of a caustic alkaline paste) was performed on the internal hemorrhoids under local anesthesia. The pile mass and per-rectal bleeding resolved in 8 days, and the patient was relieved of all symptoms within 21 days. No complications were reported after the procedure. The patient was followed up regularly from 2004 onward, and proctoscopic examination did not reveal any evidence of recurrence of the hemorrhoids.

  11. Management of internal hemorrhoids by Kshara karma: An educational case report

    PubMed Central

    Mahapatra, Anita; Srinivasan, A.; Sujithra, R.; Bhat, Ramesh P.

    2012-01-01

    A 66-year-old male patient came to the anorectal clinic, Outpatient department, AVT Institute for Advanced Research, Coimbatore, Tamil Nadu, with complaints of a prolapsing pile mass during defecation and bleeding while passing stool. The case was diagnosed as "Raktarsha" - II degree internal hemorrhoids at the 11 and 7 o'clock positions, deeply situated, projecting, caused by pitta and rakta, with a bleeding tendency. Kshara karma (application of a caustic alkaline paste) was performed on the internal hemorrhoids under local anesthesia. The pile mass and per-rectal bleeding resolved in 8 days, and the patient was relieved of all symptoms within 21 days. No complications were reported after the procedure. The patient was followed up regularly from 2004 onward, and proctoscopic examination did not reveal any evidence of recurrence of the hemorrhoids. PMID:23125506

  12. Quantum computing with Majorana fermion codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  13. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Understanding suffering and giving compassion: the reach of socially engaged Buddhism into China.

    PubMed

    Kuah-Pearce, Khun Eng

    2014-01-01

    This paper will explore the social engagement of Buddhists through their active voluntary works - works that result in the development of a religious philanthropic culture. Through three case examples, this paper will examine how the sangha and individual Buddhists understand social suffering and compassion and attempt to integrate their understanding of Buddhist virtues and values in their daily life where the performance of voluntary works is seen as Buddhist spiritualism. In this process, the individuals seek to understand the key principles of Buddhism that are of direct relevance to their daily existence and their quest to be a compassionate self. Foremost are two notions of yebao (karma) and gan-en (gratitude) and how through compassionate practices and gratitude for those who accepted compassionate acts, they would be rewarded with good karma. Here, pursuing compassionate acts and the alleviation of social suffering is the pursuit of this-worldly spiritualism.

  15. Trellis coding with multidimensional QAM signal sets

    NASA Technical Reports Server (NTRS)

    Pietrobon, Steven S.; Costello, Daniel J.

    1993-01-01

    Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.

  16. Lattice Truss Structural Response Using Energy Methods

    NASA Technical Reports Server (NTRS)

    Kenner, Winfred Scottson

    1996-01-01

    A deterministic methodology is presented for developing closed-form deflection equations for two-dimensional and three-dimensional lattice structures. Four types of lattice structures are studied: beams, plates, shells and soft lattices. Castigliano's second theorem, which entails the total strain energy of a structure, is utilized to generate highly accurate results. Derived deflection equations provide new insight into the bending and shear behavior of the four types of lattices, in contrast to classic solutions of similar structures. Lattice derivations utilizing kinetic energy are also presented, and used to examine the free vibration response of simple lattice structures. Derivations utilizing finite element theory for unique lattice behavior are also presented and validated using the finite element analysis code EAL.
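    Castigliano's second theorem, the workhorse of the derivations above, gives the deflection at a load point as the partial derivative of total strain energy with respect to that load, δ = ∂U/∂P. A minimal numerical check for a tip-loaded cantilever beam (illustrative values for P, L, and EI; not taken from the thesis):

```python
def strain_energy(P, L=2.0, EI=10.0, n=20000):
    """Bending strain energy U = integral of M(x)^2 / (2 EI) over 0..L for a
    cantilever with tip load P, where M(x) = P (L - x); midpoint-rule quadrature."""
    dx = L / n
    U = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        M = P * (L - x)              # bending moment at x
        U += M * M / (2.0 * EI) * dx
    return U

P, L, EI = 3.0, 2.0, 10.0
h = 1e-4
# Castigliano's second theorem: deflection = dU/dP (central difference here)
delta = (strain_energy(P + h) - strain_energy(P - h)) / (2 * h)
exact = P * L**3 / (3 * EI)          # classical closed-form tip deflection
print(round(delta, 6), round(exact, 6))  # 0.8 0.8
```

    The energy derivative reproduces the classical P·L³/(3EI) result, which is the kind of closed-form deflection equation the thesis derives for entire lattice trusses.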

  17. Simulation of gaseous diffusion in partially saturated porous media under variable gravity with lattice Boltzmann methods

    NASA Technical Reports Server (NTRS)

    Chau, Jessica Furrer; Or, Dani; Sukop, Michael C.; Steinberg, S. L. (Principal Investigator)

    2005-01-01

    Liquid distributions in unsaturated porous media under different gravitational accelerations and corresponding macroscopic gaseous diffusion coefficients were investigated to enhance understanding of plant growth conditions in microgravity. We used a single-component, multiphase lattice Boltzmann code to simulate liquid configurations in two-dimensional porous media at varying water contents for different gravity conditions and measured gas diffusion through the media using a multicomponent lattice Boltzmann code. The relative diffusion coefficients (D_rel) for simulations with and without gravity as functions of air-filled porosity were in good agreement with measured data and established models. We found significant differences in liquid configuration in porous media, leading to reductions in D_rel of up to 25% under zero gravity. The study highlights potential applications of the lattice Boltzmann method for rapid and cost-effective evaluation of alternative plant growth media designs under variable gravity.
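    The multiphase and multicomponent codes used in the study are far more involved, but the core lattice Boltzmann stream-and-collide cycle behind such diffusion measurements can be sketched in one dimension. This is a toy D1Q2 BGK model with illustrative parameters, not the code used in the study:

```python
def lbm_diffuse(steps, n=100, omega=1.0):
    """Minimal 1D two-velocity (D1Q2) lattice Boltzmann diffusion sketch:
    stream the two populations, then BGK-relax toward f_eq = rho / 2."""
    rho = [0.0] * n
    rho[n // 2] = 1.0                 # delta-like initial mass in the middle
    fp = [r / 2 for r in rho]         # right-moving population
    fm = [r / 2 for r in rho]         # left-moving population
    for _ in range(steps):
        fp = [fp[(i - 1) % n] for i in range(n)]   # stream right (periodic)
        fm = [fm[(i + 1) % n] for i in range(n)]   # stream left (periodic)
        rho = [a + b for a, b in zip(fp, fm)]      # local density
        fp = [f + omega * (r / 2 - f) for f, r in zip(fp, rho)]  # BGK collision
        fm = [f + omega * (r / 2 - f) for f, r in zip(fm, rho)]
    return rho

rho = lbm_diffuse(200)
print(round(sum(rho), 6), max(rho) < 0.1)  # 1.0 True: mass conserved, peak spread
```

    Mass conservation under streaming and collision is the property that makes such models useful for transport coefficients; here an initial point mass diffuses into a spreading profile while the total stays fixed.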

  18. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  20. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.

  1. A Time Diversity Coding Experiment for a UHF/VHF Satellite Channel with Scintillation: Equipment Description

    DTIC Science & Technology

    1977-09-01

    …to state as successive input bits are brought into the encoder. We can more easily follow our progress on the equivalent lattice diagram… (Fig. 12. Convolutional Encoder, State Diagram and Lattice)… The Viterbi algorithm can be simply described with the aid of this lattice. Note that the nodes of the lattice represent…
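    The snippet describes following encoder states through a trellis ("lattice") diagram and decoding with the Viterbi algorithm. As a hedged illustration, here is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal); the report's actual code parameters are not given in the snippet:

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7, 5 octal)."""
    state = 0
    out = []
    for b in bits:
        reg = (b << 2) | state                     # [newest bit | 2-bit state]
        out.extend([bin(reg & g).count("1") & 1 for g in gens])
        state = reg >> 1                           # shift register advances
    return out

def viterbi(received, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoding on the 4-state trellis (lattice):
    each node keeps only the minimum-Hamming-distance surviving path."""
    INF = float("inf")
    metric = [0.0, INF, INF, INF]                  # encoder starts in state 0
    paths = [[], None, None, None]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                       # try both input bits
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") & 1 for g in gens]
                m = metric[s] + sum(x != y for x, y in zip(expect, r))
                ns = reg >> 1
                if m < new_metric[ns]:             # keep the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[3] ^= 1                                      # inject one channel bit error
print(viterbi(coded) == msg)                       # prints True
```

    The nodes of the lattice are the encoder states, exactly as the report describes; the decoder's survivor paths are the traceable routes through that lattice.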

  2. Modification of the short straight sections of the high energy booster of the SSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M.; Johnson, D.; Kocur, P.

    1993-05-01

    The tracking analysis with the High Energy Booster (HEB) of the Superconducting Super Collider (SSC) indicated that the machine dynamic aperture for the current lattice (Rev 0 lattice) was limited by the quadrupoles in the short straight sections. A new lattice, Rev 1, with modified short straight sections was proposed. The results of tracking the two lattices up to 5 × 10^5 turns (20 seconds at the injection energy) with various random seeds are presented in this paper. The new lattice increases the dynamic aperture from ~7 mm to ~8 mm, increases the abort kicker effectiveness, and eliminates one family (length) of main quadrupoles. The code DIMAD was used for matching the new short straight sections to the ring. The code TEAPOT was used for the short term tracking and to create a machine file, zfile, which could in turn be used to generate a one-turn map with the ZLIB for fast long-term tracking using a symplectic one-turn map tracking program ZIMAPTRK.

  4. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  5. Carmakila: An effective management by kshara karma

    PubMed Central

    Shindhe, Pradeep; Kiran, Mutnali

    2013-01-01

    Epidermal nevi are hamartomas that are characterized by hyperplasia of epidermis and adnexal structures. These nevi may be classified into a number of distinct variants, which are based on clinical morphology, extent of involvement, and the predominant epidermal structure in the lesion. Variants include verrucous epidermal nevus, nevus sebaceous, nevus comedonicus, eccrine nevus, apocrine nevus, Becker's nevus, and white sponge nevus. A 22-year-old girl approached us with complaints of blackish-colored hard growth, increasing in size over the right post-auricular region since 5 years. Ksharakarma is a procedure that involves the most important surgical, para-surgical, and critical-care procedures like incision, excision, scraping, and hemostatic locally (pratisaraneeya) and generally (panneya). Pratisaraneeya kshara is prepared with herbo-mineral medicines having an average pH of 13, possessing penetrating, corrosive, scraping, and healing properties, and are evidently indicated for external application in charmakīla. For the present case, kshara karma was preferred for application as the lesion was bigger in size and the results were appreciable clinically. PMID:24250149

  6. Carmakila: An effective management by kshara karma.

    PubMed

    Shindhe, Pradeep; Kiran, Mutnali

    2013-07-01

    Epidermal nevi are hamartomas that are characterized by hyperplasia of epidermis and adnexal structures. These nevi may be classified into a number of distinct variants, which are based on clinical morphology, extent of involvement, and the predominant epidermal structure in the lesion. Variants include verrucous epidermal nevus, nevus sebaceous, nevus comedonicus, eccrine nevus, apocrine nevus, Becker's nevus, and white sponge nevus. A 22-year-old girl approached us with complaints of blackish-colored hard growth, increasing in size over the right post-auricular region since 5 years. Ksharakarma is a procedure that involves the most important surgical, para-surgical, and critical-care procedures like incision, excision, scraping, and hemostatic locally (pratisaraneeya) and generally (panneya). Pratisaraneeya kshara is prepared with herbo-mineral medicines having an average pH of 13, possessing penetrating, corrosive, scraping, and healing properties, and are evidently indicated for external application in charmakīla. For the present case, kshara karma was preferred for application as the lesion was bigger in size and the results were appreciable clinically.

  7. An update on the BQCD Hybrid Monte Carlo program

    NASA Astrophysics Data System (ADS)

    Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk

    2018-03-01

    We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements (like Tr(D^-n) for K, cSW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance critical parts employing SIMD.

  8. SciDAC-3: Searching for Physics Beyond the Standard Model, University of Arizona component, Year 2 progress report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toussaint, Doug

    2014-03-21

    The Arizona component of the SciDAC-3 Lattice Gauge Theory program consisted of partial support for a postdoctoral position. In the original budget this covered three fourths of a postdoc, but the University of Arizona changed its ERE rate for postdoctoral positions from 4.3% to 21%, so the support level was closer to two-thirds of a postdoc. The grant covered the work of postdoc Thomas Primer. Dr. Primer's first task was an urgent one, although it was not foreseen in our proposed work. It turned out that on the large lattices used in some of our current computations the gauge fixing code was not working as expected, and this revealed itself in inconsistent results in the correlators needed to compute the semileptonic form factors for K and D decays. Dr. Primer participated in the effort to understand this problem and to modify our codes to deal with the large lattices we are now generating (as large as 144^3 × 288). Corrected code was incorporated in our standard codes, and workarounds that allow us to use the correlators already computed with the unexpected gauge fixing have been implemented.

  9. Thermal lattice BGK models for fluid dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Jian

    1998-11-01

    As an alternative in modeling fluid dynamics, the Lattice Boltzmann method has attracted considerable attention. In this thesis, we shall present a general form of thermal Lattice BGK. This form can handle large differences in density and temperature, as well as high Mach numbers. This generalized method can easily model gases with different adiabatic index values. The numerical transport coefficients of this model are estimated both theoretically and numerically. Their dependency on the sizes of integration steps in time and space, and on the flow velocity and temperature, is studied and compared with other established CFD methods. This study shows that the numerical viscosity of the Lattice Boltzmann method depends linearly on the space interval, and on the flow velocity as well for supersonic flow. This indicates the method's limitation in modeling high Reynolds number compressible thermal flow. On the other hand, the Lattice Boltzmann method shows promise in modeling micro-flows, i.e., gas flows in micron-sized devices. A two-dimensional code has been developed based on the conventional thermal lattice BGK model, with some modifications and extensions for micro-flows and wall-fluid interactions. Pressure-driven micro-channel flow has been simulated. Results are compared with experiments and simulations using other methods, such as a spectral element code using slip boundary condition with Navier-Stokes equations and a Direct Simulation Monte Carlo (DSMC) method.

  10. Evolution of a double-front Rayleigh-Taylor system using a graphics-processing-unit-based high-resolution thermal lattice-Boltzmann model.

    PubMed

    Ripesi, P; Biferale, L; Schifano, S F; Tripiccione, R

    2014-04-01

    We study the turbulent evolution originating from a system subjected to a Rayleigh-Taylor instability with a double density, at high resolution in a two-dimensional geometry, using a highly optimized thermal lattice-Boltzmann code for GPUs. Our investigation's initial condition, given by the superposition of three layers with three different densities, leads to the development of two Rayleigh-Taylor fronts that expand upward and downward and collide in the middle of the cell. By using high-resolution numerical data we highlight the effects induced by the collision of the two turbulent fronts in the long-time asymptotic regime. We also provide details on the optimized lattice-Boltzmann code that we have run on a cluster of GPUs.

  11. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system where the current state is determined by the previous channel symbol only is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
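    One reason lattice VQ needs no stored codebook is that nearest-point search in a lattice is a fixed rule rather than a table lookup. A standard illustration, independent of this paper's specific design, is the D_n lattice (integer vectors with an even coordinate sum):

```python
def quantize_Dn(x):
    """Nearest-point search in the D_n lattice (integer vectors with even
    coordinate sum) -- a rule-based quantizer, so no codebook is stored."""
    f = [round(v) for v in x]          # round each coordinate to an integer
    if sum(f) % 2 == 0:
        return f                       # even sum: already a lattice point
    # odd sum: re-round the coordinate with the largest rounding error
    i = max(range(len(x)), key=lambda k: abs(x[k] - f[k]))
    f[i] += 1 if x[i] > f[i] else -1
    return f

print(quantize_Dn([0.6, 1.2, -0.4, 2.9]))  # [0, 1, 0, 3]: even coordinate sum
```

    The same rule-based principle underlies the lattice quantizers the paper pairs with FSVQ: the encoder and decoder share an algorithm instead of a trained table, removing the training/test mismatch.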

  12. Teaching about Culture and Communicative Life in India.

    ERIC Educational Resources Information Center

    Jain, Nemi C.

    Basic patterns of culture and communication in India such as world view, reincarnation, concepts of Karma and Dharma, stages of life, the caste system, time orientation, collectivism, hierarchical orientation, language situation, and nonverbal communication norms are an integral part of Hinduism and Indian culture, and have a significant influence…

  13. Peace Amid Violence

    ERIC Educational Resources Information Center

    Overland, Martha Ann

    2007-01-01

    When the doors of the International Buddhist College opened in the southern rural province of Songkhla in Thailand after nearly a decade of hard work and planning, the founders praised the achievement as the culmination of devotion, faith, and, of course, good karma. With its rare combination of secular academics and monastic life, the college is…

  14. Promoting Education for Sustainability in a Vaishnava (Hindu) Community

    ERIC Educational Resources Information Center

    Chauhan, Sheila; Rama das, Sita; Rita, Natalia; Haigh, Martin

    2009-01-01

Education for a sustainable future aspires to increase pro-environmental behavior. This article evaluates a project designed to help a British Vaishnava congregation reduce their ecological footprint by linking "Karma to Climate Change." It employs a tented educational experience fielded at major Hindu Festivals. Participants are guided through…

  15. Fault-tolerance in Two-dimensional Topological Systems

    NASA Astrophysics Data System (ADS)

    Anderson, Jonas T.

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. 
I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.
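Syndrome extraction on such lattice codes can be illustrated with a small toric code. The sketch below builds binary support vectors for the plaquette (Z-type) and star (X-type) operators of a 2x2 toric code and checks two facts decoders rely on: a single X error flips exactly two plaquette checks, while the support of a stabilizer produces no syndrome at all. (This is the generic Kitaev construction for illustration, not the 4.8.8 triangular codes analyzed in the thesis.)

```python
import numpy as np

L = 2                      # 2x2 toric code: 2*L*L = 8 qubits on edges
n = 2 * L * L
h = lambda i, j: (i % L) * L + (j % L)           # horizontal edge index
v = lambda i, j: L * L + (i % L) * L + (j % L)   # vertical edge index

# Z-type plaquette checks (detect X errors): one row per face.
plaquettes = np.zeros((L * L, n), dtype=int)
for i in range(L):
    for j in range(L):
        for q in (h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)):
            plaquettes[i * L + j, q] = 1

# X-type star operators, one per vertex: these are stabilizers.
stars = np.zeros((L * L, n), dtype=int)
for i in range(L):
    for j in range(L):
        for q in (h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)):
            stars[i * L + j, q] = 1

def syndrome(err):
    """Syndrome of an X-error pattern: parity-check matrix times error, mod 2."""
    return plaquettes @ err % 2

single = np.zeros(n, dtype=int)
single[h(0, 0)] = 1
print(syndrome(single).sum())    # a lone X error lights two plaquettes
print(syndrome(stars[0]).sum())  # a star (stabilizer) support is invisible
```

The decoder's job, whether matching-based or the integer program above, is to invert this map: find a low-weight error pattern consistent with the observed syndrome.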

  16. Pattern Recognition by Retina-Like Devices.

    ERIC Educational Resources Information Center

    Weiman, Carl F. R.; Rothstein, Jerome

    This study has investigated some pattern recognition capabilities of devices consisting of arrays of cooperating elements acting in parallel. The problem of recognizing straight lines in general position on the quadratic lattice has been completely solved by applying parallel acting algorithms to a special code for lines on the lattice. The…

  17. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.
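The discrete-variable starting point can be summarized by the qudit Weyl pair; the rotor and continuous-variable spaces arise, roughly, by letting the dimension d grow. A standard statement of the relation, given here for orientation (not a formula from the paper itself):

```latex
Z X = e^{2\pi i/d}\, X Z, \qquad
Z = \sum_{k=0}^{d-1} e^{2\pi i k/d}\,\lvert k\rangle\langle k\rvert, \qquad
X = \sum_{k=0}^{d-1} \lvert k+1 \bmod d\rangle\langle k\rvert .
```

As d grows, the eigenvalue lattice of Z densifies into the unit circle, giving the rotor (angle and angular momentum) pair; a further rescaling of both operators yields the continuous-variable pair with [x, p] = i.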

  18. Fault-tolerance thresholds for the surface code with fabrication errors

    NASA Astrophysics Data System (ADS)

    Auger, James M.; Anwar, Hussain; Gimeno-Segovia, Mercedes; Stace, Thomas M.; Browne, Dan E.

    2017-10-01

The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a nontrivial topology such that the quantum information is encoded in the global degrees of freedom (i.e., the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors (permanent faulty components such as missing physical qubits or failed entangling gates), introducing permanent defects into the topology of the lattice and hence significantly reducing the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the test bed. A known approach to mitigate defective lattices involves the use of primitive swap gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcome of the defective gauge stabilizer generators without any additional computational overhead or use of swap gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect-matching decoder. Our numerical analysis is most applicable to two-dimensional chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.
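The supercheck idea can be stated in a few lines: the product of the two checks adjacent to a faulty qubit acts trivially on that qubit, so its outcome can be recovered by multiplying (XOR-ing) the measurable pieces. A toy sketch with hypothetical qubit labels, showing only the mod-2 algebra, not the paper's circuits:

```python
# Two neighboring weight-4 Z checks on a defective lattice (hypothetical
# qubit labels); qubit 2 is a fabrication defect shared by both supports.
check_a = {0, 1, 2, 3}
check_b = {2, 4, 5, 6}
defect = 2

# The weight-6 supercheck is the product of the two checks: the defective
# qubit appears in both supports and cancels mod 2.
supercheck = check_a ^ check_b        # symmetric difference of supports
assert defect not in supercheck

def parity(support, x_errors):
    """Outcome of a Z-type check: parity of X errors on its support."""
    return len(support & x_errors) % 2

errors = {1, 4}                       # X errors away from the defect
print(parity(supercheck, errors))                         # measured directly
print(parity(check_a, errors) ^ parity(check_b, errors))  # or by XOR
```

For errors that avoid the defect, the two quantities always agree, which is why the supercheck syndrome costs no extra circuitry.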

  19. Lattice Methods and the Nuclear Few- and Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Lee, Dean

    This chapter builds upon the review of lattice methods and effective field theory of the previous chapter. We begin with a brief overview of lattice calculations using chiral effective field theory and some recent applications. We then describe several methods for computing scattering on the lattice. After that we focus on the main goal, explaining the theory and algorithms relevant to lattice simulations of nuclear few- and many-body systems. We discuss the exact equivalence of four different lattice formalisms, the Grassmann path integral, transfer matrix operator, Grassmann path integral with auxiliary fields, and transfer matrix operator with auxiliary fields. Along with our analysis we include several coding examples and a number of exercises for the calculations of few- and many-body systems at leading order in chiral effective field theory.
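The exact equivalence of the path-integral and transfer-matrix formalisms mentioned above can be checked numerically on a toy model: for any transfer matrix T on N time slices with periodic boundary conditions, Z = Tr T^N equals the sum over all lattice paths of products of matrix elements. A small sketch (hypothetical 2-state transfer matrix, not a chiral EFT calculation):

```python
import itertools
import numpy as np

T = np.array([[1.0, 0.3],
              [0.3, 0.8]])   # hypothetical 2-state transfer matrix
N = 5                        # number of time slices

# Operator formalism: partition function as a trace of T^N.
Z_operator = np.trace(np.linalg.matrix_power(T, N))

# Path-integral formalism: explicit sum over all periodic lattice paths.
Z_paths = 0.0
for path in itertools.product(range(2), repeat=N):
    w = 1.0
    for t in range(N):
        w *= T[path[t], path[(t + 1) % N]]   # periodic in time
    Z_paths += w

print(abs(Z_operator - Z_paths) < 1e-12)     # the two formalisms agree
```

Expanding the trace in a basis inserts a complete set of states at every time slice, which is precisely the sum over paths; the auxiliary-field versions discussed in the chapter rest on the same identity.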

  20. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
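The core of the XSUSA-style approach is easy to sketch: draw correlated random perturbations of the multi-group data from the covariance matrix, run the downstream calculation for each sample, and read the output uncertainty off the resulting distribution. A toy illustration with invented 2-group numbers (the real chain perturbs the HELIOS2 multi-group library and runs the lattice code, not a one-line k-infinity formula):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-group data: mean absorption and nu-fission cross sections
# with a diagonal relative covariance, converted to absolute covariance.
mean = np.array([0.012, 0.10, 0.005, 0.15])  # [Sig_a1, Sig_a2, nuSig_f1, nuSig_f2]
rel_cov = np.diag([0.02, 0.02, 0.03, 0.03]) ** 2
cov = rel_cov * np.outer(mean, mean)

def k_inf(x):
    """Toy infinite-medium multiplication factor from a 2-group balance
    (no leakage; crude flux weights 0.6/0.4 assumed for illustration)."""
    sa1, sa2, nsf1, nsf2 = x
    return (0.6 * nsf1 + 0.4 * nsf2) / (0.6 * sa1 + 0.4 * sa2)

# Random sampling of the nuclear data, one forward run per sample.
samples = rng.multivariate_normal(mean, cov, size=500)
k = np.array([k_inf(s) for s in samples])
print(k.mean(), k.std())   # propagated mean and uncertainty of k-infinity
```

With 500 samples the output mean and standard deviation stabilize well enough for tolerance-limit statements, which is why such methods run the lattice code a few hundred times rather than once.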

  1. An object oriented code for simulating supersymmetric Yang-Mills theories

    NASA Astrophysics Data System (ADS)

    Catterall, Simon; Joseph, Anosh

    2012-06-01

We present SUSY_LATTICE - a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It has resisted solution until recently when new ideas drawn from orbifold constructions and topological field theories have been brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers and also possess exact gauge invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis being placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions. Program summary Program title: SUSY_LATTICE Catalogue identifier: AELS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9315 No. of bytes in distributed program, including test data, etc.: 95 371 Distribution format: tar.gz Programming language: C++ Computer: PCs and Workstations Operating system: Any, tested on Linux machines Classification: 11.6 Nature of problem: To compute some of the observables of supersymmetric Yang-Mills theories such as supersymmetric action, Polyakov/Wilson loops, scalar eigenvalues and Pfaffian phases. Solution method: We use the Rational Hybrid Monte Carlo algorithm followed by a Leapfrog evolution and a Metropolis test. The input parameters of the model are read in from a parameter file. Restrictions: This code applies only to supersymmetric gauge theories with extended supersymmetry, which undergo the process of maximal twisting. (See Section 2 of the manuscript for details.) Running time: From a few minutes to several hours depending on the amount of statistics needed.
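The "Leapfrog evolution and a Metropolis test" named in the solution method is the standard Hybrid Monte Carlo pattern. A minimal single-degree-of-freedom sketch (a Gaussian toy action for illustration, not the rational HMC used for the SYM Pfaffian):

```python
import numpy as np

rng = np.random.default_rng(0)

def action(phi):
    """Toy one-degree-of-freedom action S(phi) = phi^2/2 (not the SYM action)."""
    return 0.5 * phi ** 2

def grad(phi):
    return phi

def leapfrog(phi, p, eps, nsteps):
    """Leapfrog integration of the fictitious Hamiltonian dynamics."""
    p -= 0.5 * eps * grad(phi)
    for _ in range(nsteps - 1):
        phi += eps * p
        p -= eps * grad(phi)
    phi += eps * p
    p -= 0.5 * eps * grad(phi)
    return phi, p

def hmc_step(phi, eps=0.1, nsteps=10):
    p = rng.normal()                          # refresh fictitious momentum
    H_old = action(phi) + 0.5 * p ** 2
    phi_new, p_new = leapfrog(phi, p, eps, nsteps)
    H_new = action(phi_new) + 0.5 * p_new ** 2
    if rng.random() < np.exp(H_old - H_new):  # Metropolis accept/reject
        return phi_new
    return phi

phi, chain = 1.0, []
for _ in range(2000):
    phi = hmc_step(phi)
    chain.append(phi)
print(np.var(chain))   # should approach <phi^2> = 1 for this Gaussian action
```

The Metropolis test corrects exactly for the leapfrog discretization error, so the sampled distribution is exp(-S) regardless of the step size, at the cost of a lower acceptance rate for coarse steps.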

  2. Automated generation of lattice QCD Feynman rules

    NASA Astrophysics Data System (ADS)

    Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.

    2009-12-01

The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are also used to calculate the derivatives of Feynman diagrams. Program summary Program title: HiPPY, HPsrc Catalogue identifier: AEDX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 (see Additional comments below) No. of lines in distributed program, including test data, etc.: 513 426 No. of bytes in distributed program, including test data, etc.: 4 893 707 Distribution format: tar.gz Programming language: Python, Fortran95 Computer: HiPPy: Single-processor workstations. HPsrc: Single-processor workstations and MPI-enabled multi-processor systems Operating system: HiPPy: Any for which Python v2.5.x is available. HPsrc: Any for which a standards-compliant Fortran95 compiler is available Has the code been vectorised or parallelised?: Yes RAM: Problem specific, typically less than 1 GB for either code Classification: 4.4, 11.5 Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions. Solution method: An automated expansion method implemented in Python (HiPPy) and code to use expansions to generate Feynman rules in Fortran95 (HPsrc). Restrictions: No general restrictions. Specific restrictions are discussed in the text. Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2).
Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us. Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run. References: A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026. M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

  3. 75 FR 30900 - Fisker Automotive; Receipt of Application for Temporary Exemption From Advanced Air Bag...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ...-0069] Fisker Automotive; Receipt of Application for Temporary Exemption From Advanced Air Bag... temporary exemption from certain advanced air bag requirements of FMVSS No. 208. The basis for the... air bag requirements. Fisker has requested an exemption for the Karma model, and that the exemption...

  4. Lattice QCD based on OpenCL

    NASA Astrophysics Data System (ADS)

    Bach, Matthias; Lindenstruth, Volker; Philipsen, Owe; Pinke, Christopher

    2013-09-01

We present an OpenCL-based Lattice QCD application using a heatbath algorithm for the pure gauge case and Wilson fermions in the twisted mass formulation. The implementation is platform independent and can be used on AMD or NVIDIA GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870 our double precision D-slash (Dirac operator) implementation performs at 60 GFLOPS over a wide range of lattice sizes. The hybrid Monte Carlo presented reaches a speedup of four over the reference code running on a server CPU.

  5. Key Provenance of Earth Science Observational Data Products

    NASA Astrophysics Data System (ADS)

    Conover, H.; Plale, B.; Aktas, M.; Ramachandran, R.; Purohit, P.; Jensen, S.; Graves, S. J.

    2011-12-01

    As the sheer volume of data increases, particularly evidenced in the earth and environmental sciences, local arrangements for sharing data need to be replaced with reliable records about the what, who, how, and where of a data set or collection. This is frequently called the provenance of a data set. While observational data processing systems in the earth sciences have a long history of capturing metadata about the processing pipeline, current processes are limited in both what is captured and how it is disseminated to the science community. Provenance capture plays a role in scientific data preservation and stewardship precisely because it can automatically capture and represent a coherent picture of the what, how and who of a particular scientific collection. It reflects the transformations that a data collection underwent prior to its current form and the sequence of tasks that were executed and data products applied to generate a new product. In the NASA-funded Instant Karma project, we examine provenance capture in earth science applications, specifically the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) Science Investigator-led Processing system (SIPS). The project is integrating the Karma provenance collection and representation tool into the AMSR-E SIPS production environment, with an initial focus on Sea Ice. This presentation will describe capture and representation of provenance that is guided by the Open Provenance Model (OPM). Several things have become clear during the course of the project to date. One is that core OPM entities and relationships are not adequate for expressing the kinds of provenance that is of interest in the science domain. OPM supports name-value pair annotations that can be used to augment what is known about the provenance entities and relationships, but in Karma, annotations cannot be added during capture, but only after the fact. 
This limits the capture system's ability to record something it learned about an entity after the event of its creation in the provenance record. We will discuss extensions to the Open Provenance Model (OPM) and modifications to the Karma tool suite to address this issue, more efficient representations of earth science kinds of provenance, and the definition of metadata structures for capturing related knowledge about the data products and science algorithms used to generate them. Use scenarios for provenance information are an active topic of investigation. It has additionally become clear through the project that not all provenance is created equal. In processing pipelines, some provenance is repetitive and uninteresting, and its sheer volume obscures the interesting pieces of provenance. Methodologies to reveal science-relevant provenance will be presented, along with a preview of the AMSR-E Provenance Browser.

  6. Fortran code for SU(3) lattice gauge theory with and without MPI checkerboard parallelization

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.; Wu, Hao

    2012-10-01

    We document plain Fortran and Fortran MPI checkerboard code for Markov chain Monte Carlo simulations of pure SU(3) lattice gauge theory with the Wilson action in D dimensions. The Fortran code uses periodic boundary conditions and is suitable for pedagogical purposes and small scale simulations. For the Fortran MPI code two geometries are covered: the usual torus with periodic boundary conditions and the double-layered torus as defined in the paper. Parallel computing is performed on checkerboards of sublattices, which partition the full lattice in one, two, and so on, up to D directions (depending on the parameters set). For updating, the Cabibbo-Marinari heatbath algorithm is used. We present validations and test runs of the code. Performance is reported for a number of currently used Fortran compilers and, when applicable, MPI versions. For the parallelized code, performance is studied as a function of the number of processors. Program summary Program title: STMC2LSU3MPI Catalogue identifier: AEMJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26666 No. of bytes in distributed program, including test data, etc.: 233126 Distribution format: tar.gz Programming language: Fortran 77 compatible with the use of Fortran 90/95 compilers, in part with MPI extensions. Computer: Any capable of compiling and executing Fortran 77 or Fortran 90/95, when needed with MPI extensions. Operating system: Red Hat Enterprise Linux Server 6.1 with OpenMPI + pgf77 11.8-0, Centos 5.3 with OpenMPI + gfortran 4.1.2, Cray XT4 with MPICH2 + pgf90 11.2-0. Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of processors used: 2 to 11664 RAM: 200 Mega bytes per process. 
Classification: 11.5. Nature of problem: Physics of pure SU(3) Quantum Field Theory (QFT). This is relevant for our understanding of Quantum Chromodynamics (QCD). It includes the glueball spectrum, topological properties and the deconfining phase transition of pure SU(3) QFT. For instance, Relativistic Heavy Ion Collision (RHIC) experiments at the Brookhaven National Laboratory provide evidence that quarks confined in hadrons undergo at high enough temperature and pressure a transition into a Quark-Gluon Plasma (QGP). Investigations of its thermodynamics in pure SU(3) QFT are of interest. Solution method: Markov Chain Monte Carlo (MCMC) simulations of SU(3) Lattice Gauge Theory (LGT) with the Wilson action. This is a regularization of pure SU(3) QFT on a hypercubic lattice, which allows approaching the continuum SU(3) QFT by means of Finite Size Scaling (FSS) studies. Specifically, we provide updating routines for the Cabibbo-Marinari heatbath with and without checkerboard parallelization. While the first is suitable for pedagogical purposes and small scale projects, the latter allows for efficient parallel processing. Targetting the geometry of RHIC experiments, we have implemented a Double-Layered Torus (DLT) lattice geometry, which has previously not been used in LGT MCMC simulations and enables inside and outside layers at distinct temperatures, the lower-temperature layer acting as the outside boundary for the higher-temperature layer, where the deconfinement transition goes on. Restrictions: The checkerboard partition of the lattice makes the development of measurement programs more tedious than is the case for an unpartitioned lattice. Presently, only one measurement routine for Polyakov loops is provided. Unusual features: We provide three different versions for the send/receive function of the MPI library, which work for different operating system +compiler +MPI combinations. 
This involves activating the correct row in the last three rows of our latmpi.par parameter file. The underlying reason is distinct buffer conventions. Running time: For a typical run using an Intel i7 processor, it takes (1.8-6) E-06 seconds to update one link of the lattice, depending on the compiler used. For example, if we do a simulation on a small (4 * 8^3) DLT lattice with a statistics of 221 sweeps (i.e., update the two lattice layers of 4 * (4 * 8^3) links each 221 times), the total CPU time needed can be 2 * 4 * (4 * 8^3) * 221 * 3 E-06 seconds = 1.7 minutes, where 2 is the number of lattice layers, 4 the number of dimensions, 4 * 8^3 the lattice size, 221 the number of update sweeps, and 3 E-06 s the average time to update one link variable. If we divide the job into 8 parallel processes, then the real time is (for negligible communication overhead) 1.7 mins / 8 = 0.2 mins.
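The checkerboard decomposition used for parallelization rests on a parity argument: on a hypercubic lattice with even extent, every nearest neighbor of a site has opposite parity, so all sites of one color can be updated concurrently without two updates touching the same neighborhood. A quick Python check of this property on a small 4^4 lattice (illustrative only; the distributed code is Fortran/MPI):

```python
import itertools

D, L = 4, 4   # hypothetical small 4^4 lattice (any even L works)

def parity(site):
    """Checkerboard color of a site: parity of its coordinate sum."""
    return sum(site) % 2

# Verify that every nearest neighbor (both directions, all D axes, with
# periodic wrap-around) has the opposite parity of the site itself.
ok = True
for site in itertools.product(range(L), repeat=D):
    for mu in range(D):
        for step in (+1, -1):
            nb = list(site)
            nb[mu] = (nb[mu] + step) % L
            ok = ok and parity(nb) != parity(site)
print(ok)
```

With an odd extent the wrap-around neighbor would share the site's parity, which is why checkerboard schemes require even sublattice extents in each partitioned direction.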

  7. "Women Must Endure According to Their Karma." Cambodian Immigrant Women Talk About Domestic Violence

    ERIC Educational Resources Information Center

    Bhuyan, Rupaleem; Mell, Molly; Senturia, Kirsten; Sullivan, Marianne; Shiu-Thornton, Sharyne

    2005-01-01

    Asian populations living in the United States share similar cultural values that influence their experiences with domestic violence. However, it is critical to recognize how differential cultural beliefs in the context of immigration and adjustment to life in the United States affect attitudes, interpretations, and response to domestic violence.…

  8. Pre-Service Teachers Institute

    NASA Image and Video Library

    2008-07-18

    The Pre-Service Teachers Institute sponsored by Jackson (Miss.) State University participated in an agencywide Hubble Space Telescope workshop at Stennis Space Center on July 18. Twenty-five JSU junior education majors participated in the workshop, a site tour and educational presentations by Karma Snyder of the NASA SSC Engineering & Safety Center and Anne Peek of the NASA SSC Deputy Science & Technology Division.

  9. Mobile Knowledge, Karma Points and Digital Peers: The Tacit Epistemology and Linguistic Representation of MOOCs

    ERIC Educational Resources Information Center

    Portmess, Lisa

    2013-01-01

    Media representations of massive open online courses (MOOCs) such as those offered by Coursera, edX and Udacity reflect tension and ambiguity in their bold promise of democratized education and global knowledge sharing. An approach to MOOCs that emphasizes the tacit epistemology of such representations suggests a richer account of the ambiguities…

  10. Pre-Service Teachers Institute

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Pre-Service Teachers Institute sponsored by Jackson (Miss.) State University participated in an agencywide Hubble Space Telescope workshop at Stennis Space Center on July 18. Twenty-five JSU junior education majors participated in the workshop, a site tour and educational presentations by Karma Snyder of the NASA SSC Engineering & Safety Center and Anne Peek of the NASA SSC Deputy Science & Technology Division.

  11. Karma and Human Rights: Bhutanese Teachers' Perspectives on Inclusion and Disability

    ERIC Educational Resources Information Center

    Kamenopoulou, Leda; Dukpa, Dawa

    2018-01-01

    The Sustainable Development Goals call on countries to ensure that all children, especially the most vulnerable, are included in education. The small kingdom of Bhutan has made attempts to embrace inclusion in education at the policy level. However, research on inclusion and disability in this context is limited, and there are few studies focusing…

  12. Demonstration of Weight-Four Parity Measurements in the Surface Code Architecture.

    PubMed

    Takita, Maika; Córcoles, A D; Magesan, Easwar; Abdo, Baleegh; Brink, Markus; Cross, Andrew; Chow, Jerry M; Gambetta, Jay M

    2016-11-18

    We present parity measurements on a five-qubit lattice with connectivity amenable to the surface code quantum error correction architecture. Using all-microwave controls of superconducting qubits coupled via resonators, we encode the parities of four data qubit states in either the X or the Z basis. Given the connectivity of the lattice, we perform a full characterization of the static Z interactions within the set of five qubits, as well as dynamical Z interactions brought along by single- and two-qubit microwave drives. The parity measurements are significantly improved by modifying the microwave two-qubit gates to dynamically remove nonideal Z errors.

  13. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging simulation set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes.
A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  14. Vectorized Monte Carlo methods for reactor lattice analysis

    NASA Technical Reports Server (NTRS)

    Brown, F. B.

    1984-01-01

Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  15. Ground-state coding in partially connected neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1989-01-01

    Patterns over (-1,0,1) define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.

  16. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
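    The dense marginal stratification that the abstract credits to LHS can be illustrated with a short sketch. This is not the study's actual tooling; the function name is an assumption, and the subsequent mapping of uniform draws to normal cross-section perturbations (via an inverse CDF) is only noted, not implemented.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw a Latin Hypercube sample on the unit hypercube [0, 1)^n_dims.

    Each dimension is split into n_samples equal strata and every
    stratum receives exactly one jittered point, which is what lets LHS
    cover the input space better than simple random sampling.
    """
    rng = np.random.default_rng(rng)
    # One jittered point per stratum, per dimension.
    pts = (np.arange(n_samples)[:, None]
           + rng.random((n_samples, n_dims))) / n_samples
    # Independently shuffle each column so strata are paired at random
    # across dimensions.
    for d in range(n_dims):
        pts[:, d] = rng.permutation(pts[:, d])
    return pts
```

    For normally distributed cross-sections, each column would then be pushed through the inverse normal CDF (e.g. `scipy.stats.norm.ppf`) with the covariance information from JENDL-4; 500 such sample points would correspond to the 500 DRAGONv4 runs described above.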

  17. A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Moriarty, K. J. M.; Rebbi, C.

    1984-01-01

    New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16^4 lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rate. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits), or 101.5 MFLOPS.

  18. Spiritual Concerns in Hindu Cancer Patients Undergoing Palliative Care: A Qualitative Study

    PubMed Central

    Simha, Srinagesh; Noble, Simon; Chaturvedi, Santosh K

    2013-01-01

    Aims: Spiritual concerns are being identified as important components of palliative care. The aim of this study was to explore the nature of spiritual concerns in cancer patients undergoing palliative care in a hospice in India. Materials and Methods: The methodology used was a qualitative method: Interpretive phenomenological analysis. A semi-structured interview guide was used to collect data, based on Indian and western literature reports. Certain aspects like karma and pooja, relevant to Hindus, were included. Theme saturation was achieved on interviewing 10 participants. Results: The seven most common spiritual concerns reported were benefit of pooja, faith in God, concern about the future, concept of rebirth, acceptance of one's situation, belief in karma, and the question Why me? No participant expressed four of the concerns studied: Loneliness, need of seeking forgiveness from others, not being remembered later, and religious struggle. Conclusions: This study confirms that there are spiritual concerns reported by patients receiving palliative care. The qualitative descriptions give a good idea about these experiences, and how patients deal with them. The study indicates the need for adequate attention to spiritual aspects during palliative care. PMID:24049350

  19. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in its own silo, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from various social media sites, such as Twitter, Instagram, and OpenStreetMap. DIG then indexes the integrated data and provides an easy-to-use interface for query, visualization, and analysis.

  20. LB3D: A parallel implementation of the Lattice-Boltzmann method for simulation of interacting amphiphilic fluids

    NASA Astrophysics Data System (ADS)

    Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.

    2017-08-01

    We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components, such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.

  1. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  2. Characterizing a four-qubit planar lattice for arbitrary error detection

    NASA Astrophysics Data System (ADS)

    Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias

    2015-05-01

    Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].

  3. The Fermilab lattice supercomputer project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischler, M.; Atac, R.; Cook, A.

    1989-02-01

    The ACPMAPS system is a highly cost-effective, local memory MIMD computer targeted at algorithm development and production running for gauge theory on the lattice. The machine consists of a compound hypercube of crates, each of which is a full crossbar switch containing several processors. The processing nodes are single board array processors based on the Weitek XL chip set, each with a peak power of 20 MFLOPS and supported by 8 MBytes of data memory. The system currently being assembled has a peak power of 5 GFLOPS, delivering performance at approximately $250/MFLOP. The system is programmable in C and Fortran. An underpinning of software routines (CANOPY) provides an easy and natural way of coding lattice problems, such that the details of parallelism, communication, and system architecture are transparent to the user. CANOPY can easily be ported to any single CPU or MIMD system which supports C, and allows the coding of typical applications with very little effort. 3 refs., 1 fig.

  4. Computer program documentation for a subcritical wing design code using higher order far-field drag minimization

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.; Shu, J. Y.

    1981-01-01

    A subsonic, linearized aerodynamic theory, wing design program for one or two planforms was developed which uses a vortex lattice near field model and a higher order panel method in the far field. The theoretical development of the wake model and its implementation in the vortex lattice design code are summarized and sample results are given. Detailed program usage instructions, sample input and output data, and a program listing are presented in the Appendixes. The far field wake model assumes a wake vortex sheet whose strength varies piecewise linearly in the spanwise direction. From this model analytical expressions for lift coefficient, induced drag coefficient, pitching moment coefficient, and bending moment coefficient were developed. From these relationships a direct optimization scheme is used to determine the optimum wake vorticity distribution for minimum induced drag, subject to constraints on lift, and pitching or bending moment. Integration spanwise yields the bound circulation, which is interpolated in the near field vortex lattice to obtain the design camber surface(s).
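    The optimization step described above, finding the wake vorticity distribution that minimizes induced drag subject to a lift constraint, has the structure of a quadratic program with a linear equality constraint, which is solvable directly through its KKT system. The sketch below is a generic illustration under that assumption, not the report's actual scheme; the matrix `A` (induced-drag quadratic form) and vector `b` (lift influence coefficients) are hypothetical stand-ins for quantities the far-field wake model would supply.

```python
import numpy as np

def min_drag_circulation(A, b, lift_target):
    """Minimize 0.5 * c @ A @ c subject to b @ c == lift_target.

    Solves the KKT system
        [A   b] [c     ]   [0          ]
        [b^T 0] [lambda] = [lift_target]
    where lambda is the Lagrange multiplier on the lift constraint.
    A must be symmetric positive definite for a unique minimum.
    """
    n = len(b)
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = A
    kkt[:n, n] = b
    kkt[n, :n] = b
    rhs = np.zeros(n + 1)
    rhs[n] = lift_target
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # optimal circulation strengths
```

    With `A` as the identity and `b = [1, 1]`, a lift target of 2 yields the symmetric optimum `c = [1, 1]`. Additional equality constraints, such as the pitching or bending moment constraints mentioned above, each extend the KKT system by one row and column.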

  5. Simulations to study the static polarization limit for RHIC lattice

    NASA Astrophysics Data System (ADS)

    Duan, Zhe; Qin, Qing

    2016-01-01

    A study of spin dynamics based on simulations with the Polymorphic Tracking Code (PTC) is reported, exploring the dependence of the static polarization limit on various beam parameters and lattice settings for a practical RHIC lattice. It is shown that the behavior of the static polarization limit is predominantly affected by the vertical motion, while the effect of beam-beam interaction is small. In addition, the “nonresonant beam polarization” observed and studied in the lattice-independent model is also observed in this lattice-dependent model. Therefore, this simulation study gives insight into the polarization evolution at fixed beam energies that is not available from simple spin tracking. Supported by the U.S. Department of Energy (DE-AC02-98CH10886), Hundred-Talent Program (Chinese Academy of Sciences), and National Natural Science Foundation of China (11105164)

  6. First-principles calculations of lattice dynamics and thermal properties of polar solids

    DOE PAGES

    Wang, Yi; Shang, Shun -Li; Fang, Huazhi; ...

    2016-05-13

    Although the theory of lattice dynamics was established six decades ago, its accurate implementation for polar solids using the direct (or supercell, small-displacement, frozen-phonon) approach within the framework of density-functional-theory-based first-principles calculations had been a challenge until recently. The difficulty arises from the fact that the vibration-induced polarization breaks the lattice periodicity, whereas periodic boundary conditions are required by typical first-principles calculations, leading to an artificial macroscopic electric field. The article reviews a mixed-space approach to treating the interactions between lattice vibration and polarization, its applications to accurately predicting the phonon and associated thermal properties, and its implementations in a number of existing phonon codes.

  7. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug was later found in the Mathematica code used for the solutions submitted for comparison, and its effect is visible in the comparison plots shown at the end of this volume. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  8. Acoustic echo cancellation for full-duplex voice transmission on fading channels

    NASA Technical Reports Server (NTRS)

    Park, Sangil; Messer, Dion D.

    1990-01-01

    This paper discusses the implementation of an adaptive acoustic echo canceler for a hands-free cellular phone operating on a fading channel. The adaptive lattice structure, which is particularly known for faster convergence relative to the conventional tapped-delay-line (TDL) structure, is used in the initialization stage. After convergence, the lattice coefficients are converted into the coefficients for the TDL structure which can accommodate a larger number of taps in real-time operation due to its computational simplicity. The conversion method of the TDL coefficients from the lattice coefficients is derived and the DSP56001 assembly code for the lattice and TDL structure is included, as well as simulation results and the schematic diagram for the hardware implementation.
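    The lattice-to-TDL coefficient conversion mentioned above is, for a prediction-error filter, the standard Levinson "step-up" recursion from reflection coefficients to direct-form coefficients. The sketch below illustrates that recursion in Python rather than DSP56001 assembly; it is a generic sketch of the technique, not the paper's derivation.

```python
def lattice_to_tdl(reflection):
    """Convert lattice reflection coefficients k_1..k_M into the
    direct-form (tapped-delay-line) predictor polynomial
    A_M(z) = 1 + a_1 z^-1 + ... + a_M z^-M
    via the step-up recursion:
        a_m(i) = a_{m-1}(i) + k_m * a_{m-1}(m - i).
    """
    a = [1.0]
    for k in reflection:
        m = len(a)  # current order + 1
        a = ([1.0]
             + [a[i] + k * a[m - i] for i in range(1, m)]
             + [k])
    return a
```

    For example, reflection coefficients [0.5, 0.25] step up to the direct-form coefficients [1.0, 0.625, 0.25], which could then be loaded into the TDL filter for the real-time stage.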

  9. Association of reproductive history with breast tissue characteristics and receptor status in the normal breast.

    PubMed

    Gabrielson, Marike; Chiesa, Flaminia; Behmer, Catharina; Rönnow, Katarina; Czene, Kamila; Hall, Per

    2018-03-30

    Reproductive history has been associated with breast cancer risk, but more knowledge of the underlying biological mechanisms is needed. Because of limited data on normal breast tissue from healthy women, we examined associations of reproductive history and established breast cancer risk factors with breast tissue composition and markers of hormone receptors and proliferation in a nested study within the Karolinska Mammography project for risk prediction for breast cancer (Karma). Tissues from 153 women were obtained by ultrasound-guided core needle biopsy as part of the Karma project. Immunohistochemical staining was used to assess histological composition of epithelial, stromal and adipose tissue, epithelial and stromal oestrogen receptor (ER) and progesterone receptor (PR) status, and Ki-67 proliferation status. An individualised reproductive score including parity, number of pregnancies without birth, number of births, age at first birth, and duration of breastfeeding was calculated based on self-reported reproductive history at the time of the Karma study entry. All analyses were adjusted for age and BMI. The cumulative reproductive score was associated with increased total epithelial content and greater expression of epithelial ER. Parity was associated with greater epithelial area, increased epithelial-stromal ratio, greater epithelial ER expression and a lower extent of stromal proliferation. Increasing numbers of pregnancies and births were associated with a greater epithelial area in the entire study set, which remained significant among postmenopausal women. Increasing numbers of pregnancies and births were also associated with a greater expression of epithelial ER among postmenopausal women. Longer duration of breastfeeding was associated with greater epithelial area and greater expression of epithelial PR both in the entire study set and among postmenopausal women.
Breastfeeding was also positively associated with greater epithelial ER expression among postmenopausal women. Prior use of oral contraceptives was associated with lower epithelial-stromal ratio amongst all participants and among pre- and postmenopausal women separately. Reproductive risk factors significantly influence the epithelial tissue compartment and expression of hormone receptors in later life. These changes remain after menopause. This study provides deeper insight into the biological mechanisms by which reproductive history influences epithelial area and expression of hormone receptors, and consequently the risk of breast cancer.

  10. Clusters in irregular areas and lattices.

    PubMed

    Wieczorek, William F; Delmerico, Alan M; Rogerson, Peter A; Wong, David W S

    2012-01-01

    Geographic areas of different sizes and shapes of polygons that represent counts or rate data are often encountered in social, economic, health, and other information. Often political or census boundaries are used to define these areas because the information is available only for those geographies. Therefore, these types of boundaries are frequently used to define neighborhoods in spatial analyses using geographic information systems and related approaches such as multilevel models. When point data can be geocoded, it is possible to examine the impact of polygon shape on spatial statistical properties, such as clustering. We utilized point data (alcohol outlets) to examine the issue of polygon shape and size on visualization and statistical properties. The point data were allocated to regular lattices (hexagons and squares) and census areas for zip-code tabulation areas and tracts. The number of units in the lattices was set to be similar to the number of tract and zip-code areas. A spatial clustering statistic and visualization were used to assess the impact of polygon shape for zip- and tract-sized units. Results showed substantial similarities and notable differences across shape and size. The specific circumstances of a spatial analysis that aggregates points to polygons will determine the size and shape of the areal units to be used. The irregular polygons of census units may reflect underlying characteristics that could be missed by large regular lattices. Future research to examine the potential for using a combination of irregular polygons and regular lattices would be useful.
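    The point-to-polygon allocation underlying the analysis above is simplest for a regular square lattice, where a point's cell follows from integer division of its coordinates; hexagons or irregular census polygons instead require a point-in-polygon test. The helper below is an illustrative sketch only; its name and interface are assumptions, not the study's software.

```python
import numpy as np

def aggregate_to_square_lattice(points, cell_size):
    """Allocate 2-D point events (e.g. geocoded outlets) to a regular
    square lattice, returning a dict mapping cell index -> count."""
    cells = np.floor(np.asarray(points, dtype=float) / cell_size).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    return {tuple(int(i) for i in cell): int(n)
            for cell, n in zip(uniq, counts)}
```

    The per-cell counts can then feed a spatial clustering statistic; rerunning the aggregation with tract-sized versus zip-sized values of `cell_size` reproduces the size-and-shape sensitivity question the study examines.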

  11. The HIBEAM Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    2000-02-01

    HIBEAM is a 2 1/2D particle-in-cell (PIC) simulation code developed in the late 1990s in the Heavy-Ion Fusion research program at Lawrence Berkeley National Laboratory. The major purpose of HIBEAM is to simulate the transverse (i.e., X-Y) dynamics of a space-charge-dominated, non-relativistic heavy-ion beam being transported in a static accelerator focusing lattice. HIBEAM has been used to study beam combining systems, effective dynamic apertures in electrostatic quadrupole lattices, and emittance growth due to transverse misalignments. At present, HIBEAM runs on the CRAY vector machines (C90 and J90s) at NERSC, although it would be relatively simple to port the code to UNIX workstations so long as IMSL math routines were available.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovelace, III, Henry H.

    In accelerator physics, models of a given machine are used to predict the behavior of the beam, magnets, and radiofrequency cavities. The use of computational models has become widespread to ease the development period of the accelerator lattice. Various programs are used to create lattices and run simulations of both transverse and longitudinal beam dynamics, including the Methodical Accelerator Design (MAD) codes MAD8 and MADX, Zgoubi, Polymorphic Tracking Code (PTC), and many others. In this discussion, BMAD (Baby Methodical Accelerator Design) is presented as an additional tool for creating and simulating accelerator lattices for the study of beam dynamics in the Relativistic Heavy Ion Collider (RHIC).

  13. Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2017-11-01

    The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. The code was validated against the classic problem of a rigid ellipsoidal particle orbiting in shear flow, blood cell stretching tests, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling is given for flexible filaments (drug carriers) transported in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.

  14. Health Beliefs regarding Dietary Behavior and Physical Activity of Surinamese Immigrants of Indian Descent in The Netherlands: A Qualitative Study

    PubMed Central

    Hendriks, A.-M.; Gubbels, J. S.; Jansen, M. W. J.; Kremers, S. P. J.

    2012-01-01

    This study explored the health beliefs about eating habits and physical activity (PA) of Surinamese immigrants of Indian (Hindustani) descent to examine how health education messages to prevent obesity can be made more culturally sensitive. Indians are known for their increasing obesity incidence and are highly vulnerable to obesity-related consequences such as cardiovascular diseases and diabetes. Therefore they might benefit from culturally sensitive health education messages that stimulate healthy eating habits and increase PA levels. In order to examine how health education messages aimed at preventing obesity could be adapted to Indian culture, we interviewed eight Hindustanis living in The Netherlands, and conducted two focus groups (n = 19) with members from a Surinamese Hindustani community. Results showed cultural implications that might affect the effectiveness of health education messages: karma has a role in explaining the onset of illness, traditional eating habits are perceived as difficult to change, and PA was generally disliked. We conclude that health education messages aimed at Hindustani immigrants should recognize the role of karma in explaining the onset of illness, include more healthy alternatives for traditional foods, pay attention to the symbolic meaning of food, and suggest more enjoyable and culturally sensitive forms of PA for women. PMID:24533213

  15. Multi-Group Formulation of the Temperature-Dependent Resonance Scattering Model and its Impact on Reactor Core Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed

    2014-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from lattice atoms' thermal motion and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off of heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in values of Doppler temperature feedback coefficients up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.

  16. Hyperbolic and semi-hyperbolic surface codes for quantum storage

    NASA Astrophysics Data System (ADS)

    Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.

    2017-09-01

    We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.

  17. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes, or "defects", within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical-qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  18. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consists of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified with one another when rotated by 60 or 120 deg, and likewise for the odd kernels. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sqrt(7) larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.

  19. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code, PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis), has been developed. The electric field originating from the space charge is calculated by the hybrid tree-code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out, and the results have been confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  20. A free interactive matching program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.-F. Ostiguy

    1999-04-16

    For physicists and engineers involved in the design and analysis of beamlines (transfer lines or insertions), the lattice function matching problem is central and can be time-consuming because it involves constrained nonlinear optimization. For such problems, convergence can be difficult to obtain without expert human intervention. Over the years, powerful codes have been developed to assist beamline designers. The canonical example is MAD (Methodical Accelerator Design), developed at CERN by Christophe Iselin. MAD, through a specialized command language, allows one to solve a wide variety of problems, including matching problems. Although in principle the MAD command interpreter can be run interactively, in practice the solution of a matching problem involves a sequence of independent trial runs. Unfortunately, but perhaps not surprisingly, there still exist relatively few tools exploiting the resources offered by modern environments to assist lattice designers with this routine and repetitive task. In this paper, we describe a fully interactive lattice matching program, written in C++ and assembled from freely available software components. An important feature of the code is that the evolution of the lattice functions during the nonlinear iterative process can be graphically monitored in real time; the user can dynamically interrupt the iterations at will to introduce new variables, freeze existing ones at their current values, and/or modify constraints. The program runs under both UNIX and Windows NT.

  1. A generalized vortex lattice method for subsonic and supersonic flow applications

    NASA Technical Reports Server (NTRS)

    Miranda, L. R.; Elliot, R. D.; Baker, W. M.

    1977-01-01

    If the discrete vortex lattice is considered as an approximation to the surface-distributed vorticity, then the concept of the generalized principal part of an integral yields a residual term to the vorticity-induced velocity field. The proper incorporation of this term into the velocity field generated by the discrete vortex lines renders the present vortex lattice method valid for supersonic flow. Special techniques for simulating nonzero-thickness lifting surfaces and fusiform bodies with vortex lattice elements are included. Thickness effects of wing-like components are simulated by a double (biplanar) vortex lattice layer, and fusiform bodies are represented by a vortex grid arranged on a series of concentric cylindrical surfaces. The analysis of sideslip effects by the subject method is described. Numerical considerations peculiar to the application of these techniques are also discussed. The method has been implemented in a digital computer code. A user's manual is included along with a complete FORTRAN compilation, an executed case, and conversion programs for transforming input for the NASA wave drag program.

  2. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal testing, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) have been taken as the fitness functions, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear after the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results show that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods can all provide reliable results and identify the vibration characteristics of the 12-bay plain truss model accurately.
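    The decimal coding and forced mutation described above can be sketched as follows. This is an illustrative assumption of how such operators might look (the function names and the single-point crossover are not taken from the paper): each gene is a candidate node index, and the repair step replaces duplicate genes, which would place two sensors at one node, with unused indices:

```python
import random

def crossover(parent_a, parent_b, point):
    """Single-point crossover of two decimal-coded sensor layouts
    (each gene is a candidate node index on the lattice structure)."""
    return parent_a[:point] + parent_b[point:]

def forced_mutation(chromosome, n_nodes, rng):
    """Repair operator: crossover can duplicate a node index; replace each
    duplicate gene with a randomly chosen node not already present."""
    seen = set()
    repaired = []
    for gene in chromosome:
        if gene in seen:  # identical gene introduced by crossover
            gene = rng.choice([g for g in range(n_nodes) if g not in seen])
        seen.add(gene)
        repaired.append(gene)
    return repaired

rng = random.Random(0)
print(crossover([0, 1, 2, 3], [2, 3, 4, 5], 2))   # [0, 1, 4, 5]
child = forced_mutation([3, 3, 5, 5, 7], n_nodes=12, rng=rng)
print(len(set(child)))  # 5 distinct node indices after repair
```

    In a full GA these operators would sit inside a selection loop whose fitness is the MSE- or MAC-based criterion from the paper.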

  3. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of the Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs. Results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) The preliminary assessment of the nuclear data library currently used with the code and libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) A DRAGON workshop held to discuss the code capabilities for modeling the VHTR.

  4. Phase diagram and high degeneracy points for generic anisotropic exchange on the garnet lattice

    NASA Astrophysics Data System (ADS)

    Andreanov, Alexei; McClarty, Paul

    Garnet magnets with chemical formula RE3Ga5O12, where RE is a rare earth ion, have properties that are determined by a combination of geometrical frustration and strong spin-orbit coupling. The former arises from the RE structure, which consists of two interpenetrating hyperkagome lattices, while the latter leads, in general, to an anisotropy in the magnetic exchange. We systematically explore and describe the full phase diagram for the case of all nearest-neighbor interactions compatible with lattice symmetries and consider the role of fluctuations and further-neighbor couplings around high-degeneracy points in the phase diagram. AA was supported by Project Code IBS-R024-D1.

  5. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out-of-L2-cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  6. Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2016-11-01

    The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications in blood flow, drug delivery, platelet adhesion and rupture are also given in the paper.

  7. Accuracy of the lattice-Boltzmann method using the Cell processor

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony PlayStation 3 and the QS20/QS21 IBM blade, obtaining speedup factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.

  8. Estimation of coolant void reactivity for CANDU-NG lattice using DRAGON and validation using MCNP5 and TRIPOLI-4.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthikeyan, R.; Tellier, R. L.; Hebert, A.

    2006-07-01

    The Coolant Void Reactivity (CVR) is an important safety parameter that needs to be estimated at the design stage of a nuclear reactor. It gives a priori knowledge of the behavior of the system during a transient initiated by the loss of coolant. In the present paper, we have attempted to estimate the CVR for a CANDU New Generation (CANDU-NG) lattice, as proposed at an early stage of the Advanced CANDU Reactor (ACR) development. We have attempted to estimate the CVR with a development version of the code DRAGON, using the method of characteristics. DRAGON has several advanced self-shielding models incorporated in it, each of them compatible with the method of characteristics. This study will bring into focus the performance of these self-shielding models, especially when there is voiding of such a tight lattice. We have also performed assembly calculations in a 2 x 2 pattern for the CANDU-NG fuel, with special emphasis on checkerboard voiding. The results obtained have been validated against the Monte Carlo codes MCNP5 and TRIPOLI-4.3. (authors)

  9. SU(2) lattice gauge theory simulations on Fermi GPUs

    NASA Astrophysics Data System (ADS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-05-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent speedup of 200× over one CPU in single precision, at around 110 Gflops/s. We also find that, using the Fermi architecture, double-precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single-precision computations.
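    For reference, a minimal pure-Python sketch (unrelated to the authors' CUDA code) of generating one SU(2) group element via the standard parametrisation U = a0·I + i·(a1·σx + a2·σy + a3·σz) with |a| = 1; Monte Carlo generation of gauge configurations, as mentioned above, builds candidate link updates from such elements:

```python
import math
import random

def random_su2(rng):
    """Haar-random SU(2) element from a normalised Gaussian 4-vector a:
    U = [[a0 + i*a3,  a2 + i*a1],
         [-a2 + i*a1, a0 - i*a3]], with a0^2 + a1^2 + a2^2 + a3^2 = 1."""
    a = [rng.gauss(0.0, 1.0) for _ in range(4)]
    norm = math.sqrt(sum(x * x for x in a))
    a0, a1, a2, a3 = (x / norm for x in a)
    return [[complex(a0, a3), complex(a2, a1)],
            [complex(-a2, a1), complex(a0, -a3)]]

def det2(u):
    """Determinant of a 2x2 complex matrix."""
    return u[0][0] * u[1][1] - u[0][1] * u[1][0]

u = random_su2(random.Random(7))
print(abs(det2(u)))  # ~1.0: special (det 1) and unitary by construction
```

    Normalising a Gaussian 4-vector gives a uniform point on the 3-sphere, which is exactly the Haar measure on SU(2); this is why the quaternion parametrisation is the usual choice in such codes.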

  10. Nepali concepts of psychological trauma: the role of idioms of distress, ethnopsychology and ethnophysiology in alleviating suffering and preventing stigma.

    PubMed

    Kohrt, Brandon A; Hruschka, Daniel J

    2010-06-01

    In the aftermath of a decade-long Maoist civil war in Nepal and the recent relocation of thousands of Bhutanese refugees from Nepal to Western countries, there has been rapid growth of mental health and psychosocial support programs, including posttraumatic stress disorder treatment, for Nepalis and ethnic Nepali Bhutanese. This medical anthropology study describes the process of identifying Nepali idioms of distress and local ethnopsychology and ethnophysiology models that promote effective communication about psychological trauma in a manner that minimizes stigma for service users. Psychological trauma is shown to be a multifaceted concept that has no single linguistic corollary in the Nepali study population. Respondents articulated different categories of psychological trauma idioms in relation to impact on the heart-mind, brain-mind, body, spirit, and social status, with differences in perceived types of traumatic events, symptom sets, emotion clusters and vulnerability. Trauma survivors felt blamed for experiencing negative events, which were seen as karma transmitting past life sins or family member sins into personal loss. Some families were reluctant to seek care for psychological trauma because of the stigma of revealing this bad karma. In addition, idioms related to brain-mind dysfunction contributed to stigma, while heart-mind distress was a socially acceptable reason for seeking treatment. Different categories of trauma idioms support the need for multidisciplinary treatment with multiple points of service entry.

  11. Entanglement renormalization and gauge symmetry

    NASA Astrophysics Data System (ADS)

    Tagliacozzo, L.; Vidal, G.

    2011-03-01

    A lattice gauge theory is described by a redundantly large vector space that is subject to local constraints and can be regarded as the low-energy limit of an extended lattice model with a local symmetry. We propose a numerical coarse-graining scheme to produce low-energy, effective descriptions of lattice models with a local symmetry such that the local symmetry is exactly preserved during coarse-graining. Our approach results in a variational ansatz for the ground state(s) and low-energy excitations of such models and, by extension, of lattice gauge theories. This ansatz incorporates the local symmetry in its structure and exploits it to obtain a significant reduction of computational costs. We test the approach in the context of a Z2 lattice gauge theory formulated as the low-energy theory of a specific regime of the toric code with a magnetic field, for lattices with up to 16×16 sites (16²×2 = 512 spins) on a torus. We reproduce the well-known ground-state phase diagram of the model, consisting of deconfined and spin-polarized phases separated by a continuous quantum phase transition, and obtain accurate estimates of energy gaps, ground-state fidelities, Wilson loops, and several other quantities.

  12. REX3DV1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holm, Elizabeth A.

    2002-03-28

    This code is a FORTRAN code for three-dimensional Monte Carlo Potts model (MCPM) recrystallization and grain growth. A continuum grain structure is mapped onto a three-dimensional lattice. The mapping procedure is analogous to color bitmapping the grain structure: grains are clusters of pixels (sites) of the same color (spin). The total system energy is given by the Potts Hamiltonian, and the kinetics of grain growth are determined through a Monte Carlo technique with a nonconserved order parameter (Glauber dynamics). The code can be compiled and run on UNIX/Linux platforms.
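    The update described above can be sketched in a few lines. This is an illustrative toy (not the REX3D code, which is FORTRAN): the Potts energy counts unlike nearest-neighbour bonds, and a random site is flipped to a random spin with a Metropolis-style acceptance standing in for the Glauber rate:

```python
import math
import random

NEIGHBOURS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

def site_energy(lat, L, x, y, z):
    """Potts energy of one site: number of unlike nearest-neighbour spins
    (each unlike bond contributes one unit to the Potts Hamiltonian)."""
    s = lat[x][y][z]
    return sum(1 for dx, dy, dz in NEIGHBOURS
               if lat[(x + dx) % L][(y + dy) % L][(z + dz) % L] != s)

def total_energy(lat, L):
    """Total bond energy; each unlike bond is seen from both ends, hence //2."""
    return sum(site_energy(lat, L, x, y, z)
               for x in range(L) for y in range(L) for z in range(L)) // 2

def sweep(lat, L, q, T, rng):
    """One Monte Carlo sweep with a nonconserved order parameter: flip a
    random site to a random spin, keep with Metropolis probability."""
    for _ in range(L ** 3):
        x, y, z = rng.randrange(L), rng.randrange(L), rng.randrange(L)
        old = lat[x][y][z]
        e_old = site_energy(lat, L, x, y, z)
        lat[x][y][z] = rng.randrange(q)
        d_e = site_energy(lat, L, x, y, z) - e_old
        if d_e > 0 and (T == 0 or rng.random() >= math.exp(-d_e / T)):
            lat[x][y][z] = old  # reject the move
```

    At T = 0 only energy-non-increasing flips survive, so spin clusters (grains) coarsen monotonically, mimicking curvature-driven grain growth.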

  13. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on the High-Temperature Gas-Cooled Reactor (HTGR), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of uncertainties propagated from the lattice to the coupled neutronics/thermal-hydraulics core calculations. Nominal results for the prismatic single block (Ex. I-2a) and super cell models (Ex. I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross-section generation. In this work, the TRITON/NEWT flux-weighted cross sections obtained for Ex. I-2a and various models of Ex. I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.

  14. Size and habit evolution of PETN crystals - a lattice Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zepeda-Ruiz, L A; Maiti, A; Gee, R

    2006-02-28

    Starting from an accurate interatomic potential, we develop a simple scheme for generating an "on-lattice" molecular potential of short range, which is then incorporated into a lattice Monte Carlo code for simulating size and shape evolution of nanocrystallites. As a specific example, we test this procedure on the morphological evolution of a molecular crystal of interest to us, Pentaerythritol Tetranitrate (PETN), and obtain realistic facetted structures in excellent agreement with experimental morphologies. We investigate several interesting effects, including the evolution of the initial shape of a "seed" to an equilibrium configuration and the variation of growth morphology as a function of the rate of particle addition relative to diffusion.

  15. Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, Martin J.

    This project was part of a coordinated software development effort which the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including those of the national laboratories, and prepares for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.

  16. Recent developments in multidimensional transport methods for the APOLLO 2 lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmijarevic, I.; Sanchez, R.

    1995-12-31

    A usual method of preparation of homogenized cross sections for reactor coarse-mesh calculations is based on a two-dimensional multigroup transport treatment of an assembly, together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO2 interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of the recently developed multidimensional methods that are currently being tested.

  17. SAWdoubler: A program for counting self-avoiding walks

    NASA Astrophysics Data System (ADS)

    Schram, Raoul D.; Barkema, Gerard T.; Bisseling, Rob H.

    2013-03-01

    This article presents SAWdoubler, a package for counting the total number Z_N of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, the basic concept of which has been published previously by us. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z_2N for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z_28 can be computed on a dual-core PC in only 1 h and 40 min, with a speedup of 1.56 compared to the single-core computation and a gain of a factor of 26 from using symmetry. We present results for memory use and show how the computation is made to fit in 4 GB of RAM. It is easy to extend the SAWdoubler software to other lattices; it is publicly available under the GNU LGPL license.
    Program summary. Catalogue identifier: AEOB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Lesser General Public Licence. No. of lines in distributed program, including test data, etc.: 2101. No. of bytes in distributed program, including test data, etc.: 19816. Distribution format: tar.gz. Programming language: C. Computer: any computer with a UNIX-like operating system and a C compiler; for large problems, use is made of 128-bit integer arithmetic provided by the gcc compiler. Operating system: any UNIX-like system; developed under Linux and Mac OS 10. Has the code been vectorised or parallelised?: yes; a parallel version of the code is available in the "Extras" directory of the distribution. RAM: problem dependent (2 GB for counting SAWs of length 28 on the 3D cubic lattice). Classification: 16.11. Nature of problem: computing the number of self-avoiding walks of a given length on a given lattice. Solution method: length-doubling. Restrictions: the length of the walk must be even; the lattice is 3D simple cubic. Additional comments: the lattice can be replaced by other lattices, such as BCC, FCC, or a 2D square lattice. Running time: problem dependent (2.5 h using one processor core for length 28 on the 3D cubic lattice).
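    The quantity SAWdoubler computes can be cross-checked for small N by brute force. A sketch (illustrative Python, exponentially slower than the length-doubling method) that reproduces the known small-N counts on the 3D simple-cubic lattice:

```python
def count_saws(n, dim=3):
    """Count self-avoiding walks of n steps on the simple (hyper)cubic
    lattice by depth-first enumeration from the origin."""
    steps = []
    for d in range(dim):
        for s in (1, -1):
            v = [0] * dim
            v[d] = s
            steps.append(tuple(v))

    def dfs(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for st in steps:
            nxt = tuple(p + q for p, q in zip(pos, st))
            if nxt not in visited:  # self-avoidance check
                visited.add(nxt)
                total += dfs(nxt, visited, remaining - 1)
                visited.discard(nxt)
        return total

    origin = (0,) * dim
    return dfs(origin, {origin}, n)

print([count_saws(n) for n in range(1, 5)])  # [6, 30, 150, 726]
```

    This enumeration costs roughly 5^N work, which is exactly why the length-doubling approach, at about the square root of that, is needed to reach N = 28.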

  18. Entropic Lattice Boltzmann Simulations of Turbulence

    NASA Astrophysics Data System (ADS)

    Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey

    2006-10-01

    Because of its simplicity and nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance per PE of any code on the Earth Simulator. By projecting into the higher-dimensional kinetic phase space, the solution trajectory is simpler and much easier to compute than with the standard CFD approach. However, simple LB, with its simple advection and local BGK collisional relaxation, does not impose positive definiteness of the distribution functions in the time evolution. This leads to numerical instabilities for very low transport coefficients. In entropic LB (ELB), one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable for arbitrarily small transport coefficients. Various choices of velocity discretization are examined: 15-, 19- and 27-bit ELB models. The connection between Tsallis and Boltzmann entropies is clarified.

  19. Numerical investigations on the aerodynamics of SHEFEX-III launcher

    NASA Astrophysics Data System (ADS)

    Li, Yi; Reimann, Bodo; Eggers, Thino

    2014-04-01

    The present work is a numerical study of the aerodynamic problems related to the hot stage separation of a multistage rocket. The adapter between the first and the second stage of the rocket uses a lattice structure to vent the plume from the 2nd-stage-motor during the staging. The lattice structure acts as an axisymmetric cavity on the rocket and can affect the flight performance. To quantify the effects, the DLR CFD code, TAU, is applied to study the aerodynamic characteristics of the rocket. The CFD code is also used to simulate the start-up transients of the 2nd-stage-motor. Different plume deflectors are also investigated with the CFD techniques. For the CFD computation in this work, a 2-species-calorically-perfect-gas-model without chemical reactions is selected for modeling the rocket plume, which is a compromise between the demands of accuracy and efficiency.

  20. Simple scheme for encoding and decoding a qubit in unknown state for various topological codes

    PubMed Central

    Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał

    2015-01-01

    We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code and the 3D Haah code. The protocol is local whenever in a given code the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for the two other codes. We show that the fidelity of the protected qubit in the noisy scenario in the large-code-size limit is of , where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905

  1. Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems

    NASA Astrophysics Data System (ADS)

    Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel

    The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present the opendf code, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean-field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU public license version 2.

  2. Lattice QCD calculation using VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Ohta, Shigemi

    1995-02-01

    A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.

  3. Lattice Calibration with Turn-By-Turn BPM Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; /SLAC; Sebek, James

    2012-07-02

    Turn-by-turn beam position monitor (BPM) data from multiple BPMs are fitted with a tracking code to calibrate magnet strengths in a manner similar to the well-known LOCO code. Simulation shows that this turn-by-turn method can be a quick and efficient way to calibrate the optics. The method is applicable to both linacs and ring accelerators. Experimental results for a section of the SPEAR3 ring are also shown.

  4. Dirac Sea and its Evolution

    NASA Astrophysics Data System (ADS)

    Volfson, Boris

    2013-09-01

    The hypothesis of transition from a chaotic Dirac Sea, via highly unstable positronium, into a Simhony Model of stable face-centered cubic lattice structure of electrons and positrons securely bound in vacuum space, is considered. 13.75 Billion years ago, the new lattice, which, unlike a Dirac Sea, is permeable by photons and phonons, made the Universe detectable. Many electrons and positrons ended up annihilating each other producing energy quanta and neutrino-antineutrino pairs. The weak force of the electron-positron crystal lattice, bombarded by the chirality-changing neutrinos, may have started capturing these neutrinos thus transforming from cubic crystals into a quasicrystal lattice. Unlike cubic crystal lattice, clusters of quasicrystals are "slippery" allowing the formation of centers of local torsion, where gravity condenses matter into galaxies, stars and planets. In the presence of quanta, in a quasicrystal lattice, the Majorana neutrinos' rotation flips to the opposite direction causing natural transformations in a category comprised of three components; two others being positron and electron. In other words, each particle-antiparticle pair "e-" and "e+", in an individual crystal unit, could become either a quasi- component "e- ve e+", or a quasi- component "e+ - ve e-". Five-to-six six billion years ago, a continuous stimulation of the quasicrystal aetherial lattice by the same, similar, or different, astronomical events, could have triggered Hebbian and anti-Hebbian learning processes. The Universe may have started writing script into its own aether in a code most appropriate for the quasicrystal aether "hardware": Eight three-dimensional "alphabet" characters, each corresponding to the individual quasi-crystal unit shape. They could be expressed as quantum Turing machine qubits, or, alternatively, in a binary code. 
The code numerals could contain terminal and nonterminal symbols of the Chomsky hierarchy, wherein the showers of quanta forming the cosmic microwave background radiation may re-script a quasi-component "e- ve e+" (in the binary code case, the same as numeral "0") into a quasi-component "e+ -ve e-" (numeral "1"), or vice versa. According to both Chomsky's logic and the rules applicable to Majorana particles, terminals "e-" and "e+" cannot be changed using the rules of grammar, while nonterminals "ve" and "-ve" can. Under "quantum" showers, the quasi-unit cells re-shape, resulting in re-combination of the clusters that they form, with the affected pattern becoming the same as, similar to, or different from, other pattern(s). The process of self-learning may have occurred as a natural response to various astronomical events and cosmic cataclysms: the same astronomical activity in two different areas resulted in the emission of the same energy, forming the same secondary quasicrystal pattern. Different but similar astronomical activity resulted in the emission of a similar amount of energy, forming a similar secondary quasicrystal pattern. Different astronomical activity resulted in the emission of a different amount of energy, forming a different secondary quasicrystal pattern. Since quasicrystals conduct energy in one direction and don't conduct energy in the other, the control over quanta flows allows aether to scribe a script onto itself through changing its own quasi-patterns. The paper, as published below, is a lecture summary. The full text is published on the website: www.borisvolfson.org

  5. The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids

    NASA Astrophysics Data System (ADS)

    Mazzeo, M. D.; Ricci, M.; Zannoni, C.

    2010-03-01

We present a new algorithm, called the linked neighbour list (LNL), which substantially speeds up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions, observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell method. We provide several implementation details of the key data structures and algorithms used in this work.
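A minimal sketch of the neighbour-list idea, assuming a Lennard-Jones fluid in a periodic box (the particle count, cutoff, skin and potential are illustrative; the actual LNL data structures are more elaborate): the energy change of a trial move is evaluated against a precomputed list of nearby particles instead of against all N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, box, r_cut, skin = 64, 10.0, 2.5, 0.5        # illustrative parameters
pos = rng.uniform(0.0, box, size=(N, 3))

def build_neighbour_lists(pos):
    """For each particle, store indices of particles within r_cut + skin."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                # minimum-image convention
    r2 = (d ** 2).sum(axis=-1)
    mask = r2 < (r_cut + skin) ** 2
    np.fill_diagonal(mask, False)               # a particle is not its own neighbour
    return [np.flatnonzero(m) for m in mask]

def local_energy(i, xi, pos, neigh):
    """Lennard-Jones energy of particle i at position xi vs its neighbours only."""
    d = pos[neigh[i]] - xi
    d -= box * np.round(d / box)
    r2 = (d ** 2).sum(axis=-1)
    r2 = r2[r2 < r_cut ** 2]                    # apply the interaction cutoff
    inv6 = (1.0 / r2) ** 3
    return float(4.0 * (inv6 ** 2 - inv6).sum())

neigh = build_neighbour_lists(pos)
i = 0
trial = pos[i] + rng.normal(scale=0.1, size=3)  # attempted Monte Carlo displacement
dE = local_energy(i, trial, pos, neigh) - local_energy(i, pos[i], pos, neigh)
```

The skin distance lets the list stay valid for several moves before it must be rebuilt, which is where the overall speedup comes from.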

  6. Time-independent lattice Boltzmann method calculation of hydrodynamic interactions between two particles

    NASA Astrophysics Data System (ADS)

    Ding, E. J.

    2015-06-01

    The time-independent lattice Boltzmann algorithm (TILBA) is developed to calculate the hydrodynamic interactions between two particles in a Stokes flow. The TILBA is distinguished from the traditional lattice Boltzmann method in that a background matrix (BGM) is generated prior to the calculation. The BGM, once prepared, can be reused for calculations for different scenarios, and the computational cost for each such calculation will be significantly reduced. The advantage of the TILBA is that it is easy to code and can be applied to any particle shape without complicated implementation, and the computational cost is independent of the shape of the particle. The TILBA is validated and shown to be accurate by comparing calculation results obtained from the TILBA to analytical or numerical solutions for certain problems.
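The "prepare a background matrix once, reuse it for many scenarios" pattern behind the BGM is analogous to factoring a system matrix once and back-substituting cheaply per scenario. A generic sketch (a random symmetric positive-definite matrix stands in for the actual TILBA operator, which this is not):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)        # symmetric positive definite stand-in

# Expensive preparation, done once (analogue of generating the BGM).
Lc = np.linalg.cholesky(A)          # A = Lc @ Lc.T

def solve_scenario(b):
    """Cheap per-scenario solve reusing the stored factorization."""
    y = np.linalg.solve(Lc, b)      # forward substitution
    return np.linalg.solve(Lc.T, y) # back substitution

# Two different "scenarios" reuse the same factorization.
b1, b2 = rng.normal(size=n), rng.normal(size=n)
x1, x2 = solve_scenario(b1), solve_scenario(b2)
```

Each additional scenario costs only the triangular solves, not the full factorization, which mirrors the claimed cost reduction per calculation.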

  7. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice node of ultralarge systems (∼5 billion atoms) and of diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources), and validation, benchmark and application cases are presented.

  8. Prediction of the Reactor Antineutrino Flux for the Double Chooz Experiment

    NASA Astrophysics Data System (ADS)

Jones, Christopher LaDon

This thesis benchmarks the deterministic lattice code, DRAGON, against data, and then applies this code to make a prediction for the antineutrino flux from the Chooz B1 and B2 reactors. Data from the destructive assay of rods from the Takahama-3 reactor and from the SONGS antineutrino detector are used for comparisons. The resulting prediction from the tuned DRAGON code is then compared to the first antineutrino event spectra from Double Chooz. Use of this simulation in nuclear nonproliferation studies is discussed. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  9. Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne

    2003-01-01

A vortex lattice code, CAMRAD II, and a Reynolds-averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal-axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.

  10. Cracking the barcode of fullerene-like cortical microcolumns.

    PubMed

    Tozzi, Arturo; Peters, James F; Ori, Ottorino

    2017-03-22

Artificial neural systems and nervous graph theoretical analysis rely upon the stance that the neural code is embodied in logic circuits, e.g., spatio-temporal sequences of ON/OFF spiking neurons. Nevertheless, this assumption does not fully explain complex brain functions. Here we show how nervous activity, other than logic circuits, could instead depend on topological transformations and symmetry constraints occurring at the micro-level of the cortical microcolumn, i.e., the embryological, anatomical and functional basic unit of the brain. Tubular microcolumns can be flattened into fullerene-like two-dimensional lattices, equipped with about 80 nodes standing for pyramidal neurons where neural computations take place. We show how the countless possible combinations of activated neurons embedded in the lattice resemble a barcode. Although further experimental verification is required to validate our claim, different assemblies of firing neurons might appear as diverse codes, each one responsible for a single mental activity. A two-dimensional fullerene-like lattice, grounded on simple topological changes standing for pyramidal neurons' activation, not only displays analogies with the real microcolumn's microcircuitry and the neural connectome, but also offers the potential for the manufacture of plastic, robust and fast artificial networks in robotic forms of full-fledged neural systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. SU(2) lattice gauge theory simulations on Fermi GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p

    2011-05-10

In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we achieved an excellent performance of 200x the speed of one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x) than single precision computations.

  12. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of the GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and that the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
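A minimal sketch of simultaneous Gibbs updates on coding sets, assuming a first-order GMRF on a periodic 2D lattice (the lattice size, coupling beta and noise scale are illustrative, not the authors' model): with 4-neighbour dependence the two checkerboard colours form coding sets, so all sites of one colour can be refreshed at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma = 32, 0.24, 1.0          # beta < 0.25 keeps this GMRF valid
x = np.zeros((n, n))
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
black = (ii + jj) % 2 == 0              # one coding set; its complement is the other

def neighbour_sum(x):
    """Sum of the 4 torus neighbours at every site (a small convolution)."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1))

def gibbs_sweep(x):
    """One sweep: refresh all black sites at once, then all white sites.

    Sites of one colour are conditionally independent given the other
    colour, so the simultaneous update is a valid Gibbs step.
    """
    for mask in (black, ~black):
        mean = beta * neighbour_sum(x)              # conditional means
        noise = rng.normal(0.0, sigma, size=x.shape)
        x = np.where(mask, mean + noise, x)
    return x

for _ in range(200):
    x = gibbs_sweep(x)
```

The truncated case in the paper adds an acceptance/rejection step on top of this same coding-set structure.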

  13. A Potpourri of Math Ideas.

    ERIC Educational Resources Information Center

    Weisberg, Phyllis G.

    1987-01-01

    The article offers practical games and "tricks" to help remediate deficits in arithmetic and mathematics at the elementary school level. Games include using the fingers to calculate, lattice multiplication, dividing paper into equal columns, square games, code cracking games, and fraction dominoes. (Author/DB)
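The lattice multiplication mentioned above can be written out algorithmically: each digit pair fills one cell with a two-digit partial product, and the diagonals are then summed with carries, least significant first. A short sketch:

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by the lattice (gelosia) method."""
    xs = [int(d) for d in str(a)]       # digits of a across the top
    ys = [int(d) for d in str(b)]       # digits of b down the side
    n, m = len(xs), len(ys)
    # diags[k] collects the cell digits whose place value is 10**k
    diags = [0] * (n + m)
    for i in range(n):
        for j in range(m):
            cell = xs[i] * ys[j]
            k = (n - 1 - i) + (m - 1 - j)   # power of ten of the cell's units digit
            diags[k] += cell % 10           # units digit, lower triangle of the cell
            diags[k + 1] += cell // 10      # tens digit, upper triangle of the cell
    # sum each diagonal and propagate the carries
    result, carry = 0, 0
    for k in range(n + m):
        s = diags[k] + carry
        result += (s % 10) * 10 ** k
        carry = s // 10
    return result + carry * 10 ** (n + m)
```

For example, lattice_multiply(12, 34) fills four cells (03, 04, 06, 08), and summing the diagonals with carries yields 408.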

  14. Superimposed Code Theorectic Analysis of DNA Codes and DNA Computing

    DTIC Science & Technology

    2010-03-01

... because only certain collections (partitioned by font type) of sequences are allowed to be in each position (e.g., Arial = position 0, Comic ...) ... rigidity of short oligos and the shape of the polar charge. Oligo movement was modeled by a Brownian-motion three-dimensional random walk. ... where T is the temperature, kB is Boltzmann's constant, and η is the viscosity of the medium. The random-walk motion is modeled by assuming the oligo is on a three-dimensional lattice and may ...

  15. A tale of two systems: beliefs and practices of South African Muslim and Hindu traditional healers regarding cleft lip and palate.

    PubMed

    Ross, Eleanor

    2007-11-01

    This South African study compared the views of 15 Muslim and 8 Hindu traditional healers regarding the etiology and treatment of craniofacial clefts, reasons for people consulting with them, and collaboration with Western professionals. The original data were collected via individual interviews. Secondary data analysis was conducted to highlight common themes. Four Hindu and 12 Muslim healers believed that the condition was God sent. Both groups acknowledged the existence of various superstitions within their communities. For example, if a pregnant woman handled a sharp object during an eclipse, her infant could be born with a cleft. All Hindu healers also attributed clefts to karma. All the Muslim healers counseled patients and families. Fourteen referred people for medical help, 10 emphasized the importance of prayer, and 3 recommended the wearing of amulets containing a prayer. No Hindu healers provided direct treatment. Three advised parents to fast, six arranged fire and purification ceremonies in the temples, and three consulted the person's astrological chart to dispel any bad karma. Both groups of healers advised people to give to charity. Eight Hindu healers and eight Muslim healers believed that people consulted with them because of cultural influences and because they alleviated feelings of guilt. Four Hindu and 13 Muslim healers favored collaboration with Western practitioners. Findings highlight the need for culturally sensitive rehabilitation practices, collaboration, referrals, and information sharing between Eastern and Western health care practitioners.

  16. Nepali Concepts of Psychological Trauma: The Role of Idioms of Distress, Ethnopsychology, and Ethnophysiology in Alleviating Suffering and Preventing Stigma

    PubMed Central

    Hruschka, Daniel J.

    2013-01-01

    In the aftermath of a decade-long Maoist civil war in Nepal and the recent relocation of thousands of Bhutanese refugees from Nepal to Western countries, there has been rapid growth of mental health and psychosocial support programs, including posttraumatic stress disorder (PTSD) treatment, for Nepalis and ethnic Nepali Bhutanese. This medical anthropology study describes the process of identifying Nepali idioms of distress and local ethnopsychology and ethnophysiology models that promote effective communication about psychological trauma in a manner that minimizes stigma for service users. Psychological trauma is shown to be a multi-faceted concept that has no single linguistic corollary in the Nepali study population. Respondents articulated different categories of psychological trauma idioms in relation to impact upon the heart-mind, brain-mind, body, spirit, and social status, with differences in perceived types of traumatic events, symptom sets, emotion clusters, and vulnerability. Trauma survivors felt blamed for experiencing negative events, which were seen as karma transmitting past life sins or family member sins into personal loss. Some families were reluctant to seek care for psychological trauma because of the stigma of revealing this bad karma. In addition, idioms related to brain-mind dysfunction contributed to stigma while heart-mind distress was a socially acceptable reason for seeking treatment. Different categories of trauma idioms support the need for multidisciplinary treatment with multiple points of service entry. PMID:20309724

  17. PA01.81. Impact of globalisation on health w.s.r. metabolic syndrome and its ayurvedic management)

    PubMed Central

    Layeeq, Shaizi; Srivastava, Alok K

    2012-01-01

Purpose: According to the WHO report of 2002, cardiovascular diseases (CVD) will be the largest cause of death and disability in India by 2012. Metabolic Syndrome (MetS), a constellation of dyslipidemia, elevated blood glucose, hypertension and obesity, is emerging as the most common risk factor for CVD. The rising prevalence of the individual components of Metabolic Syndrome is mainly attributed to globalisation, which has made cheap, unhealthy food widely available and has also brought with it a sedentary lifestyle. There is a pressing need to give due consideration to the problem and to search for alternative medicine. The aims of the study are: 1. To study the impact of globalisation on health w.s.r. Metabolic Syndrome. 2. To assess the clinical efficacy of Panchakarma in its management. Method: Large-scale surveys, other documented data and published articles were studied. For the clinical trial, 20 patients were registered and were given Virechana Karma followed by administration of Shuddha Guggulu as a palliative measure. Result: The results show that globalisation has a great impact on all the components of Metabolic Syndrome. However, management with Panchakarma (Virechana Karma) followed by Shuddha Guggulu gave encouraging results. The overall effect of the therapy was found to be 82.5%. Conclusion: There is a high prevalence of metabolic syndrome in India, and alternative treatments for its management, along with lifestyle change, need to be considered to reduce the risk of cardiovascular disease.

  18. Preparation macroconstants to simulate the core of VVER-1000 reactor

    NASA Astrophysics Data System (ADS)

    Seleznev, V. Y.

    2017-01-01

A dynamic model is used in simulators of the VVER-1000 reactor for training operating staff and students. Neutron-physical characteristics are simulated with the DYNCO code, which allows calculation of stationary, transient and emergency processes in real time for different reactor lattice geometries [1]. To perform calculations with this code, macroconstants must be prepared for each fuel assembly (FA). One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group macroconstants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as a method of selecting energy groups for the subsequent calculation of macroconstants.

  19. Symplectic orbit and spin tracking code for all-electric storage rings

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after many repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though electric rings are qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than following the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification").
Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for the long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Universal Accelerator Libraries (ual) environment to "resurrect," or reverse engineer, the "AGS-analog" all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDM's. The companion paper also describes preliminary lattice studies for the planned proton EDM storage rings as well as testing the code for long time orbit and spin tracking.

  20. Spatial Lattice Modulation for MIMO Systems

    NASA Astrophysics Data System (ADS)

    Choi, Jiwook; Nam, Yunseo; Lee, Namyoon

    2018-06-01

This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit the spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and the average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than existing methods.

  1. Validation of Vortex-Lattice Method for Loads on Wings in Lift-Generated Wakes

    NASA Technical Reports Server (NTRS)

    Rossow, Vernon J.

    1995-01-01

A study is described that evaluates the accuracy of vortex-lattice methods when they are used to compute the loads induced on aircraft as they encounter lift-generated wakes. The evaluation is accomplished by the use of measurements made in the 80 by 120 ft Wind Tunnel of the lift, rolling moment, and downwash in the wake of three configurations of a model of a subsonic transport aircraft. The downwash measurements are used as input for a vortex-lattice code in order to compute the lift and rolling moment induced on wings that have a span of 0.186, 0.510, or 1.022 times the span of the wake-generating model. Comparison of the computed results with the measured lift and rolling-moment distributions indicates that the vortex-lattice method is very reliable as long as the span of the encountering or following wing is less than about 0.2 of the generator span. As the span of the following wing increases above 0.2, the vortex-lattice method continues to correctly predict the trends and nature of the induced loads, but it overpredicts the magnitude of the loads by increasing amounts.

  2. Exact results for the star lattice chiral spin liquid

    NASA Astrophysics Data System (ADS)

    Kells, G.; Mehta, D.; Slingerland, J. K.; Vala, J.

    2010-03-01

    We examine the star lattice Kitaev model whose ground state is a chiral spin liquid. We fermionize the model such that the fermionic vacua are toric-code states on an effective Kagome lattice. This implies that the Abelian phase of the system is inherited from the fermionic vacua and that time-reversal symmetry is spontaneously broken at the level of the vacuum. In terms of these fermions we derive the Bloch-matrix Hamiltonians for the vortex-free sector and its time-reversed counterpart and illuminate the relationships between the sectors. The phase diagram for the model is shown to be a sphere in the space of coupling parameters around the triangles of the lattices. The Abelian phase lies inside the sphere and the critical boundary between topologically distinct Abelian and non-Abelian phases lies on the surface. Outside the sphere the system is generically gapped except in the planes where the coupling parameters between the vertices on triangles are zero. These cases correspond to bipartite lattice structures and the dispersion relations are similar to that of the original Kitaev honeycomb model. In a further analysis we demonstrate the threefold non-Abelian ground-state degeneracy on a torus by explicit calculation.

  3. Coupled optics reconstruction from TBT data using MAD-X

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexahin, Y.; Gianfelice-Wendt, E.; /Fermilab

    2007-06-01

    Turn-by-turn BPM data provide immediate information on the coupled optics functions at BPM locations. In the case of small deviations from the known (design) uncoupled optics some cognizance of the sources of perturbation, BPM calibration errors and tilts can also be inferred without detailed lattice modeling. In practical situations, however, fitting the lattice model with the help of some optics code would lead to more reliable results. We present an algorithm for coupled optics reconstruction from TBT data on the basis of MAD-X and give examples of its application for the Fermilab Tevatron accelerator.

  4. Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Januszewski, M.; Kostur, M.

    2014-09-01

    We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. 
Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
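The LBGK update that Sailfish compiles into GPU kernels can be sketched as a CPU toy in NumPy (this is not Sailfish code; the lattice size, relaxation time and decaying shear-wave initial condition are illustrative): stream the nine D2Q9 distributions, take density and velocity moments, and relax toward the discrete equilibrium.

```python
import numpy as np

nx, ny, tau = 64, 64, 0.8           # lattice size and BGK relaxation time
# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
              [1, 1], [-1, -1], [1, -1], [-1, 1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell equilibrium for each of the 9 directions."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

# initial condition: uniform density with a small shear wave in ux
rho = np.ones((nx, ny))
ux = np.tile(0.05 * np.sin(2 * np.pi * np.arange(ny) / ny), (nx, 1))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(100):
    # streaming: shift each distribution along its lattice velocity (periodic)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # moments: density and momentum
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / tau
```

The shear wave decays viscously while total mass is conserved exactly, which is a standard sanity check for an LBGK implementation.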

  5. kmos: A lattice kinetic Monte Carlo framework

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
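The efficiency-determining inner loop that kmos generates optimized code for can be sketched in plain Python (a hypothetical 1D adsorption/desorption model with made-up rates, not a kmos model): enumerate the currently available events, pick one with probability proportional to its rate, execute it, and advance the clock by an exponential waiting time.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites = 50
occ = np.zeros(n_sites, dtype=bool)     # False = empty site, True = adsorbate
k_ads, k_des = 1.0, 0.5                 # illustrative per-site rate constants

t = 0.0
for _ in range(2000):
    # available events: adsorption on empty sites, desorption on occupied ones
    rates = np.where(occ, k_des, k_ads)
    k_tot = rates.sum()
    # choose the event site with probability proportional to its rate
    site = rng.choice(n_sites, p=rates / k_tot)
    occ[site] = not occ[site]           # execute the event
    # advance the clock by an exponentially distributed waiting time
    t += rng.exponential(1.0 / k_tot)

coverage = occ.mean()                   # equilibrates near k_ads / (k_ads + k_des)
```

kmos's contribution is precisely to avoid rebuilding the full event list at every step: it generates model-specific code that updates only the events affected by the executed process, which is why its runtime is essentially independent of lattice size.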

  6. Prise en compte d'un couplage fin neutronique-thermique dans les calculs d'assemblage pour les reacteurs a eau pressurisee

    NASA Astrophysics Data System (ADS)

    Greiner, Nathan

Core simulations for Pressurized Water Reactors (PWR) are performed with a set of computer codes which allow, under certain assumptions, approximation of the physical quantities of interest, such as the effective multiplication factor or the power and temperature distributions. The neutronics calculation scheme relies on three main steps: -- the production of an isotopic cross-section library; -- the production of a reactor database through the lattice calculation; -- the full-core calculation. In the lattice calculation, in which Boltzmann's transport equation is solved over an assembly geometry, the temperature distribution is uniform and constant during irradiation. This represents a set of approximations since, on the one hand, the temperature distribution in the assembly is not uniform (strong temperature gradients in the fuel pins, discrepancies between fuel pins) and, on the other hand, irradiation causes the thermal properties of the pins to change, which modifies the temperature distribution. Our work aims at implementing a neutronics-thermomechanics coupling in the lattice calculation in order to finely discretize the temperature distribution and study its effects. To perform the study, the CEA (Commissariat a l'Energie Atomique et aux Energies Alternatives) lattice code APOLLO2 was used for neutronics and the EDF (Electricite De France) code C3THER was used for the thermal calculations. We show very small effects of the pin-scale coupling when comparing the use of a temperature profile with the use of a uniform temperature for UOX-type and MOX-type fuels. We next investigate the thermal feedback using an assembly-scale coupling taking into account the presence of large water gaps in a UOX-type assembly at burnup 0. We show the very small impact on the calculation of the hot spot factor.
Finally, the coupling is introduced into the isotopic depletion calculation, and we show that the reactivity and isotopic number density deviations remain small, albeit not negligible, for UOX-type and MOX-type assemblies. The specific behavior of gadolinium-bearing fuel pins in a UO2-Gd2O3-type assembly is highlighted.
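
    The abstract describes a fixed-point iteration between a neutronics solve and a thermal solve. A minimal sketch of such a coupling loop, using a toy Doppler feedback model in place of APOLLO2/C3THER (all coefficients and the per-pin heterogeneity are hypothetical):

```python
import numpy as np

# Per-pin base absorption cross section (assumed heterogeneity, e.g. zoning).
SIGMA0 = np.linspace(0.55, 0.65, 5)

def doppler_sigma_a(T, gamma=0.012, T0=293.0):
    """Toy Doppler broadening: absorption grows with sqrt(T) (assumed form)."""
    return SIGMA0 * (1.0 + gamma * (np.sqrt(T) - np.sqrt(T0)))

def fuel_temperature(power, T_coolant=560.0, r_thermal=150.0):
    """Toy thermal model: pin temperature rises linearly with local power."""
    return T_coolant + r_thermal * power

def coupled_iteration(tol=1e-8, max_iter=200):
    T = np.full(SIGMA0.size, 560.0)      # start at coolant temperature
    for _ in range(max_iter):
        sigma = doppler_sigma_a(T)
        # Toy "neutronics": pin power inversely proportional to absorption,
        # renormalized so the assembly-average power stays 1.
        power = 1.0 / sigma
        power *= power.size / power.sum()
        T_new = fuel_temperature(power)
        if np.max(np.abs(T_new - T)) < tol:
            return power, T_new
        T = T_new
    return power, T

power, T = coupled_iteration()
```

    At the fixed point the hotter, higher-power pins see slightly increased absorption, which flattens the power shape; the real scheme replaces both toy models with transport and thermomechanics solves.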

  7. Effect of doping on electronic properties of HgSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nag, Abhinav, E-mail: abhinavn76@gmail.com; Sastri, O. S. K. S., E-mail: sastri.osks@gmail.com; Kumar, Jagdish, E-mail: jagdishphysicist@gmail.com

    2016-05-23

    A first-principles study of the electronic properties of pure and doped HgSe has been performed using the all-electron Full Potential Linearized Augmented Plane Wave (FP-LAPW) method as implemented in the ELK code. Electronic exchange and correlation are treated within the Generalized Gradient Approximation (GGA). Lattice parameter, Density of States (DOS), and band structure calculations have been performed. The total energy curve (energy vs. lattice parameter), DOS, and band structure are in good agreement with experimental values and with results obtained using other DFT codes. The doped material is studied within the Virtual Crystal Approximation (VCA) with doping levels of 10% to 25% of electrons (holes) per unit cell. The results predict a zero band gap in undoped HgSe, with bands meeting at the Fermi level near the symmetry point Γ. For doped HgSe, we find that electron (hole) doping shifts the point where the conduction and valence bands meet below (above) the Fermi level.

  8. Crystal Symmetry Algorithms in a High-Throughput Framework for Materials

    NASA Astrophysics Data System (ADS)

    Taylor, Richard

    The high-throughput framework AFLOW that has been developed and used successfully over the last decade is improved to include fully integrated software for crystallographic symmetry characterization. The standards used in the symmetry algorithms conform to the conventions and prescriptions given in the International Tables for Crystallography (ITC). A standard cell choice with a standard origin is selected, and the space group, point group, Bravais lattice, crystal system, lattice system, and representative symmetry operations are determined. Following the conventions of the ITC, the Wyckoff sites are also determined and their labels and site symmetry are provided. The symmetry code makes no assumptions about the input cell orientation, origin, or reduction, and has been integrated into the AFLOW high-throughput framework for materials discovery by adding to the existing code base and making use of existing classes and functions. The software is written in object-oriented C++ for flexibility and reuse. A performance analysis and an examination of the algorithms' scaling with cell size and symmetry are also reported.
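
    The core test behind such symmetry characterization is compact: a candidate operation, written as an integer matrix W in lattice coordinates, is a point symmetry exactly when A W A⁻¹ is orthogonal, where the columns of A are the lattice basis vectors. A brute-force sketch (not the AFLOW implementation; restricting entries to {-1, 0, 1} suffices for the simple cells shown here):

```python
import itertools
import numpy as np

def point_group_ops(A, tol=1e-9):
    """Return integer matrices W such that R = A W A^-1 is orthogonal.

    A: 3x3 matrix whose columns are the lattice basis vectors. A point
    symmetry must map lattice vectors to lattice vectors (W integer with
    det = +/-1) while acting as an isometry in Cartesian space.
    """
    A = np.asarray(A, dtype=float)
    Ainv = np.linalg.inv(A)
    ops = []
    for entries in itertools.product((-1, 0, 1), repeat=9):
        W = np.array(entries).reshape(3, 3)
        if round(np.linalg.det(W)) not in (-1, 1):
            continue
        R = A @ W @ Ainv
        if np.allclose(R.T @ R, np.eye(3), atol=tol):
            ops.append(W)
    return ops

cubic = point_group_ops(np.eye(3))           # simple cubic: 48 operations
tetra = point_group_ops(np.diag([1, 1, 2]))  # tetragonal: 16 operations
```

    For highly skewed cells, symmetry operations can have lattice-coordinate entries outside {-1, 0, 1}, so a production code searches a larger (still bounded) range.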

  9. Deformed quantum double realization of the toric code and beyond

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Pramod; Ibieta-Jimenez, Juan Pablo; Bernabe Ferreira, Miguel Jorge; Teotonio-Sobrinho, Paulo

    2016-09-01

    Quantum double models, such as the toric code, can be constructed from transfer matrices of lattice gauge theories with discrete gauge groups and parametrized by the center of the gauge group algebra and its dual. For general choices of these parameters, the transfer matrix contains operators acting on links which can also be thought of as perturbations to the quantum double model, driving it out of its topological phase and destroying its exact solvability. We modify these transfer matrices with perturbations and extract exactly solvable models which remain in a quantum phase, thus nullifying the effect of the perturbation. The algebra of the modified vertex and plaquette operators now obeys a deformed version of the quantum double algebra. The Abelian cases are shown to be in the quantum double phase, whereas the non-Abelian cases are shown to be in a modified phase of the corresponding quantum double phase. These are illustrated with the groups Zn and S3. The quantum phases are determined by studying the excitations of these systems, namely their fusion rules and statistics. We then go further and construct a transfer matrix which contains the other Z2 phase, namely the double semion phase. More generally, for other discrete groups these transfer matrices contain the twisted quantum double models. These transfer matrices can be thought of as being obtained by introducing extra parameters into the transfer matrix of lattice gauge theories. These parameters are central elements belonging to the tensor products of the algebra and its dual and are associated with vertices and volumes of the three-dimensional lattice. As in the case of the lattice gauge theories, we construct the operators creating the excitations and study their braiding and fusion properties.

  10. X-cube model on generic lattices: Fracton phases and geometric order

    NASA Astrophysics Data System (ADS)

    Slagle, Kevin; Kim, Yong Baek

    2018-04-01

    Fracton order is a new kind of quantum order characterized by topological excitations that exhibit remarkable mobility restrictions and a robust ground-state degeneracy (GSD) which can increase exponentially with system size. In this paper, we present a generic lattice construction (in three dimensions) for a generalized X-cube model of fracton order, where the mobility restrictions of the subdimensional particles inherit the geometry of the lattice. This helps explain a previous result that lattice curvature can produce a robust GSD, even on a manifold with trivial topology. We provide explicit examples to show that the (zero-temperature) phase of matter is sensitive to the lattice geometry. In one example, the lattice geometry confines the dimension-1 particles to small loops, which allows the fractons to be fully mobile charges, and the resulting phase is equivalent to the (3+1)-dimensional toric code. However, the phase is sensitive to more than just lattice curvature; different lattices without curvature (e.g., cubic or stacked kagome lattices) also result in different phases of matter, which are separated by phase transitions. Unintuitively, however, according to a previous definition of phase [X. Chen et al., Phys. Rev. B 82, 155138 (2010), 10.1103/PhysRevB.82.155138], even a rotated or rescaled cubic lattice results in a different phase of matter, which motivates us to propose a coarser definition of phase for gapped ground states and fracton order. This equivalence relation between ground states is given by the composition of a local unitary transformation and a quasi-isometry (which can rotate and rescale the lattice); equivalently, ground states are in the same phase if they can be adiabatically connected by varying both the Hamiltonian and the positions of the degrees of freedom (via a quasi-isometry). In light of the importance of geometry, we further propose that fracton orders should be regarded as a geometric order.

  11. Quantum Engineering of Dynamical Gauge Fields on Optical Lattices

    DTIC Science & Technology

    2016-07-08

    exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle- hole symmetry. We have also developed a computer code suite for simulating the Abelian

  12. Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.

    PubMed

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2017-12-21

    The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.
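
    The on-lattice RDME picture that the abstract contrasts with off-lattice methods can be sketched with Gillespie's direct method on a 1D voxel chain, diffusion only (voxel count, molecule count, and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def rdme_diffusion(n_voxels=20, n_mol=200, D=1.0, h=0.1, t_end=0.05):
    """Minimal RDME sketch: each molecule hops to a neighboring voxel at
    rate d = D/h^2 per direction (Gillespie direct method, diffusion only)."""
    d = D / h**2
    x = np.zeros(n_voxels, dtype=int)
    x[n_voxels // 2] = n_mol          # all molecules start in the center voxel
    idx = np.arange(n_voxels)
    t = 0.0
    while True:
        # Interior voxels have two exit channels, edge voxels only one.
        rates = x * d * np.where((idx == 0) | (idx == n_voxels - 1), 1, 2)
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        i = rng.choice(n_voxels, p=rates / total)   # which voxel fires
        if i == 0:
            j = 1
        elif i == n_voxels - 1:
            j = n_voxels - 2
        else:
            j = i + rng.choice((-1, 1))             # hop direction
        x[i] -= 1
        x[j] += 1

x = rdme_diffusion()
```

    A hybrid method as described above would hand the strongly diffusion-limited species over to an off-lattice microscale simulation instead of refining h further.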

  13. Spins Dynamics in a Dissipative Environment: Hierarchal Equations of Motion Approach Using a Graphics Processing Unit (GPU).

    PubMed

    Tsuchimoto, Masashi; Tanimura, Yoshitaka

    2015-08-11

    A system with many energy states coupled to a harmonic oscillator bath is considered. To study quantum non-Markovian system-bath dynamics in a numerically rigorous and nonperturbative manner, we developed a computer code for the reduced hierarchy equations of motion (HEOM) for a graphics processing unit (GPU) that can treat systems with as many as 4096 energy states. The code employs a Padé spectrum decomposition (PSD) for the construction of the HEOM and uses exponential integrators. Dynamics of a quantum spin glass system are studied by calculating the free induction decay signal for 3 × 2 to 3 × 4 triangular lattices with antiferromagnetic interactions. We found that spins relax faster at lower temperature due to transitions through a quantum coherent state, as represented by the off-diagonal elements of the reduced density matrix, whereas in the classical case the spins are known to relax more slowly due to suppression of thermal activation. The decay of the spins is qualitatively similar regardless of lattice size. The pathway of spin relaxation is analyzed under a sudden temperature-drop condition. The Compute Unified Device Architecture (CUDA) based source code used in the present calculations is provided as Supporting Information.

  14. Superconducting quantum simulator for topological order and the toric code

    NASA Astrophysics Data System (ADS)

    Sameti, Mahdi; Potočnik, Anton; Browne, Dan E.; Wallraff, Andreas; Hartmann, Michael J.

    2017-04-01

    Topological order is now being established as a central criterion for characterizing and classifying ground states of condensed matter systems and complements categorizations based on symmetries. Fractional quantum Hall systems and quantum spin liquids are receiving substantial interest because of their intriguing quantum correlations, their exotic excitations, and prospects for protecting stored quantum information against errors. Here, we show that the Hamiltonian of the central model of this class of systems, the toric code, can be directly implemented as an analog quantum simulator in lattices of superconducting circuits. The four-body interactions, which lie at its heart, are in our concept realized via superconducting quantum interference devices (SQUIDs) that are driven by a suitably oscillating flux bias. All physical qubits and coupling SQUIDs can be individually controlled with high precision. Topologically ordered states can be prepared via an adiabatic ramp of the stabilizer interactions. Strings of qubit operators, including the stabilizers and correlations along noncontractible loops, can be read out via a capacitive coupling to readout resonators. Moreover, the available single-qubit operations allow one to create and propagate elementary excitations of the toric code and to verify their fractional statistics. The architecture we propose makes it possible to implement a large variety of many-body interactions and thus provides a versatile analog quantum simulator for topological order and lattice gauge theories.
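
    The four-body stabilizers referred to above can be written down explicitly for the smallest periodic lattice. A sketch with dense matrices (2x2 torus, 8 edge qubits) that checks the defining commutation property of the toric code:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(P, sites, n=8):
    """Tensor-product operator acting with Pauli P on the listed qubits."""
    mats = [I2] * n
    for s in sites:
        mats[s] = P
    return reduce(np.kron, mats)

# Qubits live on the edges of a 2x2 periodic square lattice:
# horizontal edge (i, j) -> qubit 2*i + j, vertical edge (i, j) -> qubit 4 + 2*i + j.
def h(i, j): return 2 * (i % 2) + (j % 2)
def v(i, j): return 4 + 2 * (i % 2) + (j % 2)

# Vertex stabilizers: X on the four edges meeting at a site.
vertex_ops = [op(X, [h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)])
              for i in (0, 1) for j in (0, 1)]
# Plaquette stabilizers: Z on the four edges bounding a face.
plaquette_ops = [op(Z, [h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)])
                 for i in (0, 1) for j in (0, 1)]

# Every vertex operator commutes with every plaquette operator because the
# two always share an even number of edges.
all_commute = all(np.allclose(A @ B, B @ A)
                  for A in vertex_ops for B in plaquette_ops)
```

    On the torus, each edge belongs to exactly two vertices, so the product of all vertex operators is the identity; the same holds for plaquettes, which is the origin of the topological ground-state degeneracy.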

  15. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  16. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  17. High-Performance I/O: HDF5 for Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    2015-01-01

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
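
    The chunked-dataset layout the abstract alludes to can be sketched with h5py; the dataset name, field shape, and chunk shape below are illustrative, not the USQCD schema:

```python
import os
import tempfile

import numpy as np
import h5py

# A toy 4-D array standing in for a lattice field (not a real QCD ensemble).
rng = np.random.default_rng(0)
lattice = rng.standard_normal((8, 8, 8, 4))

path = os.path.join(tempfile.mkdtemp(), "lattice_sample.h5")
with h5py.File(path, "w") as f:
    # Chunking along the slowest axis mimics per-timeslice access patterns;
    # gzip keeps the file compact and portable.
    dset = f.create_dataset("field", data=lattice,
                            chunks=(1, 8, 8, 4), compression="gzip")
    dset.attrs["description"] = "toy lattice field"

with h5py.File(path, "r") as f:
    back = f["field"][...]
```

    Matching the chunk shape to the dominant read pattern is exactly the "appropriate dataset chunking" headroom the abstract mentions.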

  18. High-Performance I/O: HDF5 for Lattice QCD

    DOE PAGES

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; ...

    2017-05-09

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  19. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX makes it possible to build a precise voxel model consisting of pixel-based voxel cells at the scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Symplectic orbit and spin tracking code for all-electric storage rings

    DOE PAGES

    Talman, Richard M.; Talman, John D.

    2015-07-22

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e-cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (UAL), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than taking the more conventional approach of approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (UAL) environment to "resurrect," or reverse engineer, the "AGS-analog" all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. As a result, the companion paper also describes preliminary lattice studies for the planned proton EDM storage rings as well as testing the code for long-time orbit and spin tracking.

  1. POLYSHIFT Communications Software for the Connection Machine System CM-200

    DOE PAGES

    George, William; Brickner, Ralph G.; Johnsson, S. Lennart

    1994-01-01

    We describe the use and implementation of a polyshift function, PSHIFT, for circular shifts and end-off shifts. Polyshift is useful in many scientific codes that use regular grids, such as finite-difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3-4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
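
    A serial sketch of the two shift modes PSHIFT provides, and the regular-grid stencil use case they serve (the signature is illustrative, not the CMSSL interface):

```python
import numpy as np

def pshift(a, shift, circular=True, fill=0):
    """One-axis shift in the spirit of PSHIFT: circular (wraparound) or
    end-off (vacated slots take a fill value)."""
    a = np.asarray(a)
    if circular:
        return np.roll(a, shift)
    out = np.full_like(a, fill)
    if shift > 0:
        out[shift:] = a[:-shift]
    elif shift < 0:
        out[:shift] = a[-shift:]
    else:
        out[:] = a
    return out

# A periodic nearest-neighbor stencil built from two circular shifts:
# the kind of regular-grid update polyshift communication accelerates.
u = np.arange(5.0)
lap = pshift(u, 1) + pshift(u, -1) - 2.0 * u
```

    On a distributed machine the payoff is that shifts in opposite directions can share communication, which is where the reported speedup over repeated CSHIFT calls comes from.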

  2. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.
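
    The 8 mk criterion quoted above compares eigenvalues on the reactivity scale. One common convention for that comparison, assumed here (the paper may simply difference the eigenvalues directly):

```python
def reactivity_diff_mk(k_calc, k_ref):
    """Reactivity difference in mk (1 mk = 0.001), using the convention
    delta-rho = 1/k_ref - 1/k_calc (an assumed convention for illustration)."""
    return (1.0 / k_ref - 1.0 / k_calc) * 1000.0

# Example: a calculated eigenvalue about 2.5 mk above a benchmark value.
dk = reactivity_diff_mk(1.0005, 0.9980)
```

    On this scale, an agreement "within 8 mk" means the computed and measured multiplication factors differ by less than about 0.8% in reactivity.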

  3. Chaos for cardiac arrhythmias through a one-dimensional modulation equation for alternans

    PubMed Central

    Dai, Shu; Schaeffer, David G.

    2010-01-01

    Instabilities in cardiac dynamics have been widely investigated in recent years. One facet of this work has studied chaotic behavior, especially possible correlations with fatal arrhythmias. Previously, chaotic behavior was observed in various models, specifically in the breakup of spiral and scroll waves. In this paper we study cardiac dynamics and find spatiotemporal chaotic behavior through the Echebarria-Karma modulation equation for alternans in one dimension. Although extreme parameter values are required to produce chaos in this model, it seems mathematically significant that chaos may occur by a mechanism different from those previously observed. PMID:20590327

  4. Killing, karma and caring: euthanasia in Buddhism and Christianity.

    PubMed Central

    Keown, D; Keown, J

    1995-01-01

    In 1993 The Parliament of the World's Religions produced a declaration known as A Global Ethic which set out fundamental points of agreement on moral issues between the religions of the world. However, the declaration did not deal explicitly with medical ethics. This article examines Buddhist and Christian perspectives on euthanasia and finds that in spite of their cultural and theological differences both oppose it for broadly similar reasons. Both traditions reject consequentialist patterns of justification and espouse a 'sanctity of life' position which precludes the intentional destruction of human life by act or omission. PMID:8558539

  5. Lattice Boltzmann Model of 3D Multiphase Flow in Artery Bifurcation Aneurysm Problem

    PubMed Central

    Abas, Aizat; Mokhtar, N. Hafizah; Ishak, M. H. H.; Abdullah, M. Z.; Ho Tian, Ang

    2016-01-01

    This paper simulates and predicts the laminar flow inside the 3D aneurysm geometry, since the hemodynamic situation in the blood vessels is difficult to determine and visualize using standard imaging techniques, for example, magnetic resonance imaging (MRI). Three different types of Lattice Boltzmann (LB) models are computed, namely, single relaxation time (SRT), multiple relaxation time (MRT), and regularized BGK models. The results obtained using these different versions of the LB-based code will then be validated with ANSYS FLUENT, a commercially available finite volume- (FV-) based CFD solver. The simulated flow profiles that include velocity, pressure, and wall shear stress (WSS) are then compared between the two solvers. The predicted outcomes show that all the LB models are comparable and in good agreement with the FVM solver for complex blood flow simulation. The findings also show minor differences in their WSS profiles. The performance of the parallel implementation for each solver is also included and discussed in this paper. In terms of parallelization, it was shown that LBM-based code performed better in terms of the computation time required. PMID:27239221
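
    The single-relaxation-time (SRT/BGK) model, the simplest of the three LB variants compared above, can be sketched for a D2Q9 lattice; the grid size and initial condition are arbitrary, and this is a minimal sketch, not the paper's hemodynamics solver:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann expansion with c_s^2 = 1/3."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def srt_step(f, tau=0.8):
    """One BGK collide-and-stream step on a periodic grid."""
    rho = f.sum(-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau               # collide
    for q in range(9):                                    # stream (periodic)
        f[..., q] = np.roll(f[..., q], (int(c[q, 0]), int(c[q, 1])),
                            axis=(0, 1))
    return f

rng = np.random.default_rng(0)
f0 = equilibrium(1 + 0.05 * rng.random((16, 16)),
                 0.02 * (rng.random((16, 16, 2)) - 0.5))
f1 = srt_step(f0)
```

    Collision relaxes each population toward the local equilibrium and streaming moves it along its lattice velocity; both steps conserve mass and momentum exactly on a periodic grid, a handy sanity check. MRT and regularized variants differ only in how the collision step is performed.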

  6. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present a detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
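
    The search-based tuning idea, measuring functionally equivalent kernel variants and keeping the fastest, can be sketched in miniature (a toy stencil kernel, not the LBMHD code generator):

```python
import timeit

import numpy as np

def stencil_loop(u):
    """Naive per-element update (one illustrative kernel variant)."""
    n = len(u)
    out = np.empty_like(u)
    for i in range(n):
        out[i] = 0.5 * (u[(i - 1) % n] + u[(i + 1) % n])
    return out

def stencil_vector(u):
    """Vectorized variant of the same kernel."""
    return 0.5 * (np.roll(u, 1) + np.roll(u, -1))

def autotune(variants, data, repeats=3):
    """Pick the fastest functionally equivalent variant by measurement."""
    timings = {}
    for f in variants:
        timings[f.__name__] = min(timeit.repeat(lambda: f(data),
                                                number=5, repeat=repeats))
    best = min(timings, key=timings.get)
    return best, timings

u = np.linspace(0.0, 1.0, 4000)
best, timings = autotune([stencil_loop, stencil_vector], u)
```

    A real auto-tuner generates the variants automatically (loop unrolling, blocking, SIMD layouts) and searches the much larger resulting space, but the measure-and-select loop is the same.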

  7. Lattice Boltzmann simulation optimization on leading multicore platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, S.; Carter, J.; Oliker, L.

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, and STI Cell, as well as the single-core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present a detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  8. Improved genetic algorithm for the protein folding problem by use of a Cartesian combination operator.

    PubMed Central

    Rabow, A. A.; Scheraga, H. A.

    1996-01-01

    We have devised a Cartesian combination operator and coding scheme for improving the performance of genetic algorithms applied to the protein folding problem. The genetic coding consists of the C-alpha Cartesian coordinates of the protein chain. The recombination of the genes of the parents is accomplished by (1) a rigid superposition of one parent chain on the other, to make the relation between the Cartesian coordinates meaningful, and then (2) forming the chains of the children through a linear combination of the coordinates of their parents. The children produced with this Cartesian combination operator scheme have similar topology and retain the long-range contacts of their parents. The new scheme is significantly more efficient than standard genetic algorithm methods for locating low-energy conformations of proteins. The considerable superiority of genetic algorithms over Monte Carlo optimization methods is also demonstrated. We have also devised a new dynamic programming lattice-fitting procedure for use with the Cartesian combination operator method. The procedure finds excellent fits of real-space chains to the lattice while satisfying bond-length, bond-angle, and overlap constraints. PMID:8880904
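
    The two-step recombination described above (rigid superposition, then a per-atom linear combination) can be sketched using the Kabsch algorithm for the superposition step; the blending parameter t and the toy chains are assumptions for illustration:

```python
import numpy as np

def superpose(P, Q):
    """Rigidly superpose Q onto P (Kabsch algorithm): returns Q rotated and
    translated to best fit P. P, Q: (n, 3) coordinate arrays."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))      # avoid improper rotations
    R_opt = U @ np.diag([1.0, 1.0, d]) @ Vt
    return Qc @ R_opt + P.mean(0)

def cartesian_combination(P, Q, t=0.5):
    """Child chain as a per-atom linear combination of superposed parents."""
    return t * P + (1.0 - t) * superpose(P, Q)

# Toy demonstration: parent2 is a rotated, shifted, slightly perturbed copy
# of parent1 (hypothetical chains, not real protein coordinates).
rng = np.random.default_rng(0)
parent1 = rng.standard_normal((12, 3))
theta = 0.6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
parent2 = parent1 @ R + np.array([3.0, -1.0, 2.0]) \
          + 0.01 * rng.standard_normal((12, 3))
child = cartesian_combination(parent1, parent2)
```

    Because the parents are aligned before blending, the child inherits the shared topology and long-range contacts rather than an average of two arbitrarily oriented frames.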

  9. Extending a CAD-Based Cartesian Mesh Generator for the Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, J Nathan; Inclan, Eric J; Joshi, Abhijit S

    2012-01-01

    This paper describes the development of a custom preprocessor for the PaRAllel Thermal Hydraulics simulations using Advanced Mesoscopic methods (PRATHAM) code, based on an open-source mesh generator, CartGen [1]. PRATHAM is a three-dimensional (3D) lattice Boltzmann method (LBM) based parallel flow simulation software currently under development at Oak Ridge National Laboratory. The LBM algorithm in PRATHAM requires a uniform, coordinate-system-aligned, non-body-fitted structured mesh for its computational domain. CartGen [1], which is a GNU-licensed open-source code, already provides some of the needed functionality. However, it needs to be further extended to fully support the LBM-specific preprocessing requirements. Therefore, CartGen is being modified to (i) be compiler independent while converting a neutral-format STL (Stereolithography) CAD geometry to a uniform structured Cartesian mesh, (ii) provide a mechanism for PRATHAM to import the mesh and identify the fluid/solid domains, and (iii) provide a mechanism to visually identify and tag the domain boundaries on which to apply different boundary conditions.

  10. A new scripting library for modeling flow and transport in fractured rock with channel networks

    NASA Astrophysics Data System (ADS)

    Dessirier, Benoît; Tsang, Chin-Fu; Niemi, Auli

    2018-02-01

    Deep crystalline bedrock formations are targeted to host spent nuclear fuel owing to their overall low permeability. They are however highly heterogeneous and only a few preferential paths pertaining to a small set of dominant rock fractures usually carry most of the flow or mass fluxes, a behavior known as channeling that needs to be accounted for in the performance assessment of repositories. Channel network models have been developed and used to investigate the effect of channeling. They are usually simpler than discrete fracture networks based on rock fracture mappings and rely on idealized full or sparsely populated lattices of channels. This study reexamines the fundamental parameter structure required to describe a channel network in terms of groundwater flow and solute transport, leading to an extended description suitable for unstructured arbitrary networks of channels. An implementation of this formalism in a Python scripting library is presented and released along with this article. A new algebraic multigrid preconditioner delivers a significant speedup in the flow solution step compared to previous channel network codes. 3D visualization is readily available for verification and interpretation of the results by exporting the results to an open and free dedicated software. The new code is applied to three example cases to verify its results on full uncorrelated lattices of channels, sparsely populated percolation lattices and to exemplify the use of unstructured networks to accommodate knowledge on local rock fractures.
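
    The flow part of such a channel network model reduces to a weighted graph Laplacian solve for hydraulic heads. A sketch on a hypothetical four-node network, with a direct sparse solve standing in for the article's algebraic multigrid preconditioner:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_channel_heads(edges, cond, n_nodes, fixed):
    """Steady flow on a channel network: assemble the conductance-weighted
    graph Laplacian and solve for heads, with Dirichlet nodes in `fixed`
    (a dict {node: head})."""
    i, j = np.array(edges).T
    rows = np.concatenate([i, j, i, j])
    cols = np.concatenate([j, i, i, j])
    vals = np.concatenate([-cond, -cond, cond, cond])   # duplicates sum
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n_nodes, n_nodes))
    fixed_idx = np.array(list(fixed))
    fixed_val = np.array(list(fixed.values()))
    free = np.array([k for k in range(n_nodes) if k not in fixed])
    heads = np.zeros(n_nodes)
    heads[fixed_idx] = fixed_val
    rhs = -(L[free][:, fixed_idx] @ fixed_val)          # move knowns to RHS
    heads[free] = spsolve(L[free][:, free].tocsc(), rhs)
    return heads

# Two parallel channel paths of unequal conductance between inlet node 0
# (head 1) and outlet node 3 (head 0); channels are hypothetical.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
cond = np.array([2.0, 2.0, 1.0, 1.0])
h = solve_channel_heads(edges, cond, 4, fixed={0: 1.0, 3: 0.0})
```

    For realistic sparsely populated networks with many nodes, replacing the direct solve with an AMG-preconditioned iterative solver is what delivers the speedup reported above; the assembly step is unchanged.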

  11. Evaluation of the constant pressure panel method (CPM) for unsteady air loads prediction

    NASA Technical Reports Server (NTRS)

    Appa, Kari; Smith, Michael J. C.

    1988-01-01

    This paper evaluates the capability of the constant pressure panel method (CPM) code to predict unsteady aerodynamic pressures, lift and moment distributions, and generalized forces for general wing-body configurations in supersonic flow. Stability derivatives are computed and correlated for the X-29 and an Oblique Wing Research Aircraft, and a flutter analysis is carried out for a wing wind tunnel test example. Most results are shown to correlate well with test or published data. Although the emphasis of this paper is on evaluation, an improvement in the CPM code's handling of intersecting lifting surfaces is briefly discussed. An attractive feature of the CPM code is that it shares the basic data requirements and computational arrangements of the doublet lattice method. A unified code to predict unsteady subsonic or supersonic airloads is therefore possible.

  12. Free wake analysis of hover performance using a new influence coefficient method

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.; Ong, Ching Cho

    1990-01-01

    A new approach to the prediction of helicopter rotor performance using a free wake analysis was developed. This new method uses a relaxation process that does not suffer from the convergence problems associated with previous time-marching simulations. This wake relaxation procedure was coupled to a vortex-lattice, lifting-surface loads analysis to produce a novel, self-contained performance prediction code: EHPIC (Evaluation of Helicopter Performance using Influence Coefficients). The major technical features of the EHPIC code are described, and a substantial amount of background information on the capabilities and proper operation of the code is supplied. Sample problems were undertaken to demonstrate the robustness and flexibility of the basic approach. Also, a performance correlation study was carried out to establish the breadth of applicability of the code, with very favorable results.

  13. A hybrid LBG/lattice vector quantizer for high quality image coding

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)

    1991-01-01

    It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
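    The codebook referred to above is conventionally trained with the LBG (generalized Lloyd) algorithm. A minimal sketch of that standard training loop under squared-error distortion; this is an illustration of the generic algorithm, not the paper's adaptive hybrid coder, and all names are hypothetical:

```python
import numpy as np

def lbg(train, k, iters=20, seed=0):
    """Train a k-entry codebook on an (n, d) array of training vectors."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), k, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment under squared-error distortion.
        dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dist.argmin(1)
        for j in range(k):                   # centroid (Lloyd) update
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, assign

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),    # two tight 2-D clusters
                  rng.normal(2.0, 0.1, (50, 2))])
cb, assign = lbg(data, 2)
```

On two well-separated clusters the trained codewords converge to the cluster means, which is the distortion-minimizing two-entry codebook.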

  14. Recovery Act: An Integrated Experimental and Numerical Study: Developing a Reaction Transport Model that Couples Chemical Reactions of Mineral Dissolution/Precipitation with Spatial and Temporal Flow Variations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.

    2016-06-24

    A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid-phase databases. Addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high-temperature and high-pressure lab studies (task 1), using a purpose-built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing lattice Boltzmann simulations (task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir-scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to PIV experiments (task 3) and greatly improved the lattice Boltzmann (task 4) and TOUGH2 (task 5) simulations. PIV (task 3) and the experimental apparatus (task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.

  15. The Secret Life of Quarks, Final Report for the University of North Carolina at Chapel Hill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, Robert J.

    This final report summarizes activities and results at the University of North Carolina as part of the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Quantum Chromodynamics. The overall objective of the project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics, and similar strongly coupled gauge theories anticipated to be of importance in the LHC era. It built upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. In the SciDAC-2 project, optimized versions of the QCD API were being created for the IBM BlueGene/L (BG/L) and BlueGene/P (BG/P), the Cray XT3/XT4 and its successors, and clusters based on multi-core processors and Infiniband communications networks. The QCD API is being used to enhance the performance of the major QCD community codes and to create new applications. Software libraries of physics tools have been expanded to contain sharable building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools were designed for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. As part of the overall project, researchers at UNC were funded through ASCR to work in three general areas. The main thrust has been performance instrumentation and analysis in support of the SciDAC QCD code base as it evolved and as it moved to new computation platforms. In support of the performance activities, performance data was to be collected in a database for the purpose of broader analysis.
Third, the UNC work was done at RENCI (Renaissance Computing Institute), which has extensive expertise and facilities for scientific data visualization, so we acted in an ongoing consulting and support role in that area.

  16. Religio-cultural factors contributing to perinatal mortality and morbidity in mountain villages of Nepal: Implications for future healthcare provision.

    PubMed

    Paudel, Mohan; Javanparast, Sara; Dasvarma, Gouranga; Newman, Lareen

    2018-01-01

    This paper examines the beliefs and experiences of women and their families in remote mountain villages of Nepal about perinatal sickness and death and considers the implications of these beliefs for future healthcare provision. Two mountain villages were chosen for this qualitative study to provide diversity of context within a highly disadvantaged region. Individual in-depth interviews were conducted with 42 women of childbearing age and their family members, 15 health service providers, and 5 stakeholders. The data were analysed using a thematic analysis technique with a comprehensive coding process. Three key themes emerged from the study: (1) 'Everyone has gone through it': perinatal death as a natural occurrence; (2) Dewata (God) as a factor in health and sickness: a cause and means to overcome sickness in mother and baby; and (3) Karma (Past deeds), Bhagya (Fate) or Lekhanta (Destiny): ways of rationalising perinatal deaths. Religio-cultural interpretations underlie a fatalistic view among villagers in Nepal's mountain communities about any possibility of preventing perinatal deaths. This perpetuates a silence around the issue, and results in severe under-reporting of ongoing high perinatal death rates and almost no reporting of stillbirths. The study identified a strong belief in religio-cultural determinants of perinatal death, which demonstrates that medical interventions alone are not sufficient to prevent these deaths and that broader social determinants which are highly significant in local life must be considered in policy making and programming.

  17. Compendium of energy-dependent sensitivity profiles for the TRX-2 thermal lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; Lucius, J.L.; Drischler, J.D.

    1978-03-01

    Energy-dependent sensitivity profiles for five responses calculated for the TRX-2 thermal lattice with the ORNL sensitivity code system FORSS are presented here both in graphical form and in SENPRO format. The responses are the multiplication factor, k_eff; the ratio of epithermal-to-thermal captures in ²³⁸U, ²⁸ρ; the ratio of epithermal-to-thermal fissions in ²³⁵U, ²⁵δ; the ratio of fissions in ²³⁸U to fissions in ²³⁵U, ²⁸δ; and the ratio of captures in ²³⁸U to fissions in ²³⁵U, CR. A summary table of the total sensitivities is also presented.

  18. The displacement effect of a fluorine atom in CaF2 on the band structure

    NASA Astrophysics Data System (ADS)

    Mir, A.; Zaoui, A.; Bensaid, D.

    2018-05-01

    We obtained results for each of the configurations [100], [110] and [111]; each configuration contains two calcium atoms and four fluorine atoms with lattice type B. The study was performed with WIEN2k, a code based on density functional theory (DFT). The results obtained are in good agreement with experiment. For CaF2, an important variation of the fluoride-ion concentration after displacement has been observed on the electron-density map. The interpretation of the results is based on the existence of a large number of defects created by changing the atomic positions inside the sublattice.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Bays; W. Skerjanc; M. Pope

    A comparative analysis of results obtained from 2-D lattice calculations and 3-D full-core nodal calculations, in the frame of MOX fuel design, was conducted. This study revealed a set of advantages and disadvantages, with respect to each method, which can be used to guide the level of accuracy desired for future fuel and fuel cycle calculations. For the purpose of isotopic generation for fuel cycle analyses, the approach of using a 2-D lattice code (i.e., fuel assembly in infinite lattice) gave reasonable predictions of uranium and plutonium isotope concentrations at the predicted 3-D core simulation batch average discharge burnup. However, it was found that the 2-D lattice calculation can under-predict the power of pins located along a shared edge between MOX and UO2 by as much as 20%. In this analysis, this error did not occur in the peak pin; however, this was a coincidence and does not rule out the possibility that the peak pin could occur in a lattice position with high calculation uncertainty in future un-optimized studies. Another important consideration in realistic fuel design is the prediction of the peak axial burnup and neutron fluence. The use of 3-D core simulation gave peak burnup conditions, at the pellet level, approximately 1.4 times greater than what can be predicted using back-of-the-envelope assumptions of average specific power and irradiation time.

  20. Braiding by Majorana tracking and long-range CNOT gates with color codes

    NASA Astrophysics Data System (ADS)

    Litinski, Daniel; von Oppen, Felix

    2017-11-01

    Color-code quantum computation seamlessly combines Majorana-based hardware with topological error correction. Specifically, as Clifford gates are transversal in two-dimensional color codes, they enable the use of the Majoranas' non-Abelian statistics for gate operations at the code level. Here, we discuss the implementation of color codes in arrays of Majorana nanowires that avoid branched networks such as T junctions, thereby simplifying their realization. We show that, in such implementations, non-Abelian statistics can be exploited without ever performing physical braiding operations. Physical braiding operations are replaced by Majorana tracking, an entirely software-based protocol which appropriately updates the Majoranas involved in the color-code stabilizer measurements. This approach minimizes the required hardware operations for single-qubit Clifford gates. For Clifford completeness, we combine color codes with surface codes, and use color-to-surface-code lattice surgery for long-range multitarget CNOT gates which have a time overhead that grows only logarithmically with the physical distance separating control and target qubits. With the addition of magic state distillation, our architecture describes a fault-tolerant universal quantum computer in systems such as networks of tetrons, hexons, or Majorana box qubits, but can also be applied to nontopological qubit platforms.

  1. Code comparison for accelerator design and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with the programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACETRACK. In our analysis we have considered five lattices of various sizes with large and small bend angles, including the AGS Booster (10° bend), RHIC (2.24°), SXLS, XLS (XUV ring with 45° bend) and X-RAY rings. The differences in the integration methods used and in the treatment of the fringe fields in these codes could lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  2. LEGO: A modular accelerator design code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Irwin, J.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors: TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and the nonlinear case. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for its commissioning.

  3. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    NASA Astrophysics Data System (ADS)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summaryProgram title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal Reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 compiler Fortran 77 version. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). 
    Summary of revisions: epsilon=one/10**10 is replaced by epsilon=one/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, conversion to real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled with the script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
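    The Wang-Landau recursion used above to initialize the multicanonical weights can be illustrated on a toy system whose density of states is known exactly. A minimal Python sketch (the package itself is Fortran): here E counts up spins among N independent spins, so g(E) = C(N, E); for brevity, ln f is simply halved each stage rather than gated by a histogram-flatness check:

```python
import math
import random

def wang_landau(N=10, lnf_final=1e-4, sweeps=2000, seed=2):
    random.seed(seed)
    lng = [0.0] * (N + 1)       # running estimate of ln g(E)
    spins = [0] * N
    E = 0                       # E = number of up spins
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(sweeps):
            i = random.randrange(N)
            E_new = E + (1 if spins[i] == 0 else -1)
            # Accept flip with prob min(1, g(E)/g(E_new)) -> flat histogram.
            if random.random() < math.exp(min(0.0, lng[E] - lng[E_new])):
                spins[i] ^= 1
                E = E_new
            lng[E] += lnf       # modification-factor update
        lnf /= 2.0              # simplified schedule (no flatness check)
    base = lng[0]               # normalize so ln g(0) = 0 (exact: g(0) = 1)
    return [x - base for x in lng]

lng = wang_landau()
```

The estimate reproduces the binomial shape of ln g(E), peaked near E = N/2; the resulting 1/g(E) weights are what a multicanonical production run would then sample with.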

  4. Development of a Prototype Lattice Boltzmann Code for CFD of Fusion Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pattison, Martin J; Premnath, Kannan N; Banerjee, Sanjoy

    2007-02-26

    Designs of proposed fusion reactors, such as the ITER project, typically involve the use of liquid metals as coolants in components such as heat exchangers, which are generally subjected to strong magnetic fields. These fields induce electric currents in the fluids, resulting in magnetohydrodynamic (MHD) forces which have important effects on the flow. The objective of this SBIR project was to develop computational techniques, based on recently developed lattice Boltzmann methods, for the simulation of these MHD flows and implement them in a computational fluid dynamics (CFD) code for the study of fluid flow systems encountered in fusion engineering. The code developed during this project solves the lattice Boltzmann equation, which is a kinetic equation whose behaviour represents fluid motion. This is in contrast to most CFD codes, which are based on finite difference/finite volume solvers. The lattice Boltzmann method (LBM) is a relatively new approach which has a number of advantages compared with more conventional methods such as the SIMPLE or projection method algorithms that involve direct solution of the Navier-Stokes equations. These are that the LBM is very well suited to parallel processing, with almost linear scaling even for very large numbers of processors. Unlike other methods, the LBM does not require solution of a Poisson pressure equation, leading to a relatively fast execution time. A particularly attractive property of the LBM is that it can handle flows in complex geometries very easily. It can use simple rectangular grids throughout the computational domain -- generation of a body-fitted grid is not required. A recent advance in the LBM is the introduction of the multiple relaxation time (MRT) model; the implementation of this model greatly enhanced the numerical stability when used in lieu of the single relaxation time model, with only a small increase in computer time.
    Parallel processing was implemented using MPI and demonstrated the ability of the LBM to scale almost linearly. The equation for magnetic induction was also solved using a lattice Boltzmann method. This approach has the advantage that it fits well into the framework used for the hydrodynamic equations, but more importantly that it preserves the ability of the code to run efficiently on parallel architectures. Since the LBM is a relatively recent model, a number of new developments were needed to solve the magnetic induction equation for practical problems. Existing methods were only suitable for cases where the fluid viscosity and the magnetic resistivity are of the same order, and a preconditioning method was used to allow the simulation of liquid metals, where these properties differ by several orders of magnitude. An extension of this method to the hydrodynamic equations allowed faster convergence to steady state. A new method of imposing boundary conditions using an extrapolation technique was derived, enabling the magnetic field at a boundary to be specified. Also, a technique by which the grid can be stretched was formulated to resolve thin layers at high imposed magnetic fields, allowing flows with Hartmann numbers of several thousand to be quickly and efficiently simulated. In addition, a module has been developed to calculate the temperature field and heat transfer. This uses a total variation diminishing scheme to solve the equations and is again very amenable to parallelisation. Although the module was developed with thermal modelling in mind, it can also be applied to passive scalar transport. The code is fully three dimensional and has been applied to a wide variety of cases, including both laminar and turbulent flows. Validations against a series of canonical problems involving both MHD effects and turbulence have clearly demonstrated the ability of the LBM to properly model these types of flow.
As well as applications to fusion engineering, the resulting code is flexible enough to be applied to a wide range of other flows, in particular those requiring parallel computations with many processors. For example, at present it is being used for studies in aerodynamics and acoustics involving flows at high Reynolds numbers. It is anticipated that it will be used for multiphase flow applications in the near future.
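    The kinetic update at the heart of such a solver can be illustrated with a single-relaxation-time (BGK) D2Q9 collide-and-stream step on a periodic domain. This toy Python sketch is not the MRT/MHD code described above; it only shows the structure of the update and that the collision and streaming steps conserve mass:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium for D2Q9."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = (u ** 2).sum(-1)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    rho = f.sum(-1)                                    # density moment
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]  # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau            # BGK collision
    for q, (cx, cy) in enumerate(c):                   # periodic streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

nx = ny = 8
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
f[4, 4, :] *= 1.01                                     # small density bump
mass0 = f.sum()
for _ in range(10):
    f = step(f)
```

Because the equilibrium carries the same density as f and streaming only permutes populations, the total mass stays constant to floating-point precision; an MRT model would replace the single 1/tau relaxation with a matrix of relaxation rates in moment space.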

  5. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

    MAPA (Modular Accelerator Physics Analysis) is an object-oriented application for accelerator design and analysis with a Motif-based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of Lifetime plots and Surfaces of Section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files, and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  6. CASMO5/TSUNAMI-3D spent nuclear fuel reactivity uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrer, R.; Rhodes, J.; Smith, K.

    2012-07-01

    The CASMO5 lattice physics code is used in conjunction with the TSUNAMI-3D sequence in ORNL's SCALE 6 code system to estimate the uncertainties in hot-to-cold reactivity changes due to cross-section uncertainty for PWR assemblies at various burnup points. The goal of the analysis is to establish the multiplication factor uncertainty similarity between various fuel assemblies at different conditions in a quantifiable manner and to obtain a bound on the hot-to-cold reactivity uncertainty over the various assembly types and burnup attributed to fundamental cross-section data uncertainty. (authors)

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talman, Richard M.; Talman, John D.

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring “trap.” At the “magic” kinetic energy of 232.792 MeV, proton spins are “frozen,” for example, always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial “symplectification”).
Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (ual) environment to “resurrect,” or reverse engineer, the “AGS-analog” all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. As a result, the companion paper also describes preliminary lattice studies for the planned proton EDM storage rings as well as testing of the code for long-time orbit and spin tracking.

  8. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
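    Extracting the decay rate α(p) from logical-error estimates amounts to a linear fit of ln P_L against code distance d. A minimal sketch on synthetic data (the actual splitting-method estimates from the paper are not reproduced here; the numbers below are illustrative):

```python
import numpy as np

def fit_decay_rate(distances, p_logical):
    """Least-squares fit of ln P_L = ln A - alpha * d; returns (alpha, A)."""
    slope, intercept = np.polyfit(distances, np.log(p_logical), 1)
    return -slope, np.exp(intercept)

d = np.array([4, 6, 8, 10, 12, 14, 16, 18, 20])
pl = 0.5 * np.exp(-0.35 * d)        # synthetic, ideal exponential decay
alpha, A = fit_decay_rate(d, pl)
```

On exactly exponential data the fit recovers the generating parameters; with Monte Carlo estimates one would weight the regression by the per-point statistical errors.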

  9. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
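    The layered, size-segregated structure of a pyramid code can be illustrated with a toy low-pass pyramid on a square lattice. Note the HOP transform itself uses hexagonal seven-pixel neighborhoods and oriented kernels; this sketch only averages 2x2 blocks into successively coarser layers:

```python
import numpy as np

def pyramid(img, levels):
    """Return [img, ...] where each successive layer is a 2x2 block average."""
    layers = [img]
    for _ in range(levels):
        img = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                      + img[0::2, 1::2] + img[1::2, 1::2])
        layers.append(img)
    return layers

img = np.arange(64, dtype=float).reshape(8, 8)
layers = pyramid(img, 3)    # shapes 8x8, 4x4, 2x2, 1x1
```

Each coarser layer holds a quarter as many coefficients, mirroring the text's point that fewer features are devoted to large scales; block averaging also preserves the image mean all the way to the 1x1 top layer.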

  10. Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.

    2016-06-01

    It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann numerical flux (HLBFS) was developed by Shu's group. It combines two different LBFS schemes by a switch function, and it solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate this HLBFS scheme with our in-house cell-centered, hybrid-mesh-based Navier-Stokes code. Its performance is examined on several widely used benchmark test cases. Comparisons between calculated and experimental results show that the scheme can capture shock waves as well as resolve the boundary layer.

  11. SFM-FDTD analysis of triangular-lattice AAA structure: Parametric study of the TEM mode

    NASA Astrophysics Data System (ADS)

    Hamidi, M.; Chemrouk, C.; Belkhir, A.; Kebci, Z.; Ndao, A.; Lamrous, O.; Baida, F. I.

    2014-05-01

    This theoretical work reports a parametric study of enhanced transmission through annular aperture array (AAA) structure arranged in a triangular lattice. The effect of the incidence angle in addition to the inner and outer radii values on the evolution of the transmission spectra is carried out. To this end, a 3D Finite-Difference Time-Domain code based on the Split Field Method (SFM) is used to calculate the spectral response of the structure for any angle of incidence. In order to work through an orthogonal unit cell which presents the advantage to reduce time and space of computation, special periodic boundary conditions are implemented. This study provides a new modeling of AAA structures useful for producing tunable ultra-compact devices.

  12. Optimization of a Lattice Boltzmann Computation on State-of-the-Art Multicore Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid

    2009-04-10

We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimization, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon E5345 (Clovertown), AMD Opteron 2214 (Santa Rosa), AMD Opteron 2356 (Barcelona), Sun T5140 T2+ (Victoria Falls), as well as a QS20 IBM Cell Blade. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 15x improvement over the original code at a given concurrency. Additionally, we present a detailed analysis of each optimization, revealing surprising hardware bottlenecks and software challenges for future multicore systems and applications.
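The search-based auto-tuning strategy reduces to a simple loop: generate candidate variants over a tuning-parameter grid, time each, and keep the fastest. The parameter names below (block size, unroll factor) are illustrative, not LBMHD's actual tuning space:

```python
import itertools
import time

# Minimal sketch of search-based auto-tuning: time each generated code
# variant over a small parameter grid and keep the fastest configuration.
def run_variant(n, block, unroll):
    """Stand-in for a generated kernel variant: a blocked array sum."""
    data = list(range(n))
    t0 = time.perf_counter()
    total = 0
    for i in range(0, n, block):
        for j in range(i, min(i + block, n), unroll):
            total += sum(data[j:j + unroll])
    return time.perf_counter() - t0, total

def autotune(n=20000, blocks=(64, 256, 1024), unrolls=(1, 4, 8)):
    best = None
    for block, unroll in itertools.product(blocks, unrolls):
        elapsed, _ = run_variant(n, block, unroll)
        if best is None or elapsed < best[0]:
            best = (elapsed, block, unroll)
    return best

elapsed, block, unroll = autotune()
print(f"best variant: block={block}, unroll={unroll} ({elapsed:.4f}s)")
```

In a real tuner, the variants would differ in generated source (vectorization, padding, prefetch), but the search loop itself is exactly this shape.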

  13. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error for core analyses of the Swiss operating LWRs, all of which belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

14. VLMD - VORTEX-LATTICE CODE FOR DETERMINATION OF MEAN CAMBER SURFACE FOR TRIMMED NONCOPLANAR PLANFORMS WITH MINIMUM VORTEX DRAG

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.

    1994-01-01

This program represents a subsonic aerodynamic method for determining the mean camber surface of trimmed noncoplanar planforms with minimum vortex drag. With this program, multiple surfaces can be designed together to yield a trimmed configuration with minimum induced drag at some specified lift coefficient. The method uses a vortex lattice and overcomes previous difficulties with chord loading specification. A Trefftz plane analysis is used to determine the optimum span loading for minimum drag. The program then solves for the mean camber surface of the wing associated with this loading. Pitching-moment or root-bending-moment constraints can be employed at the design lift coefficient. Sensitivity studies of vortex-lattice arrangements have been made with this program, and comparisons with other theories show generally good agreement. The program is very versatile and has been applied to isolated wings, wing-canard configurations, a tandem wing, and a wing-winglet configuration. The design problem solved with this code is essentially one of optimization. A subsonic vortex lattice is used to determine the span load distribution(s) on bent lifting line(s) in the Trefftz plane. A Lagrange multiplier technique determines the required loading, which is used to calculate the mean camber slopes; these are then integrated to yield the local elevation surface. The problem of determining the necessary circulation matrix is simplified by having the chordwise shape of the bound circulation remain unchanged across each span, though the chordwise shape may vary from one planform to another. The circulation matrix is obtained by calculating the spanwise scaling of the chordwise shapes. A chordwise summation of the lift and pitching moment is utilized in the Trefftz plane solution, on the assumption that the trailing wake does not roll up and that the general configuration has specifiable chord loading shapes.
VLMD is written in FORTRAN for IBM PC series and compatible computers running MS-DOS. This program requires 360K of RAM for execution. The Ryan McFarland FORTRAN compiler and PLINK86 are required to recompile the source code; however, a sample executable is provided on the diskette. The standard distribution medium for VLMD is a 5.25 inch 360K MS-DOS format diskette. VLMD was originally developed for use on CDC 6000 series computers in 1976. It was originally ported to the IBM PC in 1986, and, after minor modifications, the IBM PC port was released in 1993.
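The Trefftz-plane optimization step, finding the span loading that minimizes induced drag at fixed lift via a Lagrange multiplier, can be sketched in a few lines. The influence matrix below is an illustrative symmetric positive-definite stand-in, not VLMD's actual vortex-lattice matrix:

```python
import numpy as np

# Sketch of the constrained minimization: minimize induced drag
# D = G^T A G subject to fixed total lift b^T G = L. Stationarity of the
# Lagrangian gives 2 A G = lambda b, so G = (L / (b^T A^{-1} b)) A^{-1} b.
n = 20
y = (np.arange(n) + 0.5) / n                  # spanwise control stations on [0, 1]
# Illustrative SPD stand-in for the induced-drag influence matrix.
A = np.exp(-8.0 * (y[:, None] - y[None, :]) ** 2) + 0.1 * np.eye(n)
b = np.ones(n) / n                            # lift weights (uniform strip widths)
L = 1.0                                       # required total lift

Ainv_b = np.linalg.solve(A, b)
G = (L / (b @ Ainv_b)) * Ainv_b               # optimum circulation distribution
print(abs(b @ G - L) < 1e-9)                  # True: lift constraint satisfied
```

The real code then differentiates this loading into mean camber slopes and integrates to get the surface; the algebraic core, a linear solve plus one multiplier, is as above.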

  15. Religio-cultural factors contributing to perinatal mortality and morbidity in mountain villages of Nepal: Implications for future healthcare provision

    PubMed Central

    Javanparast, Sara; Dasvarma, Gouranga; Newman, Lareen

    2018-01-01

Objective and context: This paper examines the beliefs and experiences of women and their families in remote mountain villages of Nepal about perinatal sickness and death and considers the implications of these beliefs for future healthcare provision. Methods: Two mountain villages were chosen for this qualitative study to provide diversity of context within a highly disadvantaged region. Individual in-depth interviews were conducted with 42 women of childbearing age and their family members, 15 health service providers, and 5 stakeholders. The data were analysed using a thematic analysis technique with a comprehensive coding process. Findings: Three key themes emerged from the study: (1) ‘Everyone has gone through it’: perinatal death as a natural occurrence; (2) Dewata (God) as a factor in health and sickness: a cause and means to overcome sickness in mother and baby; and (3) Karma (Past deeds), Bhagya (Fate) or Lekhanta (Destiny): ways of rationalising perinatal deaths. Conclusion: Religio-cultural interpretations underlie a fatalistic view among villagers in Nepal’s mountain communities about any possibility of preventing perinatal deaths. This perpetuates a silence around the issue, and results in severe under-reporting of ongoing high perinatal death rates and almost no reporting of stillbirths. The study identified a strong belief in religio-cultural determinants of perinatal death, which demonstrates that medical interventions alone are not sufficient to prevent these deaths and that broader social determinants which are highly significant in local life must be considered in policy making and programming. PMID:29544226

  16. Electric dipole moment planning with a resurrected BNL Alternating Gradient Synchrotron electron analog ring

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

There has been much recent interest in directly measuring the electric dipole moments (EDM) of the proton and the electron, because of their possible importance in the present day observed matter/antimatter imbalance in the Universe. Such a measurement will require storing a polarized beam of "frozen spin" particles, 15 MeV electrons or 230 MeV protons, in an all-electric storage ring. Only one such relativistic electric accelerator has ever been built—the 10 MeV "electron analog" ring at Brookhaven National Laboratory in 1954; it can also be referred to as the "AGS analog" ring to make clear it was a prototype for the Alternating Gradient Synchrotron (AGS) proton ring under construction at that time at BNL. (Its purpose was to investigate nonlinear resonances as well as passage through "transition" with the newly invented alternating gradient proton ring design.) By chance this electron ring, long since dismantled and its engineering drawings lost, would have been appropriate both for measuring the electron EDM and for serving as an inexpensive prototype for the arguably more promising, but 10 times more expensive, proton EDM measurement. Today it is cheaper yet to "resurrect" the electron analog ring by simulating its performance computationally. This is one purpose of the present paper. Most existing accelerator simulation codes cannot be used for this purpose because they implicitly assume magnetic bending. The new ual/eteapot code, described in detail in an accompanying paper, has been developed for modeling storage ring performance, including spin evolution, in electric rings. Other goals of the paper are to illustrate its use, compare its predictions with the old observations, and describe new expectations concerning spin evolution and code performance.
To set up some of these calculations has required a kind of "archeological physics" to reconstitute the detailed electron analog lattice design from a 1991 retrospective report by Plotkin as well as unpublished notes of Courant describing machine studies performed in 1954-1955. This paper describes the practical application of the eteapot code and provides sample results, with emphasis on emulating lattice optics in the AGS analog ring for comparison with the historical machine studies and to predict the electron spin evolution they would have measured if they had polarized electrons and electron polarimetry. Of greater present day interest is the performance to be expected for a proton storage ring experiment. To exhibit the eteapot code performance and confirm its symplecticity, results are also given for 30 million turn proton spin tracking in an all-electric lattice that would be appropriate for a present day measurement of the proton EDM. The accompanying paper "Symplectic orbit and spin tracking code for all-electric storage rings" documents in detail the theoretical formulation implemented in eteapot, which is a new module in the Unified Accelerator Libraries (ual) environment.

  17. Multiflavor string-net models

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Hung

    2017-05-01

    We generalize the string-net construction to multiple flavors of strings, each of which is labeled by the elements of an Abelian group Gi. The same flavor of strings can branch, while different flavors of strings can cross one another and thus they form intersecting string nets. We systematically construct the exactly soluble lattice Hamiltonians and the ground-state wave functions for the intersecting string-net condensed phases. We analyze the braiding statistics of the low-energy quasiparticle excitations and find that our model can realize all the topological phases as the string-net model with group G =∏iGi . In this respect, our construction provides various ways of building lattice models which realize topological order G , corresponding to different partitions of G and thus different flavors of string nets. In fact, our construction concretely demonstrates the Künneth formula by constructing various lattice models with the same topological order. As an example, we construct the G =Z2×Z2×Z2 string-net model which realizes a non-Abelian topological phase by properly intersecting three copies of toric codes.

  18. Visualising higher order Brillouin zones with applications

    NASA Astrophysics Data System (ADS)

    Andrew, R. C.; Salagaram, T.; Chetty, N.

    2017-05-01

    A key concept in material science is the relationship between the Bravais lattice, the reciprocal lattice and the resulting Brillouin zones (BZ). These zones are often complicated shapes that are hard to construct and visualise without the use of sophisticated software, even by professional scientists. We have used a simple sorting algorithm to construct BZ of any order for a chosen Bravais lattice that is easy to implement in any scientific programming language. The resulting zones can then be visualised using freely available plotting software. This method has pedagogical value for upper-level undergraduate students since, along with other computational methods, it can be used to illustrate how constant-energy surfaces combine with these zones to create van Hove singularities in the density of states. In this paper we apply our algorithm along with the empirical pseudopotential method and the 2D equivalent of the tetrahedron method to show how they can be used in a simple software project to investigate this interaction for a 2D crystal. This project not only enhances students’ fundamental understanding of the principles involved but also improves transferable coding skills.
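The sorting algorithm described above reduces to a standard counting rule: a point lies in the n-th Brillouin zone exactly when the origin is the n-th closest reciprocal lattice point to it. A minimal sketch for a 2D square lattice (unit reciprocal basis vectors assumed for illustration):

```python
import numpy as np

# Brillouin-zone order by distance sorting: a point r lies in the n-th
# zone when the origin (Gamma) is the n-th closest reciprocal lattice
# point to r. Shown for a 2D square lattice with unit basis vectors.
def bz_order(r, max_index=4):
    shifts = np.array([(i, j) for i in range(-max_index, max_index + 1)
                              for j in range(-max_index, max_index + 1)], float)
    d = np.linalg.norm(shifts - np.asarray(r, float), axis=1)
    order = np.argsort(d, kind="stable")
    # Rank of the origin among all reciprocal lattice points, 1-indexed.
    origin_rank = np.flatnonzero((shifts[order] == 0).all(axis=1))[0]
    return int(origin_rank) + 1

print(bz_order([0.1, 0.0]))    # 1: deep inside the first zone
print(bz_order([0.6, 0.0]))    # 2: just past the first zone boundary
```

Evaluating `bz_order` on a fine grid of points and coloring by zone number reproduces the familiar nested BZ pictures; `max_index` must be large enough that all relevant lattice points are included.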

  19. a Linux PC Cluster for Lattice QCD with Exact Chiral Symmetry

    NASA Astrophysics Data System (ADS)

    Chiu, Ting-Wai; Hsieh, Tung-Han; Huang, Chao-Hsi; Huang, Tsung-Ren

A computational system for lattice QCD with overlap Dirac quarks is described. The platform is a home-made Linux PC cluster, built with off-the-shelf components. At present the system consists of 64 nodes, each with one Pentium 4 processor (1.6/2.0/2.5 GHz), one Gbyte of PC800/1066 RDRAM, one 40/80/120 Gbyte hard disk, and a network card. The computationally intensive parts of our program are written in SSE2 code. The speed of our system is estimated to be 70 Gflops, and its price/performance ratio is better than $1.0/Mflops for 64-bit (double precision) computations in quenched QCD. We discuss how to optimize its hardware and software for computing propagators of overlap Dirac quarks.

  20. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p3DCC (1 )≃1.9 % and p3DCC (2 )≃27.6 % . We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  1. Majorana fermion surface code for universal quantum computation

    DOE PAGES

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-12-10

In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  2. WOLF: a computer code package for the calculation of ion beam trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogel, D.L.

    1985-10-01

The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram, PISA, forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results.

  3. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
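The node-based design described above, a bunch of interacting particles pushed through an ordered list of lattice nodes, each representing an element, effect, or diagnostic, can be sketched as follows. The node classes and the thin-lens physics are illustrative, not ORBIT's actual C++ classes:

```python
import numpy as np

# Sketch of node-list transport: each node applies its effect in turn to a
# bunch stored as an (N, 2) array of transverse coordinates (x, x').
class DriftNode:
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        bunch[:, 0] += self.length * bunch[:, 1]   # x advances by L * x'

class QuadNode:
    """Thin-lens focusing kick (illustrative magnetic element)."""
    def __init__(self, strength):
        self.strength = strength
    def track(self, bunch):
        bunch[:, 1] -= self.strength * bunch[:, 0]

class DiagnosticNode:
    def track(self, bunch):
        print(f"rms x = {bunch[:, 0].std():.4e}")

lattice = [DriftNode(1.0), QuadNode(0.5), DriftNode(1.0), DiagnosticNode()]
rng = np.random.default_rng(1)
bunch = rng.standard_normal((1000, 2)) * [1e-3, 1e-4]
for node in lattice:
    node.track(bunch)
```

The strength of this architecture is extensibility: space charge, apertures, or injection foils are just additional node types that share the same `track` interface.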

  4. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  5. Optimized aerodynamic design process for subsonic transport wing fitted with winglets. [wind tunnel model

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.

    1979-01-01

    The aerodynamic design of a wind-tunnel model of a wing representative of that of a subsonic jet transport aircraft, fitted with winglets, was performed using two recently developed optimal wing-design computer programs. Both potential flow codes use a vortex lattice representation of the near-field of the aerodynamic surfaces for determination of the required mean camber surfaces for minimum induced drag, and both codes use far-field induced drag minimization procedures to obtain the required spanloads. One code uses a discrete vortex wake model for this far-field drag computation, while the second uses a 2-D advanced panel wake model. Wing camber shapes for the two codes are very similar, but the resulting winglet camber shapes differ widely. Design techniques and considerations for these two wind-tunnel models are detailed, including a description of the necessary modifications of the design geometry to format it for use by a numerically controlled machine for the actual model construction.

  6. Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography

    NASA Astrophysics Data System (ADS)

    Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph

    2014-03-01

In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the fine organ structures necessary for assessing imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis for investigating, by means of simulations, the relationships between patient dose and image quality for various imaging parameters, and for developing methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified to perform dual-lattice transport. Results are presented for a thorax examination.

  7. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  8. Karma eaters: the politics of food and fat in women's land communities in the United States.

    PubMed

    Luis, Keridwen N

    2012-01-01

Why is thinness so important among women who have largely rejected mainstream definitions of femininity? The idea of health has great cultural power and has come to symbolize not simply bodily but also spiritual, social, and moral well-being. These ideas permeate U.S. culture, and in women's land communities, the virtue of hunger and the morality of health take on differently inflected but no less potent meanings. Ironically, in a context where women reject many gender restrictions--restrictions increasingly, as Sandra Bartky notes, focused on the female body--the importance of the thin (and thus properly feminine) body persists through the symbolism of health and virtue.

  9. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surfaces of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  10. Modified Laser and Thermos cell calculations on microcomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.

    1987-01-01

In the course of designing and operating nuclear reactors, many fuel pin cell calculations are required to obtain homogenized cell cross sections as a function of burnup. In the interest of convenience and cost, it would be very desirable to be able to make such calculations on microcomputers. In addition, such a microcomputer code would be very helpful for educational course work in reactor computations. To establish the feasibility of making detailed cell calculations on a microcomputer, a mainframe cell code was compiled and run on a microcomputer. The computer code Laser, originally written in Fortran IV for the IBM-7090 class of mainframe computers, is a cylindrical, one-dimensional, multigroup lattice cell program that includes burnup. It is based on the MUFT code for epithermal and fast group calculations, and Thermos for the thermal calculations. There are 50 fast and epithermal groups and 35 thermal groups. Resonances are calculated assuming a homogeneous system and then corrected for self-shielding, Dancoff, and Doppler by self-shielding factors. The Laser code was converted to run on a microcomputer. In addition, the Thermos portion of Laser was extracted and compiled separately to provide a stand-alone thermal code.

  11. Determination of recombination radius in Si for binary collision approximation codes

    DOE PAGES

    Vizkelethy, Gyorgy; Foiles, Stephen M.

    2015-09-11

Displacement damage caused by ions or neutrons in microelectronic devices can have a significant effect on the performance of these devices. Therefore, it is important to predict precisely not only the displacement damage profile but also its magnitude. Analytical methods and binary collision approximation codes working with amorphous targets use the concept of displacement energy, the energy that a lattice atom must receive to create a permanent replacement. It was found that this “displacement energy” is direction dependent; it can range from 12 to 32 eV in silicon. Obviously, this model fails in BCA codes that work with crystalline targets, such as Marlowe. Marlowe does not use displacement energy; instead, it uses only the lattice binding energy and then pairs interstitial atoms with vacancies. Based on the configuration of the Frenkel pairs, it classifies them as close, near, or distant pairs, and considers the distant pairs to be permanent replacements. Unfortunately, this separation is an ad hoc assumption, and the results do not agree with molecular dynamics calculations. After irradiation, there is prompt recombination of interstitials and vacancies that lie within a recombination radius of each other. To determine this recombination radius for Marlowe, we compared MD and Marlowe calculations over a range of ion energies in a single-crystal silicon target. The calculations showed that a single recombination radius of ~7.4 Å in Marlowe gives excellent agreement with MD over the range of ion energies.
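The prompt-recombination criterion can be sketched as a pairing pass over the defect lists: any interstitial within the recombination radius of a vacancy annihilates with it. The greedy nearest-pair matching below is an illustrative simplification of Marlowe's actual pairing bookkeeping:

```python
import numpy as np

# Sketch of prompt Frenkel-pair recombination: an interstitial within the
# recombination radius of a vacancy annihilates with its nearest vacancy
# (greedy matching; illustrative, not Marlowe's exact algorithm).
def recombine(interstitials, vacancies, radius=7.4):
    """Return surviving interstitials and vacancies (positions in Angstrom)."""
    interstitials = [np.asarray(p, float) for p in interstitials]
    vacancies = [np.asarray(p, float) for p in vacancies]
    survivors_i = []
    for i_pos in interstitials:
        if vacancies:
            d = [np.linalg.norm(i_pos - v) for v in vacancies]
            k = int(np.argmin(d))
            if d[k] <= radius:
                vacancies.pop(k)      # Frenkel pair annihilates
                continue
        survivors_i.append(i_pos)
    return survivors_i, vacancies

si, sv = recombine([[0, 0, 0], [20, 0, 0]], [[3, 0, 0], [40, 0, 0]])
print(len(si), len(sv))               # 1 1: one close pair recombined
```

Only the distant pair (separation 20 Å > 7.4 Å) survives as a permanent defect, which is the behavior the fitted single radius is meant to reproduce against MD.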

  12. Quantum Engineering of Dynamical Gauge Fields on Optical Lattices

    DTIC Science & Technology

    2016-07-08

opens the door for exciting new research directions, such as quantum simulation of the Schwinger model and of non-Abelian models. (a) Papers...exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian

  13. Lattice Boltzmann Method of Different BGA Orientations on I-Type Dispensing Method

    PubMed Central

    Gan, Z. L.; Ishak, M. H. H.; Abdullah, M. Z.; Khor, Soon Fuat

    2016-01-01

This paper studies the three-dimensional (3D) simulation of fluid flows through the ball grid array (BGA) to replicate the real underfill encapsulation process. The effect of different solder bump arrangements of the BGA on the flow front, pressure and velocity of the fluid is investigated. The flow front, pressure and velocity for different time intervals are determined and analyzed for potential problems relating to solder bump damage. The simulation results from the Lattice Boltzmann Method (LBM) code are validated against experimental findings as well as a conventional Finite Volume Method (FVM) code to ensure a highly accurate simulation setup. Based on the findings, good agreement can be seen between the LBM and FVM simulations as well as the experimental observations. It was shown that only LBM is capable of capturing the micro-void formation. This study also shows an increasing trend in fluid filling time for BGAs with perimeter, middle-empty and full orientations. The perimeter orientation has higher fluid pressure at the middle region of the BGA surface compared to the middle-empty and full orientations. This research sheds new light on highly accurate simulation of the encapsulation process using LBM and helps to further increase the reliability of the packages produced. PMID:27454872

  14. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.
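    The parallel tempering behind these phase-diagram estimates hinges on one formula: the Metropolis probability of exchanging configurations between neighboring inverse temperatures. A minimal sketch (the numerical values are illustrative, not taken from the paper's disordered Ising models):

```python
import math

def swap_accept(beta_a, beta_b, energy_a, energy_b):
    """Replica-exchange acceptance probability for swapping the
    configurations held at inverse temperatures beta_a and beta_b."""
    arg = (beta_a - beta_b) * (energy_a - energy_b)
    return 1.0 if arg >= 0.0 else math.exp(arg)

# cold replica (beta=1.0) stuck above the hot replica's energy: swap is certain
p_certain = swap_accept(1.0, 0.5, -8.0, -12.0)
# energies ordered as expected: swap accepted only with probability exp(-2)
p_partial = swap_accept(1.0, 0.5, -12.0, -8.0)
```

    Sweeping swap attempts over adjacent temperature pairs lets cold replicas escape the local minima that make disorder averages in random-coupling models hard to equilibrate.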

  15. Portable multi-node LQCD Monte Carlo simulations using OpenACC

    NASA Astrophysics Data System (ADS)

    Bonati, Claudio; Calore, Enrico; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco; Schifano, Sebastiano Fabio; Silvi, Giorgio; Tripiccione, Raffaele

    This paper describes a state-of-the-art parallel Lattice QCD Monte Carlo code for staggered fermions, purposely designed to be portable across different computer architectures, including GPUs and commodity CPUs. Portability is achieved using the OpenACC parallel programming model, used to develop a code that can be compiled for several processor architectures. The paper focuses on parallelization over multiple computing nodes, using OpenACC to manage parallelism within a node and OpenMPI to manage parallelism among nodes. We first discuss the strategies available to maximize performance, then describe selected relevant details of the code, and finally measure the performance and scaling that we are able to achieve. The work focuses mainly on GPUs, which offer a significantly higher level of performance for this application, but also compares with results measured on other processors.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav

    Practitioners of lattice QCD/QFT have been some of the primary pioneer users of state-of-the-art high-performance-computing systems, and contribute towards the stress tests of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.

  17. Designer Infrared Filters using Stacked Metal Lattices

    NASA Technical Reports Server (NTRS)

    Smith, Howard A.; Rebbert, M.; Sternberg, O.

    2003-01-01

    We have designed and fabricated infrared filters for use at wavelengths greater than or equal to 15 microns. Unlike conventional dielectric filters used at shorter wavelengths, ours are made from stacked metal grids, spaced at a very small fraction of the performance wavelengths. The individual lattice layers are gold, the spacers are polyimide, and they are assembled using integrated circuit processing techniques; they resemble some metallic photonic band-gap structures. We simulate the filter performance accurately, including the coupling of the propagating, near-field electromagnetic modes, using computer-aided design codes. We find no anomalous absorption. The geometrical parameters of the grids are easily altered in practice, allowing for the production of tuned filters with predictable, useful transmission characteristics. Although developed for astronomical instrumentation, the filters are broadly applicable in systems across the infrared and terahertz bands.

  18. Critical line of 2+1 flavor QCD

    NASA Astrophysics Data System (ADS)

    Cea, Paolo; Cosmai, Leonardo; Papa, Alessandro

    2014-04-01

    We determine the curvature of the (pseudo)critical line of QCD with nf = 2 + 1 staggered fermions at nonzero temperature and quark density by analytic continuation from imaginary chemical potentials. Monte Carlo simulations are performed by adopting the highly improved staggered quark (HISQ)/tree action discretization, as implemented in the code by the MILC Collaboration, suitably modified to include a nonzero imaginary baryon chemical potential. We work on a line of constant physics, as determined in Ref. [1], adjusting the couplings so as to keep the strange quark mass ms fixed at its physical value, with a light-to-strange mass ratio of ml/ms = 1/20. In the present investigation, we set the chemical potential at the same value for the three quark species, μl=μs≡μ. We explore lattices of different spatial extensions, 16³ × 6 and 24³ × 6, to check for finite size effects, and present results on a 32³ × 8 lattice, to check for finite cutoff effects. We discuss our results for the curvature κ of the (pseudo)critical line at μ = 0, which indicate κ=0.018(4), and compare them with previous lattice determinations by alternative methods and with experimental determinations of the freeze-out curve.
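    The analytic-continuation step reduces to a one-parameter fit: at imaginary chemical potential μ = iμ_I the pseudocritical temperature rises quadratically, Tc(iμ_I)/Tc(0) = 1 + κ(μ_I/Tc(0))², so κ is a slope. A hedged sketch on synthetic, noise-free points (the numbers are illustrative, not the paper's data):

```python
# Extract the curvature kappa from synthetic pseudocritical-temperature
# data at imaginary chemical potential (illustrative values only).
KAPPA_TRUE = 0.018
xs = [0.0, 0.5, 1.0, 1.5, 2.0]            # (mu_I / Tc0)**2
ys = [1.0 + KAPPA_TRUE*x for x in xs]     # Tc(i*mu_I) / Tc0, noise-free

# least squares through the one-parameter model y = 1 + kappa*x
kappa_fit = sum(x*(y - 1.0) for x, y in zip(xs, ys)) / sum(x*x for x in xs)
```

    With real data the same fit is done per lattice spacing, and the continuum value of κ follows from extrapolation.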

  19. Ab-initio study on the absorption spectrum of color change sapphire based on first-principles calculations with considering lattice relaxation-effect

    NASA Astrophysics Data System (ADS)

    Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi

    2018-01-01

    In this study, we performed an investigation of the α-Al2O3:V3+ material, the so-called color change sapphire, based on first-principles calculations without reference to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structures was estimated based on first-principles band structure calculations. We performed geometry optimizations on pure α-Al2O3 and on α-Al2O3 with the impurity V3+ ion using the Cambridge Serial Total Energy Package (CASTEP) code. The effect of energy corrections, such as the configuration dependence correction and the correlation correction, was also investigated in detail. The results revealed that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improved the calculated absorption spectra, and that combining the lattice relaxation effect with the energy corrections further improves the agreement with experiment.

  20. Error-correcting pairs for a public-key cryptosystem

    NASA Astrophysics Data System (ADS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-06-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.

  1. Topological Privacy: Lattice Structures and Information Bubbles for Inference and Obfuscation

    DTIC Science & Technology

    2016-12-19

    AFRL-AFOSR-VA-TR-2017-0036. Topological Privacy. Michael Erdmann, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3815 (me@cs.cmu.edu). Final report, 15-10-2013 to 14-10-2016; released 02/22/2017. Abstract (December 19, 2016): Information has intrinsic geometric and topological structure, arising

  2. High-fidelity gates towards a scalable superconducting quantum processor

    NASA Astrophysics Data System (ADS)

    Chow, Jerry M.; Corcoles, Antonio D.; Gambetta, Jay M.; Rigetti, Chad; Johnson, Blake R.; Smolin, John A.; Merkel, Seth; Poletto, Stefano; Rozen, Jim; Rothwell, Mary Beth; Keefe, George A.; Ketchen, Mark B.; Steffen, Matthias

    2012-02-01

    We experimentally explore the implementation of high-fidelity gates on multiple superconducting qubits coupled to multiple resonators. Having demonstrated all-microwave single and two qubit gates with fidelities > 90% on multi-qubit single-resonator systems, we expand the application to qubits across two resonators and investigate qubit coupling in this circuit. The coupled qubit-resonators are building blocks towards two-dimensional lattice networks for the application of surface code quantum error correction algorithms.

  3. Simulating the heterogeneity in braided channel belt deposits: 1. A geometric-based methodology and code

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.

    2010-04-01

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.
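    The core of the methodology, tiling a domain with units whose lengths come from univariate statistics and attaching log-permeability only at the lowest level, can be sketched in one dimension. All statistics below are invented for illustration; the actual code works in 3D with more hierarchical levels:

```python
import random

random.seed(0)

# Invented illustrative statistics (mean, sd) per hierarchical level.
L1_MEAN, L1_SD = 50.0, 10.0      # level-1 unit lengths (m)
L2_MEAN, L2_SD = 5.0, 1.0        # level-2 unit lengths (m)
LNK_MEAN, LNK_SD = -25.0, 1.5    # ln(permeability) stats, lowest level only
DOMAIN, CELL = 200.0, 0.5        # transect length and lattice cell size (m)

def tile(length, mean, sd):
    """Cut [0, length) into contiguous units with Gaussian lengths."""
    edges, x = [0.0], 0.0
    while x < length:
        x += max(CELL, random.gauss(mean, sd))
        edges.append(min(x, length))
    return edges

cells = []                        # ln k per lattice cell along the transect
level1 = tile(DOMAIN, L1_MEAN, L1_SD)
for a, b in zip(level1, level1[1:]):
    level2 = tile(b - a, L2_MEAN, L2_SD)
    for c, d in zip(level2, level2[1:]):
        lnk = random.gauss(LNK_MEAN, LNK_SD)   # lowest level carries k
        cells.extend([lnk] * max(1, round((d - c) / CELL)))
```

    The resulting piecewise-constant ln k field is the 1D analogue of the cubic-lattice permeability models the code feeds to flow simulators.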

  4. Simulating the Heterogeneity in Braided Channel Belt Deposits: Part 1. A Geometric-Based Methodology and Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.

    A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.

  5. Bioethics for clinicians: 19. Hinduism and Sikhism

    PubMed Central

    Coward, Harold; Sidhu, Tejinder

    2000-01-01

    Hindus and Sikhs constitute important minority communities in Canada. Although their cultural and religious traditions have profound differences, they both traditionally take a duty-based rather than rights-based approach to ethical decision-making. These traditions also share a belief in rebirth, a concept of karma (in which experiences in one life influence experiences in future lives), an emphasis on the value of purity, and a holistic view of the person that affirms the importance of family, culture, environment and the spiritual dimension of experience. Physicians with Hindu and Sikh patients need to be sensitive to and respectful of the diversity of their cultural and religious assumptions regarding human nature, purity, health and illness, life and death, and the status of the individual. PMID:11079065

  6. Bioethics for clinicians: 19. Hinduism and Sikhism.

    PubMed

    Coward, H; Sidhu, T

    2000-10-31

    Hindus and Sikhs constitute important minority communities in Canada. Although their cultural and religious traditions have profound differences, they both traditionally take a duty-based rather than rights-based approach to ethical decision-making. These traditions also share a belief in rebirth, a concept of karma (in which experiences in one life influence experiences in future lives), an emphasis on the value of purity, and a holistic view of the person that affirms the importance of family, culture, environment and the spiritual dimension of experience. Physicians with Hindu and Sikh patients need to be sensitive to and respectful of the diversity of their cultural and religious assumptions regarding human nature, purity, health and illness, life and death, and the status of the individual.

  7. Lattice stability and thermal properties of Fe2VAl and Fe2TiSn Heusler compounds

    NASA Astrophysics Data System (ADS)

    Shastri, Shivprasad S.; Pandey, Sudhir K.

    2018-04-01

    Fe2VAl and Fe2TiSn are two full-Heusler compounds with non-magnetic ground states. They have application as potential thermoelectric materials. Along with first-principles electronic structure calculations, phonon calculation is one of the important tools in condensed matter physics and material science. Phonon calculations are important in understanding mechanical properties, thermal properties and phase transitions of periodic solids. A combination of electronic structure code and phonon calculation code - phonopy is employed in this work. The vibrational spectra, phonon DOS and thermal properties are studied for these two Heusler compounds. Two compounds are found to be dynamically stable with absence of negative frequencies (energy) in the phonon band structure.

  8. Simulations of QCD and QED with C* boundary conditions

    NASA Astrophysics Data System (ADS)

    Hansen, Martin; Lucini, Biagio; Patella, Agostino; Tantalo, Nazario

    2018-03-01

    We present exploratory results from dynamical simulations of QCD in isolation, as well as QCD coupled to QED, with C* boundary conditions. In finite volume, the use of C* boundary conditions allows for a gauge invariant and local formulation of QED without zero modes. In particular we show that the simulations reproduce known results and that masses of charged mesons can be extracted in a completely gauge invariant way. For the simulations we use a modified version of the HiRep code. The primary features of the simulation code are presented and we discuss some details regarding the implementation of C* boundary conditions and the simulated lattice action. Preprint: CP3-Origins-2017-046 DNRF90, CERN-TH-2017-214

  9. Comparison of the thermal neutron scattering treatment in MCNP6 and GEANT4 codes

    NASA Astrophysics Data System (ADS)

    Tran, H. N.; Marchix, A.; Letourneau, A.; Darpentigny, J.; Menelle, A.; Ott, F.; Schwindling, J.; Chauvin, N.

    2018-06-01

    To ensure the reliability of simulation tools, verification and comparison should be made regularly. This paper describes the work performed in order to compare the neutron transport treatment in MCNP6.1 and GEANT4-10.3 in the thermal energy range. This work focuses on the thermal neutron scattering processes for several potential materials which would be involved in the neutron source designs of Compact Accelerator-based Neutron Sources (CANS), such as beryllium metal, beryllium oxide, polyethylene, graphite, para-hydrogen, light water, heavy water, aluminium and iron. Both the thermal scattering law and the free gas model, taken from the evaluated data library ENDF/B-VII, were considered. It was observed that the GEANT4.10.03-patch2 version was not able to properly account for the coherent elastic process occurring in crystal lattices. This bug is treated in this work and the fix should be included in the next release of the code. Cross section sampling and integral tests have been performed for both simulation codes, showing fair agreement between the two codes for most of the materials except for iron and aluminium.

  10. Haag duality for Kitaev’s quantum double model for abelian groups

    NASA Astrophysics Data System (ADS)

    Fiedler, Leander; Naaijkens, Pieter

    2015-11-01

    We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.

  11. Spin wave Feynman diagram vertex computation package

    NASA Astrophysics Data System (ADS)

    Price, Alexander; Javernick, Philip; Datta, Trinanjan

    Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and to have another means of checking the analytical calculations, we have devised a Feynman Diagram Vertex Computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions for a nearest-neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model, where non-collinear terms contribute to the vertex interactions.

  12. Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys

    NASA Astrophysics Data System (ADS)

    Rizwan, M.; Afaq, A.; Aneeza, A.

    2018-05-01

    In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane wave basis, with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there are no imaginary phonons in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, indicating that no phonon band gap exists in the dispersion curves of these alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The Reststrahlen band for Ru2FeSi is found to be smaller than that for Ru2FeGe.

  13. Estimation of Reynolds number for flows around cylinders with lattice Boltzmann methods and artificial neural networks.

    PubMed

    Carrillo, Mauricio; Que, Ulices; González, José A

    2016-12-01

    The present work investigates the application of artificial neural networks (ANNs) to estimate the Reynolds (Re) number for flows around a cylinder. The data required to train the ANN was generated with our own implementation of a lattice Boltzmann method (LBM) code performing simulations of a two-dimensional flow around a cylinder. As results of the simulations, we obtain the velocity field (v[over ⃗]) and the vorticity (∇[over ⃗]×v[over ⃗]) of the fluid for 120 different values of Re measured at different distances from the obstacle and use them to teach the ANN to predict the Re. The results predicted by the networks show good accuracy with errors of less than 4% in all the studied cases. One of the possible applications of this method is the development of an efficient tool to characterize a blocked flowing pipe.
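    As a stand-in for the trained network, the estimation task itself can be illustrated with a toy nearest-neighbour regressor on a single invented wake feature; the paper's ANNs instead consume velocity and vorticity fields sampled at several distances behind the cylinder:

```python
# Toy regressor illustrating "feature -> Re" estimation. The feature model
# (feature = 0.1*sqrt(Re)) is invented for this sketch, not from the paper.
train = [(0.1 * re ** 0.5, re) for re in range(10, 130, 10)]  # (feature, Re)

def estimate_re(feature):
    """Return the Re of the training sample closest in feature space."""
    return min(train, key=lambda fr: abs(fr[0] - feature))[1]
```

    Any monotone feature of the flow field would work in this toy; the value of the ANN in the paper is that it learns such a mapping directly from high-dimensional LBM output.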

  14. A principle of economy predicts the functional architecture of grid cells.

    PubMed

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-09-03

    Grid cells in the brain respond when an animal occupies a periodic lattice of 'grid fields' during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths.
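    Predictions (ii) and (iii) are easy to make concrete: scales form a geometric progression with ratio √e ≈ 1.65 for idealized neurons, and the number of modules follows from the ratio of the largest range to the finest scale. The range and resolution below are illustrative, not the paper's fitted rodent values:

```python
import math

r = math.sqrt(math.e)                    # optimal adjacent-scale ratio
L, ell = 10.0, 0.1                       # illustrative range and finest scale
modules = math.log(L/ell) / math.log(r)  # modules needed to span the hierarchy
```

    Note that √e ≈ 1.6487 lies inside the 1.4-1.7 band quoted for realistic neurons.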

  15. Lattice Commissioning Strategy Simulation for the B Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M.; Whittum, D.; Yan, Y.

    2011-08-26

    To prepare for the PEP-II turn-on, we have studied one commissioning strategy with simulated lattice errors. Features such as difference and absolute orbit analysis and correction are discussed. To prepare for the commissioning of the PEP-II injection line and high energy ring (HER), we have developed a system for on-line orbit analysis by merging two existing codes: LEGO and RESOLVE. With the LEGO-RESOLVE system, we can study the problem of finding quadrupole alignment and beam position monitor (BPM) offset errors with simulated data. We have increased the speed and versatility of the orbit analysis process by using a command file written in a script language designed specifically for RESOLVE. In addition, we have interfaced the LEGO-RESOLVE system to the control system of the B-Factory. In this paper, we describe the online analysis features of the LEGO-RESOLVE system and present examples of practical applications.

  16. Hadron spectrum of quenched QCD on a 32{sup 3} {times} 64 lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Sinclair, D.K.

    1992-10-01

    Preliminary results from a hadron spectrum calculation of quenched Quantum Chromodynamics on a 32{sup 3} {times} 64 lattice at {beta} = 6.5 are reported. The hadron spectrum calculation is done with staggered quarks of masses m{sub q}a = 0.001, 0.005 and 0.0025. We use two different sources in order to be able to extract the {Delta} mass in addition to the usual local light hadron masses. The numerical simulation is executed on the Intel Touchstone Delta computer. The peak speed of the Delta for a 16 {times} 32 mesh configuration is 41 Gflops at 32-bit precision. The sustained speed for our updating code is 9.5 Gflops. A multihit Metropolis algorithm combined with an over-relaxation method is used in the updating, and the conjugate gradient method is employed for Dirac matrix inversion. Configurations are stored every 1000 sweeps.
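    The conjugate gradient inversion named in the record is a short algorithm; lattice codes apply the same iteration matrix-free to the positive-definite combination M†M of the Dirac operator. A minimal dense sketch on a small symmetric positive-definite system (the 2x2 test matrix is arbitrary):

```python
# Plain conjugate gradient for A x = b with A symmetric positive definite.
def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0]*n
    r = b[:]                       # residual r = b - A x  (x starts at 0)
    p = r[:]
    rs = sum(ri*ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j]*p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi*api for pi, api in zip(p, Ap))
        x = [xi + alpha*pi for xi, pi in zip(x, p)]
        r = [ri - alpha*api for ri, api in zip(r, Ap)]
        rs_new = sum(ri*ri for ri in r)
        if rs_new < tol*tol:       # converged: residual norm below tol
            break
        p = [ri + (rs_new/rs)*pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]       # small SPD test system
b = [1.0, 2.0]
x = cg(A, b)                       # exact solution is (1/11, 7/11)
```

    On a lattice the matrix is never stored: `Ap` is replaced by an application of the discretized Dirac operator to a fermion field.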

  18. Fractonic line excitations: An inroad from three-dimensional elasticity theory

    NASA Astrophysics Data System (ADS)

    Pai, Shriya; Pretko, Michael

    2018-06-01

    We demonstrate the existence of a fundamentally new type of excitation, fractonic lines, which are linelike excitations with the restricted mobility properties of fractons. These excitations, described using an amalgamation of higher-form gauge theories with symmetric tensor gauge theories, see direct physical realization as the topological lattice defects of ordinary three-dimensional quantum crystals. Starting with the more familiar elasticity theory, we show how the theory maps onto a rank-4 tensor gauge theory, with phonons corresponding to gapless gauge modes and disclination defects corresponding to linelike charges. We derive flux conservation laws which lock these linelike excitations in place, analogous to the higher-moment charge conservation laws of fracton theories. This way of encoding the mobility restrictions of lattice defects could shed light on melting transitions in three dimensions. This new type of extended object may also be a useful tool in the search for improved quantum error-correcting codes in three dimensions.

  19. Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.

  20. Efficient Cache use for Stencil Operations on Structured Discretization Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.

    2001-01-01

    We derive tight bounds on the cache misses for the evaluation of explicit stencil operators on structured grids. Our lower bound is based on the isoperimetric property of the discrete octahedron. Our upper bound is based on the good surface-to-volume ratio of a parallelepiped spanned by a reduced basis of the interference lattice of a grid. Measurements show that our algorithm typically reduces the number of cache misses by a factor of three, relative to compiler-optimized code. We show that stencil calculations on grids whose interference lattice has a short vector feature abnormally high numbers of cache misses. We call such grids unfavorable and suggest avoiding them in computations by appropriate padding. By direct measurements on a MIPS R10000 processor we show a good correlation between abnormally high numbers of cache misses and unfavorable three-dimensional grids.
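    The suggested padding can be illustrated with a deliberately crude direct-mapped cache model: when the row stride in bytes is a multiple of the cache's set span, vertically adjacent stencil points evict each other, and the smallest non-conflicting row width fixes it. The cache geometry below is illustrative:

```python
# Crude direct-mapped model: NSETS cache sets of LINE-byte lines. Rows of a
# 2D double array collide when their byte stride is a multiple of the set
# span, so vertical stencil neighbours map to the same sets.
LINE, NSETS, DBL = 64, 512, 8           # illustrative cache geometry

def row_conflict(nx):
    """True if rows of an nx-wide double array collide in the cache."""
    return (nx * DBL) % (NSETS * LINE) == 0

def pad(nx):
    """Smallest padded width >= nx whose rows do not collide."""
    while row_conflict(nx):
        nx += 1
    return nx
```

    A power-of-two width such as 4096 collides in this model, while padding by a single column already breaks the alignment; real interference-lattice analysis generalizes this to near-multiples and to 3D.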

  1. Three-Dimensional Unsteady Separation at Low Reynolds Numbers

    DTIC Science & Technology

    1990-07-01

    novel, robust adaptive-grid technique for incompressible flow (Shen & Reed 1990a "Shepard's Interpolation for Solution-Adaptive Methods" submitted to...3-D adaptive-grid schemes developed for flat plate for full, unsteady, incompressible Navier-Stokes. 4. 2-D and 3-D unsteady, vortex-lattice code...perforated to tailor suction through wall. Honeycomb and contraction guide flow uniformly across [garbled figure caption: revolving-disc seals]

  2. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-MOX, R2-UO2 and MORGANE/R configurations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Klann, R. T.; Nuclear Engineering Division

    2007-08-03

    An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R2-UO2 and MORGANE/R core configurations was completed. The calculation model was generated using the lattice physics code DRAGON. In addition, an initial comparison of calculated values to experimental measurements was performed based on preliminary results for the R1-MOX configuration.

  3. Study of Linear and Nonlinear Waves in Plasma Crystals Using the Box_Tree Code

    NASA Astrophysics Data System (ADS)

    Qiao, K.; Hyde, T.; Barge, L.

    Dusty plasma systems play an important role in both astrophysical and planetary environments (protostellar clouds, planetary ring systems and magnetospheres, cometary environments) and laboratory settings (plasma processing or nanofabrication). Recent research has focused on defining (both theoretically and experimentally) the different types of wave mode propagation which are possible within plasma crystals. This is an important topic since several of the fundamental quantities for characterizing such crystals can be obtained directly from an analysis of the wave propagation/dispersion. This paper will discuss a numerical model for 2D-monolayer plasma crystals, which was established using a modified Box_Tree code. Different wave modes were examined by adding a time dependent potential to the code designed to simulate a laser radiation perturbation as has been applied in many experiments. Both linear waves (for example, longitudinal and transverse dust lattice waves) and nonlinear waves (solitary waves) are examined. The output data will also be compared with the results of corresponding experiments and discussed.

  4. MC3, Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cawkwell, Marc Jon

    2016-09-09

    The MC3 code is used to perform Monte Carlo simulations in the isothermal-isobaric ensemble (constant number of particles, temperature, and pressure) on molecular crystals. The molecules within the periodic simulation cell are treated as rigid bodies, alleviating the requirement for a complex interatomic potential. Intermolecular interactions are described using generic, atom-centered pair potentials whose parameterization is taken from the literature [D. E. Williams, J. Comput. Chem., 22, 1154 (2001)] and electrostatic interactions arising from atom-centered, fixed, point partial charges. The primary uses of the MC3 code are the computation of i) the temperature and pressure dependence of lattice parameters and thermal expansion coefficients, ii) tensors of elastic constants and compliances via the Parrinello and Rahman’s fluctuation formula [M. Parrinello and A. Rahman, J. Chem. Phys., 76, 2662 (1982)], and iii) the investigation of polymorphic phase transformations. The MC3 code is written in Fortran90 and requires LAPACK and BLAS linear algebra libraries to be linked during compilation. Computationally expensive loops are accelerated using OpenMP.
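
    The isothermal-isobaric Metropolis step that such a code relies on can be sketched as follows. This is the generic textbook acceptance rule, not MC3's implementation, and all names are illustrative; it assumes trial volumes drawn uniformly in V.

```python
import math
import random

# Sketch (generic textbook rule, not MC3's implementation): Metropolis
# acceptance for a volume move in the isothermal-isobaric (NPT) ensemble.
# Assuming trial volumes drawn uniformly in V, a move V_old -> V_new with
# potential-energy change dU under imposed pressure P is accepted with
# probability min(1, exp(-beta*(dU + P*dV) + N*log(V_new/V_old))); the
# N*log(V_new/V_old) term accounts for rescaling the particle coordinates.
def accept_volume_move(dU, P, V_old, V_new, N, beta):
    dV = V_new - V_old
    arg = -beta * (dU + P * dV) + N * math.log(V_new / V_old)
    if arg >= 0.0:
        return True                         # downhill or neutral: always accept
    return random.random() < math.exp(arg)  # uphill: accept with Boltzmann weight

print(accept_volume_move(0.0, 1.0, 100.0, 100.0, 64, 1.0))  # True (no change)
```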

  5. Calculation of wall effects of flow on a perforated wall with a code of surface singularities

    NASA Astrophysics Data System (ADS)

    Piat, J. F.

    1994-07-01

    Simplifying assumptions are inherent in the analytic method previously used for the determination of wall interferences on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for processing any shape of test section with limited areas of porous wall, the characteristic of which can be nonlinear. Calculations of wall effects in the S3MA wind tunnel, whose test section is rectangular (0.78 m x 0.56 m) and fitted with two or four perforated walls, have been performed. Wall porosity factors have been adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).

  6. Beam Dynamics in an Electron Lens with the Warp Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stancari, Giulio; Moens, Vince; Redaelli, Stefano

    2014-07-01

    Electron lenses are a mature technique for beam manipulation in colliders and storage rings. In an electron lens, a pulsed, magnetically confined electron beam with a given current-density profile interacts with the circulating beam to obtain the desired effect. Electron lenses were used in the Fermilab Tevatron collider for beam-beam compensation, for abort-gap clearing, and for halo scraping. They will be used in RHIC at BNL for head-on beam-beam compensation, and their application to the Large Hadron Collider for halo control is under development. At Fermilab, electron lenses will be implemented as lattice elements for nonlinear integrable optics. The design of electron lenses requires tools to calculate the kicks and wakefields experienced by the circulating beam. We use the Warp particle-in-cell code to study generation, transport, and evolution of the electron beam. For the first time, a fully 3-dimensional code is used for this purpose.

  7. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  8. A proposal for self-correcting stabilizer quantum memories in 3 dimensions (or slightly less)

    NASA Astrophysics Data System (ADS)

    Brell, Courtney G.

    2016-01-01

    We propose a family of local CSS stabilizer codes as possible candidates for self-correcting quantum memories in 3D. The construction is inspired by the classical Ising model on a Sierpinski carpet fractal, which acts as a classical self-correcting memory. Our models are naturally defined on fractal subsets of a 4D hypercubic lattice with Hausdorff dimension less than 3. Though this does not imply that these models can be realized with local interactions in R^3, we also discuss this possibility. The X and Z sectors of the code are dual to one another, and we show that there exists a finite temperature phase transition associated with each of these sectors, providing evidence that the system may robustly store quantum information at finite temperature.
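
    The classical fractal behind the construction is easy to generate. The sketch below is illustrative only (the paper itself works with fractal subsets of a 4-D lattice): each refinement of the Sierpinski carpet replaces a filled cell by a 3x3 block with the centre removed, so level n has 8^n cells and the Hausdorff dimension is log 8 / log 3 ≈ 1.89.

```python
import math

# Sketch: the classical Sierpinski carpet that inspires the construction
# (illustrative only; not the paper's 4-D fractal subsets). Each refinement
# replaces a filled cell by a 3x3 block with the centre removed, so level n
# has 8**n cells and the Hausdorff dimension is log 8 / log 3.
def carpet_cells(level):
    cells = {(0, 0)}
    for _ in range(level):
        cells = {(3 * x + dx, 3 * y + dy)
                 for (x, y) in cells
                 for dx in range(3) for dy in range(3)
                 if (dx, dy) != (1, 1)}
    return cells

dim = math.log(8) / math.log(3)
print(len(carpet_cells(2)), round(dim, 4))  # 64 1.8928
```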

  9. User Manual for the PROTEUS Mesh Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Shemon, Emily R.

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  10. Pion Production from 5-15 GeV Beam for the Neutrino Factory Front-End Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, Gersende

    2010-03-30

    For the neutrino factory front-end study, the production of pions from a proton beam of 5-8 and 14 GeV kinetic energy on a Hg jet target has been simulated. The pion yields for two versions of the MARS15 code and two different field configurations have been compared. The particles have also been tracked from the target position down to the end of the cooling channel using the ICOOL code and the neutrino factory baseline lattice. The momentum-angle region of pions producing muons that survived until the end of the cooling channel has been compared with the region covered by HARP data and the number of pions/muons as a function of the incoming beam energy is also reported.

  11. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  12. Beam-dynamics driven design of the LHeC energy-recovery linac

    NASA Astrophysics Data System (ADS)

    Pellegrini, Dario; Latina, Andrea; Schulte, Daniel; Bogacz, S. Alex

    2015-12-01

    The LHeC is envisioned as a natural upgrade of the LHC that aims at delivering an electron beam for collisions with the existing hadronic beams. The current baseline design for the electron facility consists of a multipass superconducting energy-recovery linac (ERL) operating in a continuous wave mode. The unprecedentedly high energy of the multipass ERL combined with a stringent emittance dilution budget poses new challenges for the beam optics. Here, we investigate the performance of a novel arc architecture based on a flexible momentum compaction lattice that mitigates the effects of synchrotron radiation while containing the bunch lengthening. Extensive beam-dynamics investigations have been performed with placet2, a recently developed tracking code for recirculating machines. They include the first end-to-end tracking and a simulation of the machine operation with a continuous beam. This paper briefly describes the Conceptual Design Report lattice, with an emphasis on possible and proposed improvements that emerged from the beam-dynamics studies. The detector bypass section has been integrated in the lattice, and its design choices are presented here. The stable operation of the ERL with a current up to ~150 mA in the linacs has been validated in the presence of single- and multibunch wakefields, synchrotron radiation, and beam-beam effects.

  13. New super-computing facility in RIKEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohta, Shigemi

    1994-12-31

    A new supercomputer, the Fujitsu VPP500/28, was installed in the Institute of Physical and Chemical Research (RIKEN) at the end of March, 1994. It consists of 28 processing elements (PEs) connected by a high-speed crossbar switch. The switch is a combination of GaAs and ECL circuitry with peak band width of 800 Mbyte per second. Each PE consists of a GaAs/ECL vector processor with 1.6 Gflops peak speed and 256 Mbyte SRAM local memory. In addition, there are 8 GByte of DRAM space, two 100 Gbyte RAID disks and a 10 TByte archive based on the SONY File Bank system. The author ran three major benchmarks on this machine: modified LINPACK, lattice QCD and FFT. In the modified LINPACK benchmark, a sustained speed of about 28 Gflops is achieved, by removing the restriction on the size of the matrices. In the lattice QCD benchmark, a sustained speed of about 30 Gflops is achieved for inverting the staggered fermion propagation matrix on a 32^4 lattice. In the FFT benchmark, real data of 32, 128, 512, and 2048 MByte are Fourier-transformed. The sustained speed for each is respectively 21, 21, 20, and 19 Gflops. The numbers are obtained after only a few weeks of coding effort and can be improved further.

  14. Unstable spiral waves and local Euclidean symmetry in a model of cardiac tissue.

    PubMed

    Marcotte, Christopher D; Grigoriev, Roman O

    2015-06-01

    This paper investigates the properties of unstable single-spiral wave solutions arising in the Karma model of two-dimensional cardiac tissue. In particular, we discuss how such solutions can be computed numerically on domains of arbitrary shape and study how their stability, rotational frequency, and spatial drift depend on the size of the domain as well as the position of the spiral core with respect to the boundaries. We also discuss how the breaking of local Euclidean symmetry due to finite size effects as well as the spatial discretization of the model is reflected in the structure and dynamics of spiral waves. This analysis allows identification of a self-sustaining process responsible for maintaining the state of spiral chaos featuring multiple interacting spirals.
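
    The numerical setting can be sketched generically. The code below is an explicit finite-difference integrator for a two-variable reaction-diffusion system with no-flux boundaries; the kinetics are placeholder FitzHugh-Nagumo-style terms, not the Karma model's actual equations, and the grid size and parameters are illustrative only.

```python
# Sketch: explicit finite-difference integration of a generic two-variable
# reaction-diffusion system on a 2-D grid with no-flux boundaries.
# The kinetics below are placeholder FitzHugh-Nagumo-style terms, NOT the
# Karma model's actual equations; grid size and parameters are illustrative.
def step(u, v, dt=0.05, h=1.0, D=1.0, eps=0.02, a=0.5):
    n = len(u)
    un = [row[:] for row in u]
    vn = [row[:] for row in v]
    for i in range(n):
        for j in range(n):
            # 5-point Laplacian; clamping indices implements no-flux boundaries
            lap = (u[max(i - 1, 0)][j] + u[min(i + 1, n - 1)][j] +
                   u[i][max(j - 1, 0)] + u[i][min(j + 1, n - 1)] -
                   4.0 * u[i][j]) / (h * h)
            reaction = u[i][j] * (1.0 - u[i][j]) * (u[i][j] - a) - v[i][j]
            un[i][j] = u[i][j] + dt * (D * lap + reaction)
            vn[i][j] = v[i][j] + dt * eps * (u[i][j] - v[i][j])
    return un, vn

n = 16
u = [[1.0 if i < n // 2 else 0.0 for j in range(n)] for i in range(n)]  # step front
v = [[0.0] * n for _ in range(n)]
for _ in range(10):
    u, v = step(u, v)
print(round(u[0][0], 3))  # the excited region stays near 1 far from the front
```

    The explicit scheme is stable here because dt*4D/h^2 < 1; spiral-wave studies like the one above additionally need much larger domains and longer integration times.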

  15. Adjoint eigenfunctions of temporally recurrent single-spiral solutions in a simple model of atrial fibrillation.

    PubMed

    Marcotte, Christopher D; Grigoriev, Roman O

    2016-09-01

    This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.

  16. Adjoint eigenfunctions of temporally recurrent single-spiral solutions in a simple model of atrial fibrillation

    NASA Astrophysics Data System (ADS)

    Marcotte, Christopher D.; Grigoriev, Roman O.

    2016-09-01

    This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.

  17. Magnetic field of longitudinal gradient bend

    NASA Astrophysics Data System (ADS)

    Aiba, Masamitsu; Böge, Michael; Ehrlichman, Michael; Streun, Andreas

    2018-06-01

    The longitudinal gradient bend is an effective method for reducing the natural emittance in light sources. It is, however, not a common element. We have analyzed its magnetic field and derived a set of formulae. Based on the derivation, we discuss how to model the longitudinal gradient bend in accelerator codes that are used for designing electron storage rings. Strengths of multipole components can also be evaluated from the formulae, and we investigate the impact of higher order multipole components in a very low emittance lattice.

  18. Entanglement renormalization and topological order.

    PubMed

    Aguado, Miguel; Vidal, Guifré

    2008-02-22

    The multiscale entanglement renormalization ansatz (MERA) is argued to provide a natural description for topological states of matter. The case of Kitaev's toric code is analyzed in detail and shown to possess a remarkably simple MERA description leading to distillation of the topological degrees of freedom at the top of the tensor network. Kitaev states on an infinite lattice are also shown to be a fixed point of the renormalization group flow associated with entanglement renormalization. All of these results generalize to arbitrary quantum double models.

  19. Numerically-Based Ducted Propeller Design Using Vortex Lattice Lifting Line Theory

    DTIC Science & Technology

    2008-01-01

    greatly improved data visualization which includes graphic output and three-dimensional renderings. OpenProp was designed to perform two primary ... MATLAB® Code. B.1 Q2half.m: Legendre function of the second kind and positive half order (ref: Handbook of Math Functions, Abramowitz and...134035); Q2half(6)=.0382887, Q2half(8.4)=.0229646, Q2half(10)=.0176449. B.2 Q2Mhalf.m: Legendre function of the second kind and minus half

  20. Comparison/Validation Study of Lattice Boltzmann and Navier Stokes for Various Benchmark Applications: Report 1 in Discrete Nano-Scale Mechanics and Simulations Series

    DTIC Science & Technology

    2014-09-15

    solver, OpenFOAM version 2.1.‡ In particular, the incompressible laminar flow equations (Eq. 6-8) were solved in conjunction with the pressure-implicit...central differencing and upwinding schemes, respectively. Since the OpenFOAM code is inherently transient, steady-state conditions were obtained...collaborative effort between Kitware and Los Alamos National Laboratory. ‡ OpenFOAM is a free, open-source computational fluid dynamics software developed

  1. Lattice QCD simulations using the OpenACC platform

    NASA Astrophysics Data System (ADS)

    Majumdar, Pushan

    2016-10-01

    In this article we will explore the OpenACC platform for programming Graphics Processing Units (GPUs). The OpenACC platform offers a directive-based programming model for GPUs which avoids the detailed data flow control and memory management necessary in a CUDA programming environment. In the OpenACC model, programs can be written in high level languages with OpenMP-like directives. We present some examples of QCD simulation codes using OpenACC and discuss their performance on the Fermi and Kepler GPUs.

  2. ab initio MD simulations of geomaterials with ~1000 atoms

    NASA Astrophysics Data System (ADS)

    Martin, G. B.; Kirtman, B.; Spera, F. J.

    2009-12-01

    In the last two decades, ab initio studies of materials using Density Functional Theory (DFT) have increased exponentially in popularity. DFT codes are now used routinely to simulate properties of geomaterials--mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth’s mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only ~100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it is statistically marginal for duplicating physical properties of the liquid state like transport and structure. In MD simulations in the NVE ensemble, temperature (T) and pressure (P) fluctuations scale as N^(-1/2); small particle number (N) systems are therefore characterized by greater statistical state point location uncertainty than large N systems. Previous studies have used codes such as VASP where CPU time increases with N^2, making calculations with N much greater than 100 impractical. SIESTA (Soler et al. 2002) is a DFT code that enables electronic structure and MD computations on larger systems (N ~ 10^3) by making some approximations, such as localized numerical orbitals, that would be useful in modeling some properties of geomaterials. Here we test the applicability of SIESTA to simulate geosilicates, both hydrous and anhydrous, in the solid and liquid state. We have used SIESTA for lattice calculations of brucite, Mg(OH)2, that compare very well to experiment and calculations using CRYSTAL, another DFT code. Good agreement between more classical DFT calculations and SIESTA is needed to justify study of geosilicates using SIESTA across a range of pressures and temperatures relevant to the Earth’s interior. Thus, it is useful to adjust parameters in SIESTA in accordance with calculations from CRYSTAL as a check on feasibility. Results are reported here that suggest SIESTA may indeed be useful to model silicate liquids at very high T and P.

  3. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)

  4. Comparison of computational results of the SABRE LMFBR pin bundle blockage code with data from well-instrumented out-of-pile test bundles (THORS bundles 3A and 5A)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dearing, J.F.

    The Subchannel Analysis of Blockages in Reactor Elements (SABRE) computer code, developed by the United Kingdom Atomic Energy Authority, is currently the only practical tool available for performing detailed analyses of velocity and temperature fields in the recirculating flow regions downstream of blockages in liquid-metal fast breeder reactor (LMFBR) pin bundles. SABRE is a subchannel analysis code; that is, it accurately represents the complex geometry of nuclear fuel pins arranged on a triangular lattice. The results of SABRE computational models are compared here with temperature data from two out-of-pile 19-pin test bundles from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) Facility at Oak Ridge National Laboratory. One of these bundles has a small central flow blockage (bundle 3A), while the other has a large edge blockage (bundle 5A). Values that give best agreement with experiment for the empirical thermal mixing correlation factor, FMIX, in SABRE are suggested. These values of FMIX are Reynolds-number dependent, however, indicating that the coded turbulent mixing correlation is not appropriate for wire-wrap pin bundles.

  5. Suppression of Baryon Diffusion and Transport in a Baryon Rich Strongly Coupled Quark-Gluon Plasma

    NASA Astrophysics Data System (ADS)

    Rougemont, Romulo; Noronha, Jorge; Noronha-Hostler, Jacquelyn

    2015-11-01

    Five dimensional black hole solutions that describe the QCD crossover transition seen in (2+1)-flavor lattice QCD calculations at zero and nonzero baryon densities are used to obtain predictions for the baryon susceptibility, baryon conductivity, baryon diffusion constant, and thermal conductivity of the strongly coupled quark-gluon plasma in the range of temperatures 130 MeV ≤ T ≤ 300 MeV and baryon chemical potentials 0 ≤ μB ≤ 400 MeV. Diffusive transport is predicted to be suppressed in this region of the QCD phase diagram, which is consistent with the existence of a critical end point at larger baryon densities. We also calculate the fourth-order baryon susceptibility at zero baryon chemical potential and find quantitative agreement with recent lattice results. The baryon transport coefficients computed in this Letter can be readily implemented in state-of-the-art hydrodynamic codes used to investigate the dense QGP currently produced at RHIC's low energy beam scan.

  6. Pressure induced structural transitions in Lead Chalcogenides and its influence on thermoelectric properties

    NASA Astrophysics Data System (ADS)

    Petersen, John; Spinks, Michael; Borges, Pablo; Scolfaro, Luisa

    2012-03-01

    Lead chalcogenides, most notably PbTe and PbSe, have become an active area of research due to their thermoelectric (TE) properties. The high figure of merit (ZT) of these materials has brought much attention to them, due to their ability to convert waste heat into electricity, with a possible application being in engine exhaust. Here, we examine the effects of altering the lattice parameter on total ground state energy and the band gap using first principles calculations performed within Density Functional Theory and the Projector Augmented Wave approach and the Vienna Ab-initio Simulation Package (VASP-PAW) code. Both PbTe and PbSe, in NaCl, orthorhombic, and CsCl structures are considered. It is found that altering the lattice parameter, which is analogous to applying external pressure on the material experimentally, has notable effects on both ground state energy and the band gap. The implications of this behavior in the TE properties of these materials are analyzed.

  7. MHD Turbulence, div B = 0 and Lattice Boltzmann Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Nate; Keating, Brian; Vahala, George; Vahala, Linda

    2006-10-01

    The question of div B = 0 in MHD simulations is a crucial issue. Here we consider lattice Boltzmann simulations for MHD (LB-MHD). One introduces a scalar distribution function for the velocity field and a vector distribution function for the magnetic field. This asymmetry is due to the different symmetries in the tensors arising in the time evolution of these fields. The simple algorithm of streaming and local collisional relaxation is ideally parallelized and vectorized -- leading to the best sustained performance/PE of any code run on the Earth Simulator. By reformulating the BGK collision term, a simple implicit algorithm can be immediately transformed into an explicit algorithm that permits simulations at quite low viscosity and resistivity. However, div B is not an imposed constraint. Currently we are examining new formulations of LB-MHD that impose the div B constraint -- either through an entropic-like formulation or by introducing forcing terms into the momentum equations and permitting simpler forms of relaxation distributions.
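
    The stream-and-collide structure of a lattice Boltzmann code can be sketched in one dimension. This is a generic D1Q3 BGK scheme, not the LB-MHD code described above (which adds a vector distribution for the magnetic field); the weights and the linearized low-Mach equilibrium are standard for this lattice, and all names are illustrative.

```python
# Sketch: a generic one-dimensional D1Q3 lattice Boltzmann scheme with BGK
# relaxation, illustrating the stream-and-collide structure referred to in
# the abstract (the LB-MHD scheme adds a vector distribution for B, omitted
# here). Weights and the linearized equilibrium are standard for D1Q3.
def lb_step(f, tau=0.8):
    n = len(f[0])
    w = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)   # lattice weights
    c = (-1, 0, 1)                           # discrete velocities
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    u = [(f[2][x] - f[0][x]) / rho[x] for x in range(n)]
    # collide: relax each population toward its local (low-Mach) equilibrium
    post = [[f[q][x] + (w[q] * rho[x] * (1.0 + 3.0 * c[q] * u[x]) - f[q][x]) / tau
             for x in range(n)] for q in range(3)]
    # stream: each population moves one site along its velocity (periodic)
    return [[post[q][(x - c[q]) % n] for x in range(n)] for q in range(3)]

n = 8
f = [[1.0 / 6.0] * n, [2.0 / 3.0] * n, [1.0 / 6.0] * n]  # uniform rest state
f[2][0] += 0.01                                          # small perturbation
for _ in range(50):
    f = lb_step(f)
mass = sum(f[q][x] for q in range(3) for x in range(n))
print(round(mass, 9))  # collisions and streaming conserve mass: 8.01
```

    Both the collision (since the weights sum to 1 and the first velocity moment of the weights vanishes) and the streaming step conserve total mass exactly, which is why the scheme is attractive for the conservation-law systems discussed above.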

  8. A principle of economy predicts the functional architecture of grid cells

    PubMed Central

    Wei, Xue-Xin; Prentice, Jason; Balasubramanian, Vijay

    2015-01-01

    Grid cells in the brain respond when an animal occupies a periodic lattice of ‘grid fields’ during navigation. Grids are organized in modules with different periodicity. We propose that the grid system implements a hierarchical code for space that economizes the number of neurons required to encode location with a given resolution across a range equal to the largest period. This theory predicts that (i) grid fields should lie on a triangular lattice, (ii) grid scales should follow a geometric progression, (iii) the ratio between adjacent grid scales should be √e for idealized neurons, and lie between 1.4 and 1.7 for realistic neurons, (iv) the scale ratio should vary modestly within and between animals. These results explain the measured grid structure in rodents. We also predict optimal organization in one and three dimensions, the number of modules, and, with added assumptions, the ratio between grid periods and field widths. DOI: http://dx.doi.org/10.7554/eLife.08362.001 PMID:26335200
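
    Prediction (iii) is easy to check numerically. A small illustrative computation follows; the module-count estimate assumes the idealized scaling implied by the abstract (range/resolution ≈ r^m for m modules of scale ratio r), and the factor 1000 is an arbitrary example.

```python
import math

# Prediction (iii) in numbers (idealized-neuron case): the optimal ratio
# between adjacent grid scales is sqrt(e) ~ 1.6487, inside the 1.4-1.7 band.
ratio = math.sqrt(math.e)
print(round(ratio, 4))  # 1.6487

# Under the idealized scaling assumed here (range/resolution ~ ratio**m),
# covering a factor of 1000 in range over resolution takes about m modules:
m = math.log(1000.0) / math.log(ratio)  # = 2*ln(1000), since ln(sqrt(e)) = 1/2
print(round(m, 1))  # 13.8
```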

  9. Matter formed at the BNL Relativistic Heavy Ion Collider.

    PubMed

    Brown, G E; Gelman, B A; Rho, Mannque

    2006-04-07

    We suggest that the "new form of matter" found just above Tc by the Relativistic Heavy Ion Collider is made up of tightly bound quark-antiquark pairs, essentially 32 chirally restored (more precisely, nearly massless) mesons of the quantum numbers of pi, sigma, rho, and a1. Taking the results of lattice gauge simulations (LGS) for the color Coulomb potential from the work of the Bielefeld group and feeding this into a relativistic two-body code, after modifying the heavy-quark lattice results so as to include the velocity-velocity interaction, all ground-state eigenvalues of the 32 mesons go to zero at Tc just as they do from below Tc as predicted by the vector manifestation of hidden local symmetry. This could explain the rapid rise in entropy up to Tc found in LGS calculations. We argue that how the dynamics work can be understood from the behavior of the hard and soft glue.

  10. Self-dual random-plaquette gauge model and the quantum toric code

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

    We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, p_c = 0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.
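
    Syndrome extraction in the ordinary 2-D toric code, whose 4-D generalization the abstract studies, can be sketched as follows. This is a generic illustration, not the paper's model: qubits sit on the edges of an L x L periodic lattice, and each Z-type plaquette check reports the parity of bit-flip errors on its four boundary edges. All names and the edge-indexing convention are illustrative.

```python
# Sketch (generic 2-D toric code, not the paper's 4-D model): qubits sit on
# the edges of an L x L periodic lattice, two per site (one horizontal, one
# vertical). Each Z-type plaquette check reports the parity of bit-flip (X)
# errors on its four boundary edges; error correction succeeds while the
# error rate stays below the accuracy threshold discussed in the abstract.
def plaquette_syndrome(errors_h, errors_v, L):
    # plaquette (i, j) is bounded by edges h[i][j], h[(i+1) % L][j],
    # v[i][j] and v[i][(j+1) % L]
    return [[errors_h[i][j] ^ errors_h[(i + 1) % L][j] ^
             errors_v[i][j] ^ errors_v[i][(j + 1) % L]
             for j in range(L)] for i in range(L)]

L = 4
h = [[0] * L for _ in range(L)]
v = [[0] * L for _ in range(L)]
v[1][2] = 1                          # one bit-flip error on a vertical edge
syn = plaquette_syndrome(h, v, L)
lit = [(i, j) for i in range(L) for j in range(L) if syn[i][j]]
print(lit)  # one error lights exactly two adjacent checks: [(1, 1), (1, 2)]
```

    Because each edge belongs to exactly two plaquettes, single errors always light check pairs; a decoder then matches lit checks, which is the step whose success probability degrades at the accuracy threshold.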

  11. Features of MCNP6 Relevant to Medical Radiation Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H. Grady III; Goorley, John T.

    2012-08-29

    MCNP (Monte Carlo N-Particle) is a general-purpose Monte Carlo code for simulating the transport of neutrons, photons, electrons, positrons, and more recently other fundamental particles and heavy ions. Over many years MCNP has found a wide range of applications in many different fields, including medical radiation physics. In this presentation we will describe and illustrate a number of significant recently-developed features in the current version of the code, MCNP6, having particular utility for medical physics. Among these are major extensions of the ability to simulate large, complex geometries, improvement in memory requirements and speed for large lattices, introduction of mesh-based isotopic reaction tallies, advances in radiography simulation, expanded variance-reduction capabilities, especially for pulse-height tallies, and a large number of enhancements in photon/electron transport.

  12. MCNP-model for the OAEP Thai Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III

    An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature that allows a flexible and easy rearrangement of the core components and the adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements--all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.

  13. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  14. Lattice Boltzmann Methods to Address Fundamental Boiling and Two-Phase Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uddin, Rizwan

    2012-01-01

    This report presents the progress made during the fourth (no cost extension) year of this three-year grant aimed at the development of a consistent Lattice Boltzmann formulation for boiling and two-phase flows. During the first year, a consistent LBM formulation for the simulation of a two-phase water-steam system was developed. Results of initial model validation in a range of thermo-dynamic conditions typical for Boiling Water Reactors (BWRs) were shown. Progress was made on several fronts during the second year. Most important of these included the simulation of the coalescence of two bubbles including the surface tension effects. Work during the third year focused on the development of a new lattice Boltzmann model, called the artificial interface lattice Boltzmann model (AILB model), for the simulation of two-phase dynamics. The model is based on the principle of free energy minimization and invokes the Gibbs-Duhem equation in the formulation of the non-ideal forcing function. This was reported in detail in the last progress report. Part of the efforts during the last (no-cost extension) year was focused on developing a parallel capability for the 2D as well as for the 3D codes developed in this project. This will be reported in the final report. Here we report the work carried out on testing the AILB model for conditions including the thermal effects. A simplified thermal LB model, based on the thermal energy distribution approach, was developed. The simplifications are made after neglecting the viscous heat dissipation and the work done by pressure in the original thermal energy distribution model. Details of the model are presented here, followed by a discussion of the boundary conditions, and then results for some two-phase thermal problems.

  15. Perceptions and Practices of Illegal Abortion among Urban Young Adults in the Philippines: A Qualitative Study

    PubMed Central

    Gipson, Jessica D.; Hirz, Alanna E.; Avila, Josephine L.

    2015-01-01

    This study draws on in-depth interviews and focus group discussions with young adults in a metropolitan area of the Philippines to examine perceptions and practices of illegal abortion. Study participants indicated that unintended pregnancies are common and may be resolved through eventual acceptance or through self-induced injury or ingestion of substances to terminate the pregnancy. Despite the illegality of abortion and the restricted status of misoprostol, substantial knowledge and use of the drug exists. Discussions mirrored broader controversies associated with abortion in this setting. Abortion was generally thought to invoke gaba (bad karma), yet some noted its acceptability under certain circumstances. This study elucidates the complexities of pregnancy decisionmaking in this restrictive environment and the need for comprehensive and confidential reproductive health services for Filipino young adults. PMID:22292245

  16. Mootrala Karma of Kusha [Imperata cylindrica Beauv.] and Darbha [Desmostachya bipinnata Stapf.] - A comparative study.

    PubMed

    Shah, Niti T; Pandya, Tarulata N; Sharma, Parameshwar P; Patel, Bhupesh R; Acharya, Rabinarayan

    2012-07-01

    Kusha (Imperata cylindrica Beauv.) and Darbha (Desmostachya bipinnata Stapf.) are enlisted among Trinapanchamoola, a well-known diuretic group, and are individually enumerated in the Mootravirechaneeya Dashemani. The article deals with the evaluation and comparison of the individual Mootrala (diuretic) action of the two drugs in healthy volunteers. In this study, 29 healthy volunteers were divided into three groups and administered Darbha Moola Churna, Kusha Moola Churna, or placebo for 14 days. The volunteers were subjected to evaluation of diuretic activity by maintaining daily total input-output charts during the course of the study. The volunteers were advised to consume a minimum of 2 l of water daily. Results show that Darbha and Kusha led to a percentage increase in urine volume compared to the placebo group, but the result was not statistically significant.

  17. Perceptions and practices of illegal abortion among urban young adults in the Philippines: a qualitative study.

    PubMed

    Gipson, Jessica D; Hirz, Alanna E; Avila, Josephine L

    2011-12-01

    This study draws on in-depth interviews and focus group discussions with young adults in a metropolitan area of the Philippines to examine perceptions and practices of illegal abortion. Study participants indicated that unintended pregnancies are common and may be resolved through eventual acceptance or through self-induced injury or ingestion of substances to terminate the pregnancy. Despite the illegality of abortion and the restricted status of misoprostol, substantial knowledge and use of the drug exists. Discussions mirrored broader controversies associated with abortion in this setting. Abortion was generally thought to invoke gaba (bad karma), yet some noted its acceptability under certain circumstances. This study elucidates the complexities of pregnancy decisionmaking in this restrictive environment and the need for comprehensive and confidential reproductive health services for Filipino young adults.

  18. KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2014-09-01

    KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Catalogue identifier: AESZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 49 064 No. of bytes in distributed program, including test data, etc.: 1 575 172 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer that can run a C++ compiler and a Python interpreter. 
Operating system: Tested on Ubuntu 12.4 LTS, CentOS release 5.9, Mac OSX 10.5.8 and Mac OSX 10.8.2, but should run on any system that can have a C++ compiler, MPI and a Python interpreter. Has the code been vectorized or parallelized?: Yes. From one to hundreds of processors depending on the type of input and simulation. RAM: From a few megabytes to several gigabytes depending on input parameters and the size of the system to simulate. Classification: 4.13, 16.13. External routines: KMCLib uses an external Mersenne Twister pseudo random number generator that is included in the code. A Python 2.7 interpreter and a standard C++ runtime library are needed to run the serial version of the code. For running the parallel version an MPI implementation is needed, such as e.g. MPICH from http://www.mpich.org or Open-MPI from http://www.open-mpi.org. SWIG (obtainable from http://www.swig.org/) and CMake (obtainable from http://www.cmake.org/) are needed for building the backend module, Sphinx (obtainable from http://sphinx-doc.org) for building the documentation and CPPUNIT (obtainable from http://sourceforge.net/projects/cppunit/) for building the C++ unit tests. Nature of problem: Atomic scale simulation of slowly evolving dynamics is a great challenge in many areas of computational materials science and catalysis. When the rare-events dynamics of interest is orders of magnitude slower than the typical atomic vibrational frequencies a straight-forward propagation of the equations of motions for the particles in the simulation cannot reach time scales of relevance for modeling the slow dynamics. Solution method: KMCLib provides an implementation of the kinetic Monte Carlo (KMC) method that solves the slow dynamics problem by utilizing the separation of time scales between fast vibrational motion and the slowly evolving rare-events dynamics. 
    Only the latter is treated explicitly and the system is simulated as jumping between fully equilibrated local energy minima on the slow-dynamics potential energy surface. Restrictions: KMCLib implements the lattice KMC method and is as such restricted to geometries that can be expressed on a grid in space. Unusual features: KMCLib has been designed to be easily customized, to allow for user-defined functionality and integration with other codes. The user can define her own on-the-fly rate calculator via a Python API, so that site-specific elementary process rates, or rates depending on long-range interactions or complex geometrical features can easily be included. KMCLib also allows for on-the-fly analysis with user-defined analysis modules. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids. Additional comments: The full documentation of the program is distributed with the code and can also be found at http://www.github.com/leetmaa/KMCLib/manual Running time: From a few seconds to several days depending on the type of simulation and input parameters.
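
    The rejection-free selection loop at the heart of any lattice KMC code of this kind can be sketched in a few lines. The snippet below is a generic illustration of the algorithm in plain Python, not KMCLib's actual API; the toy one-dimensional hop model and its rates are arbitrary assumptions:

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free KMC step: pick a process with probability
    proportional to its rate, then advance the clock by an exponentially
    distributed waiting time dt = -ln(u) / R_total."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

# Toy model: a particle hopping left/right on a periodic 1D lattice.
random.seed(1)
pos, t = 0, 0.0
hop_rates = [1.0, 1.0]   # [left, right]; illustrative values only
for _ in range(10000):
    move, dt = kmc_step(hop_rates)
    pos += -1 if move == 0 else 1
    t += dt
print(pos, t)
```

    With both rates equal to 1, the total rate is 2, so the simulated time after 10000 steps clusters around 5000 in these units.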

  19. Three-Dimensional Wiring for Extensible Quantum Computing: The Quantum Socket

    NASA Astrophysics Data System (ADS)

    Béjanin, J. H.; McConkey, T. G.; Rinehart, J. R.; Earnest, C. T.; McRae, C. R. H.; Shiri, D.; Bateman, J. D.; Rohanizadegan, Y.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.; Mariantoni, M.

    2016-10-01

    Quantum computing architectures are on the verge of scalability, a key requirement for the implementation of a universal quantum computer. The next stage in this quest is the realization of quantum error-correction codes, which will mitigate the impact of faulty quantum information on a quantum computer. Architectures with ten or more quantum bits (qubits) have been realized using trapped ions and superconducting circuits. While these implementations are potentially scalable, true scalability will require systems engineering to combine quantum and classical hardware. One technology demanding imminent efforts is the realization of a suitable wiring method for the control and the measurement of a large number of qubits. In this work, we introduce an interconnect solution for solid-state qubits: the quantum socket. The quantum socket fully exploits the third dimension to connect classical electronics to qubits with higher density and better performance than two-dimensional methods based on wire bonding. The quantum socket is based on spring-mounted microwires—the three-dimensional wires—that push directly on a microfabricated chip, making electrical contact. A small wire cross section (approximately 1 mm), nearly nonmagnetic components, and functionality at low temperatures make the quantum socket ideal for operating solid-state qubits. The wires have a coaxial geometry and operate over a frequency range from dc to 8 GHz, with a contact resistance of approximately 150 mΩ, an impedance mismatch of approximately 10 Ω, and minimal cross talk. As a proof of principle, we fabricate and use a quantum socket to measure high-quality superconducting resonators at a temperature of approximately 10 mK. Quantum error-correction codes such as the surface code will largely benefit from the quantum socket, which will make it possible to address qubits located on a two-dimensional lattice. 
The present implementation of the socket could be readily extended to accommodate a quantum processor with a (10 ×10 )-qubit lattice, which would allow for the realization of a simple quantum memory.

  20. Technical Basis for Peak Reactivity Burnup Credit for BWR Spent Nuclear Fuel in Storage and Transportation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Ade, Brian J; Bowman, Stephen M

    2015-01-01

    Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission have initiated a multiyear project to investigate application of burnup credit for boiling-water reactor (BWR) fuel in storage and transportation casks. This project includes two phases. The first phase (1) investigates applicability of peak reactivity methods currently used in spent fuel pools (SFPs) to storage and transportation systems and (2) evaluates validation of both reactivity (k-eff) calculations and burnup credit nuclide concentrations within these methods. The second phase will focus on extending burnup credit beyond peak reactivity. This paper documents the first phase, including an analysis of lattice design parameters and depletion effects, as well as both validation components. Initial efforts related to extended burnup credit are discussed in a companion paper. Peak reactivity analyses have been used in criticality analyses for licensing of BWR fuel in SFPs over the last 20 years. These analyses typically combine credit for the gadolinium burnable absorber present in the fuel with a modest amount of burnup credit. Gadolinium burnable absorbers are used in BWR assemblies to control core reactivity. The burnable absorber significantly reduces assembly reactivity at beginning of life, potentially leading to significant increases in assembly reactivity for burnups less than 15–20 GWd/MTU. The reactivity of each fuel lattice is dependent on gadolinium loading. The number of gadolinium-bearing fuel pins lowers initial lattice reactivity, but it has a small impact on the burnup and reactivity of the peak. The gadolinium concentration in each pin has a small impact on initial lattice reactivity but a significant effect on the reactivity of the peak and the burnup at which the peak occurs. The importance of the lattice parameters and depletion conditions is primarily determined by their impact on the gadolinium depletion. 
    Criticality code validation for BWR burnup credit at peak reactivity requires a different set of experiments than for pressurized-water reactor burnup credit analysis because of differences in actinide compositions, presence of residual gadolinium absorber, and lower fission product concentrations. A survey of available critical experiments is presented along with a sample criticality code validation and determination of undercoverage penalties for some nuclides. The validation of depleted fuel compositions at peak reactivity presents many challenges which largely result from a lack of radiochemical assay data applicable to BWR fuel in this burnup range. In addition, none of the existing low burnup measurement data include residual gadolinium measurements. An example bias and uncertainty associated with validation of actinide-only fuel compositions is presented.

  1. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task for the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS; these calculations are conducted in 2D at the fuel-assembly level. The macroscopic data can also be calculated by the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare full-core calculations based on two sets of diffusion data obtained from Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based both on the assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR reactor core; the level of difference that results exclusively from the nuclear data selection helps to quantify the inherent uncertainties of such full-core calculations.

  2. Mesoscopic modelling and simulation of soft matter.

    PubMed

    Schiller, Ulf D; Krüger, Timm; Henrich, Oliver

    2017-12-20

    The deformability of soft condensed matter often requires modelling of hydrodynamical aspects to gain quantitative understanding. This, however, requires specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged, such as Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics, sometimes also referred to as stochastic rotation dynamics, and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and community codes for soft matter simulation.
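
    Of the methods surveyed, the lattice-Boltzmann method is the easiest to sketch compactly. Below is a minimal D1Q3 example for pure diffusion with BGK relaxation; the grid size, relaxation time, and initial pulse are illustrative assumptions, not values from the review:

```python
import numpy as np

# Minimal D1Q3 lattice-Boltzmann sketch for pure diffusion (BGK collision).
N, tau, steps = 200, 1.0, 500
w = np.array([2/3, 1/6, 1/6])                    # weights for velocities 0, +1, -1
rho = np.exp(-0.5 * ((np.arange(N) - N / 2) / 5.0) ** 2)  # initial Gaussian pulse
f = w[:, None] * rho                             # start from local equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)                          # local density
    feq = w[:, None] * rho                       # zero-velocity equilibrium
    f += (feq - f) / tau                         # BGK relaxation toward feq
    f[1] = np.roll(f[1], 1)                      # stream velocity +1 (periodic)
    f[2] = np.roll(f[2], -1)                     # stream velocity -1 (periodic)

rho = f.sum(axis=0)
print(f"total mass after {steps} steps: {rho.sum():.6f}")
```

    Collision conserves the local density and streaming only permutes populations, so the total mass is conserved to round-off while the pulse spreads diffusively.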

  3. Monte Carlo technique for very large Ising models

    NASA Astrophysics Data System (ADS)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
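
    The essence of multispin coding is storing one spin per bit of a machine word so that a single bitwise operation acts on many spins at once. The sketch below illustrates only that bit-packing idea on a 64-site periodic chain; it is not Rebbi's full Metropolis update:

```python
# 64 Ising spins of a 1D periodic chain stored one-per-bit in an integer,
# so the bond energies for all sites come from a single XOR.
N = 64
MASK = (1 << N) - 1

def rotate_left(word, k=1):
    """Cyclic left shift of an N-bit word (periodic boundary)."""
    return ((word << k) | (word >> (N - k))) & MASK

def antiparallel_bonds(word):
    """XOR with the rotated word marks broken (antiparallel) bonds;
    the popcount totals them for every site at once."""
    return bin(word ^ rotate_left(word)).count("1")

all_up = MASK                           # ferromagnetic state: no broken bonds
alternating = int("01" * (N // 2), 2)   # Neel-like state: every bond broken
print(antiparallel_bonds(all_up), antiparallel_bonds(alternating))
```

    On a periodic chain the number of broken bonds equals the number of domain walls, which is always even; the two test states give the extreme values 0 and 64.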

  4. Enabling Microscopic Simulators to Perform System Level Tasks: A System-Identification Based, Closure-on-Demand Toolkit for Multiscale Simulation Stability/Bifurcation Analysis, Optimization and Control

    DTIC Science & Technology

    2006-10-01

    The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB, etc.) and traditional, continuum...kinetic Monte Carlo, kMC, equilibrium MC, Lattice-Boltzmann, LB, Brownian Dynamics, BD, or general agent-based, AB) simulators. It also, fortuitously...cond-mat/0310460 at arXiv.org. 27. Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W

  5. Carbon Nanostructure Examined by Lattice Fringe Analysis of High Resolution Transmission Electron Microscopy Images

    NASA Technical Reports Server (NTRS)

    VanderWal, Randy L.; Tomasek, Aaron J.; Street, Kenneth; Thompson, William K.

    2002-01-01

    The dimensions of graphitic layer planes directly affect the reactivity of soot towards oxidation and growth. Quantification of graphitic structure could be used to develop and test correlations between the soot nanostructure and its reactivity. Based upon transmission electron microscopy images, this paper provides a demonstration of the robustness of a fringe image analysis code for determining the level of graphitic structure within nanoscale carbon, i.e. soot. Results, in the form of histograms of graphitic layer plane lengths, are compared to their determination through Raman analysis.

  6. Carbon Nanostructure Examined by Lattice Fringe Analysis of High Resolution Transmission Electron Microscopy Images

    NASA Technical Reports Server (NTRS)

    VanderWal, Randy L.; Tomasek, Aaron J.; Street, Kenneth; Thompson, William K.; Hull, David R.

    2003-01-01

    The dimensions of graphitic layer planes directly affect the reactivity of soot towards oxidation and growth. Quantification of graphitic structure could be used to develop and test correlations between the soot nanostructure and its reactivity. Based upon transmission electron microscopy images, this paper provides a demonstration of the robustness of a fringe image analysis code for determining the level of graphitic structure within nanoscale carbon, i.e., soot. Results, in the form of histograms of graphitic layer plane lengths, are compared to their determination through Raman analysis.

  7. Creating a Simple Single Computational Approach to Modeling Rarefied and Continuum Flow About Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Goldstein, David B.; Varghese, Philip L.

    1997-01-01

    We proposed to create a single computational code incorporating methods that can model both rarefied and continuum flow to enable the efficient simulation of flow about spacecraft and high altitude hypersonic aerospace vehicles. The code was to use a single grid structure that permits a smooth transition between the continuum and rarefied portions of the flow. Developing an appropriate computational boundary between the two regions represented a major challenge. The primary approach chosen involves coupling a four-speed Lattice Boltzmann model for the continuum flow with the DSMC method in the rarefied regime. We also explored the possibility of using a standard finite difference Navier Stokes solver for the continuum flow. With the resulting code we will ultimately investigate three-dimensional plume impingement effects, a subject of critical importance to NASA and related to the work of Drs. Forrest Lumpkin, Steve Fitzgerald and Jay Le Beau at Johnson Space Center. Below is a brief background on the project and a summary of the results as of the end of the grant.

  8. Development of Ultra-Fine Multigroup Cross Section Library of the AMPX/SCALE Code Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeon, Byoung Kyu; Sik Yang, Won; Kim, Kang Seog

    The Consortium for Advanced Simulation of Light Water Reactors Virtual Environment for Reactor Applications (VERA) neutronic simulator MPACT is being developed by Oak Ridge National Laboratory and the University of Michigan for various reactor applications. The MPACT and simplified MPACT 51- and 252-group cross section libraries have been developed for the MPACT neutron transport calculations by using the AMPX and Standardized Computer Analyses for Licensing Evaluations (SCALE) code packages developed at Oak Ridge National Laboratory. It has been noted that the conventional AMPX/SCALE procedure has limited applications for fast-spectrum systems such as boiling water reactor (BWR) fuels with very high void fractions and fast reactor fuels because of its poor accuracy in unresolved and fast energy regions. This lack of accuracy can introduce additional error sources to MPACT calculations, which is already limited by the Bondarenko approach for resolved resonance self-shielding calculation. To enhance the prediction accuracy of MPACT for fast-spectrum reactor analyses, the accuracy of the AMPX/SCALE code packages should be improved first. The purpose of this study is to identify the major problems of the AMPX/SCALE procedure in generating fast-spectrum cross sections and to devise ways to improve the accuracy. For this, various benchmark problems including a typical pressurized water reactor fuel, BWR fuels with various void fractions, and several fast reactor fuels were analyzed using the AMPX 252-group libraries. Isotopic reaction rates were determined by SCALE multigroup (MG) calculations and compared with continuous energy (CE) Monte Carlo calculation results. 
    This reaction rate analysis revealed three main contributors to the observed differences in reactivity and reaction rates: (1) the limitation of the Bondarenko approach in coarse energy group structure, (2) the normalization issue of probability tables, and (3) neglect of the self-shielding effect of resonance-like cross sections at high energy range such as (n,p) cross section of Cl35. The first error source can be eliminated by an ultra-fine group (UFG) structure in which the broad scattering resonances of intermediate-weight nuclides can be represented accurately by a piecewise constant function. A UFG AMPX library was generated with modified probability tables and tested against various benchmark problems. The reactivity and reaction rates determined with the new UFG AMPX library agreed very well with Monte Carlo N-Particle (MCNP) results. To enhance the lattice calculation accuracy without significantly increasing the computational time, performing the UFG lattice calculation in two steps was proposed. In the first step, a UFG slowing-down calculation is performed for the corresponding homogenized composition, and UFG cross sections are collapsed into an intermediate group structure. In the second step, the lattice calculation is performed for the intermediate group level using the condensed group cross sections. A preliminary test showed that the condensed library reproduces the results obtained with the UFG cross section library. This result suggests that the proposed two-step lattice calculation approach is a promising option to enhance the applicability of the AMPX/SCALE system to fast system analysis.
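
    The group condensation used in the two-step approach can be illustrated with the standard reaction-rate-preserving, flux-weighted collapse sigma_G = sum(phi_g * sigma_g) / sum(phi_g) over the fine groups g in each coarse group G. The group structure and numbers below are hypothetical, purely for illustration:

```python
import numpy as np

def collapse(sigma, flux, group_map):
    """Flux-weighted condensation of fine-group cross sections into coarse
    groups, preserving the reaction rate within each coarse group."""
    n_coarse = max(group_map) + 1
    mapping = np.array(group_map)
    sigma_c = np.zeros(n_coarse)
    for G in range(n_coarse):
        sel = mapping == G
        phi = flux[sel]
        sigma_c[G] = np.dot(phi, sigma[sel]) / phi.sum()
    return sigma_c

# Hypothetical 6-fine-group data collapsed into 2 coarse groups.
sigma_fine = np.array([1.0, 2.0, 4.0, 10.0, 20.0, 40.0])
flux_fine  = np.array([3.0, 1.0, 1.0,  2.0,  1.0,  1.0])
group_map  = [0, 0, 0, 1, 1, 1]   # fine-to-coarse assignment (assumption)
print(collapse(sigma_fine, flux_fine, group_map))
```

    For these numbers the collapse gives 9/5 = 1.8 in the first coarse group and 80/4 = 20.0 in the second.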

  9. Viscous investigation of a flapping foil propulsor

    NASA Astrophysics Data System (ADS)

    Posri, Attapol; Phoemsapthawee, Surasak; Thaweewat, Nonthipat

    2018-01-01

    Inspired by how fish propel themselves, a flapping-foil device was invented as an alternative propulsion system for ships and boats. The performance of such a propulsor has been formerly investigated using a potential flow code. The simulation results have shown that the device has high propulsive efficiency over a wide range of operation. However, the potential flow gives good results only when flow separation is not present. In case of high flapping frequency, the flow separation can occur over a short instant due to fluid viscosity and high angle of attack. This may cause a reduction of propulsive efficiency. A commercial CFD code based on the lattice Boltzmann method, XFlow, is then employed in order to investigate the viscous effect on the propulsive performance of the flapping foil. The viscous results agree well with the potential flow results, confirming the high efficiency of the propulsor. As expected, viscous results show lower efficiency in the high flapping frequency zone.

  10. Lattice stabilities, mechanical and thermodynamic properties of Al3Tm and Al3Lu intermetallics under high pressure from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Xu-Dong, Zhang; Wei, Jiang

    2016-02-01

    The effects of high pressure on lattice stability, mechanical and thermodynamic properties of L12 structure Al3Tm and Al3Lu are studied by first-principles calculations within the VASP code. The phonon dispersion curves and density of phonon states are calculated by using the PHONONPY code. Our results agree well with the available experimental and theoretical values. The vibrational properties indicate that Al3Tm and Al3Lu keep their dynamical stabilities in L12 structure up to 100 GPa. The elastic properties and Debye temperatures for Al3Tm and Al3Lu increase with the increase of pressure. The mechanical anisotropic properties are discussed by using anisotropic indices AG, AU, AZ, and the three-dimensional (3D) curved surface of Young’s modulus. The calculated results show that Al3Tm and Al3Lu are both isotropic at 0 GPa and anisotropic under high pressure. In the present work, the sound velocities in different directions for Al3Tm and Al3Lu are also predicted under high pressure. We also calculate the thermodynamic properties and provide the relationships between thermal parameters and temperature/pressure. These results can provide theoretical support for further experimental work and industrial applications. Project supported by the Scientific Technology Plan of the Educational Department of Liaoning Province and Liaoning Innovative Research Team in University, China (Grant No. LT2014004) and the Program for the Young Teacher Cultivation Fund of Shenyang University of Technology, China (Grant No. 005612).
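
    The anisotropy indices mentioned above have standard closed forms for cubic crystals: the Zener index A_Z = 2*C44/(C11 - C12) and the universal index A_U = 5*G_V/G_R + K_V/K_R - 6 (with K_V = K_R for cubic symmetry). A quick sketch using hypothetical elastic constants, not the paper's computed values:

```python
def zener_index(c11, c12, c44):
    """Zener anisotropy A_Z = 2*C44 / (C11 - C12); A_Z = 1 is isotropic."""
    return 2.0 * c44 / (c11 - c12)

def universal_index(c11, c12, c44):
    """Universal anisotropy A_U = 5*G_V/G_R + K_V/K_R - 6 for a cubic
    crystal; K_V = K_R for cubic symmetry, so A_U = 5*G_V/G_R - 5."""
    gv = (c11 - c12 + 3.0 * c44) / 5.0                       # Voigt shear modulus
    gr = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))  # Reuss
    return 5.0 * gv / gr - 5.0

# Hypothetical elastic constants in GPa; here 2*C44 = C11 - C12, i.e. isotropic.
c11, c12, c44 = 180.0, 60.0, 60.0
print(zener_index(c11, c12, c44), universal_index(c11, c12, c44))
```

    Both indices signal isotropy for this choice (A_Z = 1, A_U = 0); any cubic anisotropy makes A_U strictly positive.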

  11. PALP: A Package for Analysing Lattice Polytopes with applications to toric geometry

    NASA Astrophysics Data System (ADS)

    Kreuzer, Maximilian; Skarke, Harald

    2004-02-01

    We describe our package PALP of C programs for calculations with lattice polytopes and applications to toric geometry, which is freely available on the internet. It contains routines for vertex and facet enumeration, computation of incidences and symmetries, as well as completion of the set of lattice points in the convex hull of a given set of points. In addition, there are procedures specialized to reflexive polytopes, such as the enumeration of reflexive subpolytopes, and applications to toric geometry and string theory, like the computation of Hodge data and fibration structures for toric Calabi-Yau varieties. The package is well tested and optimized in speed, as it was used for time-consuming tasks such as the classification of reflexive polyhedra in 4 dimensions and the creation and manipulation of very large lists of 5-dimensional polyhedra. While originally intended for low-dimensional applications, the algorithms work in any dimension, and our key routine for vertex and facet enumeration compares well with existing packages.
    Program summary
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Title of program: PALP
    Catalogue identifier: ADSQ
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSQ
    Computer for which the program is designed: Any computer featuring C
    Computers on which it has been tested: PCs, SGI Origin 2000, IBM RS/6000, COMPAQ GS140
    Operating systems under which the program has been tested: Linux, IRIX, AIX, OSF1
    Programming language used: C
    Memory required to execute with typical data: Negligible for most applications; highly variable for the analysis of large polytopes; no minimum, but strong effects on calculation time for some tasks
    Number of bits in a word: arbitrary
    Number of processors used: 1
    Has the code been vectorized or parallelized?: No
    Number of bytes in distributed program, including test data, etc.: 138 098
    Distribution format: tar gzip file
    Keywords: Lattice polytopes, facet enumeration, reflexive polytopes, toric geometry, Calabi-Yau manifolds, string theory, conformal field theory
    Nature of problem: Certain lattice polytopes called reflexive polytopes afford a combinatorial description of a very large class of Calabi-Yau manifolds in terms of toric geometry. These manifolds play an essential role in compactifications of string theory. While originally designed to handle and classify reflexive polytopes, with particular emphasis on problems relevant to string theory applications [M. Kreuzer and H. Skarke, Rev. Math. Phys. 14 (2002) 343], the package also handles standard questions (facet enumeration and similar problems) about arbitrary lattice polytopes very efficiently.
    Method of solution: Much of the code is straightforward programming, but certain key routines are optimized with respect to calculation time and the handling of large data sets. A double description method (see, e.g., [D. Avis et al., Comput. Geometry 7 (1997) 265]) is used for the facet enumeration problem, lattice basis reduction for extended gcd computations, and a binary database structure for tasks involving large numbers of polytopes, such as classification problems.
    Restrictions on the complexity of the program: The only hard limitation comes from the use of fixed integer arithmetic (32 or 64 bit), allowing for input data (polytope coordinates) of roughly up to 10^9. Other parameters (dimension, numbers of points and vertices, etc.) can be set before compilation.
    Typical running time: Most tasks (typically the analysis of a four-dimensional reflexive polytope) can be performed interactively within milliseconds. The classification of all reflexive polytopes in four dimensions takes several processor years. The facet enumeration problem for higher-dimensional (e.g., 12-20 dimensional) polytopes varies strongly with the dimension and structure of the polytope; here PALP's performance is similar to that of existing packages [Avis et al., Comput. Geometry 7 (1997) 265].
    Unusual features of the program: None
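
    Two of PALP's basic tasks, completing the set of lattice points in a convex hull and recognizing a reflexive polytope, can be illustrated in miniature. This hedged 2D brute-force sketch is in no way PALP's optimized arbitrary-dimension algorithm:

```python
# Miniature, hedged illustration of lattice-point completion in 2D.
# The square with vertices (+-1, +-1) is a reflexive polytope: its only
# interior lattice point is the origin.

def lattice_points(vertices):
    """All integer points inside/on the convex hull of CCW-ordered 2D vertices."""
    xs = [v[0] for v in vertices]; ys = [v[1] for v in vertices]
    pts = []
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            inside = True
            for i in range(len(vertices)):
                (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % len(vertices)]
                # cross product must be >= 0 for points left of each CCW edge
                if (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0) < 0:
                    inside = False
                    break
            if inside:
                pts.append((x, y))
    return pts

def strictly_inside(pt, vertices):
    """True if pt lies strictly inside the CCW polygon (all crosses > 0)."""
    x, y = pt
    for i in range(len(vertices)):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % len(vertices)]
        if (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0) <= 0:
            return False
    return True

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
pts = lattice_points(square)                               # the 3x3 grid
interior = [p for p in pts if strictly_inside(p, square)]  # only the origin
```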

  12. Hippocampal Remapping Is Constrained by Sparseness rather than Capacity

    PubMed Central

    Kammerer, Axel; Leibold, Christian

    2014-01-01

    Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodic entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only a few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570

  13. Religious and Spiritual Dimensions of the Vietnamese Dementia Caregiving Experience

    PubMed Central

    Hinton, Ladson; Tran, Jane NhaUyen; Tran, Cindy; Hinton, Devon

    2010-01-01

    This paper focuses on the role of religion and spirituality in dementia caregiving among Vietnamese refugee families. In-depth qualitative interviews were conducted with nine Vietnamese caregivers of persons with dementia, then tape-recorded, transcribed, and analyzed for emergent themes. Caregivers related their spirituality/religion to three aspects of caregiving: (1) their own suffering, (2) their motivations for providing care, and (3) their understanding of the nature of the illness. Key terms or idioms were used to articulate spiritual/religious dimensions of the caregivers’ experience, which included sacrifice, compassion, karma, blessings, grace and peace of mind. In their narratives, the caregivers often combined multiple strands of different religions and/or spiritualities: Animism, Buddhism, Taoism, Confucianism and Catholicism. Case studies are presented to illustrate the relationship between religion/spirituality and the domains of caregiving. These findings have relevance for psychotherapeutic interventions with ethnically diverse populations. PMID:20930949

  14. Suppression of turbulence by heterogeneities in a cardiac model with fiber rotation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhihui; Steinbock, Oliver

    2017-09-01

    Electrical scroll wave turbulence in human ventricles is associated with ventricular fibrillation and sudden cardiac death. We perform three-dimensional simulations on the basis of the anisotropic Fenton-Karma model and show that macroscopic, insulating heterogeneities (e.g., blood vessels) can cause the spontaneous formation of pinned scroll waves. The wave field of these vortices is periodic, and their frequencies are sufficiently high to push the free, turbulent vortices into the system boundaries where they annihilate. Our study considers cylindrical heterogeneities with radii in the range of 0.1 to 2 cm that extend either in the transmural or a perpendicular direction. Thick cylinders cause the spontaneous formation of multi-armed rotors according to a radius-dependence that is explained in terms of two-dimensional dynamics. For long cylinders, local pinning contacts spread along the heterogeneity by fast and complex self-wrapping.
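
    The Fenton-Karma equations are lengthy, so as a hedged stand-in the sketch below advances the Barkley model, a simpler two-variable excitable medium with qualitatively similar spiral-wave dynamics; all parameters and the grid are illustrative, not the paper's setup:

```python
import numpy as np

# Hedged stand-in: a minimal Barkley two-variable excitable medium, NOT the
# Fenton-Karma model used in the paper, but sharing the same qualitative
# reaction-diffusion structure. All parameters are hypothetical.
a, b, eps, D, dt, dx = 0.75, 0.06, 0.02, 1.0, 0.001, 0.5

def step(u, v):
    # 5-point Laplacian with no-flux (reflecting) boundaries
    up = np.pad(u, 1, mode="edge")
    lap = (up[:-2, 1:-1] + up[2:, 1:-1]
           + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u) / dx**2
    du = u * (1 - u) * (u - (v + b) / a) / eps + D * lap  # fast activator
    dv = u - v                                            # slow inhibitor
    return u + dt * du, v + dt * dv

rng = np.random.default_rng(0)
u = rng.random((32, 32)) * 0.1   # small random perturbation of rest state
v = np.zeros((32, 32))
for _ in range(100):
    u, v = step(u, v)
```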

  15. Spatiotemporal Permutation Entropy as a Measure for Complexity of Cardiac Arrhythmia

    NASA Astrophysics Data System (ADS)

    Schlemmer, Alexander; Berg, Sebastian; Lilienkamp, Thomas; Luther, Stefan; Parlitz, Ulrich

    2018-05-01

    Permutation entropy (PE) is a robust quantity for measuring the complexity of time series. In the cardiac community it is predominantly used in the context of electrocardiogram (ECG) signal analysis for diagnoses and predictions, with a major application found in heart rate variability parameters. In this article we combine spatial and temporal PE to form a spatiotemporal PE that captures both the complexity of spatial structures and temporal complexity. We demonstrate, using two datasets from simulated cardiac arrhythmia, that the spatiotemporal PE (STPE) quantifies complexity, and compare it to phase singularity analysis and spatial PE (SPE). These datasets simulate ventricular fibrillation (VF) on a two-dimensional and a three-dimensional medium using the Fenton-Karma model. We show that SPE and STPE are robust against noise and demonstrate their usefulness for extracting complexity features at different spatial scales.
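
    The ordinal-pattern construction behind PE can be sketched directly; this hedged example computes plain (temporal) permutation entropy for a 1D series, not the paper's spatiotemporal extension:

```python
import math
from itertools import permutations

# Hedged sketch of ordinal-pattern (permutation) entropy for a 1D series.
def permutation_entropy(series, order=3):
    """Normalized Shannon entropy of ordinal patterns of length `order`."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # rank pattern: indices of the window sorted by value
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values() if c)
    return h / math.log(math.factorial(order))  # 0 = predictable, 1 = random

pe_monotone = permutation_entropy(list(range(100)))  # one pattern only -> 0
pe_alt = permutation_entropy([0, 1] * 50)            # two patterns -> log2/log6
```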

  16. Pinned on Karma Rock: whitewater kayaking as religious experience.

    PubMed

    Sanford, A Whitney

    2007-01-01

    This paper argues that whitewater paddling constitutes religious experience, that non-western terms often best describe this experience and that these two facts are related and have much to tell us about the nature of religious experience. That many paddlers articulate their experiences using Asian and/or indigenous religious terms suggests that this language is a form of opposition to existing norms of what constitutes religious experience. So, investigating the sport as an aquatic nature religion provides the opportunity to revisit existing categories. As a "lived religion," whitewater kayaking is a ritual practice of an embodied encounter with the sacred, and the sacred encounter is mediated through the body's performance in the water. This sacred encounter-with its risk and danger-illustrates Rudolph Otto's equation of the sacred with terrifying and unfathomable mystery and provides a counterpoint to norms of North American religiosity and related scholarship.

  17. Dynamical mechanism of atrial fibrillation: A topological approach

    NASA Astrophysics Data System (ADS)

    Marcotte, Christopher D.; Grigoriev, Roman O.

    2017-09-01

    While spiral wave breakup has been implicated in the emergence of atrial fibrillation, its role in maintaining this complex type of cardiac arrhythmia is less clear. We used the Karma model of cardiac excitation to investigate the dynamical mechanisms that sustain atrial fibrillation once it has been established. The results of our numerical study show that spatiotemporally chaotic dynamics in this regime can be described as a dynamical equilibrium between topologically distinct types of transitions that increase or decrease the number of wavelets, in general agreement with the multiple wavelets' hypothesis. Surprisingly, we found that the process of continuous excitation waves breaking up into discontinuous pieces plays no role whatsoever in maintaining spatiotemporal complexity. Instead, this complexity is maintained as a dynamical balance between wave coalescence—a unique, previously unidentified, topological process that increases the number of wavelets—and wave collapse—a different topological process that decreases their number.
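
    A standard diagnostic behind this kind of wavelet counting is the topological charge, the winding number of the activation phase around a grid plaquette. A hedged, synthetic illustration (the phase field is an ideal spiral, not the paper's Karma-model data):

```python
import math

# Hedged illustration of the topological-charge diagnostic for locating
# phase singularities (spiral/wavelet tips) on a grid; synthetic data.
def wrap(d):
    """Wrap a phase difference into (-pi, pi]."""
    while d <= -math.pi: d += 2 * math.pi
    while d > math.pi:   d -= 2 * math.pi
    return d

def plaquette_charge(theta, i, j):
    """Winding number of phase field `theta` around grid plaquette (i, j)."""
    loop = [theta[i][j], theta[i][j + 1], theta[i + 1][j + 1], theta[i + 1][j]]
    total = sum(wrap(loop[(k + 1) % 4] - loop[k]) for k in range(4))
    return round(total / (2 * math.pi))

n = 8
# phase field of an ideal +1 spiral centered between grid nodes
theta = [[math.atan2(i - n / 2 + 0.5, j - n / 2 + 0.5) for j in range(n)]
         for i in range(n)]
charges = [plaquette_charge(theta, i, j)
           for i in range(n - 1) for j in range(n - 1)]
# exactly one plaquette (the one enclosing the core) carries charge +1
```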

  18. Mootrala Karma of Kusha [Imperata cylindrica Beauv.] and Darbha [Desmostachya bipinnata Stapf.] - A comparative study

    PubMed Central

    Shah, Niti T.; Pandya, Tarulata N.; Sharma, Parameshwar P.; Patel, Bhupesh R.; Acharya, Rabinarayan

    2012-01-01

    Kusha (Imperata cylindrica Beauv.) and Darbha (Desmostachya bipinnata Stapf.) are enlisted among Trinapanchamoola, a well-known diuretic group, and are individually enumerated in the Mootravirechaneeya Dashemani. The article deals with the evaluation and comparison of the individual Mootrala (diuretic) action of the two drugs in healthy volunteers. In this study, 29 healthy volunteers were divided into three groups administered Darbha Moola Churna, Kusha Moola Churna, and placebo, respectively, for 14 days. The volunteers were subjected to evaluation of diuretic activity by maintaining daily total input-output charts during the course of the study. The volunteers were advised to consume a minimum of 2 L of water daily. Results show that Darbha and Kusha led to a percentage increase in urine volume as compared to the placebo group, but the result was statistically insignificant. PMID:23723646

  19. First principles study of the ground state properties of Si, Ga, and Ge doped Fe50Al50

    NASA Astrophysics Data System (ADS)

    Pérez, Carlos Ariel Samudio; dos Santos, Antonio Vanderlei

    2018-06-01

    First-principles calculations of the structural, electronic, and associated properties of the Fe50Al50 alloy (B2 phase) doped with s-p elements (Im = Si, Ga, and Ge) are performed as a function of atomic concentration using the Full Potential Linear Augmented Plane Wave (FP-LAPW) method as implemented in the WIEN2k code. The substitution of Al by Im (Si and Ge) atoms (principally at a concentration of 6.25 at%) induces a pronounced redistribution of the electronic charge, leading to a strong Fe-Im interaction with covalent bonding character. At the same time, it decreases the lattice volume (V) while increasing the bulk modulus (B). For the alloys containing Ga, the Fe-Ga interaction is also observed, but the V and B of the alloy are very close to those of the pure Fe-Al alloy. The magnetic moment and hyperfine parameters observed at the lattice sites of the studied alloys also vary, increasing or decreasing relative to Fe50Al50 according to the Im that substitutes for Al.

  20. A model for finite-deformation nonlinear thermomechanical response of single crystal copper under shock conditions

    NASA Astrophysics Data System (ADS)

    Luscher, Darby J.; Bronkhorst, Curt A.; Alleman, Coleman N.; Addessio, Francis L.

    2013-09-01

    A physically consistent framework for combining pressure-volume-temperature equations of state with crystal plasticity models is developed for modeling the response of single crystals and polycrystals under shock conditions. The particular model is developed for copper; thus the approach focuses on crystals of cubic symmetry, although many of the concepts are applicable to crystals of lower symmetry. We employ a multiplicative decomposition of the deformation gradient into isochoric elastic, thermoelastic dilatational, and plastic parts, leading to a definition of an isochoric elastic Green-Lagrange strain. This finite-deformation kinematic decomposition enables a decomposition of the Helmholtz free energy into terms reflecting dilatational thermoelasticity, strain energy due to long-range isochoric elastic deformation of the lattice, and a term reflecting energy stored in short-range elastic lattice deformation due to evolving defect structures. A model for the single-crystal response of copper is implemented, consistent with the framework, in a three-dimensional Lagrangian finite element code. Simulations exhibit favorable agreement with single- and bicrystal experimental data for shock pressures ranging from 3 to 110 GPa.
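
    The kinematic split described above can be made concrete with a small numerical sketch; the matrices below are illustrative, not the paper's copper model:

```python
import numpy as np

# Hedged numerical sketch of the kinematics described above: a multiplicative
# split F = Fe * Ftheta * Fp with Fe and Fp isochoric (det = 1), and the
# Green-Lagrange strain built from the elastic part. Matrices are illustrative.
I = np.eye(3)

Fp = np.array([[1.0, 0.02, 0.0],   # plastic simple shear (det = 1)
               [0.0, 1.0,  0.0],
               [0.0, 0.0,  1.0]])
Ftheta = 1.01 * I                   # thermoelastic dilatation
Fe = np.array([[1.0, 0.001, 0.0],  # isochoric elastic shear (det = 1)
               [0.0, 1.0,   0.0],
               [0.0, 0.0,   1.0]])

F = Fe @ Ftheta @ Fp                # total deformation gradient
E_iso = 0.5 * (Fe.T @ Fe - I)       # isochoric elastic Green-Lagrange strain

# the volume change J = det(F) comes only from the dilatational part
J = np.linalg.det(F)
```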

  1. Criticality calculations of the Very High Temperature reactor Critical Assembly benchmark with Serpent and SCALE/KENO-VI

    DOE PAGES

    Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...

    2015-12-30

    Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature reactor Critical Assembly (VHTRC) experiment were performed as the validation reference for the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross-section library release, the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared against the experimental values and Serpent.

  2. Restoring canonical partition functions from imaginary chemical potential

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.

    2018-03-01

    Using GPGPU techniques and multi-precision arithmetic, we developed a code to study the QCD phase-transition line in the canonical approach. The canonical approach is a powerful tool for investigating the sign problem in lattice QCD. Its central element is the fugacity expansion of the grand canonical partition function, whose coefficients are the canonical partition functions Zn(T). Using various methods we study the properties of Zn(T). As the final step, we perform a cubic-spline fit of the temperature dependence of Zn(T) at fixed n and compute the baryon number susceptibility χB/T² as a function of temperature. From this we compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with mπ/mρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line Tc(μB²) = Tc(1 − κ μB²/Tc²), with κ = −0.0453 ± 0.0099.
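
    The fugacity expansion underlying this approach makes the Zn(T) Fourier coefficients of the grand canonical partition function evaluated at imaginary chemical potential, Zn = (1/2π) ∫ dθ e^(−inθ) Z_GC(iθT). A toy sketch with made-up Zn values (a real calculation would measure Z_GC on the lattice):

```python
import cmath, math

# Hedged toy demonstration of the fugacity expansion behind the canonical
# approach: Z_GC(i*theta) = sum_n Z_n exp(i*n*theta), so the canonical
# coefficients Z_n are Fourier modes of Z_GC at imaginary chemical potential.
# The Z_n values below are made up for illustration.
true_Zn = {0: 1.0, 1: 0.4, -1: 0.4, 2: 0.05, -2: 0.05}

def Z_gc(theta):
    return sum(z * cmath.exp(1j * n * theta) for n, z in true_Zn.items())

def canonical_Zn(n, samples=512):
    """Recover Z_n by discrete Fourier projection over one period."""
    s = sum(Z_gc(2 * math.pi * k / samples)
            * cmath.exp(-1j * n * 2 * math.pi * k / samples)
            for k in range(samples))
    return (s / samples).real

Z1 = canonical_Zn(1)   # recovers 0.4 up to quadrature error
```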

  3. Average intragranular misorientation trends in polycrystalline materials predicted by a viscoplastic self-consistent approach

    DOE PAGES

    Lebensohn, Ricardo A.; Zecevic, Miroslav; Knezevic, Marko; ...

    2015-12-15

    This work presents estimates of average intragranular fluctuations of lattice rotation rates in polycrystalline materials, obtained by means of the viscoplastic self-consistent (VPSC) model. These fluctuations give a tensorial measure of the trend of misorientation developing inside each single-crystal grain representing a polycrystalline aggregate. We first report details of the algorithm implemented in the VPSC code to estimate these fluctuations, which are then validated by comparison with corresponding full-field calculations. Next, we present predictions of average intragranular fluctuations of lattice rotation rates for cubic aggregates, which are rationalized by comparison with experimental evidence on annealing textures of fcc and bcc polycrystals deformed in tension and compression, respectively, as well as with measured intragranular misorientation distributions in a Cu polycrystal deformed in tension. The orientation-dependent and micromechanically based estimates of intragranular misorientations that can be derived from the present implementation are necessary to formulate sound sub-models for the prediction of quantitatively accurate deformation textures, grain fragmentation, and recrystallization textures using the VPSC approach.
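
    The scalar misorientation angle underlying such tensorial fluctuation measures is θ = arccos((tr(RaᵀRb) − 1)/2) for two lattice orientations Ra, Rb (crystal-symmetry reduction omitted here). A minimal sketch with hypothetical rotations:

```python
import numpy as np

# Hedged illustration of the scalar misorientation angle between two lattice
# orientations; crystal-symmetry equivalents are NOT reduced here, which a
# real texture code would do.
def rotation_z(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def misorientation_angle(Ra, Rb):
    """theta = arccos((trace(Ra^T Rb) - 1) / 2)."""
    dR = Ra.T @ Rb
    cos_t = (np.trace(dR) - 1.0) / 2.0
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

Ra = rotation_z(0.10)
Rb = rotation_z(0.25)
theta = misorientation_angle(Ra, Rb)   # 0.15 rad for coaxial rotations
```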

  4. Sₙ analysis of the TRX metal lattices with ENDF/B Version III data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wheeler, F.J.; Pearlstein, S.

    1975-03-01

    Two critical assemblies, designated as thermal-reactor benchmarks TRX-1 and TRX-2 for ENDF/B data testing, were analyzed using the one-dimensional Sₙ-theory code SCAMP. The two assemblies were simple lattices of aluminum-clad, uranium-metal fuel rods in triangular arrays with D₂O as moderator and reflector. The fuel was low-enriched (1.3 percent ²³⁵U), 0.387 inch in diameter, and had an active height of 48 inches. The volume ratio of water to uranium was 2.35 for the TRX-1 lattice and 4.02 for TRX-2. Full-core Sₙ calculations based on Version III data were performed for these assemblies, and the results obtained were compared with the measured values of the multiplication factors, the ratio of epithermal-to-thermal neutron capture in ²³⁸U, the ratio of epithermal-to-thermal fission in ²³⁵U, the ratio of ²³⁸U fission to ²³⁵U fission, and the ratio of capture in ²³⁸U to fission in ²³⁵U. Reaction rates were obtained from a central region of the full-core problems. Multigroup cross sections for the reactor calculation were obtained from Sₙ cell calculations, with resonance self-shielding calculated using the RABBLE treatment. The results of the analyses are generally consistent with results obtained by other investigators.

  5. GPU-accelerated algorithms for many-particle continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo

    2017-06-01

    Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor-series expansion of the evolution operator, and compare its performance with that of algorithms based on the exact diagonalization of the Hamiltonian or a 4th-order Runge-Kutta integration. We show that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion having the additional advantage that its memory allocation does not depend on the precision of the calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. We have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about execution time, and make simulations with many interacting particles on large lattices possible, limited only by the memory available on the device.
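
    The Taylor-series scheme mentioned above accumulates ψ(t) ≈ Σ_k (−iHt)^k/k! ψ(0) using only matrix-vector products. A minimal sketch on a toy tight-binding Hamiltonian (the 30-term truncation and the path-graph example are illustrative choices, not from the paper):

```python
import numpy as np

# Hedged sketch of Taylor-series propagation: psi(t) ~= sum_k (-iHt)^k/k! psi0,
# accumulated with matrix-vector products only (no dense matrix exponential).
def taylor_evolve(H, psi0, t, order=30):
    psi = psi0.astype(complex).copy()
    term = psi0.astype(complex).copy()
    for k in range(1, order + 1):
        term = (-1j * t / k) * (H @ term)   # builds the k-th term iteratively
        psi += term
    return psi

# toy Hamiltonian: 1D tight-binding chain (a CTQW on a path graph)
n = 8
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

psi0 = np.zeros(n); psi0[0] = 1.0
psi_t = taylor_evolve(H, psi0, t=1.0)
norm = np.linalg.norm(psi_t)            # unitary evolution preserves the norm
```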

  6. Structural and electronic properties of GaAs and GaP semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rani, Anita; Kumar, Ranjan

    2015-05-15

    The structural and electronic properties of the zinc-blende phase of the GaAs and GaP compounds are studied using the self-consistent SIESTA code, pseudopotentials, and density functional theory (DFT) in the local density approximation (LDA). The lattice constant, equilibrium volume, cohesive energy per pair, compressibility, and band gap are calculated. The band gaps calculated with DFT using the LDA are smaller than the experimental values. The P-V data fitted to the third-order Birch-Murnaghan equation of state provide the bulk modulus and its pressure derivative. Our estimates of the structural and electronic properties are in agreement with available experimental and theoretical data.
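
    The third-order Birch-Murnaghan energy-volume relation used in such fits can be sketched as follows; the parameter values are illustrative, not the paper's fitted GaAs/GaP results:

```python
# Hedged sketch of the third-order Birch-Murnaghan equation of state used to
# extract the bulk modulus; parameter values below are illustrative only.
def birch_murnaghan_energy(V, E0, V0, B0, B0p):
    """E(V) for the third-order Birch-Murnaghan EOS."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

E0, V0, B0, B0p = -10.0, 40.0, 0.5, 4.5   # eV, A^3, eV/A^3, dimensionless
energies = [birch_murnaghan_energy(V, E0, V0, B0, B0p) for V in range(30, 51)]
# the minimum of E(V) sits at the equilibrium volume V0, where E = E0
```

In practice one fits (E0, V0, B0, B0p) to computed E-V data; here the curve is merely evaluated to show its shape.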

  7. Confirmation of shutdown cooling effects

    NASA Astrophysics Data System (ADS)

    Sato, Kotaro; Tabuchi, Masato; Sugimura, Naoki; Tatsumi, Masahiro

    2015-12-01

    After the Fukushima accidents, all nuclear power plants in Japan gradually stopped operating and have had long periods of shutdown. During these periods, the reactivity of the fuel continues to change significantly, especially for high-burnup UO2 and MOX fuels, owing to radioactive decay. These isotopic changes must be considered precisely to predict neutronics characteristics accurately. In this paper, the shutdown cooling (SDC) effects for UO2 and MOX fuels with unusual operating histories are confirmed with the advanced lattice code AEGIS. The calculation results show that the effects need to be considered even after nuclear power plants return to normal operation.
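
    A hedged illustration of a dominant mechanism behind the MOX shutdown-cooling effect: fissile ²⁴¹Pu (half-life about 14.3 years) decays to the neutron absorber ²⁴¹Am while the plant is offline. This simple decay estimate is only an order-of-magnitude sketch, not the AEGIS calculation:

```python
import math

# Hedged sketch: decay of fissile 241Pu into the absorber 241Am during a
# long shutdown. Ignores 241Am's own decay (half-life ~432 y, much slower).
T_HALF_PU241_Y = 14.3
LAM = math.log(2.0) / T_HALF_PU241_Y      # decay constant, 1/years

def pu241_fraction(years):
    """Fraction of the initial 241Pu inventory remaining after `years`."""
    return math.exp(-LAM * years)

def am241_buildup(years):
    """Fraction of the initial 241Pu converted to 241Am."""
    return 1.0 - pu241_fraction(years)

after_4y = am241_buildup(4.0)   # a sizeable fraction of 241Pu becomes 241Am
```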

  8. Pressure measurements on a thick cambered and twisted 58 deg delta wing at high subsonic speeds

    NASA Technical Reports Server (NTRS)

    Chu, Julio; Lamar, John E.

    1987-01-01

    A pressure experiment at high subsonic speeds was conducted on a cambered and twisted thick delta wing at the design condition (Mach number 0.80), as well as at nearby Mach numbers (0.75 and 0.83), over a range of angles of attack. The effects of twin vertical tails on the wing pressure measurements were also assessed. Comparisons of detailed theoretical and experimental surface pressures and sectional characteristics for the wing alone are presented. The theoretical codes employed are FLO-57, FLO-28, PAN AIR, and the Vortex Lattice Method-Suction Analogy.

  9. Exact reconstruction analysis/synthesis filter banks with time-varying filters

    NASA Technical Reports Server (NTRS)

    Arrowood, J. L., Jr.; Smith, M. J. T.

    1993-01-01

    This paper examines some of the analysis/synthesis issues associated with FIR time-varying filter banks in which the filter bank coefficients are allowed to change in response to the input signal. Several issues are identified as important for realizing performance gains from time-varying filter banks in image coding applications. These issues relate to the behavior of the filters as the transition from one set of filter banks to another occurs. Lattice structure formulations of the time-varying filter bank problem are introduced and discussed in terms of their properties and transition characteristics.
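
    A two-channel paraunitary lattice structure of the kind referred to above can be sketched as a cascade of plane rotations and delays on the polyphase components; inverting the stages in reverse order gives perfect reconstruction up to a delay. The two fixed angles below are arbitrary placeholders (a time-varying bank would switch them between signal segments, which is the transition problem the paper studies):

```python
import numpy as np

# Hedged sketch of a two-stage paraunitary lattice filter bank with a
# perfect-reconstruction check; angles are arbitrary placeholders.
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def delay(x):
    return np.concatenate(([0.0], x[:-1]))

def analysis(x, th0, th1):
    p = np.vstack((x[0::2], x[1::2]))      # polyphase split
    p = rot(th0) @ p
    p[1] = delay(p[1])                     # delay between rotation stages
    return rot(th1) @ p

def synthesis(sub, th0, th1):
    p = rot(th1).T @ sub                   # invert stages in reverse order
    p[0] = delay(p[0])                     # compensating delay, other branch
    p = rot(th0).T @ p
    y = np.empty(2 * p.shape[1])
    y[0::2], y[1::2] = p[0], p[1]          # polyphase merge
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
y = synthesis(analysis(x, 0.3, 1.1), 0.3, 1.1)
# y equals x shifted by 2 samples (one delay per polyphase branch)
```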

  10. Study of Particle Rotation Effect in Gas-Solid Flows using Direct Numerical Simulation with a Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang

    A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with first-order convergence was updated so that it resolves the translational and rotational motion of particles with a second-order convergence rate, and the updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particle surfaces using a new formula for the number of markers. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict the angular velocity of a particle correctly. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems. Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, at both low and moderate particle Reynolds numbers, to compare the simulated results with literature results and to develop a new drag force formula, a new lift force formula, and a new torque formula. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulation results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers and is essentially unchanged as the angle of the rotation axis varies.
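
    The lattice Boltzmann core on which such IB-LBM solvers are built can be sketched in miniature. This hedged example shows only the D2Q9 BGK collision-and-streaming step, with the immersed-boundary forcing of the abstract omitted and all parameters illustrative:

```python
import numpy as np

# Hedged miniature of the lattice Boltzmann core (D2Q9, single-relaxation-time
# BGK); the immersed-boundary coupling of the abstract is omitted.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)              # D2Q9 weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])    # lattice velocities
TAU = 0.8                                             # relaxation time

def equilibrium(rho, u):
    eu = np.einsum('qd,xyd->qxy', E, u)
    uu = np.einsum('xyd,xyd->xy', u, u)
    return W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*uu)

def lbm_step(f):
    rho = f.sum(axis=0)                               # density moment
    u = np.einsum('qd,qxy->xyd', E.astype(float), f) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / TAU           # BGK collision
    for q in range(9):                                # streaming (periodic)
        f[q] = np.roll(f[q], E[q], axis=(0, 1))
    return f

nx = ny = 16
rho0 = np.ones((nx, ny))
u0 = np.zeros((nx, ny, 2))
f = equilibrium(rho0, u0)
for _ in range(20):
    f = lbm_step(f)
mass = f.sum()     # collision and streaming both conserve total mass
```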

  11. Socioeconomic, religious, spiritual and health factors associated with symptoms of common mental disorders: a cross-sectional secondary analysis of data from Bhutan’s Gross National Happiness Study, 2015

    PubMed Central

    Sithey, Gyambo; Li, Mu; Kelly, Patrick J; Clarke, Kelly

    2018-01-01

    Objective Common mental disorders (CMDs) are a major cause of the global burden of disease. Bhutan was the first country in the world to focus on happiness as a state policy; however, little is known about the prevalence and risk factors of CMDs in this setting. We aim to identify socioeconomic, religious, spiritual and health factors associated with symptoms of CMDs. Design and setting We used data from Bhutan’s 2015 Gross National Happiness (GNH) Survey, a multistage, cross-sectional nationwide household survey. Data were analysed using a hierarchical analytical framework and generalised estimating equations. Participants The GNH Survey included 7041 male and female respondents aged 15 years and above. Measures The 12-item General Health Questionnaire was used to measure symptoms of CMDs. We estimated the prevalence of CMDs using a threshold score of ≥12. Results The prevalence of CMDs was 29.3% (95% CI 26.8% to 31.8%). Factors associated with symptoms of CMDs were: older age (65+) (β=1.29, 95% CI 0.57 to 2.00), being female (β=0.70, 95% CI 0.45 to 0.95), being divorced or widowed (β=1.55, 95% CI 1.08 to 2.02), illiteracy (β=0.48, 95% CI 0.21 to 0.74), low income (β=0.37, 95% CI 0.15 to 0.59), being moderately spiritual (β=0.61, 95% CI 0.34 to 0.88) or somewhat or not spiritual (β=0.76, 95% CI 0.28 to 1.23), occasionally considering karma in daily life (β=0.53, 95% CI 0.29 to 0.77) or never considering karma (β=0.80, 95% CI 0.26 to 1.34), having poor self-reported health (β=2.59, 95% CI 2.13 to 3.06) and having a disability (β=1.01, 95% CI 0.63 to 1.40). Conclusions CMDs affect a substantial proportion of the Bhutanese population. Our findings confirm the importance of established socioeconomic risk factors for CMDs, and suggest a potential link between spiritualism and mental health in this setting. PMID:29453295

  12. The Caulobacter crescentus phage phiCbK: genomics of a canonical phage

    PubMed Central

    2012-01-01

    Background: The bacterium Caulobacter crescentus is a popular model for the study of cell cycle regulation and senescence. The large prolate siphophage phiCbK has been an important tool in C. crescentus biology, and has been studied in its own right as a model for viral morphogenesis. Although the system is of considerable interest, little genomic information on phiCbK or its relatives has been available to date. Results: Five novel phiCbK-like C. crescentus bacteriophages, CcrMagneto, CcrSwift, CcrKarma, CcrRogue and CcrColossus, were isolated from the environment. The genomes of phage phiCbK and these five environmental phage isolates were obtained by 454 pyrosequencing. The phiCbK-like phage genomes range in size from 205 kb encoding 318 proteins (phiCbK) to 280 kb encoding 448 proteins (CcrColossus), and were found to contain nonpermuted terminal redundancies of 10 to 17 kb. A novel method of terminal ligation was developed to map genomic termini, which confirmed termini predicted by coverage analysis. This suggests that sequence coverage discontinuities may be usable as predictors of genomic termini in phage genomes. Genomic modules encoding virion morphogenesis, lysis and DNA replication proteins were identified. The phiCbK-like phages were also found to encode a number of intriguing proteins; all contain a clearly T7-like DNA polymerase, and five of the six encode a possible homolog of the C. crescentus cell cycle regulator GcrA, which may allow the phage to alter the host cell’s replicative state. The structural proteome of phage phiCbK was determined, identifying the portal, major and minor capsid proteins, the tail tape measure and possible tail fiber proteins. All six phage genomes are clearly related; phiCbK, CcrMagneto, CcrSwift, CcrKarma and CcrRogue form a group related at the DNA level, while CcrColossus is more diverged but retains significant similarity at the protein level. Conclusions: Due to their lack of any apparent relationship to other described phages, this group is proposed as the founding cohort of a new phage type, the phiCbK-like phages. This work will serve as a foundation for future studies on morphogenesis, infection and phage-host interactions in C. crescentus. PMID:23050599
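    The idea that coverage discontinuities predict genomic termini can be sketched directly: in a terminally redundant genome, reads from both copies of the repeat pile onto one region, so per-base coverage roughly doubles there. The window size and ratio threshold below are illustrative choices, not values from the paper.

```python
def coverage_step(coverage, window=5, min_ratio=1.8):
    """Scan a per-base coverage profile for abrupt jumps: positions
    where mean coverage in the right-hand window exceeds the left-hand
    window by min_ratio (or vice versa) are candidate termini."""
    hits = []
    for i in range(window, len(coverage) - window):
        left = sum(coverage[i - window:i]) / window
        right = sum(coverage[i:i + window]) / window
        if left > 0 and right > 0 and \
                (right / left >= min_ratio or left / right >= min_ratio):
            hits.append(i)
    return hits

# Toy profile: coverage doubles over a simulated terminal repeat.
profile = [50] * 20 + [100] * 20
hits = coverage_step(profile)
```

    On real pyrosequencing data one would smooth the profile first and confirm candidates experimentally, as the authors did with terminal ligation.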

  13. Physicochemical analog for modeling superimposed and coded memories

    NASA Astrophysics Data System (ADS)

    Ensanian, Minas

    1992-07-01

    The mammalian brain is distinguished by a life-time of memories being stored within the same general region of physicochemical space, and having two extraordinary features. First, memories to varying degrees are superimposed, as well as coded. Second, instantaneous recall of past events can often be effected by relatively simple, and seemingly unrelated, sensory cues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue, memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles, can be recalled (determined) by scanning not the surfaces but the `edges' of the objects. The new scanning technique known as electrotopography (ETG) now makes the state space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.

  14. Update and evaluation of decay data for spent nuclear fuel analyses

    NASA Astrophysics Data System (ADS)

    Simeonov, Teodosi; Wemple, Charles

    2017-09-01

    Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data include basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure (internal consistency checks, comparisons to measurements and benchmarks, and code-to-code verification) is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
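    At its core, decay heat production from tabulated decay constants reduces to summing activity times recoverable energy per decay over the inventory. A minimal sketch, with a hypothetical two-isotope inventory (the values are illustrative, not SNF library data):

```python
def decay_heat(inventory):
    """Total decay power P = sum_i N_i * lambda_i * Q_i.

    inventory: list of (atoms, decay_const_per_s, energy_J_per_decay).
    Returns power in watts.
    """
    return sum(n * lam * q for n, lam, q in inventory)

EV = 1.602176634e-19  # J per eV

# Hypothetical inventory: (atoms, lambda in 1/s, Q in J per decay).
inv = [
    (1.0e20, 1.0e-9, 0.5e6 * EV),
    (5.0e19, 2.0e-8, 1.2e6 * EV),
]
power = decay_heat(inv)
```

    A production code like SNF additionally tracks the split between photon, electron and neutron contributions, which this sketch collapses into a single Q value.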

  15. OpenRBC: Redefining the Frontier of Red Blood Cell Simulations at Protein Resolution

    NASA Astrophysics Data System (ADS)

    Tang, Yu-Hang; Lu, Lu; Li, He; Grinberg, Leopold; Sachdeva, Vipin; Evangelinos, Constantinos; Karniadakis, George

    We present a from-scratch development of OpenRBC, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented in silico experiment - simulating the lipid bilayer and cytoskeleton of an entire mammalian red blood cell, modeled by 4 million mesoscopic particles - on a single shared memory node. To achieve this, we invented an adaptive spatial searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of a lattice-free cell list, i.e. the key spatial searching data structure in our code, in O(N) time and space, with cells whose position and shape adapt automatically to the local density and curvature. The code implements NUMA/NUCA-aware OpenMP parallelization and achieves perfect scaling with up to hundreds of hardware threads. The code outperforms a legacy solver by more than 8 times in time-to-solution and more than 20 times in problem size, thus providing a new venue for probing the cytomechanics of red blood cells. This work was supported by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award.
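    For context, the conventional fixed-grid cell list that OpenRBC's Voronoi-adaptive structure improves upon can be sketched in a few lines: points are binned into cubic cells of edge equal to the interaction cutoff, so every neighbour of a point lies in its own cell or one of the 26 adjacent ones. This is the baseline idea only, not the paper's lattice-free algorithm.

```python
from collections import defaultdict
from itertools import product

def build_cell_list(points, cutoff):
    """Bin 3D points into cubic cells of edge `cutoff`."""
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cells[(int(x // cutoff), int(y // cutoff), int(z // cutoff))].append(idx)
    return cells

def neighbours(points, cells, i, cutoff):
    """Indices within `cutoff` of point i, scanning only 27 cells."""
    x, y, z = points[i]
    cx, cy, cz = int(x // cutoff), int(y // cutoff), int(z // cutoff)
    out = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
            if j != i:
                px, py, pz = points[j]
                if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= cutoff ** 2:
                    out.append(j)
    return out

pts = [(0.1, 0.1, 0.1), (0.4, 0.2, 0.1), (3.0, 3.0, 3.0)]
cl = build_cell_list(pts, cutoff=1.0)
```

    For a membrane, most of these cubic cells are empty; replacing them with cells anchored on the particles themselves (the Voronoi idea above) is what restores O(N) space in sparse geometries.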

  16. Square lattice honeycomb reactor for space power and propulsion

    NASA Astrophysics Data System (ADS)

    Gouw, Reza; Anghaie, Samim

    2000-01-01

    The most recent nuclear design study at the Innovative Nuclear Space Power and Propulsion Institute (INSPI) is the Moderated Square-Lattice Honeycomb (M-SLHC) reactor design utilizing a solid solution of ternary carbide fuels. The reactor is fueled with a solid solution of 93% enriched (U,Zr,Nb)C. The square-lattice honeycomb design provides high strength and is amenable to the processing complexities of these ultrahigh temperature fuels. The optimum core configuration requires a balance between high specific impulse and thrust level performance, while maintaining the temperature and strength limits of the fuel. The M-SLHC design is based on a cylindrical core with a critical radius of 37 cm and a critical length of 50 cm. The design utilizes zirconium hydride as the moderator. The fuel subassemblies are designed as cylindrical tubes 12 cm in diameter and 10 cm in length. Five fuel subassemblies are stacked axially to form one complete fuel assembly. These fuel assemblies are then arranged in circles to form two fuel regions: the first consists of six fuel assemblies, and the second of 18. A 10-cm radial beryllium reflector, in addition to a 10-cm top axial beryllium reflector, is used to reduce neutron leakage from the system. To perform the nuclear design analysis of the M-SLHC design, a series of neutron transport and diffusion codes is used. To optimize the system design, five axial regions are specified; in each axial region, temperature and fuel density are varied. The axial and radial power distributions for the system are calculated, as well as the axial and radial flux distributions. Temperature coefficients of the system are also calculated. A water submersion accident scenario is also analyzed for these systems. Results of the nuclear design analysis indicate that a compact core can be designed based on ternary uranium carbide square-lattice honeycomb fuel, which provides a relatively high thrust-to-weight ratio.
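    The quoted core dimensions permit a quick textbook estimate of the geometric buckling that the diffusion codes must match against material buckling at criticality. This is the bare-cylinder formula only; reflector savings from the beryllium reflectors are ignored, so the number is illustrative rather than a result from the study.

```python
import math

def cylindrical_buckling(radius_cm, height_cm):
    """Geometric buckling of a bare finite cylinder:
    B^2 = (2.405 / R)^2 + (pi / H)^2, in cm^-2."""
    return (2.405 / radius_cm) ** 2 + (math.pi / height_cm) ** 2

# Critical dimensions quoted for the M-SLHC core.
b2 = cylindrical_buckling(37.0, 50.0)
```

    A reflected core of the same material would go critical at somewhat smaller dimensions, i.e. the true buckling requirement is lower than this bare-core value.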

  17. A Dynamic Finite Element Method for Simulating the Physics of Faults Systems

    NASA Astrophysics Data System (ADS)

    Saez, E.; Mora, P.; Gross, L.; Weatherley, D.

    2004-12-01

    We introduce a dynamic Finite Element method using a novel high level scripting language to describe the physical equations, boundary conditions and time integration scheme. The library we use is the parallel Finley library: a finite element kernel library, designed for solving large-scale problems. It is incorporated as a differential equation solver into a more general library called escript, based on the scripting language Python. This library has been developed to facilitate the rapid development of 3D parallel codes, and is optimised for the Australian Computational Earth Systems Simulator Major National Research Facility (ACcESS MNRF) supercomputer, a 208 processor SGI Altix with a peak performance of 1.1 TFlops. Using the scripting approach we obtain a parallel FE code able to take advantage of the computational efficiency of the Altix 3700. We consider faults as material discontinuities (the displacement, velocity, and acceleration fields are discontinuous at the fault), with elastic behavior. The stress continuity at the fault is achieved naturally through the expression of the fault interactions in the weak formulation. The elasticity problem is solved explicitly in time, using the Verlet scheme. Finally, we specify a suitable frictional constitutive relation and numerical scheme to simulate fault behaviour. Our model is based on previous work on modelling fault friction and multi-fault systems using lattice solid-like models. We adapt the 2D model for simulating the dynamics of parallel fault systems, previously described for lattice solid-like models, to the Finite Element method. The approach uses a frictional relation along faults that is slip and slip-rate dependent, and the numerical integration approach introduced by Mora and Place in the lattice solid model. In order to illustrate the new Finite Element model, single and multi-fault simulation examples are presented.

  18. Towards the reanalysis of void coefficients measurements at proteus for high conversion light water reactor lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hursin, M.; Koeberl, O.; Perret, G.

    2012-07-01

    High Conversion Light Water Reactors (HCLWRs) allow better usage of fuel resources thanks to a higher breeding ratio than standard LWRs. Their use together with the current fleet of LWRs constitutes a fuel cycle thoroughly studied in Japan and the US today. However, one of the issues related to HCLWRs is their void reactivity coefficient (VRC), which can be positive. Accurate predictions of the void reactivity coefficient in HCLWR conditions and their comparisons with representative experiments are therefore required. In this paper an intercomparison of modern codes and cross-section libraries is performed for a former Benchmark on Void Reactivity Effect in PWRs conducted by the OECD/NEA. It shows an overview of the k-inf values and their associated VRCs obtained for infinite lattice calculations with UO₂ and highly enriched MOX fuel cells. The codes MCNPX2.5, TRIPOLI4.4 and CASMO-5, in conjunction with the libraries ENDF/B-VI.8, ENDF/B-VII.0, JEF-2.2 and JEFF-3.1, are used. A non-negligible spread of results for voided conditions is found for the high-content MOX fuel. The spreads of eigenvalues for the moderated and voided UO₂ fuel are about 200 pcm and 700 pcm, respectively. The standard deviation of the VRCs for the UO₂ fuel is about 0.7%, while that for the MOX fuel is about 13%. This work shows that an appropriate treatment of the unresolved resonance energy range is an important issue for the accurate determination of the void reactivity effect for HCLWRs. A comparison to experimental results is needed to resolve the presented discrepancies. (authors)
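    The VRC quoted in pcm follows from the standard reactivity difference between the voided and moderated states, rho = (k-1)/k for each. A short sketch with illustrative eigenvalues (not benchmark results):

```python
def void_reactivity_pcm(k_moderated, k_voided):
    """Void reactivity effect in pcm.

    rho = (k - 1) / k in each state, so
    delta-rho = rho_void - rho_mod = 1/k_mod - 1/k_void, times 1e5.
    """
    return (1.0 / k_moderated - 1.0 / k_voided) * 1e5

# Illustrative k-inf values for a moderated and a fully voided lattice.
vrc = void_reactivity_pcm(1.2000, 1.1800)
```

    A negative result, as here, means voiding removes reactivity; the benchmark's concern is MOX lattices where the sign can flip.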

  19. Simulation of neutron leakage using a heterogeneous B1 model for fast-neutron and light-water reactors

    NASA Astrophysics Data System (ADS)

    Faure, Bastien

    The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using the "fundamental mode theory", the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong assumption, and cannot, at first sight, take into account neutron leakage between two different zones and out of the core. Leakage models propose to correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer hardware (processor speeds and memory sizes), the leakage term is, in most cases, modeled by a homogeneous and isotropic probability within a "homogeneous leakage model". Driven by technological innovation in the computer science field, "heterogeneous leakage models" have been developed and implemented in several neutron transport calculation codes. This work focuses on a study of some of those models, including the TIBERE model from the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model from the APOLLO-3 code developed at the Commissariat a l'Energie Atomique et aux energies alternatives. The research, based on sodium-cooled fast reactors and light water reactors, has demonstrated the interest of those models compared to a homogeneous leakage model. In particular, it has been shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, permitting a better estimation of the transport equation eigenvalue Keff. The neutron streaming between two zones of different compositions was also shown to be better captured.
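    The homogeneous leakage model that the thesis contrasts with heterogeneous ones can be caricatured in one group: leakage enters as M²B², and the fundamental-mode calculation searches for the critical buckling that brings k_eff to one. This is a drastically simplified one-group picture, not the B1 equations implemented in DRAGON-3 or APOLLO-3.

```python
def critical_buckling(k_inf, migration_area_cm2):
    """One-group homogeneous-leakage picture:
    k_eff(B^2) = k_inf / (1 + M^2 * B^2).
    Find the critical buckling (k_eff = 1) by bisection; it also has
    the closed form (k_inf - 1) / M^2, used below as a check."""
    def k_eff(b2):
        return k_inf / (1.0 + migration_area_cm2 * b2)
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if k_eff(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values: k_inf = 1.30, migration area M^2 = 60 cm^2.
b2_crit = critical_buckling(1.30, 60.0)
```

    The heterogeneous models discussed above replace the single isotropic M²B² term with direction- and region-dependent leakage coefficients, which is what improves the streaming and out-of-core leakage estimates.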

  20. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field, and can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found best suited to the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
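    An outer-totalistic cell updates from its own state plus the sum over its neighbours. The sketch below shows how such a rule can crop one isolated object: a seed mark floods outward, but only into pixels that contain ink, so growth stops at the character's boundary. The specific rule and images are illustrative, not the paper's.

```python
def grow_step(marked, ink):
    """One outer-totalistic update on a binary grid: a cell turns on
    when any of its 8 neighbours is on AND its own pixel contains ink.
    Iterating floods exactly one connected character."""
    h, w = len(marked), len(marked[0])
    nxt = [row[:] for row in marked]
    for y in range(h):
        for x in range(w):
            if marked[y][x] or not ink[y][x]:
                continue
            s = sum(marked[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2)))
            if s > 0:
                nxt[y][x] = 1
    return nxt

# Two separate "characters"; the seed sits on the left one.
ink = [[0, 1, 1, 0, 1],
       [0, 1, 0, 0, 1],
       [0, 0, 0, 0, 1]]
marked = [[0, 1, 0, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
for _ in range(4):
    marked = grow_step(marked, ink)
```

    After convergence only the left component is marked; the right column, not 8-connected to the seed, stays off, which is exactly the cropping behaviour the abstract describes.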

  1. GPUs in a computational physics course

    NASA Astrophysics Data System (ADS)

    Adler, Joan; Nissim, Gal; Kiswani, Ahmad

    2017-10-01

    In an introductory computational physics class of the type that many of us give, time constraints lead to hard choices on topics. Everyone likes to include their own research in such a class, but an overview of many areas is paramount. Parallel programming algorithms using MPI is one important topic. Both the principle and the need to break the “fear barrier” of using a large machine with a queuing system via ssh must be successfully passed on. Due to the plateau in chip development and to power considerations, future HPC hardware choices will include heavy use of GPUs. Thus the need to introduce these at the level of an introductory course has arisen. Just as for parallel coding, explanation of the benefits and simple examples to guide the hesitant first-time user should be selected. Several student projects using GPUs that include how-to pages were proposed at the Technion. Two of the more successful ones were a lattice Boltzmann solver and a finite element code, and we present these in detail.

  2. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, Michael F.; Bruhwiler, David L.; Computer Accelerator Physics Conference Williamsburg, Virginia 1996

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series and a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach.
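    The operator-overloading trick the abstract describes is easiest to see at first order: a dual number carries a value and a derivative, and overloaded arithmetic propagates both through unchanged tracking code. TOPKARK extends the same idea to high-order multivariate Taylor maps; this minimal first-order sketch is illustrative only.

```python
class Dual:
    """Forward-mode AD value a + b*eps with eps^2 = 0.
    Overloaded + and * propagate first derivatives automatically."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__

# d/dx of x*(x + 3) at x = 2 is 2x + 3 = 7, with no symbolic algebra.
x = Dual(2.0, 1.0)   # seed derivative dx/dx = 1
y = x * (x + 3.0)
```

    Replacing the scalar coefficients with truncated Taylor polynomials in six phase-space variables turns the same tracking routine into a map-construction routine, which is precisely the dual use the abstract claims.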

  3. PyCOOL — A Cosmological Object-Oriented Lattice code written in Python

    NASA Astrophysics Data System (ADS)

    Sainio, J.

    2012-04-01

    There are a number of different phenomena in the early universe that have to be studied numerically with lattice simulations. This paper presents a graphics processing unit (GPU) accelerated Python program called PyCOOL that solves the evolution of scalar fields in a lattice with very precise symplectic integrators. The program has been written with the intention to hit a sweet spot of speed, accuracy and user friendliness. This has been achieved by using the Python language with the PyCUDA interface to make a program that is easy to adapt to different scalar field models. In this paper we derive the symplectic dynamics that govern the evolution of the system and then present the implementation of the program in Python and PyCUDA. The functionality of the program is tested in a chaotic inflation preheating model, a single field oscillon case and in a supersymmetric curvaton model which leads to Q-ball production. We have also compared the performance of a consumer graphics card to a professional Tesla compute card in these simulations. We find that the program is not only accurate but also very fast. To further increase the usefulness of the program we have equipped it with numerous post-processing functions that provide useful information about the cosmological model. These include various spectra and statistics of the fields. The program can be additionally used to calculate the generated curvature perturbation. The program is publicly available under GNU General Public License at https://github.com/jtksai/PyCOOL. Some additional information can be found from http://www.physics.utu.fi/tiedostot/theory/particlecosmology/pycool/.
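    The "very precise symplectic integrators" mentioned above are built from kick-drift-kick splittings. The second-order member of that family (leapfrog/Verlet) is sketched below on a single harmonic degree of freedom; PyCOOL applies higher-order versions of the same splitting to lattice scalar fields, so this is the idea only, not the program's integrator.

```python
def leapfrog(q, p, force, dt, steps):
    """Second-order symplectic integrator: half-kick, drift, half-kick.
    Conserves a shadow Hamiltonian, so energy errors stay bounded."""
    for _ in range(steps):
        p += 0.5 * dt * force(q)
        q += dt * p
        p += 0.5 * dt * force(q)
    return q, p

# Harmonic oscillator V = q^2/2; integrate for roughly one period 2*pi.
dt, steps = 0.01, 628
q, p = leapfrog(1.0, 0.0, lambda q: -q, dt, steps)
energy = 0.5 * p * p + 0.5 * q * q
```

    The bounded energy error, rather than the secular drift of a non-symplectic scheme, is what makes this family attractive for long preheating simulations.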

  4. COMOC: Three dimensional boundary region variant, programmer's manual

    NASA Technical Reports Server (NTRS)

    Orzechowski, J. A.; Baker, A. J.

    1974-01-01

    The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description on the construction and operation of the program, and instructions on utilization of the many features of the code are presented.

  5. The tensor network theory library

    NASA Astrophysics Data System (ADS)

    Al-Assam, S.; Clark, S. R.; Jaksch, D.

    2017-09-01

    In this technical paper we introduce the tensor network theory (TNT) library—an open-source software project aimed at providing a platform for rapidly developing robust, easy to use and highly optimised code for TNT calculations. The objectives of this paper are (i) to give an overview of the structure of the TNT library, and (ii) to help scientists decide whether to use the TNT library in their research. We show how to employ the TNT routines by giving examples of ground-state and dynamical calculations of a one-dimensional bosonic lattice system. We also discuss different options for gaining access to the software available at www.tensornetworktheory.org.

  6. Large-area metallic photonic lattices for military applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luk, Ting Shan

    2007-11-01

    In this project we developed photonic crystal modeling capability and fabrication technology that is scalable to large areas. An intelligent optimization code was developed to find the optimal structure for the desired spectral response. In terms of fabrication, an exhaustive survey of fabrication techniques narrowed those meeting the large-area requirement to Deep X-ray Lithography (DXRL) and nano-imprint. Using DXRL, we fabricated a gold logpile photonic crystal in the <100> plane. For the nano-imprint technique, we fabricated a cubic array of gold squares. These two examples also represent two classes of metallic photonic crystal topologies: the connected network and the cermet arrangement.

  7. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
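    Treating the search space as a Boolean lattice means every feature subset is a lattice node ordered by inclusion. featsel itself is a C++ framework; the Python sketch below only illustrates the underlying idea with a brute-force walk of the lattice and a toy cost function (featsel's algorithms prune this lattice rather than enumerate it).

```python
from itertools import combinations

def exhaustive_select(n_features, cost):
    """Enumerate the full Boolean lattice of feature subsets and
    return the subset of minimal cost."""
    best, best_cost = frozenset(), cost(frozenset())
    for k in range(1, n_features + 1):
        for subset in combinations(range(n_features), k):
            c = cost(frozenset(subset))
            if c < best_cost:
                best, best_cost = frozenset(subset), c
    return best, best_cost

# Toy cost: distance from an "ideal" subset {0, 2}, plus a small
# penalty per selected feature (both choices are illustrative).
def toy_cost(s):
    return len({0, 2} ^ s) + 0.01 * len(s)

best, c = exhaustive_select(4, toy_cost)
```

    With n features the lattice has 2^n nodes, which is why branch-and-bound style algorithms over this structure, like those shipped with featsel, matter in practice.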

  8. Performance Portability Strategies for Grid C++ Expression Templates

    NASA Astrophysics Data System (ADS)

    Boyle, Peter A.; Clark, M. A.; DeTar, Carleton; Lin, Meifeng; Rana, Verinder; Vaquero Avilés-Casco, Alejandro

    2018-03-01

    One of the key requirements for the Lattice QCD Application Development as part of the US Exascale Computing Project is performance portability across multiple architectures. Using the Grid C++ expression template as a starting point, we report on the progress made with regards to the Grid GPU offloading strategies. We present both the successes and issues encountered in using CUDA, OpenACC and Just-In-Time compilation. Experimentation and performance on GPUs with a SU(3)×SU(3) streaming test will be reported. We will also report on the challenges of using current OpenMP 4.x for GPU offloading in the same code.

  9. On Traveling Waves in Lattices: The Case of Riccati Lattices

    NASA Astrophysics Data System (ADS)

    Dimitrova, Zlatinka

    2012-09-01

    The method of simplest equation is applied to the analysis of a class of lattices described by differential-difference equations that admit traveling-wave solutions constructed on the basis of the solution of the Riccati equation. We denote such lattices as Riccati lattices. We search for Riccati lattices within two classes of lattices: generalized Lotka-Volterra lattices and generalized Holling lattices. We show that from the class of generalized Lotka-Volterra lattices only the Wadati lattice belongs to the class of Riccati lattices. In contrast, many lattices from the Holling class are Riccati lattices. We construct exact traveling wave solutions on the basis of the solution of the Riccati equation for three members of the class of generalized Holling lattices.

  10. Transverse beam dynamics in non-linear Fixed Field Alternating Gradient accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haj, Tahar M.; Meot, F.

    2016-03-02

    In this paper, we present some aspects of the transverse beam dynamics in Fixed Field Ring Accelerators (FFRAs): we start from the basic principles in order to derive the linearized transverse particle equations of motion for FFRAs; essentially FFAGs and cyclotrons are considered here. This is a simple extension of previous work valid for linear lattices, which we generalized by including the bending terms to ensure its correctness for an FFAG lattice. The space charge term (the contribution of the internal Coulomb forces of the beam) is included as well, although it is not discussed here. The emphasis is on the scaling FFAG type: a collaborative effort is under way with a view to better understanding the properties of the 150 MeV scaling FFAG at KURRI in Japan, and to progressing towards high-intensity operation. Some results of the benchmarking work between different codes are presented. Analysis of a certain type of field imperfection revealed some interesting features of this machine that explain some of the experimental results and generalize the concept of a scaling FFAG to a non-scaling one for which the tune variations obey a well-defined law.

  11. Effect of image scaling and segmentation in digital rock characterisation

    NASA Astrophysics Data System (ADS)

    Jones, B. D.; Feng, Y. T.

    2016-04-01

    Digital material characterisation from microstructural geometry is an emerging field in computer simulation. For permeability characterisation, a variety of studies exist where the lattice Boltzmann method (LBM) has been used in conjunction with computed tomography (CT) imaging to simulate fluid flow through microscopic rock pores. While these previous works show that the technique is applicable, the use of binary image segmentation and the bounceback boundary condition results in a loss of grain surface definition when the modelled geometry is compared to the original CT image. We apply the immersed moving boundary (IMB) condition of Noble and Torczynski as a partial bounceback boundary condition which may be used to better represent the geometric definition provided by a CT image. The IMB condition is validated against published work on idealised porous geometries in both 2D and 3D. Following this, greyscale image segmentation is applied to a CT image of Diemelstadt sandstone. By varying the mapping of CT voxel densities to lattice sites, it is shown that binary image segmentation may underestimate the true permeability of the sample. A CUDA-C-based code, LBM-C, was developed specifically for this work and leverages GPU hardware in order to carry out computations.
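    The Noble-Torczynski IMB condition blends plain LBM collision with bounceback through a weight B = ε(τ - ½) / ((1 - ε) + (τ - ½)), where ε is the voxel's solid fraction and τ the relaxation time. A short sketch of that weight, together with an illustrative linear greyscale-to-solid-fraction mapping (the CT threshold values below are invented, not those used for the Diemelstadt sample):

```python
def solid_fraction(ct_value, pore_val, grain_val):
    """Linear greyscale mapping of a CT voxel density to a solid
    fraction in [0, 1]; threshold values are illustrative."""
    eps = (ct_value - pore_val) / (grain_val - pore_val)
    return min(1.0, max(0.0, eps))

def imb_weight(eps, tau):
    """Noble-Torczynski immersed moving boundary weight:
    B = eps*(tau - 1/2) / ((1 - eps) + (tau - 1/2)).
    B = 0 recovers plain LBM in open pore; B = 1 is full bounceback."""
    t = tau - 0.5
    return eps * t / ((1.0 - eps) + t)

eps = solid_fraction(150.0, pore_val=100.0, grain_val=200.0)  # half-solid
B = imb_weight(eps, tau=1.0)
```

    Because B varies smoothly with ε, partially solid boundary voxels exert a partial drag instead of the all-or-nothing behaviour of binary segmentation, which is why the greyscale approach recovers grain surface definition.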

  12. The University of Maryland Electron Ring: A Model Recirculator for Intense Beam Physics Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernal, S.; Li, H.; Cui, Y.

    2004-12-07

    The University of Maryland Electron Ring (UMER), designed for transport studies of space-charge-dominated beams in a strong focusing lattice, is nearing completion. Low-energy, high-intensity electron beams provide an excellent model system for experimental studies with relevance to all areas that require high-quality, intense charged-particle beams. In addition, UMER constitutes an important tool for benchmarking computer codes. When completed, the UMER lattice will consist of 36 alternating-focusing (FODO) periods over an 11.5-m circumference. Current studies in UMER, over about 2/3 of the ring, include beam-envelope matching, halo formation, asymmetrical focusing, and longitudinal dynamics (beam bunch erosion and wave propagation). In the near future, multi-turn operation of the ring will allow us to address important additional issues such as resonance traversal, energy spread and others. The main diagnostics are phosphor screens and capacitive beam position monitors placed at the center of each 200 bending section. In addition, pepper-pot and slit-wire emittance meters are in operation. The range of beam currents used corresponds to space charge tune depressions from 0.2 to 0.8, which is unprecedented for a circular machine.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dokhane, A.; Canepa, S.; Ferroukhi, H.

    For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institut (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes and methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as the basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and thereby to achieve an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)

  14. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    PubMed

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
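    The idea of an XML-based scenario description can be sketched with Python's standard library. This is a hypothetical illustration only: the element and attribute names below are invented for the sketch and are not the actual Voxel2MCNP schema.

```python
import xml.etree.ElementTree as ET

# Build a toy scenario description: phantom + source + tally.
# All names here are assumptions, not the real Voxel2MCNP file format.
scenario = ET.Element("scenario")
ET.SubElement(scenario, "phantom", {"file": "voxel_phantom.dat", "voxelsize_mm": "2.0"})
ET.SubElement(scenario, "source", {"nuclide": "Cs-137", "region": "lungs"})
ET.SubElement(scenario, "tally", {"type": "energy-deposition", "target": "detector"})

# Serialize for exchange, then round-trip to show the data survives.
xml_text = ET.tostring(scenario, encoding="unicode")
parsed = ET.fromstring(xml_text)
nuclide = parsed.find("source").get("nuclide")
```

A code-independent data model like this is what lets the same scenario file drive input generation for different Monte Carlo codes.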

  15. Strength and stress: Positive and negative impacts on caregivers for older adults in Thailand.

    PubMed

    Gray, Rossarin Soottipong; Hahn, Laura; Thapsuwan, Sasinee; Thongcharoenchupong, Natjera

    2016-06-01

    To understand the experiences of caregivers with older people living in Thailand, particularly as related to quality of life and stress management. In-depth interviews with 17 family caregivers were conducted and then data were thematically analysed. Carers experience not only negative impacts but also positive impacts from caregiving. Negative impacts include emotional stress, financial struggles and worry due to lack of knowledge. Positive impacts include affection from care recipients, good relationships with caregivers before needing care themselves and encouragement from the wider community. Opportunities to show gratitude, build karma (from good deeds) and ideas shaped largely by Buddhist teachings result in positive experiences. Negotiating between the extremes of bliss and suffering and understanding suffering as a part of life may help carers manage their stress. Temples and centres for older people could be engaged to develop caregiving programs. © 2016 The Authors. Australasian Journal on Ageing published by Wiley Publishing Asia Pty Ltd on behalf of AJA Inc.

  16. Stennis engineer part of LCROSS moon mission

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Karma Snyder, a project manager at NASA's John C. Stennis Space Center, was a senior design engineer on the RL10 liquid rocket engine that powered the Centaur, the upper stage of the rocket used in NASA's Lunar CRater Observation and Sensing Satellite (LCROSS) mission in October 2009. Part of the LCROSS mission was to search for water on the moon by striking the lunar surface with a rocket stage, creating a plume of debris that could be analyzed for water ice and vapor. Snyder's work on the RL10 took place from 1995 to 2001 when she was a senior design engineer with Pratt & Whitney Rocketdyne. Years later, she sees the project as one of her biggest accomplishments in light of the LCROSS mission. 'It's wonderful to see it come into full service,' she said. 'As one of my co-workers said, the original dream was to get that engine to the moon, and we're finally realizing that dream.'

  17. Impact of culture on healthcare seeking behavior of Asian Indians.

    PubMed

    Gupta, Vidya Bhushan

    2010-01-01

    Healthcare seeking behavior is a dynamic process that evolves through the stages of self evaluation of symptoms, self treatment, seeking professional advice and acting on professional advice (Weaver, 1970). This article explores the influence of culture at each of these stages in the context of Asian Indian culture. Although Asian Indians constitute only 1.5% of the US population, they are among the fastest growing minorities in the United States. Through the example of Asian Indian culture, this article informs clinicians that at the initial visit they should explore what the symptoms mean to the patient and what modalities, including complementary and alternative medicine (CAM), were used by the patient to address them, and that at subsequent visits they should explore how their advice was filtered through the prism of the patient's culture and what was adhered to and what was not. In the case of disability and death, clinicians should explore religious beliefs, such as karma, that help the patient in coping.

  18. Religious Relationships with the Environment in a Tibetan Rural Community: Interactions and Contrasts with Popular Notions of Indigenous Environmentalism.

    PubMed

    Woodhouse, Emily; Mills, Martin A; McGowan, Philip J K; Milner-Gulland, E J

    Representations of Green Tibetans connected to Buddhism and indigenous wisdom have been deployed by a variety of actors and persist in popular consciousness. Through interviews, participatory mapping and observation, we explored how these ideas relate to people's notions about the natural environment in a rural community on the Eastern Tibetan plateau, in Sichuan Province, China. We found people to be orienting themselves towards the environment by means of three interlinked religious notions: (1) local gods and spirits in the landscape, which have become the focus of conservation efforts in the form of 'sacred natural sites;' (2) sin and karma related to killing animals and plants; (3) Buddhist moral precepts especially non-violence. We highlight the gaps between externally generated representations and local understandings, but also the dynamic, contested and plural nature of local relationships with the environment, which have been influenced and reshaped by capitalist development and commodification of natural resources, state environmental policies, and Buddhist modernist ideas.

  19. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark-gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented in the code and thoroughly benchmarked against experimental data. The code's capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrinos is built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with the possibility of producing composite shapes and assemblies and their 3D visualization, along with import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format). The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam-loss-induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  20. A Mechanical Lattice Aid for Crystallography Teaching.

    ERIC Educational Resources Information Center

    Amezcua-Lopez, J.; Cordero-Borboa, A. E.

    1988-01-01

    Introduces a 3-dimensional mechanical lattice with adjustable telescoping mechanisms. Discusses the crystalline state, the 14 Bravais lattices, operational principles of the mechanical lattice, construction methods, and demonstrations in classroom. Provides lattice diagrams, schemes of the lattice, and various pictures of the lattice. (YP)

  1. Ab initio and shell model studies of structural, thermoelastic and vibrational properties of SnO2 under pressure

    NASA Astrophysics Data System (ADS)

    Casali, R. A.; Lasave, J.; Caravaca, M. A.; Koval, S.; Ponce, C. A.; Migoni, R. L.

    2013-04-01

    The pressure dependences of the structural, thermoelastic and vibrational properties of SnO2 in its rutile phase are studied, as well as the pressure-induced transition to a CaCl2-type phase. These studies have been performed by means of ab initio (AI) density functional theory calculations using the localized basis code SIESTA. The results are employed to develop a shell model (SM) for application in future studies of nanostructured SnO2. A good agreement of the SM results for the pressure dependences of the above properties with the ones obtained from present and previous AI calculations as well as from experiments is achieved. The transition is characterized by a rotation of the Sn-centered oxygen octahedra around the tetragonal axis through the Sn. This rotation breaks the tetragonal symmetry of the lattice and an orthorhombic distortion appears above the critical pressure Pc. A zone-center phonon of B1g symmetry in the rutile phase involves such rotation and softens on approaching Pc. It becomes an Ag mode which stabilizes with increasing pressure in the CaCl2 phase. This behavior, together with the softening of the shear modulus (C11-C12)/2 related to the orthorhombic distortion, allows a precise determination of a value for Pc. An additional determination is provided by the splitting of the basal plane lattice parameters. Both the AI and the experimentally observed softening of the B1g mode are incomplete, indicating a small discontinuity at the transition. However, all results show continuous changes in volume and lattice parameters, indicating a second-order transition. All these results indicate that there should be sufficient confidence for the future employment of the shell model.

  2. First-principles study of structural and electronic properties of Be0.25Zn0.75S mixed compound

    NASA Astrophysics Data System (ADS)

    Paliwal, U.; Joshi, K. B.

    2018-05-01

    In this work a first-principles study of the structural and electronic properties of the Be0.25Zn0.75S mixed compound is presented. The calculations are performed with the QUANTUM ESPRESSO code, using the Perdew-Burke-Ernzerhof generalized gradient approximation in the framework of density functional theory. Adopting a standard optimization strategy, the ground-state equilibrium lattice constant and bulk modulus are calculated. After settling the structure, the electronic band structure, bandgap and static dielectric constant are evaluated. In the absence of any experimental work on this system, our findings are compared with the available theoretical calculations and are found to follow the anticipated general trends.

  3. OBJECT KINETIC MONTE CARLO SIMULATIONS OF RADIATION DAMAGE IN BULK TUNGSTEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.

    2015-09-22

    We used our recently developed lattice-based OKMC code KSOME [1] to carry out simulations of radiation damage in bulk W. We study the effect of the dimensionality of self-interstitial atom (SIA) diffusion, i.e. 1D versus 3D, on defect accumulation during irradiation with a primary knock-on atom (PKA) energy of 100 keV at 300 K for dose rates of 10⁻⁵ and 10⁻⁶ dpa/s. As expected, 3D SIA diffusion significantly reduces damage accumulation due to the increased probability of recombination events. In addition, the dose rate, over the limited range examined here, appears to have no effect in either case of SIA diffusion.
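    The event-selection core of an object KMC code can be sketched with the standard residence-time (BKL) algorithm: pick an event with probability proportional to its rate, and advance the clock by an exponentially distributed increment. This is a generic toy illustration under assumed rates, not KSOME's implementation.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time (BKL) KMC step: choose event i with probability
    rates[i] / R_total, and advance time by dt = -ln(u) / R_total."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, ri in enumerate(rates):
        acc += ri
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

# Toy event catalog (assumed rates in 1/s): a fast SIA hop and a slow vacancy hop.
rng = random.Random(42)
rates = [3.0, 1.0]
picks = [kmc_step(rates, rng) for _ in range(20000)]
frac_sia = sum(1 for i, _ in picks if i == 0) / len(picks)   # ≈ 3/4 of events
mean_dt = sum(dt for _, dt in picks) / len(picks)            # ≈ 1/(3+1) s
```

Restricting which hop directions appear in the catalog is how 1D versus 3D SIA diffusion would be modeled in such a scheme.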

  4. A collision probability analysis of the double-heterogeneity problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebert, A.

    1993-10-01

    A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.

  5. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating the longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combinations. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up is provided. Three test cases are presented as guides for potential users of the code.

  6. Nuclear design analysis of square-lattice honeycomb space nuclear rocket engine

    NASA Astrophysics Data System (ADS)

    Widargo, Reza; Anghaie, Samim

    1999-01-01

    The square-lattice honeycomb reactor is designed based on a cylindrical core that is determined to have a critical diameter and length of 0.50 m and 0.50 m, respectively. A 0.10-cm thick radial graphite reflector, in addition to a 0.20-m thick axial graphite reflector, is used to reduce neutron leakage from the reactor. The core is fueled with a solid solution of 93% enriched (U, Zr, Nb)C, which is one of several ternary uranium carbides considered for this concept. The fuel is to be fabricated as 2 mm grooved (U, Zr, Nb)C wafers. The fuel wafers are used to form square-lattice honeycomb fuel assemblies, 0.10 m in length with 30% cross-sectional flow area. Five fuel assemblies are stacked up axially to form the reactor core. Based on the 30% void fraction, the width of the square flow channel is about 1.3 mm. The hydrogen propellant passes through these flow channels and removes the heat from the reactor core. To perform the nuclear design analysis, a series of neutron transport and diffusion codes are used. The preliminary results are obtained using a simple four-group cross-section model. To optimize the nuclear design, the fuel densities are varied for each assembly. Tantalum, hafnium and tungsten are considered as replacements for niobium in the fuel material to provide water-submersion sub-criticality for the reactor. Axial and radial neutron flux and power density distributions are calculated for the core. Results of the neutronic analysis indicate that the core has a relatively fast spectrum. From the results of the thermal hydraulic analyses, eight axial temperature zones are chosen for the calculation of group-averaged cross-sections. An iterative process is conducted to couple the neutronic calculations with the thermal hydraulics calculations. Results of the nuclear design analysis indicate that a compact core can be designed based on ternary uranium carbide square-lattice honeycomb fuel. This design provides a relatively high thrust-to-weight ratio.

  7. Integer lattice gas with Monte Carlo collision operator recovers the lattice Boltzmann method with Poisson-distributed fluctuations

    NASA Astrophysics Data System (ADS)

    Blommel, Thomas; Wagner, Alexander J.

    2018-02-01

    We examine a new kind of lattice gas that closely resembles modern lattice Boltzmann methods. This new kind of lattice gas, which we call a Monte Carlo lattice gas, has interesting properties that shed light on the origin of the multirelaxation-time collision operator, and it derives the equilibrium distribution for an entropic lattice Boltzmann method. Furthermore, these lattice gas methods have Galilean invariant fluctuations given by Poisson statistics, giving further insight into the properties that we should expect for fluctuating lattice Boltzmann methods.
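    The defining property of Poisson statistics, that the variance of an occupation number equals its mean, can be checked numerically with a standard-library sketch. The D1Q3 weight and particle count below are assumed for illustration and are not taken from the paper.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm: draw an integer occupation number ~ Poisson(lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Assumed illustration: the rest-particle channel of a D1Q3 lattice with
# weight 2/3 and ~100 particles per site on average.
rng = random.Random(7)
lam = 100 * 2.0 / 3.0
draws = [poisson_sample(lam, rng) for _ in range(5000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# For Poisson-distributed populations, var ≈ mean: the fluctuation property
# the Monte Carlo lattice gas exhibits at each integer population.
```

Integer populations with exactly these statistics are what distinguish the Monte Carlo lattice gas from a floating-point lattice Boltzmann distribution.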

  8. Principles of protein folding--a perspective from simple exact models.

    PubMed Central

    Dill, K. A.; Bromberg, S.; Yue, K.; Fiebig, K. M.; Yee, D. P.; Thomas, P. D.; Chan, H. S.

    1995-01-01

    General principles of protein structure, stability, and folding kinetics have recently been explored in computer simulations of simple exact lattice models. These models represent protein chains at a rudimentary level, but they involve few parameters, approximations, or implicit biases, and they allow complete explorations of conformational and sequence spaces. Such simulations have resulted in testable predictions that are sometimes unanticipated: The folding code is mainly binary and delocalized throughout the amino acid sequence. The secondary and tertiary structures of a protein are specified mainly by the sequence of polar and nonpolar monomers. More specific interactions may refine the structure, rather than dominate the folding code. Simple exact models can account for the properties that characterize protein folding: two-state cooperativity, secondary and tertiary structures, and multistage folding kinetics--fast hydrophobic collapse followed by slower annealing. These studies suggest the possibility of creating "foldable" chain molecules other than proteins. The encoding of a unique compact chain conformation may not require amino acids; it may require only the ability to synthesize specific monomer sequences in which at least one monomer type is solvent-averse. PMID:7613459
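    The simple exact lattice models described above can be made concrete in a few lines. This toy sketch, a minimal illustration and not the authors' code, exhaustively enumerates 2D self-avoiding conformations of a short binary (H/P) sequence and scores each non-bonded H-H contact with energy -1, the classic HP model.

```python
def enumerate_conformations(n):
    """All self-avoiding walks of n monomers on the 2D square lattice."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    walks = []
    def grow(path):
        if len(path) == n:
            walks.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if nxt not in path:
                grow(path + [nxt])
    grow([(0, 0)])
    return walks

def energy(seq, conf):
    """-1 per pair of H monomers adjacent on the lattice but not in sequence."""
    pos = {p: i for i, p in enumerate(conf)}
    e = 0
    for (x, y), i in pos.items():
        for dx, dy in [(1, 0), (0, 1)]:       # each unordered pair once
            j = pos.get((x + dx, y + dy))
            if j is not None and abs(i - j) > 1 and seq[i] == "H" and seq[j] == "H":
                e -= 1
    return e

seq = "HPPH"                                   # toy 4-monomer sequence
confs = enumerate_conformations(len(seq))
ground = min(energy(seq, c) for c in confs)    # U-shaped fold buries the H-H contact
```

Exhaustive enumeration like this is exactly what makes such models "exact": the full conformational space of short chains can be searched, so ground states and folding cooperativity can be studied without sampling bias.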

  9. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H; Williams, Samuel; Datta, Kaushik

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
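    Search-based auto-tuning in miniature: generate several variants of a kernel, time each on the target machine, and keep the fastest. The blocked-sum kernel and candidate block sizes below are hypothetical stand-ins for the paper's SpMV/Stencil/LBMHD code generators.

```python
import time

def make_blocked_sum(block):
    """Hypothetical kernel variant: sum a list in chunks of `block` elements."""
    def kernel(xs):
        total = 0.0
        for i in range(0, len(xs), block):
            total += sum(xs[i:i + block])
        return total
    return kernel

def autotune(factory, candidates, data, repeats=3):
    """Empirical search: time every generated variant, return the fastest."""
    timings = {}
    for c in candidates:
        kernel = factory(c)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            kernel(data)
            best = min(best, time.perf_counter() - t0)
        timings[c] = best
    winner = min(timings, key=timings.get)
    return winner, timings

data = [1.0] * 100000
winner, timings = autotune(make_blocked_sum, [8, 64, 512, 4096], data)
result = make_blocked_sum(winner)(data)   # every variant computes the same sum
```

The key design point carries over from the paper: variants differ only in performance, never in the result, so the search can be fully automated per platform.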

  10. An investigation of voxel geometries for MCNP-based radiation dose calculations.

    PubMed

    Zhang, Juying; Bednarz, Bryan; Xu, X George

    2006-11-01

    Voxelized geometries such as those obtained from medical images are increasingly used in Monte Carlo calculations of absorbed doses. One useful application of calculated absorbed dose is the determination of fluence-to-dose conversion factors for different organs. However, confusion still exists about how such a geometry is defined and how the energy deposition is best computed, especially with a popular code, MCNP5. This study investigated two different types of geometries in the MCNP5 code: cell and lattice definitions. A 10 cm x 10 cm x 10 cm test phantom, which contained an embedded 2 cm x 2 cm x 2 cm target at its center, was considered, along with a planar source emitting parallel photons. The results revealed that MCNP5 does not calculate the total target volume for multi-voxel geometries. Therefore, tallies that involve the total target volume must be divided by the user by the total number of voxels to obtain a correct dose result. Also, using planar source areas greater than the phantom size results in the same fluence-to-dose conversion factor.
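    The correction the study describes, dividing a lattice tally by the voxel count because MCNP5 does not totalize the target volume, is simple arithmetic. The numbers below are assumed for illustration and are not taken from the paper.

```python
# Hypothetical example: a 2 cm cube target segmented into 0.2 cm voxels,
# with a raw MCNP5-style lattice tally normalized per single-voxel volume
# instead of per total target volume.
voxel_edge_cm = 0.2
n_voxels = round((2.0 / voxel_edge_cm) ** 3)   # 10 x 10 x 10 lattice cells
raw_tally = 4.0e-6                             # assumed raw lattice tally value
corrected = raw_tally / n_voxels               # user divides by the voxel count
```

Forgetting this division overstates the whole-target average by a factor equal to the number of voxels, here three orders of magnitude.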

  11. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  12. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  13. Simulation of Rotary-Wing Near-Wake Vortex Structures Using Navier-Stokes CFD Methods

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Strawn, Roger; Ahmad, Jasim; Duque, Earl; Warmbrodt, William (Technical Monitor)

    1997-01-01

    This paper will use high-resolution Navier-Stokes computational fluid dynamics (CFD) simulations to model the near-wake vortex roll-up behind rotor blades. The locations and strengths of the trailing vortices will be determined from newly-developed visualization and analysis software tools applied to the CFD solutions. Computational results for rotor nearwake vortices will be used to study the near-wake vortex roll up for highly-twisted tiltrotor blades. These rotor blades typically have combinations of positive and negative spanwise loading and complex vortex wake interactions. Results of the computational studies will be compared to vortex-lattice wake models that are frequently used in rotorcraft comprehensive codes. Information from these comparisons will be used to improve the rotor wake models in the Tilt-Rotor Acoustic Code (TRAC) portion of NASA's Short Haul Civil Transport program (SHCT). Accurate modeling of the rotor wake is an important part of this program and crucial to the successful design of future civil tiltrotor aircraft. The rotor wake system plays an important role in blade-vortex interaction noise, a major problem for all rotorcraft including tiltrotors.

  14. The mechanical, optoelectronic and thermoelectric properties of NiYSn (Y = Zr and Hf) alloys

    NASA Astrophysics Data System (ADS)

    Hamioud, Farida; Mubarak, A. A.

    2017-09-01

    First-principles calculations are performed using DFT as implemented in the Wien2k code to compute the mechanical, electronic, optical and thermoelectric properties of NiYSn (Y = Zr and Hf) alloys. The lattice constants, bulk moduli and cohesive energies of these alloys are computed at 0 K and 0 GPa. NiZrSn and NiHfSn are found to be anisotropic and elastically stable. Furthermore, both alloys are confirmed to be thermodynamically stable by the calculated values of the standard enthalpy of formation. The Young's and shear moduli values show that NiZrSn seems to be stiffer than NiHfSn. The optical properties are derived from the dielectric function, and the optical spectra reveal some beneficial optoelectronic applications. Moreover, the alloys are classified as good insulators for solar heating. The thermoelectric properties as a function of temperature are computed using the BoltzTraP code. The major charge carriers are found to be electrons, and the alloys are classified as p-type doping alloys.

  15. MaMiCo: Software design for parallel molecular-continuum flow simulations

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim

    2016-03-01

    The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge, and more than 1000 AMD Bulldozer compute cores.

  16. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, M.F.; Bruhwiler, D.L.

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.
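    The operator-overloading idea behind such codes, the same expressions track particles or propagate Taylor coefficients, can be sketched with first-order forward-mode automatic differentiation. The one-variable "kick" map below is a hypothetical toy, not TOPKARK's lattice model.

```python
import math

class Dual:
    """Forward-mode AD number carrying a value and its first derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dsin(x):
    """sin lifted to Dual numbers via the chain rule."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Hypothetical nonlinear 1D map: x -> x + 0.1*sin(x).
def kick(x):
    return x + 0.1 * dsin(x)

x = Dual(0.5, 1.0)   # seed dx/dx = 1
y = kick(x)          # y.der is the exact Jacobian 1 + 0.1*cos(0.5)
```

Extending the derivative slot to a truncated multivariate polynomial instead of a single number yields the high-order Taylor maps the abstract describes.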

  17. Etude des performances de solveurs deterministes sur un coeur rapide a caloporteur sodium

    NASA Astrophysics Data System (ADS)

    Bay, Charlotte

    Next-generation reactors, the SFR in particular, pose a real challenge for current codes and solvers, which have mainly been used for thermal cores. There is no guarantee that their capabilities carry over directly to a fast neutron spectrum or to major design differences, so it is necessary to assess the validity of the solvers and their potential shortcomings for fast neutron reactors. Carried out during an internship with the CEA (France), at the instigation of the EPM Nuclear Institute, this study covers the following codes: DRAGON/DONJON, ERANOS, PARIS and APOLLO3. Precision was assessed against the Monte Carlo code TRIPOLI4. Only the core calculation was of interest, namely the precision and speed of the numerical methods; the lattice calculation, that is, nuclear data, self-shielding and isotopic compositions, was not part of the study, nor were burnup or time-evolution effects. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters in order to obtain an optimal calculation set; then comparing the solvers' precision and speed by collecting the usual quantities (effective multiplication factor, reaction rate maps) as well as quantities crucial to SFR design, namely control rod worth and sodium void effect. Calculation time is also a key factor. Any conclusions or recommendations drawn from this study should be applied only within similar frameworks, that is, small fast neutron cores with hexagonal geometry; extension to large cores would have to be demonstrated in follow-up work.

  18. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

    The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space-charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being ``space-charge dominated``) and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments will be discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer instead of Basis has been developed. A parallel version of WARP has been developed using Python.

  19. Nearly metastable rhombohedral phases of bcc metals

    NASA Astrophysics Data System (ADS)

    Mehl, Michael J.; Finkenstadt, Daniel

    2008-02-01

    The energy E(c/a) for a bcc element stretched along its [001] axis (the Bain path) has a minimum at c/a=1, a maximum at c/a=√2, and an elastically unstable local minimum at c/a>√2. An alternative path connecting the bcc and fcc structures is the rhombohedral lattice. The primitive lattice has R-3m symmetry, with the angle α changing from 109.4° (bcc), to 90° (simple cubic), to 60° (fcc). We study this path for the non-magnetic bcc transition metals (V, Nb, Mo, Ta, and W) using both an all-electron linearized augmented plane wave code and the projector augmented wave VASP code. Except for Ta, the energy E(α) has a local maximum at α=60°, with local minima near 55° and 70°, the latter having lower energy, suggesting the possibility of a metastable rhombohedral state for these materials. We first examine the elastic stability of the 70° minimum structure and determine that only W is elastically stable in this structure, with the smallest eigenvalue of the elastic tensor at 4 GPa. We then consider the possibility that tungsten is actually metastable in this structure by looking at its vibrational and third-order elastic stability.
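The angles quoted along the rhombohedral path (109.4° for bcc, 90° for simple cubic, 60° for fcc) follow directly from the geometry of the primitive lattice vectors. A quick numerical check (my own illustration, using the conventional primitive vectors, not anything from the paper):

```python
import math

def primitive_angle(vectors):
    """Angle (degrees) between the first two primitive lattice vectors."""
    a1, a2 = vectors[0], vectors[1]
    dot = sum(x * y for x, y in zip(a1, a2))
    n1 = math.sqrt(sum(x * x for x in a1))
    n2 = math.sqrt(sum(x * x for x in a2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Conventional primitive vectors (in units of a/2):
bcc = [(-1, 1, 1), (1, -1, 1), (1, 1, -1)]   # alpha = 109.47 deg
sc  = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]      # alpha = 90 deg
fcc = [(0, 1, 1), (1, 0, 1), (1, 1, 0)]      # alpha = 60 deg

for name, vecs in [("bcc", bcc), ("sc", sc), ("fcc", fcc)]:
    print(name, round(primitive_angle(vecs), 2))
```

The bcc value is arccos(-1/3) ≈ 109.47°, consistent with the 109.4° quoted in the abstract.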

  20. Performance of Transuranic-Loaded Fully Ceramic Micro-Encapsulated Fuel in LWRs Final Report, Including Void Reactivity Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael A. Pope; R. Sonat Sen; Brian Boer

    2011-09-01

    The current focus of the Deep Burn Project is on once-through burning of transuranics (TRU) in light-water reactors (LWRs). The fuel form is called Fully-Ceramic Micro-encapsulated (FCM) fuel, a concept that borrows the tri-isotropic (TRISO) fuel particle design from high-temperature reactor technology. In the Deep Burn LWR (DB-LWR) concept, these fuel particles are pressed into compacts using SiC matrix material and loaded into fuel pins for use in conventional LWRs. The TRU loading comes from the spent fuel of a conventional LWR after 5 years of cooling. Unit cell and assembly calculations have been performed using the DRAGON-4 code to assess the physics attributes of TRU-only FCM fuel in an LWR lattice. Depletion calculations assuming an infinite lattice condition were performed with calculations of various reactivity coefficients performed at each step. Unit cells and assemblies containing typical UO2 and mixed oxide (MOX) fuel were analyzed in the same way to provide a baseline against which to compare the TRU-only FCM fuel. Then, assembly calculations were performed evaluating the performance of heterogeneous arrangements of TRU-only FCM fuel pins along with UO2 pins.

  1. Higgs mechanism in higher-rank symmetric U(1) gauge theories

    NASA Astrophysics Data System (ADS)

    Bulmash, Daniel; Barkeshli, Maissam

    2018-06-01

    We use the Higgs mechanism to investigate connections between higher-rank symmetric U(1) gauge theories and gapped fracton phases. We define two classes of rank-2 symmetric U(1) gauge theories: the (m, n) scalar and vector charge theories, for integers m and n, which respect the symmetry of the square (cubic) lattice in two (three) spatial dimensions. We further provide local lattice rotor models whose low-energy dynamics are described by these theories. We then describe in detail the Higgs phases obtained when the U(1) gauge symmetry is spontaneously broken to a discrete subgroup. A subset of the scalar charge theories indeed have X-cube fracton order as their Higgs phase, although we find that this can only occur if the continuum higher-rank gauge theory breaks continuous spatial rotational symmetry. However, not all higher-rank gauge theories have fractonic Higgs phases; other Higgs phases possess conventional topological order. Nevertheless, they yield interesting novel exactly solvable models of conventional topological order, somewhat reminiscent of the color code models in both two and three spatial dimensions. We also investigate phase transitions in these models and find a possible direct phase transition between four copies of Z2 gauge theory in three spatial dimensions and X-cube fracton order.

  2. Majorana spin liquids, topology, and superconductivity in ladders

    NASA Astrophysics Data System (ADS)

    Le Hur, Karyn; Soret, Ariane; Yang, Fan

    2017-11-01

    We theoretically address spin chain analogs of the Kitaev quantum spin model on the honeycomb lattice. The emergent quantum spin-liquid phases or Anderson resonating valence-bond (RVB) states can be understood, as an effective model, in terms of p-wave superconductivity and Majorana fermions. We derive a generalized phase diagram for the two-leg ladder system with tunable interaction strengths between chains allowing us to vary the shape of the lattice (from square to honeycomb ribbon or brick-wall ladder). We evaluate the winding number associated with possible emergent (topological) gapless modes at the edges. In the Az phase, as a result of the emergent Z2 gauge fields and π-flux ground state, one may build spin-1/2 (loop) qubit operators by analogy to the toric code. In addition, we show how the intermediate gapless B phase evolves in the generalized ladder model. For the brick-wall ladder, the B phase is reduced to one line, which is analyzed through perturbation theory in a rung tensor product states representation and bosonization. Finally, we show that doping with a few holes can result in the formation of hole pairs and leads to a mapping with the Su-Schrieffer-Heeger model in polyacetylene; a superconducting-insulating quantum phase transition for these hole pairs is accessible, as well as related topological properties.

  3. Fabrication of ZnO photonic crystals by nanosphere lithography using inductively coupled-plasma reactive ion etching with CH{sub 4}/H{sub 2}/Ar plasma on the ZnO/GaN heterojunction light emitting diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Shr-Jia; Chang, Chun-Ming; Kao, Jiann-Shiun

    2010-07-15

    This article reports the fabrication of an n-ZnO photonic crystal/p-GaN light emitting diode (LED) by nanosphere lithography to further boost the light-extraction efficiency. The fabrication of the ZnO photonic crystals is carried out by nanosphere lithography using inductively coupled plasma reactive ion etching with a CH{sub 4}/H{sub 2}/Ar plasma on the n-ZnO/p-GaN heterojunction LEDs. The CH{sub 4}/H{sub 2}/Ar gas mixture gives a high etching rate of the n-ZnO film, which yields a better surface morphology and results in less plasma-induced damage of the n-ZnO film. An optimal ZnO lattice parameter of 200 nm and an air fill factor from 0.35 to 0.65 were obtained from fitting the spectrum of the n-ZnO/p-GaN LED using a MATLAB code. We also show our recent result that a ZnO photonic crystal has been fabricated using a polystyrene nanosphere mask with a lattice parameter of 200 nm and a hole radius of around 70 nm. The surface morphology of the ZnO photonic crystal was examined by scanning electron microscopy.

  4. Noise prediction of a subsonic turbulent round jet using the lattice-Boltzmann method

    PubMed Central

    Lew, Phoi-Tack; Mongeau, Luc; Lyrintzis, Anastasios

    2010-01-01

    The lattice-Boltzmann method (LBM) was used to study the far-field noise generated by an unheated, turbulent, axisymmetric jet at Mach number Mj=0.4. A commercial code based on the LBM kernel was used to simulate the turbulent flow exhausting from a pipe 10 jet radii in length. Near-field flow results such as jet centerline velocity decay rates and turbulence intensities were in agreement with experimental results and with comparable LES studies. The predicted far-field sound pressure levels were within 2 dB of published experimental results. Weak unphysical tones were present at high frequency in the computed radiated sound pressure spectra. These tones are believed to be due to spurious sound wave reflections at boundaries between regions of varying voxel resolution. These “VR tones” did not appear to bias the underlying broadband noise spectrum, and they did not affect the overall levels significantly. The LBM appears to be a viable approach, comparable in accuracy to large eddy simulations, for the problem considered. The main advantages of this approach over Navier–Stokes based finite difference schemes may be a reduced computational cost, ease of including the nozzle in the computational domain, and ease of investigating nozzles with complex shapes. PMID:20815448

  5. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code, developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  6. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  7. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  8. Efficient LBM visual simulation on face-centered cubic lattices.

    PubMed

    Petkov, Kaloian; Qiu, Feng; Fan, Zhe; Kaufman, Arie E; Mueller, Klaus

    2009-01-01

    The Lattice Boltzmann method (LBM) for visual simulation of fluid flow generally employs cubic Cartesian (CC) lattices such as the D3Q13 and D3Q19 lattices for the particle transport. However, the CC lattices lead to suboptimal representation of the simulation space. We introduce the face-centered cubic (FCC) lattice, fD3Q13, for LBM simulations. Compared to the CC lattices, the fD3Q13 lattice creates a more isotropic sampling of the simulation domain and its single lattice speed (i.e., link length) simplifies the computations and data storage. Furthermore, the fD3Q13 lattice can be decomposed into two independent interleaved lattices, one of which can be discarded, which doubles the simulation speed. The resulting LBM simulation can be efficiently mapped to the GPU, further increasing the computational performance. We show the numerical advantages of the FCC lattice on channeled flow in 2D and the flow-past-a-sphere benchmark in 3D. In both cases, the comparison is against the corresponding CC lattices using the analytical solutions for the systems as well as velocity field visualizations. We also demonstrate the performance advantages of the fD3Q13 lattice for interactive simulation and rendering of hot smoke in an urban environment using thermal LBM.

  9. GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA

    NASA Astrophysics Data System (ADS)

    Ren, Qinlong

    The lattice Boltzmann method (LBM) has been developed as a powerful numerical approach to simulate complex fluid flow and heat transfer phenomena during the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages over traditional numerical methods, such as the physical representation of microscopic interactions, the handling of complex geometries, and its highly parallel nature. The lattice Boltzmann method has been applied to various fluid behaviors and heat transfer processes, such as conjugate heat transfer, magnetic and electric fields, diffusion and mixing processes, chemical reactions, multiphase flow, phase-change processes, non-isothermal flow in porous media, microfluidics, fluid-structure interactions in biological systems, and so on. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been used for decades to accelerate computation in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as the most powerful high-performance workstations in recent years. Unlike CPUs, GPUs with thousands of cores are cheap: for example, the GPU (GeForce GTX TITAN) used in the current work has 2688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture, which includes both hardware and a programming environment, in 2007 made GPU computing attractive. Due to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with a substantial performance benefit in recent years. In the current work, an LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and the immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, the PCM melting process in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems, all on a CUDA computational platform. It is demonstrated that LBM is an efficient method to simulate complex heat transfer problems using GPUs on CUDA.
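The collide-and-stream update that such GPU codes parallelize can be sketched on the CPU with NumPy. The sketch below is a generic D2Q9 BGK example of my own, not code from the dissertation; the grid size, relaxation time tau, and the initial density bump are arbitrary illustrative choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """BGK equilibrium distribution f_eq for each of the 9 directions."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu ** 2 - usq)

def step(f, tau=0.6):
    """One LBM update: moments, BGK collision, then streaming."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # collision
    for i, (cx, cy) in enumerate(c):                     # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# Small periodic domain seeded with a density bump
nx = ny = 32
rho0 = np.ones((nx, ny))
rho0[16, 16] = 1.1
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(50):
    f = step(f)
print(f.sum())  # total mass is conserved by both collision and streaming
```

On a GPU the same update is assigned one thread per lattice node, which is why LBM maps so naturally onto CUDA.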

  10. Area of Lattice Polygons

    ERIC Educational Resources Information Center

    Scott, Paul

    2006-01-01

    A lattice is a (rectangular) grid of points, usually pictured as occurring at the intersections of two orthogonal sets of parallel, equally spaced lines. Polygons that have lattice points as vertices are called lattice polygons. It is clear that lattice polygons come in various shapes and sizes. A very small lattice triangle may cover just 3…
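The standard tool for such problems, which the truncated abstract does not reach, is Pick's theorem: A = I + B/2 - 1, relating the area A of a lattice polygon to its interior lattice points I and boundary lattice points B. A small sketch (the triangle is my own example, not one from the article):

```python
from math import gcd

def lattice_polygon_area(verts):
    """Shoelace formula for a simple polygon with integer vertices."""
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1]
            - verts[(i + 1) % n][0] * verts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def boundary_points(verts):
    """Lattice points on the boundary: gcd(|dx|, |dy|) per edge."""
    n = len(verts)
    return sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                   abs(verts[(i + 1) % n][1] - verts[i][1]))
               for i in range(n))

# Pick's theorem: A = I + B/2 - 1, so I = A - B/2 + 1
tri = [(0, 0), (4, 0), (0, 3)]
A = lattice_polygon_area(tri)   # 6.0
B = boundary_points(tri)        # 8
I = A - B / 2 + 1               # 3.0 interior lattice points
```

For the 3-4-5-style right triangle above, the 3 interior points (1,1), (1,2) and (2,1) can be checked by hand, confirming the theorem.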

  11. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology heterogeneous database is then proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as a standard, a case study is carried out to verify the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The results show that the algorithm can automate the construction of a distributed concept lattice over heterogeneous data sources.

  12. Theoretical study of aerodynamic characteristics of wings having vortex flow

    NASA Technical Reports Server (NTRS)

    Reddy, C. S.

    1979-01-01

    The aerodynamic characteristics of slender wings having separation induced vortex flows are investigated by employing three different computer codes--free vortex sheet, quasi vortex lattice, and suction analogy methods. Their capabilities and limitations are examined, and modifications are discussed. Flat wings of different configurations: arrow, delta, and diamond shapes, as well as cambered delta wings, are studied. The effect of notch ratio on the load distributions and the longitudinal characteristics of a family of arrow and diamond wings is explored. The sectional lift coefficients and the accumulated span loadings are determined for an arrow wing and are seen to be unusual in comparison with the attached flow results. The theoretically predicted results are compared with the existing experimental values.

  13. Pressure induced phase transition in CdTe nanowire: A DFT study

    NASA Astrophysics Data System (ADS)

    Bhatia, Manjeet; Khan, Md. Shahzad; Srivastava, Anurag

    2018-05-01

    We have studied the structural phase transition and electronic properties of CdTe nanowires in their wurtzite (B4) and rocksalt (B1) phases by first-principles density functional calculations using the SIESTA code. The nanowires are derived from the wurtzite and rocksalt phases of bulk CdTe with the growth direction along the (100) planes. We observed a structural phase transition from B4 to B1 at 4.79 GPa. The wurtzite structure is found to have a band gap of 2.30 eV, while the rocksalt structure is metallic in nature. Our calculated lattice constants (4.55 Å for B4 and 5.84 Å for B1), transition pressure (4.79 GPa) and electronic structure results are in close agreement with previous calculations on bulk and nanostructures.

  14. Hofstadter butterfly evolution in the space of two-dimensional Bravais lattices

    NASA Astrophysics Data System (ADS)

    Yılmaz, F.; Oktel, M. Ö.

    2017-06-01

    The self-similar energy spectrum of a particle in a periodic potential under a magnetic field, known as the Hofstadter butterfly, is determined by the lattice geometry as well as the external field. Recent realizations of artificial gauge fields and adjustable optical lattices in cold-atom experiments necessitate the consideration of these self-similar spectra for the most general two-dimensional lattice. In a previous work [F. Yılmaz et al., Phys. Rev. A 91, 063628 (2015), 10.1103/PhysRevA.91.063628], we investigated the evolution of the spectrum for an experimentally realized lattice which was tuned by changing the unit-cell structure but keeping the square Bravais lattice fixed. We now consider all possible Bravais lattices in two dimensions and investigate the structure of the Hofstadter butterfly as the lattice is deformed between lattices with different point-symmetry groups. We model the optical lattice with a sinusoidal real-space potential and obtain the tight-binding model for any lattice geometry by calculating the Wannier functions. We introduce the magnetic field via Peierls substitution and numerically calculate the energy spectrum. The transition between the two most symmetric lattices, i.e., the triangular and the square lattices, displays the importance of bipartite symmetry featuring deformation as well as closing of some of the major energy gaps. The transitions from the square to rectangular lattice and from the triangular to centered rectangular lattices are analyzed in terms of coupling of one-dimensional chains. We calculate the Chern numbers of the major gaps and Chern number transfer between bands during the transitions. We use gap Chern numbers to identify distinct topological regions in the space of Bravais lattices.
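For the simplest geometry discussed above, the square lattice, the Peierls-substitution spectrum at rational flux p/q per plaquette reduces to the eigenvalues of a q×q Harper matrix. The sketch below is my own minimal illustration of that step (hopping set to 1; the gauge and momentum conventions are one common choice, not necessarily the paper's):

```python
import numpy as np

def harper_eigenvalues(p, q, kx=0.0, ky=0.0):
    """Magnetic-subband energies of a tight-binding particle on a square
    lattice with flux p/q per plaquette (Landau gauge, Peierls phases),
    at magnetic Bloch momentum (kx, ky); hopping amplitude t = 1."""
    alpha = p / q
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = 2.0 * np.cos(ky + 2.0 * np.pi * alpha * m)
    for m in range(q - 1):
        H[m, m + 1] = H[m + 1, m] = 1.0
    # Bloch phase across the q-site magnetic unit cell closes the chain
    H[0, q - 1] += np.exp(-1j * q * kx)
    H[q - 1, 0] += np.exp(1j * q * kx)
    return np.linalg.eigvalsh(H)

# Sweeping p/q over many rationals and plotting (p/q, E) pairs
# traces out the Hofstadter butterfly for this lattice.
for q in (2, 3, 5):
    print(q, np.round(harper_eigenvalues(1, q), 3))
```

Generalizing to arbitrary Bravais lattices, as the paper does, replaces the fixed hoppings here with Wannier-derived tight-binding parameters, but the Peierls phases enter in the same way.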

  15. Mean-Field Scaling of the Superfluid to Mott Insulator Transition in a 2D Optical Superlattice.

    PubMed

    Thomas, Claire K; Barter, Thomas H; Leung, Tsz-Him; Okano, Masayuki; Jo, Gyu-Boong; Guzman, Jennie; Kimchi, Itamar; Vishwanath, Ashvin; Stamper-Kurn, Dan M

    2017-09-08

    The mean-field treatment of the Bose-Hubbard model predicts properties of lattice-trapped gases to be insensitive to the specific lattice geometry once system energies are scaled by the lattice coordination number z. We test this scaling directly by comparing coherence properties of ^{87}Rb gases that are driven across the superfluid to Mott insulator transition within optical lattices of either the kagome (z=4) or the triangular (z=6) geometries. The coherent fraction measured for atoms in the kagome lattice is lower than for those in a triangular lattice with the same interaction and tunneling energies. A comparison of measurements from both lattices agrees quantitatively with the scaling prediction. We also study the response of the gas to a change in lattice geometry, and observe the dynamics as a strongly interacting kagome-lattice gas is suddenly "hole doped" by introducing the additional sites of the triangular lattice.

  16. Alternate Lattice Design for Advanced Photon Source Multi-Bend Achromat Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng; Borland, Michael

    2015-01-01

    A 67-pm hybrid seven-bend achromat (H7BA) lattice is proposed for a future Advanced Photon Source (APS) multi-bend achromat (MBA) upgrade. This lattice requires use of a swap-out (on-axis) injection scheme. Alternate lattice design work has also been performed to achieve better beam dynamics performance than the nominal APS MBA lattice, in order to allow beam accumulation. One such alternate H7BA lattice design, which still targets a very low emittance of 76 pm, is discussed in this paper. With these lattices, the existing APS injector complex can be employed without the requirement of very-high-charge operation. Studies show that an emittance below 76 pm can be achieved with the employment of reverse bends in an alternate lattice. We discuss the predicted performance and requirements for these lattices and compare them to the nominal lattice.

  17. Reorientations, relaxations, metastabilities, and multidomains of skyrmion lattices

    NASA Astrophysics Data System (ADS)

    Bannenberg, L. J.; Qian, F.; Dalgliesh, R. M.; Martin, N.; Chaboussant, G.; Schmidt, M.; Schlagel, D. L.; Lograsso, T. A.; Wilhelm, H.; Pappas, C.

    2017-11-01

    Magnetic skyrmions are nanosized topologically protected spin textures with particlelike properties. They can form lattices perpendicular to the magnetic field, and the orientation of these skyrmion lattices with respect to the crystallographic lattice is governed by spin-orbit coupling. By performing small-angle neutron scattering measurements, we investigate the coupling between the crystallographic and skyrmion lattices in both Cu2OSeO3 and the archetype chiral magnet MnSi. The results reveal that the orientation of the skyrmion lattice is primarily determined by the magnetic field direction with respect to the crystallographic lattice. In addition, it is also influenced by the magnetic history of the sample, which can induce metastable lattices. Kinetic measurements show that these metastable skyrmion lattices may or may not relax to their equilibrium positions under macroscopic relaxation times. Furthermore, multidomain lattices may form when two or more equivalent crystallographic directions are favored by spin-orbit coupling and oriented perpendicular to the magnetic field.

  18. Basic properties of lattices of cubes, algorithms for their construction, and application capabilities in discrete optimization

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2015-01-01

    The basic properties of a new type of lattice, the lattice of cubes, are described. It is shown that, with a suitable choice of union and intersection operations, the set of all subcubes of an N-cube forms a lattice, which is called a lattice of cubes. Algorithms for constructing such lattices are described, and the results produced by these algorithms in the case of lattices of various dimensions are illustrated. It is proved that a lattice of cubes is a lattice with supplements, which makes it possible to minimize and maximize supermodular functions on it. Examples of such functions are given. The possibility of applying previously developed efficient optimization algorithms to the formulation and solution of new classes of problems on lattices of cubes is also discussed.
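One standard encoding of the subcubes of an N-cube, consistent with the meet/join description above, represents each subcube as a word over {0, 1, *}, where * marks a free coordinate. The sketch below is my own illustration of such operations, not the paper's algorithm; meet is the intersection of subcubes (with None as the empty bottom element) and join is the smallest subcube containing both operands.

```python
from itertools import product

def meet(a, b):
    """Intersection of two subcubes; None is the empty (bottom) element."""
    if a is None or b is None:
        return None
    out = []
    for x, y in zip(a, b):
        if x == '*':
            out.append(y)
        elif y == '*':
            out.append(x)
        elif x == y:
            out.append(x)
        else:
            return None          # fixed bits conflict: empty intersection
    return ''.join(out)

def join(a, b):
    """Smallest subcube containing both operands."""
    if a is None:
        return b
    if b is None:
        return a
    return ''.join(x if x == y else '*' for x, y in zip(a, b))

# All subcubes of the N-cube: the 3^N words over {0, 1, *}
N = 3
subcubes = [''.join(w) for w in product('01*', repeat=N)]
print(len(subcubes))       # 27
print(meet('0**', '*1*'))  # '01*' (a 1-dimensional face)
print(join('000', '011'))  # '0**'
```

Adding the bottom element None to the 3^N subcubes gives the full lattice on which supermodular functions can then be optimized.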

  19. Exact coherent structures and chaotic dynamics in a model of cardiac tissue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Greg; Marcotte, Christopher D.; Grigoriev, Roman O., E-mail: roman.grigoriev@physics.gatech.edu

    Unstable nonchaotic solutions embedded in the chaotic attractor can provide significant new insight into chaotic dynamics of both low- and high-dimensional systems. In particular, in turbulent fluid flows, such unstable solutions are referred to as exact coherent structures (ECS) and play an important role in both initiating and sustaining turbulence. The nature of ECS and their role in organizing spatiotemporally chaotic dynamics, however, is reasonably well understood only for systems on relatively small spatial domains lacking continuous Euclidean symmetries. Construction of ECS on large domains and in the presence of continuous translational and/or rotational symmetries remains a challenge. This is especially true for models of excitable media which display spiral turbulence and for which the standard approach to computing ECS completely breaks down. This paper uses the Karma model of cardiac tissue to illustrate a potential approach that could allow computing a new class of ECS on large domains of arbitrary shape by decomposing them into a patchwork of solutions on smaller domains, or tiles, which retain Euclidean symmetries locally.

  20. Topological magnon bands in ferromagnetic star lattice.

    PubMed

    Owerre, S A

    2017-05-10

    The experimental observation of topological magnon bands and a thermal Hall effect in the kagomé lattice ferromagnet Cu(1,3-bdc) has inspired the search for topological magnon effects in various insulating ferromagnets that lack an inversion center, allowing a Dzyaloshinskii-Moriya (DM) spin-orbit interaction. The star lattice (also known as the decorated honeycomb lattice) ferromagnet is an ideal candidate for this purpose because it is a variant of the kagomé lattice with additional links that connect the up-pointing and down-pointing triangles. This gives rise to a unit cell twice that of the kagomé lattice, and hence to richer topological magnon effects. In particular, the triangular bridges on the star lattice can be coupled either ferromagnetically or antiferromagnetically, which is not possible in kagomé lattice ferromagnets. Here, we study DM-induced topological magnon bands, chiral edge modes, and the thermal magnon Hall effect on the star lattice ferromagnet in different parameter regimes. The star lattice can also be viewed as the parent material from which topological magnon bands can be realized for the kagomé and honeycomb lattices in some limiting cases.

  1. Physical Realization of von Neumann Lattices in Rotating Bose Gases with Dipole Interatomic Interactions.

    PubMed

    Cheng, Szu-Cheng; Jheng, Shih-Da

    2016-08-22

    This paper reports a novel type of vortex lattice, referred to as a bubble crystal, discovered in rapidly rotating Bose gases with long-range interactions. Bubble crystals differ from vortex lattices, which possess a single quantum of flux per unit cell; in bubble crystals, atoms are clustered periodically and surrounded by vortices. No existing model is able to describe the vortex structure of bubble crystals; however, we identified a mathematical lattice, which is a subset of coherent states and exists periodically in physical space. This lattice is called a von Neumann lattice, and when it possesses a single vortex per unit cell, it presents the same geometrical structure as an Abrikosov lattice. In this report, we extend the von Neumann lattice to one with an integral number of flux quanta per unit cell and demonstrate that von Neumann lattices reproduce well the translational properties of bubble crystals. Numerical simulations confirm that, as a generalized vortex lattice, a von Neumann lattice can be physically realized using vortex lattices in rapidly rotating Bose gases with dipole interatomic interactions.

  2. Invariant patterns in crystal lattices: Implications for protein folding algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.; ISTRAIL,SORIN

    2000-06-01

    Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
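
    The energy function of the HP model on the 2D square lattice, the simplest of the lattices discussed above, can be sketched as follows: a conformation is a self-avoiding walk, and the energy is -1 for every pair of hydrophobic (H) residues that are lattice neighbours but not consecutive in the chain. The representation below is a minimal illustration, not the authors' algorithm.

```python
# Minimal HP-model energy on the 2D square lattice.

def hp_energy(sequence, coords):
    """Energy = -(number of nonconsecutive H-H lattice contacts)."""
    assert len(sequence) == len(coords)
    assert len(set(coords)) == len(coords), "walk must be self-avoiding"
    energy = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):      # skip chain neighbours
            if sequence[i] == sequence[j] == 'H':
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice adjacency
                    energy -= 1
    return energy

# A U-shaped fold brings the two terminal H residues into contact:
conf = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy('HPPH', conf))  # -1
```

    An approximation algorithm for this model seeks a conformation whose energy is within a guaranteed factor of the optimum; the paper's "master" algorithm generalizes this across lattices sharing a suitable sublattice.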

  3. Reorientations, relaxations, metastabilities, and multidomains of skyrmion lattices

    DOE PAGES

    Bannenberg, L. J.; Qian, F.; Dalgliesh, R. M.; ...

    2017-11-13

    Magnetic skyrmions are nanosized topologically protected spin textures with particlelike properties. They can form lattices perpendicular to the magnetic field, and the orientation of these skyrmion lattices with respect to the crystallographic lattice is governed by spin-orbit coupling. By performing small-angle neutron scattering measurements, we investigate the coupling between the crystallographic and skyrmion lattices in both Cu2OSeO3 and the archetype chiral magnet MnSi. The results reveal that the orientation of the skyrmion lattice is primarily determined by the magnetic field direction with respect to the crystallographic lattice. In addition, it is also influenced by the magnetic history of the sample, which can induce metastable lattices. Kinetic measurements show that these metastable skyrmion lattices may or may not relax to their equilibrium positions on macroscopic relaxation timescales. Moreover, multidomain lattices may form when two or more equivalent crystallographic directions are favored by spin-orbit coupling and oriented perpendicular to the magnetic field.

  4. Nuclear Physics and Lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beane, Silas

    2003-11-01

    Impressive progress is currently being made in computing properties and interactions of the low-lying hadrons using lattice QCD. However, cost limitations will, for the foreseeable future, necessitate the use of quark masses, Mq, that are significantly larger than those of nature, lattice spacings, a, that are not significantly smaller than the physical scale of interest, and lattice sizes, L, that are not significantly larger than the physical scale of interest. Extrapolations in the quark masses, lattice spacing and lattice volume are therefore required. The hierarchy of mass scales is: L^(-1) << Mq << Λχ << a^(-1). The appropriate EFT for incorporating the light quark masses, the finite lattice spacing and the lattice size into hadronic observables is χPT, which provides systematic expansions in the small parameters e^(-mπL), 1/(LΛχ), p/Λχ, Mq/Λχ and aΛχ. The lattice introduces other unphysical scales as well. Lattice QCD quarks will increasingly be artificially separated

  5. Growth of coincident site lattice matched semiconductor layers and devices on crystalline substrates

    DOEpatents

    Norman, Andrew G; Ptak, Aaron J

    2013-08-13

    Methods of fabricating a semiconductor layer or device, and said devices, are disclosed. The methods include, but are not limited to, providing a substrate having a crystalline surface with a known lattice parameter (a). The method further includes growing a crystalline semiconductor layer on the crystalline substrate surface by coincident site lattice matched epitaxy, without any buffer layer between the crystalline semiconductor layer and the crystalline surface of the substrate. The crystalline semiconductor layer will be prepared to have a lattice parameter (a') that is related to the substrate lattice parameter (a). The lattice parameter (a') may be related to the lattice parameter (a) by a scaling factor derived from a geometric relationship between the respective crystal lattices.

  6. APS-U LATTICE DESIGN FOR OFF-AXIS ACCUMULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng; Borland, M.; Lindberg, R.

    2017-06-25

    A 67-pm hybrid-seven-bend achromat (H7BA) lattice is being proposed for a future Advanced Photon Source (APS) multi-bend-achromat (MBA) upgrade project. This lattice design pushes for smaller emittance and requires use of a swap-out (on-axis) injection scheme due to limited dynamic acceptance. Alternate lattice design work has also been performed for the APS upgrade to achieve better beam dynamics performance than the nominal APS MBA lattice, in order to allow off-axis accumulation. Two such alternate H7BA lattice designs, which target a still-low emittance of 90 pm, are discussed in detail in this paper. Although the single-particle-dynamics performance is good, simulations of collective effects indicate that surprising difficulty would be expected in accumulating high single-bunch charge in this lattice. The brightness of the 90-pm lattice is also a factor of two lower than that of the 67-pm H7BA lattice.

  8. Lattice topology dictates photon statistics.

    PubMed

    Kondakci, H Esat; Abouraddy, Ayman F; Saleh, Bahaa E A

    2017-08-21

    Propagation of coherent light through a disordered network is accompanied by randomization and possible conversion into thermal light. Here, we show that network topology plays a decisive role in determining the statistics of the emerging field if the underlying lattice is endowed with chiral symmetry. In such lattices, eigenmodes come in skew-symmetric pairs with oppositely signed eigenvalues. By examining one-dimensional arrays of randomly coupled waveguides arranged on linear and ring topologies, we are led to a remarkable prediction: the field circularity and the photon statistics in a ring lattice are dictated by the ring's parity, while the same quantities are insensitive to the parity of a linear lattice. For a ring lattice, adding or subtracting a single lattice site can switch the photon statistics from super-thermal to sub-thermal, or vice versa. This behavior is understood by examining the real and imaginary fields on a lattice exhibiting chiral symmetry, which form two strands that interleave along the lattice sites. These strands can be fully braided around an even-sited ring lattice, thereby producing super-thermal photon statistics, while an odd-sited lattice is incommensurate with such an arrangement and the statistics become sub-thermal.
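
    The role of ring parity can be sketched with a standard tight-binding illustration (not code from the paper): chiral symmetry requires a sublattice operator G = diag(+1, -1, +1, ...) that anticommutes with the hopping matrix, which for a nearest-neighbour ring is possible only when the site count is even.

```python
# Even-sited rings admit chiral symmetry; odd-sited rings frustrate it.
import random

def ring_hamiltonian(n, rng):
    """Hopping matrix of an n-site ring with random couplings."""
    h = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c = rng.uniform(0.5, 1.5)        # random coupling strength
        j = (i + 1) % n
        h[i][j] = h[j][i] = c
    return h

def anticommutes(h):
    """Check {H, G} = 0 for the alternating sublattice operator G."""
    n = len(h)
    g = [1 if i % 2 == 0 else -1 for i in range(n)]
    # Since G is diagonal, {H, G}_ij = H_ij * (g_i + g_j).
    return all(abs(h[i][j] * (g[i] + g[j])) < 1e-12
               for i in range(n) for j in range(n))

rng = random.Random(1)
print(anticommutes(ring_hamiltonian(6, rng)))  # True: even ring is chiral
print(anticommutes(ring_hamiltonian(7, rng)))  # False: odd ring is not
```

    When the anticommutation holds, eigenvalues come in the oppositely signed pairs the abstract describes; the single bond closing an odd ring joins two sites of the same sublattice and destroys the symmetry.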

  9. An Intelligent Strain Gauge with Debond Detection and Temperature Compensation

    NASA Technical Reports Server (NTRS)

    Jensen, Scott L.

    2012-01-01

    The harsh rocket propulsion test environment will expose any inadequacies associated with preexisting instrumentation technologies, and the criticality of collecting reliable test data justifies investigating any encountered data anomalies. Novel concepts for improved systems are often conceived during these high-scrutiny investigations by individuals with in-depth knowledge gained from maintaining critical test operations. The Intelligent Strain Gauge concept was conceived while performing these kinds of activities. However, such novel concepts often go unexplored even when they have the potential to advance the current state of the art. Maturing these kinds of concepts is often considered a tangential development or a research project, both of which are normally abandoned within the propulsion-oriented environment. It is also difficult to justify these kinds of projects as facility enhancements because facility developments are only accepted for mature and proven technologies. Fortunately, the CIF program has provided an avenue for bringing the Intelligent Strain Gauge to fruition. Two types of fully functional smart strain gauges capable of performing reliable and sensitive debond detection have been successfully produced. Ordinary gauges are designed to provide test article data and lack the ability to supply information concerning the gauge itself. A gauge is considered a smart gauge when it provides supplementary data relating other relevant attributes for performing diagnostic functions or producing enhanced data. The developed strain gauges provide supplementary signals by measuring strain and temperature through embedded Karma and nickel-chromium (NiCr) alloy elements. Interpreting the supplementary data into valuable information can be performed manually; a gauge that integrates this interpretation into an automatic system is considered an intelligent gauge. This was achieved while maintaining a very low mass.
The low mass enables debond detection and temperature compensation to be performed when the gauge is utilized on small test articles. It was also found that the element's mass must be relatively small to avoid overwhelming the desired thermal dissipation characteristics. Detecting the degradation of a gauge's bond was reliably achieved by correlating thermal dissipation with the bond's integrity. This was accomplished by precisely coupling a NiCr element with a Karma element for accurately interjecting and quantifying thermal energy. A finite amount of thermal energy is consistently placed in the gauge by electrically powering the NiCr element. The energy is only temporarily stored before it begins to dissipate into the surrounding structure through the gauge bond. The ability to transmit the energy into the structure becomes greatly inhibited by any discontinuity in the bond's substrate. Therefore, the way the thermal dissipation occurs reveals even the slightest change in the integrity of the bond.

  10. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling.

    PubMed

    Xiao, Bo; Huang, Chewei; Imel, Zac E; Atkins, David C; Georgiou, Panayiotis; Narayanan, Shrikanth S

    2016-04-01

    Scaling up psychotherapy services, such as addiction counseling, is a critical societal need. One challenge is ensuring the quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech-technology-based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, plus an automatic speech recognizer and a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated session-level empathy codes using utterance-level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert-annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training.

  11. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling

    PubMed Central

    Xiao, Bo; Huang, Chewei; Imel, Zac E.; Atkins, David C.; Georgiou, Panayiotis; Narayanan, Shrikanth S.

    2016-01-01

    Scaling up psychotherapy services, such as addiction counseling, is a critical societal need. One challenge is ensuring the quality of therapy, due to the heavy cost of manual observational assessment. This work proposes a speech-technology-based system to automate the assessment of therapist empathy, a key therapy quality index, from audio recordings of the psychotherapy interactions. We designed a speech processing system that includes voice activity detection and diarization modules, plus an automatic speech recognizer and a speaker role matching module to extract the therapist's language cues. We employed Maximum Entropy models, Maximum Likelihood language models, and a Lattice Rescoring method to characterize high vs. low empathic language. We estimated session-level empathy codes using utterance-level evidence obtained from these models. Our experiments showed that the fully automated system achieved a correlation of 0.643 between expert-annotated empathy codes and machine-derived estimations, and an accuracy of 81% in classifying high vs. low empathy, in comparison to a 0.721 correlation and 86% accuracy in the oracle setting using manual transcripts. The results show that the system provides useful information that can contribute to automatic quality assurance and therapist training. PMID:28286867

  12. Validation of the new code package APOLLO2.8 for accurate PWR neutronics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santamarina, A.; Bernard, D.; Blaise, P.

    2013-07-01

    This paper summarizes the qualification work performed to demonstrate the accuracy of the new APOLLO2.8/SHEM-MOC package, based on the JEFF3.1.1 nuclear data file, for the prediction of PWR neutronics parameters. This experimental validation is based on PWR mock-up critical experiments performed in the EOLE/MINERVE zero-power reactors and on post-irradiation examinations (PIEs) of spent fuel assemblies from French PWRs. The calculation-experiment comparison for the main design parameters is presented: reactivity of UOX and MOX lattices, depletion calculation and fuel inventory, reactivity loss with burnup, pin-by-pin power maps, Doppler coefficient, moderator temperature coefficient, void coefficient, UO{sub 2}-Gd{sub 2}O{sub 3} poisoning worth, efficiency of Ag-In-Cd and B4C control rods, and reflector saving for both the standard 2-cm baffle and the GEN3 advanced thick stainless-steel reflector. From this qualification process, calculation biases and associated uncertainties are derived. The APOLLO2.8 code package is already implemented in ARCADIA, the new AREVA calculation chain for core physics, and is currently being implemented in the future neutronics package of the French utility Electricite de France. (authors)

  13. Optical-analog-to-digital conversion based on successive-like approximations in octagonal-shape photonic crystal ring resonators

    NASA Astrophysics Data System (ADS)

    Tavousi, A.; Mansouri-Birjandi, M. A.

    2018-02-01

    By implementing intensity-dependent Kerr-like nonlinearity in octagonal-shape photonic crystal ring resonators (OSPCRRs), a new class of optical analog-to-digital converters (ADCs) with low power consumption is presented. Owing to its size-dependent refractive index, silicon (Si) nanocrystal is used as the nonlinear medium in the proposed ADC. The coding system of the optical ADC is based on successive-like approximations, which require only one quantization level to represent each single bit, unlike conventional ADCs that require at least two distinct levels per bit. Each bit of the optical ADC is formed by vertical alignment of double rings of OSPCRRs (DR-OSPCRR), and cascading m DR-OSPCRRs forms an m-bit ADC. By investigating different parameters of the DR-OSPCRR, such as the refractive indices of the rings, the lattice refractive index, and the waveguide-to-ring and ring-to-ring coupling coefficients, the ADC's threshold power is tuned. Increasing the number of bits increases the overall power consumption of the ADC. One can arrange any number of bits for this ADC, as long as the power levels are treated carefully. In-house finite-difference time-domain (FDTD) codes were used to evaluate the ADC's effectiveness.
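
    The "successive-like approximation" coding can be sketched with the classic electronic successive-approximation ADC, which likewise resolves one bit per comparison against a single threshold. This is only the shared algorithmic idea; the optical device itself is not modelled here.

```python
# Successive-approximation quantization: a binary search over the input
# range, one comparison per output bit (MSB first).

def sar_adc(v_in, v_ref, bits):
    """Return the digital code for v_in in [0, v_ref)."""
    code = 0
    for k in range(bits - 1, -1, -1):
        trial = code | (1 << k)                  # tentatively set bit k
        if v_in >= v_ref * trial / (1 << bits):  # one threshold comparison
            code = trial                         # keep the bit
    return code

print(sar_adc(0.6, 1.0, 3))   # 4  (0.6 of full scale -> code 100b)
print(sar_adc(0.95, 1.0, 4))  # 15
```

    Each added bit costs one more comparison stage, mirroring the abstract's observation that power consumption grows with the number of bits.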

  14. An integrated simulator of structure and anisotropic flow in gas diffusion layers with hydrophobic additives

    NASA Astrophysics Data System (ADS)

    Burganos, Vasilis N.; Skouras, Eugene D.; Kalarakis, Alexandros N.

    2017-10-01

    The lattice-Boltzmann (LB) method is used in this work to reproduce the controlled addition of binder and hydrophobicity-promoting agents, like polytetrafluoroethylene (PTFE), into gas diffusion layers (GDLs) and to predict flow permeabilities in the through- and in-plane directions. The present simulator manages to reproduce spreading of binder and hydrophobic additives, sequentially, into the neat fibrous layer using a two-phase flow model. Gas flow simulation is achieved by the same code, sidestepping the need for a post-processing flow code and avoiding the usual input/output and data interface problems that arise in other techniques. Compression effects on flow anisotropy of the impregnated GDL are also studied. The permeability predictions for different compression levels and for different binder or PTFE loadings are found to compare well with experimental data for commercial GDL products and with computational fluid dynamics (CFD) predictions. Alternatively, the PTFE-impregnated structure is reproduced from Scanning Electron Microscopy (SEM) images using an independent, purely geometrical approach. A comparison of the two approaches is made regarding their adequacy to reproduce correctly the main structural features of the GDL and to predict anisotropic flow permeabilities at different volume fractions of binder and hydrophobic additives.

  15. Coherent and radiative couplings through two-dimensional structured environments

    NASA Astrophysics Data System (ADS)

    Galve, F.; Zambrini, R.

    2018-03-01

    We study coherent and radiative interactions induced among two or more quantum units by coupling them to two-dimensional (2D) lattices acting as structured environments. This model can represent atoms trapped near photonic crystal slabs, trapped ions in Coulomb crystals, surface acoustic waves on piezoelectric materials, cold atoms in state-dependent optical lattices, or even circuit QED architectures, to name a few. We compare coherent and radiative contributions for the isotropic and directional regimes of emission into the lattice, for infinite and finite lattices, highlighting their differences and existing pitfalls, e.g., those related to long-time or large-lattice limits. We relate the phenomenon of directional emission to linear-shaped isofrequency manifolds in the dispersion relation, showing a simple way to disrupt it. For finite lattices, we study further details such as the scaling of the resonant number of lattice modes for the isotropic and directional regimes, and relate this behavior to the known van Hove singularities in the infinite-lattice limit. Furthermore, we connect this understanding of the emission dynamics to the decay of entanglement for two quantum units, atomic or bosonic, coupled to the 2D lattice. We analyze in some detail completely subradiant configurations of more than two atoms, which can occur in the finite-lattice scenario, in contrast with the infinite-lattice case. Finally, we demonstrate that the induced coherent interactions for dark states are zero for the finite lattice.

  16. Potts and percolation models on bowtie lattices

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Wang, Yancheng; Li, Yang

    2012-08-01

    We give the exact critical frontier of the Potts model on bowtie lattices. For the case of q=1, the critical frontier yields the thresholds of bond percolation on these lattices, which are exactly consistent with the results given by Ziff [J. Phys. A 39, 15083 (2006)]. For the q=2 Potts model on a bowtie A lattice, the critical point is in agreement with that of the Ising model on this lattice, which has been exactly solved. Furthermore, we perform extensive Monte Carlo simulations of the Potts model on a bowtie A lattice with noninteger q. Our numerical results, which are accurate up to seven significant digits, are consistent with the theoretical predictions. We also simulate site percolation on a bowtie A lattice, and the threshold is sc=0.5479148(7). In the simulations of bond percolation and site percolation, we find that the shape-dependent properties of the percolation model on a bowtie A lattice differ somewhat from those of an isotropic lattice, which may be caused by the anisotropy of the lattice.
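
    The Monte Carlo side of a threshold estimate such as sc=0.5479148(7) can be sketched as follows: occupy sites at random with probability p and ask whether an occupied cluster spans the lattice. For brevity this uses a square lattice rather than the bowtie A lattice studied in the paper.

```python
# Site-percolation spanning test plus a Monte Carlo spanning-probability
# estimator (square lattice, for illustration only).
import random
from collections import deque

def percolates(grid):
    """True if occupied sites connect the top row to the bottom row."""
    rows, cols = len(grid), len(grid[0])
    seen = set((0, c) for c in range(cols) if grid[0][c])
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_probability(p, size, trials, rng):
    """Fraction of random grids at occupation probability p that span."""
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(size)]
                for _ in range(size)]
        hits += percolates(grid)
    return hits / trials

print(percolates([[1, 1], [0, 1]]))  # True
print(percolates([[1, 0], [0, 1]]))  # False
```

    In practice the threshold is located where the spanning probability crosses a fixed value as the lattice size grows; the seven-digit precision quoted above additionally requires finite-size-scaling analysis.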

  17. Ion channeling study of defects in compound crystals using Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Turos, A.; Jozwik, P.; Nowicki, L.; Sathish, N.

    2014-08-01

    Ion channeling is a well-established technique for determining the structural properties of crystalline materials. Defect depth profiles have usually been determined based on the two-beam model developed by Bøgh (1968) [1]. As long as the main research interest was focused on single-element crystals, it was considered sufficiently accurate. A new challenge emerged with the growing technological importance of compound single crystals and epitaxial heterostructures. The overlap of partial spectra due to different sublattices and the formation of complicated defect structures make the two-beam method hardly applicable. The solution is provided by Monte Carlo computer simulations. Our paper reviews the principal aspects of this approach and the recent developments in the McChasy simulation code. The latter made it possible to distinguish between randomly displaced atoms (RDA) and extended defects (dislocations, loops, etc.). Hence, complex defect structures can be characterized by the relative content of these two components. The next refinement of the code consists of a detailed parameterization of dislocations and dislocation loops. Defect profiles for a variety of compound crystals (GaN, ZnO, SrTiO3) have been measured and evaluated using the McChasy code. Damage accumulation curves for RDA and extended defects revealed a nonmonotonic defect buildup with some characteristic steps. The transition to each stage is governed by a different driving force. As shown by complementary high-resolution XRD measurements, lattice strain plays a crucial role here and can be correlated with the concentration of extended defects.

  18. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The model implemented in the GRIZZLY code is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor changes in the concrete samples, and the results are summarized.

  19. Preliminary Analysis of the BASALA-H Experimental Programme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise, Patrick; Fougeras, Philippe; Philibert, Herve

    2002-07-01

    This paper is focused on the preliminary analysis of results obtained on the first cores of the first phase of the BASALA (Boiling water reactor Advanced core physics Study Aimed at mox fuel Lattice) programme, aimed at studying the neutronic parameters of an ABWR core in hot conditions, currently under investigation in the French EOLE critical facility, within the framework of a cooperation between NUPEC, CEA and Cogema. The first 'on-line' analysis of the results has been made using a new preliminary design and safety scheme based on the French APOLLO-2 code in its qualified 2.4 version and the associated CEA-93 V4 (JEF-2.2) library, which will enable the Experimental Physics Division (SPEx) to perform future core designs. It describes the scheme adopted and the results obtained in various cases, ranging from the critical size determination to the reactivity worth of the perturbed configurations (voided, over-moderated, and poisoned with Gd{sub 2}O{sub 3}-UO{sub 2} pins). A preliminary study of the experimental results on MISTRAL-4 is summarized, and a comparison of APOLLO-2 versus MCNP-4C calculations on these cores is made. The results show very good agreement between the two codes and with experiment. This work opens the way to the future full analysis of the experimental results with completely validated schemes, based on the new 2.5 version of the APOLLO-2 code. (authors)

  20. Interference Lattice-based Loop Nest Tilings for Stencil Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael

    2000-01-01

    A common method for improving the performance of stencil operations on structured multi-dimensional discretization grids is loop tiling. Tile shapes and sizes are usually determined heuristically, based on the size of the primary data cache. We provide a lower bound on the number of cache misses that must be incurred by any tiling, and a close achievable bound using a particular tiling based on the grid interference lattice. The latter tiling is used to derive highly efficient loop orderings. The total number of cache misses of a code is the sum of (necessary) cold misses and misses caused by elements being dropped from the cache between successive loads (replacement misses). Maximizing temporal locality is equivalent to minimizing replacement misses. Temporal locality of loop nests implementing stencil operations is optimized by tilings that avoid data conflicts. We divide the loop nest iteration space into conflict-free tiles, derived from the cache miss equation. The tiling involves the definition of the grid interference lattice (an equivalence class of grid points whose images in main memory map to the same location in the cache) and the construction of a special basis for the lattice. Conflicts only occur on the boundaries of the tiles, unless the tiles are too thin. We show that the surface area of the tiles is bounded for grids of any dimensionality, and for caches of any associativity, provided the eccentricity of the fundamental parallelepiped (the tile spanned by the basis) of the lattice is bounded. Eccentricity is determined by two factors, aspect ratio and skewness. The aspect ratio of the parallelepiped can be bounded by appropriate array padding. The skewness can be bounded by the choice of a proper basis. Combining these two strategies ensures that pathologically thin tiles are avoided. They do not, however, minimize replacement misses per se. The reason is that tile visitation order influences the number of data conflicts on the tile boundaries. 
If two adjacent tiles are visited successively, there will be no replacement misses on the shared boundary. The iteration space may be covered with pencils larger than the size of the cache while avoiding data conflicts if the pencils are traversed by a scanning-face method. Replacement misses are incurred only on the boundaries of the pencils, and the number of misses is minimized by maximizing the volume of the scanning face, not the volume of the tile. We present an algorithm for constructing the most efficient scanning face for a given grid and stencil operator. In two dimensions it is based on a continued fraction algorithm. In three dimensions it follows Voronoi's successive minima algorithm. We show experimental results of using the scanning face, and compare with canonical loop orderings.
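The interference-lattice idea above can be sketched with a toy model: for a direct-mapped cache, find the small (di, dj) offsets on a column-major grid whose elements fall in different memory lines but map to the same cache set. The grid size and cache geometry below are illustrative assumptions, not parameters from the paper.

```python
def line_and_set(i, j, n_rows, line_words=8, n_sets=512):
    """Memory line and direct-mapped cache set of element (i, j),
    assuming column-major (Fortran) storage of an n_rows-tall grid."""
    addr = j * n_rows + i               # word address
    line = addr // line_words
    return line, line % n_sets

def conflict_vectors(n_rows, max_dj=16, line_words=8, n_sets=512):
    """Small (di, dj) offsets whose target lies in a *different* memory
    line but the *same* cache set as (0, 0): generators of the grid
    interference lattice.  Tiles that avoid these vectors internally
    incur conflicts only on their boundaries."""
    _, base_set = line_and_set(0, 0, n_rows, line_words, n_sets)
    vecs = []
    for dj in range(max_dj + 1):
        for di in range(n_rows):
            line, cset = line_and_set(di, dj, n_rows, line_words, n_sets)
            if line != 0 and cset == base_set:
                vecs.append((di, dj))
                break                   # smallest di for this column offset
    return vecs

# A 1024-row grid with a 512-set, 8-word-line cache: columns 4 apart collide,
# so untiled column-by-column traversal evicts data it is about to reuse.
print(conflict_vectors(n_rows=1024))
```

For these (assumed) parameters the lattice is generated by a pure column offset, which is exactly the situation that array padding is meant to break up.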

  1. PyFly: A fast, portable aerodynamics simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing unsteady effects such as the added mass forces, the growth of bound circulation, and the wake, while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices and thereby overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface, while the computational kernels are written in Fortran. The mixed-language approach enables high performance in solution time and great flexibility in adapting the code to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.
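The Biot–Savart building block of a vortex-lattice solver can be sketched as follows; the segment formula is the standard Katz–Plotkin form used in UVLM codes generally, and all numerical values are illustrative, not taken from PyFly.

```python
import numpy as np

def segment_induced_velocity(p, a, b, gamma, eps=1e-10):
    """Velocity induced at point p by a straight vortex filament from a to b
    with circulation gamma (Biot-Savart law, Katz & Plotkin form)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    c2 = np.dot(cross, cross)
    if c2 < eps:                        # p lies (nearly) on the filament axis
        return np.zeros(3)
    r0 = b - a
    k = gamma / (4.0 * np.pi * c2) * (
        np.dot(r0, r1) / np.linalg.norm(r1)
        - np.dot(r0, r2) / np.linalg.norm(r2))
    return k * cross

def ring_induced_velocity(p, corners, gamma):
    """Velocity induced by one vortex-ring panel: sum over its four edges."""
    v = np.zeros(3)
    for i in range(4):
        v += segment_induced_velocity(p, corners[i], corners[(i + 1) % 4], gamma)
    return v
```

Summing this kernel over every ring at every evaluation point is the O(N^2) cost that far-field approximations, such as the pointwise approach above, are designed to reduce.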

  2. Assessing the role of the Kelvin-Helmholtz instability at the QCD cosmological transition

    NASA Astrophysics Data System (ADS)

    Mourão Roque, V. R. C.; Lugones, G.

    2018-03-01

    We performed numerical simulations with the PLUTO code in order to analyze the non-linear behavior of the Kelvin-Helmholtz instability in non-magnetized relativistic fluids. The relevance of the instability at the cosmological QCD phase transition was explored using an equation of state based on lattice QCD results with the addition of leptons. The results of the simulations were compared with the theoretical predictions of the linearized theory. For small Mach numbers, up to Ms ~ 0.1, we find that the two results are in good agreement. However, for higher Mach numbers, non-linear effects are significant. In particular, many initial conditions that look stable according to the linear analysis are shown to be unstable according to the full calculation. Since according to lattice calculations the cosmological QCD transition is a smooth crossover, violent fluid motions are not expected. Thus, in order to assess the role of the Kelvin-Helmholtz instability at the QCD epoch, we focus on simulations with low shear velocity and use monochromatic as well as random perturbations to trigger the instability. We find that the Kelvin-Helmholtz instability can strongly amplify turbulence in the primordial plasma and as a consequence it may increase the amount of primordial gravitational radiation. Such turbulence may be relevant for the evolution of the Universe at later stages and may have an impact on the stochastic gravitational wave background.
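For orientation, the textbook incompressible vortex-sheet growth rate sketched below is the low-velocity, non-relativistic limit of the kind of linearized theory the simulations are compared against; it is a hedged illustration, not the relativistic dispersion relation used in the paper.

```python
import numpy as np

def kh_growth_rate(k, u1, u2, rho1=1.0, rho2=1.0):
    """Classical incompressible vortex-sheet Kelvin-Helmholtz growth rate
    (no gravity, no surface tension):
        sigma = k * sqrt(rho1 * rho2) * |u1 - u2| / (rho1 + rho2).
    Every interface of this idealized kind is unstable at all k; shorter
    wavelengths grow faster."""
    return k * np.sqrt(rho1 * rho2) * abs(u1 - u2) / (rho1 + rho2)

# e-folding time of a mode of wavenumber k across a weak shear layer
# (all values illustrative):
sigma = kh_growth_rate(k=2.0, u1=0.1, u2=-0.1)
print(1.0 / sigma)
```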

  3. Unifying neural-network quantum states and correlator product states via tensor networks

    NASA Astrophysics Data System (ADS)

    Clark, Stephen R.

    2018-04-01

    Correlator product states (CPS) are a powerful and very broad class of states for quantum lattice systems whose (unnormalised) amplitudes in a fixed basis can be sampled exactly and efficiently. They work by gluing together states of overlapping clusters of sites on the lattice, called correlators. Recently Carleo and Troyer (2017 Science 355 602) introduced a new type of sampleable ansatz called neural-network quantum states (NQS), inspired by the restricted Boltzmann machine used in machine learning. By employing the formalism of tensor networks, we show that NQS are a special form of CPS with novel properties. Diagrammatically, a number of simple observations become transparent. Namely, NQS are CPS built from extensively sized GHZ-form correlators, making them uniquely unbiased geometrically. The appearance of GHZ correlators also relates NQS to canonical polyadic decompositions of tensors. Another immediate implication of the NQS equivalence to CPS is that we are able to formulate exact NQS representations for a wide range of paradigmatic states, including superpositions of weighted-graph states, the Laughlin state, toric code states, and the resonating valence bond state. These examples reveal the potential of using higher dimensional hidden units and a second hidden layer in NQS. The major outlook of this study is the elevation of NQS to correlator operators allowing them to enhance conventional well-established variational Monte Carlo approaches for strongly correlated fermions.
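The RBM-form NQS amplitude referred to above can be written down directly: after tracing out the hidden units it factorizes into a product over hidden-unit terms, which is what makes any basis amplitude cheap to evaluate and hence exactly sampleable. A minimal sketch with arbitrary random parameters:

```python
import numpy as np

def nqs_amplitude(s, a, b, W):
    """Unnormalised RBM neural-network quantum state amplitude for a spin
    configuration s in {-1, +1}^N, with hidden units traced out:
        psi(s) = exp(a . s) * prod_j 2 cosh(b_j + sum_i W_ij s_i).
    Cost is O(N * M); no sum over 2^M hidden configurations is needed."""
    theta = b + W.T @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

# Tiny demonstration with random (illustrative) parameters.
rng = np.random.default_rng(0)
N, M = 4, 3
a, b = rng.normal(size=N), rng.normal(size=M)
W = rng.normal(size=(N, M))
s = np.array([1, -1, 1, 1])
print(nqs_amplitude(s, a, b, W))
```

The factorized form is exactly the extensive product-of-correlators structure that the CPS correspondence makes explicit.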

  4. QCD equation of state for heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Zhao, A.-Meng; Shi, Yuan-Mei; Li, Jian-Feng; Zong, Hong-Shi

    2017-10-01

    In this work, we calculate the equation of state (EoS) of the quark-gluon plasma (QGP) using the Cornwall-Jackiw-Tomboulis (CJT) effective action. We obtain the quark propagator by using the rank-1 separable model within the framework of the Dyson-Schwinger equations (DSEs). The results from the CJT effective action are compared with lattice QCD data. We find that, when μ is small, our results generally fit the lattice QCD data for T > Tc, but show deviations at and below Tc. It can be concluded that the CJT EoS is reliable for T > Tc. Then, by adopting the hydrodynamic code UVH2+1, we compare the CJT results for the multiplicity and elliptic flow v2 with the PHENIX data and with the results from the original EoS in UVH2+1. While the CJT multiplicities generally match the original UVH2+1 results and fit the experimental data, the CJT results for v2 are slightly larger than the original UVH2+1 results for centralities smaller than 40% and smaller than the original UVH2+1 results for higher centralities. Supported by National Natural Science Foundation of China (11447121, 11475085, 11535005, 11690030), Fundamental Research Funds for the Central Universities (020414380074), Jiangsu Planned Projects for Postdoctoral Research Funds (1501035B) and Natural Science Foundation of Jiangsu Province (BK20130078, BK20130387)

  5. Numerical analysis of the angular motion of a neutrally buoyant spheroid in shear flow at small Reynolds numbers.

    PubMed

    Rosén, T; Einarsson, J; Nordmark, A; Aidun, C K; Lundell, F; Mehlig, B

    2015-12-01

    We numerically analyze the rotation of a neutrally buoyant spheroid in a shear flow at small shear Reynolds number. Using direct numerical stability analysis of the coupled nonlinear particle-flow problem, we compute the linear stability of the log-rolling orbit at small shear Reynolds number Re(a). As Re(a)→0 and as the box size of the system tends to infinity, we find good agreement between the numerical results and earlier analytical predictions valid to linear order in Re(a) for the case of an unbounded shear. The numerical stability analysis indicates that there are substantial finite-size corrections to the analytical results obtained for the unbounded system. We also compare the analytical results to results of lattice Boltzmann simulations to analyze the stability of the tumbling orbit at shear Reynolds numbers of order unity. Theory for an unbounded system at infinitesimal shear Reynolds number predicts a bifurcation of the tumbling orbit at aspect ratio λ(c)≈0.137 below which tumbling is stable (as well as log rolling). The simulation results show a bifurcation line in the λ-Re(a) plane that reaches λ≈0.1275 at the smallest shear Reynolds number (Re(a)=1) at which we could simulate with the lattice Boltzmann code, in qualitative agreement with the analytical results.
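The Stokes-limit baseline that the stability analysis above perturbs is Jeffery's theory; a minimal sketch of the in-plane tumbling dynamics and its closed-form period follows (the step size and aspect ratio are illustrative assumptions, and the finite-Re corrections studied in the paper are not included).

```python
import numpy as np

def jeffery_phi_dot(phi, lam, gamma_dot=1.0):
    """In-plane angular velocity of a spheroid of aspect ratio lam tumbling
    in simple shear, in the Stokes (Re -> 0) limit of Jeffery's theory."""
    return gamma_dot / (lam**2 + 1.0) * (lam**2 * np.cos(phi)**2
                                         + np.sin(phi)**2)

def tumbling_period(lam, gamma_dot=1.0, dt=1e-4):
    """Integrate phi over half a revolution with forward Euler; a full
    tumble takes twice that.  Closed form: T = 2*pi*(lam + 1/lam)/gamma_dot."""
    phi, t = 0.0, 0.0
    while phi < np.pi:
        phi += jeffery_phi_dot(phi, lam, gamma_dot) * dt
        t += dt
    return 2.0 * t

# A lam = 0.5 spheroid: numerical period vs the closed form 2*pi*(0.5 + 2).
print(tumbling_period(0.5), 2.0 * np.pi * 2.5)
```

The nonuniform angular velocity (slow near the flow direction, fast across it) is what makes tumbling-versus-log-rolling selection sensitive to the small inertial corrections the paper computes.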

  6. PyFly: A fast, portable aerodynamics simulator

    DOE PAGES

    Garcia, Daniel; Ghommem, M.; Collier, Nathaniel O.; ...

    2018-03-14

    Here, we present a fast, user-friendly implementation of a potential flow solver based on the unsteady vortex lattice method (UVLM), namely PyFly. UVLM computes the aerodynamic loads applied on lifting surfaces while capturing unsteady effects such as the added mass forces, the growth of bound circulation, and the wake, while assuming that the flow separation location is known a priori. This method is based on discretizing the body surface into a lattice of vortex rings and relies on the Biot–Savart law to construct the velocity field at every point in the simulated domain. We introduce the pointwise approximation approach to simulate the interactions of the far-field vortices and thereby overcome the computational burden associated with the classical implementation of UVLM. The computational framework uses the Python programming language to provide an easy-to-handle user interface, while the computational kernels are written in Fortran. The mixed-language approach enables high performance in solution time and great flexibility in adapting the code to different system configurations and applications. The computational tool predicts the unsteady aerodynamic behavior of multiple moving bodies (e.g., flapping wings, rotating blades, suspension bridges) subject to incoming air. The aerodynamic simulator can also deal with enclosure effects, multi-body interactions, and B-spline representation of body shapes. Finally, we simulate different aerodynamic problems to illustrate the usefulness and effectiveness of PyFly.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari

    We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values, offering freedom in adjusting the convergence parameters, and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV, and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk moduli and lattice constants reveals an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.
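The orbital-free idea can be illustrated with the Thomas-Fermi functional, the simplest density-only kinetic energy; the parametrized functionals benchmarked in the record go well beyond this, but the structure T = T[n] is the same. A sketch in atomic units:

```python
import numpy as np

# Thomas-Fermi constant C_F = (3/10) * (3*pi^2)^(2/3) in atomic units.
C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)

def t_tf(density, r):
    """T_TF[n] = C_F * integral of n^(5/3) over a spherical radial grid,
    evaluated with the trapezoidal rule."""
    integrand = C_F * density ** (5.0 / 3.0) * 4.0 * np.pi * r**2
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# Hydrogen 1s density n(r) = exp(-2r)/pi: the exact Kohn-Sham kinetic
# energy is 0.5 Ha, while Thomas-Fermi gives about 0.289 Ha -- one reason
# kinetic functionals must be benchmarked against all-electron references.
r = np.linspace(1e-6, 20.0, 20001)
n = np.exp(-2.0 * r) / np.pi
print(t_tf(n, r))
```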

  8. Pathogenic mutations of TGFBI and CHST6 genes in Chinese patients with Avellino, lattice, and macular corneal dystrophies.

    PubMed

    Huo, Ya-nan; Yao, Yu-feng; Yu, Ping

    2011-09-01

    To investigate gene mutations associated with three different types of corneal dystrophies (CDs), and to establish a phenotype-genotype correlation. Two patients with Avellino corneal dystrophy (ACD), four patients with lattice corneal dystrophy type I (LCD I) from one family, and three patients with macular corneal dystrophy type I (MCD I) were subjected to both clinical and genetic examinations. Slit lamp examination was performed for all the subjects to assess their corneal phenotypes. Genomic DNA was extracted from peripheral blood leukocytes. The coding regions of the human transforming growth factor β-induced (TGFBI) gene and carbohydrate sulfotransferase 6 (CHST6) gene were amplified by polymerase chain reaction (PCR) and subjected to direct sequencing. DNA samples from 50 healthy volunteers were used as controls. Clinical examination showed three different phenotypes of CDs. Genetic examination identified that two ACD subjects were associated with homozygous R124H mutation of TGFBI, and four LCD I subjects were all associated with R124C heterozygous mutation. One MCD I subject was associated with a novel S51X homozygous mutation in CHST6, while the other two MCD I subjects harbored a previously reported W232X homozygous mutation. Our study highlights the prevalence of codon 124 mutations in the TGFBI gene among the Chinese ACD and LCD I patients. Moreover, we found a novel mutation among MCD I patients.

  9. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes, namely countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), and aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
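A minimal sketch of the negative-correlation idea for tau-leaping: drive an antithetic pair of paths by pushing u and 1-u through the Poisson inverse CDF, so the pseudorandom inputs are the only thing controlled. The pure birth process and all parameter values below are illustrative, not one of the paper's three example systems.

```python
import numpy as np

def poisson_icdf(u, lam):
    """Inverse CDF of Poisson(lam) by direct summation (fine for small lam)."""
    k, p = 0, np.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def tau_leap_pair(x0, rate, tau, n_steps, rng):
    """Antithetic pair of tau-leaping paths for a pure birth process
    X -> X + Poisson(rate(X) * tau): feeding u and 1 - u through the
    Poisson inverse CDF makes the two increments negatively correlated."""
    xa = xb = x0
    for _ in range(n_steps):
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # guard the endpoints
        xa += poisson_icdf(u, rate(xa) * tau)
        xb += poisson_icdf(1.0 - u, rate(xb) * tau)
    return xa, xb

rng = np.random.default_rng(1)
rate = lambda x: 2.0        # constant intensity, so X(T) ~ Poisson(2 * T)
pairs = np.array([tau_leap_pair(0, rate, 0.1, 50, rng) for _ in range(2000)])
plain = pairs[:, 0]                 # ordinary estimator samples
antithetic = pairs.mean(axis=1)     # averaged antithetic estimator samples
print(plain.var(), antithetic.var())
```

Both estimators are unbiased for the mean; the antithetic average has visibly smaller variance because the per-step increment covariance is negative.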

  10. Cation distribution in NiZn-ferrite films determined using x-ray absorption fine structure

    NASA Astrophysics Data System (ADS)

    Harris, V. G.; Koon, N. C.; Williams, C. M.; Zhang, Q.; Abe, M.

    1996-04-01

    We have applied extended x-ray absorption fine structure (EXAFS) spectroscopy to study the cation distribution in a series of spin-sprayed NiZn-ferrite films, Ni0.15ZnyFe2.85-yO4 (y=0.16, 0.23, 0.40, 0.60). The Ni, Zn, and Fe EXAFS were collected from each sample and analyzed via Fourier transforms. Samples of Ni-ferrite, Zn-ferrite, and magnetite were similarly studied as empirical standards. These standards, together with EXAFS data generated from the theoretical FEFF codes, allowed the correlation of features in the Fourier transforms with specific lattice sites in the spinel unit cell. We find that the Ni ions reside mostly on the octahedral (B) sites whereas the Zn ions are predominantly on the tetrahedral (A) sites. The Fe ions reside on both A and B sites in a ratio determined by the ratio of Zn/Fe. The addition of Zn displaces a larger fraction of Fe cations onto the B sites, serving to increase the net magnetization. The fraction of A-site Ni ions is measured to increase, peaking at ≈25% for y=0.6. At higher Zn concentrations (y≥0.5) the lattice experiences local distortions around the Zn sites, causing a decrease in the superexchange and, in turn, in the net magnetization.

  11. Single-trabecula building block for large-scale finite element models of cancellous bone.

    PubMed

    Dagan, D; Be'ery, M; Gefen, A

    2004-07-01

    Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.

  12. Reanalysis of the gas-cooled fast reactor experiments at the zero power facility proteus - Spectral indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perret, G.; Pattupara, R. M.; Girardin, G.

    2012-07-01

    The gas-cooled fast reactor (GCFR) concept was investigated experimentally in the PROTEUS zero power facility at the Paul Scherrer Institute during the 1970s. The experimental program was aimed at neutronics studies specific to the GCFR and at the validation of nuclear data in fast spectra. A significant part of the program used thorium oxide and thorium metal fuel, either distributed quasi-homogeneously in the reference PuO{sub 2}/UO{sub 2} lattice or introduced in the form of radial and axial blanket zones. Experimental results obtained at the time are still of high relevance in view of the current consideration of the Gas-cooled Fast Reactor (GFR) as a Generation-IV nuclear system, as well as of the renewed interest in the thorium cycle. In this context, some of the experiments have been modeled with modern Monte Carlo codes to better account for the complex PROTEUS whole-reactor geometry and to allow validating recent continuous neutron cross-section libraries. As a first step, the MCNPX model was used to test the JEFF-3.1, JEFF-3.1.1, ENDF/B-VII.0 and JENDL-3.3 libraries against spectral indices, notably involving fission and capture of {sup 232}Th and {sup 237}Np, measured in GFR-like lattices. (authors)

  13. An Alternative Lattice Field Theory Formulation Inspired by Lattice Supersymmetry-Summary of the Formulation-

    NASA Astrophysics Data System (ADS)

    D'Adda, Alessandro; Kawamoto, Noboru; Saito, Jun

    2018-03-01

    We propose a lattice field theory formulation which overcomes some fundamental difficulties in realizing exact supersymmetry on the lattice. The Leibniz rule for the difference operator can be recovered by defining a new product on the lattice, the star product, and the species-doubler degrees of freedom of chiral fermions can be avoided consistently. This framework is general enough to formulate non-supersymmetric lattice field theory without the chiral fermion problem. This lattice formulation has a nonlocal nature and is essentially equivalent to the corresponding continuum theory. We can show that the locality of the star product is recovered exponentially in the continuum limit. Possible regularization procedures are proposed. The associativity of the product and the lattice translational invariance of the formulation are also discussed.

  14. Deterministic composite nanophotonic lattices in large area for broadband applications

    NASA Astrophysics Data System (ADS)

    Xavier, Jolly; Probst, Jürgen; Becker, Christiane

    2016-12-01

    Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route, we here show subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrate over a large area (4 cm2) with the advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to a double-side-textured deterministic aperiodic lattice-structured 10 μm thick large-area LPC Si film on nanoimprinted substrates.
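The "rich Fourier spectra" of aperiodic lattices can be illustrated in one dimension with a Fibonacci chain, a standard quasiperiodic toy model (not the authors' 2D supercell designs): it has no translational period, yet its diffraction pattern shows sharp Bragg-like peaks.

```python
import numpy as np

def fibonacci_chain(n_iter):
    """Point positions of a 1D quasiperiodic (Fibonacci) lattice generated
    by the substitution A -> AB, B -> A, with segment lengths 1 and 1/tau
    (tau = golden ratio): a stand-in for quasicrystalline subsystems."""
    word = "A"
    for _ in range(n_iter):
        word = "".join("AB" if c == "A" else "A" for c in word)
    tau = (1.0 + 5.0**0.5) / 2.0
    steps = np.array([1.0 if c == "A" else 1.0 / tau for c in word])
    return np.concatenate(([0.0], np.cumsum(steps)))

def structure_factor(x, k):
    """Normalised diffraction intensity |sum_j exp(i k x_j)|^2 / N^2 of a
    point set: close to 1 at a Bragg-like peak, of order 1/N for a
    structureless random set."""
    phases = np.exp(1j * np.outer(k, x))
    return np.abs(phases.sum(axis=1)) ** 2 / x.size**2

# Sharp peaks despite the absence of any translational period.
x = fibonacci_chain(12)
k = np.linspace(0.5, 20.0, 4000)
print(structure_factor(x, k).max())
```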

  15. Still states of bistable lattices, compatibility, and phase transition

    NASA Astrophysics Data System (ADS)

    Cherkaev, Andrej; Kouznetsov, Andrei; Panchenko, Alexander

    2010-09-01

    We study a two-dimensional triangular lattice made of bistable rods. Each rod has two equilibrium lengths, and thus its energy has two equal minima. A rod undergoes a phase transition when its elongation exceeds a critical value. The lattice is subject to a homogeneous strain and is periodic with a sufficiently large period. The effective strain of a periodic element is defined. After phase transitions, the lattice rods are in two different states and lattice strain is inhomogeneous, the Cauchy-Born rule is not applicable. We show that the lattice has a number of deformed still states that carry no stresses. These states densely cover a neutral region in the space of entries of effective strains. In this region, the minimal energy of the periodic lattice is asymptotically close to zero. When the period goes to infinity, the effective energy of such lattices has the “flat bottom” which we explicitly describe. The compatibility of the partially transited lattice is studied. We derive compatibility conditions for lattices and demonstrate a family of compatible lattices (strips) that densely covers the flat bottom region. Under an additional assumption of the small difference of two equilibrium lengths, we demonstrate that the still structures continuously vary with the effective strain and prove a linear dependence of the average strain on the concentration of transited rods.
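A one-dimensional sketch of the "flat bottom" effect described above: a chain of bistable rods in series can accommodate a whole range of imposed total lengths at (asymptotically) zero energy by letting some rods transit to the long phase. All parameter values below are illustrative assumptions.

```python
import numpy as np

def rod_energy(l, l1=1.0, l2=1.2, k=1.0):
    """Double-well energy of one bistable rod with equal minima at l1, l2."""
    return 0.5 * k * min((l - l1) ** 2, (l - l2) ** 2)

def chain_min_energy(total_length, n, l1=1.0, l2=1.2, k=1.0):
    """Minimal energy of n bistable rods in series at fixed total length:
    choose how many rods (m) sit in the long phase and spread the residual
    elongation delta evenly (springs of equal stiffness in series), giving
    energy k * delta^2 / (2n).  For total lengths in [n*l1, n*l2] some m
    makes delta small, so the minimum energy is asymptotically zero: a 1D
    analogue of the flat bottom of still states."""
    best = np.inf
    for m in range(n + 1):
        delta = total_length - (m * l2 + (n - m) * l1)
        best = min(best, 0.5 * k * delta ** 2 / n)
    return best

# Ten rods: lengths 10.0 (all short) and 10.6 (3 long + 7 short) are
# stress-free still states; stretching past 10 * l2 costs energy.
print(chain_min_energy(10.6, 10), chain_min_energy(12.5, 10))
```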

  16. Deterministic composite nanophotonic lattices in large area for broadband applications

    PubMed Central

    Xavier, Jolly; Probst, Jürgen; Becker, Christiane

    2016-01-01

    Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solarcells, white light emitting diodes, novel plasmonic ensembles etc. Through a generic deterministic nanotechnological route here we show subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrate in large area (4 cm2) with advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements and they show rich Fourier spectra. The presented nanophotonic lattices are designed functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility- comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed with a range of nanophotonic structures with conventional lattice geometries of periodic, disordered random as well as in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to double-side-textured deterministic aperiodic lattice-structured 10 μm thick large area LPC Si film on nanoimprinted substrates. PMID:27941869

  17. QCD equation of state to O(μB^6) from lattice QCD

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Ding, H.-T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Maezawa, Y.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Sandmeyer, H.; Steinbrecher, P.; Schmidt, C.; Sharma, S.; Soeldner, W.; Wagner, M.

    2017-03-01

    We calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV], using up to four different sets of lattice cutoffs corresponding to lattices of size Nσ^3 × Nτ with aspect ratio Nσ/Nτ = 4 and Nτ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange-to-light quark mass ratios, ms/ml = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μB ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √sNN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μB plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. We argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μB/T ≤ 2 and T/Tc(μB=0) > 0.9.
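How a hydrodynamics code consumes such a Taylor-expanded EoS can be sketched as below; the coefficient values are placeholders for illustration only, not the published parametrization.

```python
def pressure_over_T4(muB_over_T, coeffs):
    """P/T^4 = sum_k c_{2k} * (mu_B/T)^(2k), truncated at the order of the
    supplied coefficient list [c0, c2, c4, c6, ...] (even powers only, by
    charge-conjugation symmetry)."""
    x2 = muB_over_T ** 2
    total, xpow = 0.0, 1.0
    for c in coeffs:
        total += c * xpow
        xpow *= x2
    return total

def truncation_estimate(muB_over_T, coeffs):
    """Magnitude of the last retained term: a cheap error proxy, in the
    spirit of using sixth-order terms to bound the fourth-order result."""
    return abs(coeffs[-1]) * muB_over_T ** (2 * (len(coeffs) - 1))

# Placeholder coefficients c0, c2, c4, c6 at some fixed temperature:
c = [3.1, 0.35, 0.02, -0.001]
print(pressure_over_T4(1.0, c), truncation_estimate(1.0, c))
```

Evaluating the truncation proxy at mu_B/T = 2 mirrors the paper's criterion for where the fourth-order expansion remains trustworthy.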

  18. Validation d'un nouveau calcul de reference en evolution pour les reacteurs thermiques

    NASA Astrophysics Data System (ADS)

    Canbakan, Axel

    Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Even if their aim is to correct the cross-section deviation, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacities, this method starts to show its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to suggest a validation as precise as possible, first without burnup and then with an isotopic depletion study, for the subgroup method. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modeling choices. Moreover, other parameters are validated to suggest a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energy discretization for a subgroup method. Finally, other techniques, such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation, are studied. A DRAGON5 scheme is also validated as it shows interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first involves offering a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users.
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and using a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in a geometrical region gives better results than MOC with a constant source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.

  19. Local lattice distortion in high-entropy alloys

    NASA Astrophysics Data System (ADS)

    Song, Hongquan; Tian, Fuyang; Hu, Qing-Miao; Vitos, Levente; Wang, Yandong; Shen, Jiang; Chen, Nanxian

    2017-07-01

    The severe local lattice distortion, induced mainly by the large atomic size mismatch of the alloy components, is one of the four core effects responsible for the unprecedented mechanical behaviors of high-entropy alloys (HEAs). In this work, we propose a supercell model, in which every lattice site has a similar local atomic environment, to describe the random distributions of the atomic species in HEAs. Using these supercells in combination with ab initio calculations, we investigate the local lattice distortion of refractory HEAs with body-centered-cubic structure and 3d HEAs with face-centered-cubic structure. Our results demonstrate that the local lattice distortion of the refractory HEAs is much more significant than that of the 3d HEAs. We show that the atomic size mismatch evaluated with the empirical atomic radii is not accurate enough to describe the local lattice distortion. Both the lattice distortion energy and the mixing entropy contribute significantly to the thermodynamic stability of HEAs. However, the local lattice distortion has a negligible effect on the equilibrium lattice parameter and bulk modulus.
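One common scalar measure of local lattice distortion is the mean displacement of relaxed atoms from their ideal sites, normalized by the lattice parameter (conventions vary across the literature, and this is a toy sketch rather than the paper's specific definition):

```python
import numpy as np

def local_lattice_distortion(ideal, relaxed, lattice_param):
    """Mean displacement of atoms from their ideal lattice sites, in units
    of the lattice parameter: a simple scalar distortion measure for a
    relaxed supercell."""
    d = np.linalg.norm(relaxed - ideal, axis=1)
    return d.mean() / lattice_param

# Toy supercell: a 3x3x3 simple-cubic block of ideal sites, "relaxed" by
# random displacements standing in for an ab initio relaxation.
rng = np.random.default_rng(2)
ideal = np.array([[i, j, k] for i in range(3)
                  for j in range(3) for k in range(3)], dtype=float)
relaxed = ideal + rng.normal(scale=0.05, size=ideal.shape)
print(local_lattice_distortion(ideal, relaxed, lattice_param=1.0))
```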

  20. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  1. SIMULATIONS OF BOOSTER INJECTION EFFICIENCY FOR THE APS-UPGRADE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvey, J.; Borland, M.; Harkay, K.

    2017-06-25

The APS-Upgrade will require the injector chain to provide high single bunch charge for swap-out injection. One possible limiting factor to achieving this is an observed reduction of injection efficiency into the booster synchrotron at high charge. We have simulated booster injection using the particle tracking code elegant, including a model for the booster impedance and beam loading in the RF cavities. The simulations point to two possible causes for reduced efficiency: energy oscillations leading to losses at high dispersion locations, and a vertical beam size blowup caused by ions in the Particle Accumulator Ring. We also show that the efficiency is much higher in an alternate booster lattice with smaller vertical beta function and zero dispersion in the straight sections.

  2. Boltzmann transport properties of ultra thin-layer of h-CX monolayers

    NASA Astrophysics Data System (ADS)

    Kansara, Shivam; Gupta, Sanjeev K.; Sonvane, Yogesh

    2018-04-01

Structural, electronic and thermoelectric properties of monolayer h-CX (X = Al, As, B, Bi, Ga, In, P, N, Sb and Tl) have been computed using density functional theory (DFT). The structural properties, electronic band structures, phonon dispersion curves and thermoelectric properties have been investigated. h-CGa and h-CTl show periodic lattice vibrations, while h-CB and h-CIn show small imaginary ZA frequencies. Thermoelectric properties such as the electronic thermal and electrical conductivities are obtained for various temperatures using the BoltzTraP code within the constant relaxation time (τ) approximation. The results indicate that h-CGa, h-CIn, h-CTl and h-CAl have direct band gaps with minimum electronic thermal and electrical conductivity, while h-CB and h-CN show high electronic thermal and electrical conductivity together with the highest cohesive energy.

  3. Stochastic kinetic mean field model

    NASA Astrophysics Data System (ADS)

    Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.

    2016-07-01

This paper introduces a new model for calculating the time evolution of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic one by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We will show that the result of one SKMF run may correspond to the average of several KMC runs. The number of such KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool also for statistical purposes.
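The idea of adding dynamic Langevin noise to a deterministic mean-field occupation update can be sketched as follows. This is an illustrative 1D toy under assumed unit exchange rates and a simple additive noise term; the function and parameter names are mine, and it is not the SKMF program distributed at http://skmf.eu.

```python
import numpy as np

def skmf_step(c, dt, noise_amp, rng):
    """One explicit-Euler step of a 1D stochastic kinetic mean-field toy model.

    c[i] is the occupation probability of site i on a periodic chain.
    Deterministic part: nearest-neighbour exchange with unit jump rate
    (reduces to discrete diffusion).  Stochastic part: additive Langevin
    noise of amplitude noise_amp, scaled like a Wiener increment; setting
    noise_amp = 0 recovers the plain deterministic mean-field update.
    """
    left = np.roll(c, 1)
    right = np.roll(c, -1)
    # net deterministic flux into each site from its two neighbours
    dcdt = (left * (1 - c) - c * (1 - left)) + (right * (1 - c) - c * (1 - right))
    noise = noise_amp * rng.standard_normal(c.size) * np.sqrt(dt)
    # keep occupations physical after the noisy update
    return np.clip(c + dt * dcdt + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
c = np.zeros(64)
c[:32] = 1.0                       # sharp A|B interface as initial condition
for _ in range(2000):
    c = skmf_step(c, dt=0.01, noise_amp=0.05, rng=rng)
print(bool(np.all((c >= 0) & (c <= 1))))
```

Averaging many independent runs of such a noisy update is what, per the abstract, connects one SKMF run to an ensemble of KMC runs.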

  4. Investigation on the reflector/moderator geometry and its effect on the neutron beam design in BNCT.

    PubMed

    Kasesaz, Y; Rahmani, F; Khalafi, H

    2015-12-01

In order to provide an appropriate neutron beam for Boron Neutron Capture Therapy (BNCT), a special Beam Shaping Assembly (BSA) must be designed based on the neutron source specifications. A typical BSA includes a moderator, reflector, collimator, thermal neutron filter, and gamma filter. In a common BSA, the reflector is a layer which covers the sides of the moderator materials. In this paper, new reflector/moderator geometries, including multi-layer and hexagonal-lattice configurations, have been suggested, and their effect has been investigated with the MCNP4C Monte Carlo code. It was found that the proposed configurations significantly improve the thermal-to-epithermal neutron flux ratio, which is an important neutron beam parameter. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  6. Intra-Beam and Touschek Scattering Computations for Beam with Non-Gaussian Longitudinal Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, A.; Borland, M.

Both intra-beam scattering (IBS) and the Touschek effect become prominent for multi-bend-achromat- (MBA-) based ultra-low-emittance storage rings. To mitigate the transverse emittance degradation and obtain a reasonably long beam lifetime, a higher-harmonic rf cavity (HHC) is often proposed to lengthen the bunch. The use of such a cavity results in a non-Gaussian longitudinal distribution. However, common methods for computing IBS and Touschek scattering assume Gaussian distributions. Modifications have been made to several simulation codes that are part of the elegant [1] toolkit to allow these computations for arbitrary longitudinal distributions. After describing these modifications, we review the results of detailed simulations for the proposed hybrid seven-bend-achromat (H7BA) upgrade lattice [2] for the Advanced Photon Source.

  7. Finite-temperature mechanical instability in disordered lattices.

    PubMed

    Zhang, Leyou; Mao, Xiaoming

    2016-02-01

Mechanical instability takes different forms in various ordered and disordered systems, and little is known about how thermal fluctuations affect different classes of mechanical instabilities. We develop an analytic theory involving renormalization of rigidity and the coherent potential approximation that can be used to understand finite-temperature mechanical stability in various disordered systems. We use this theory to study two disordered lattices: a randomly diluted triangular lattice and a randomly braced square lattice. These two lattices belong to two different universality classes as they approach mechanical instability at T=0. We show that thermal fluctuations stabilize both lattices. In particular, the triangular lattice displays a critical regime in which the shear modulus scales as G ∼ T^(1/2), whereas the square lattice shows G ∼ T^(2/3). We discuss generic scaling laws for finite-T mechanical instabilities and relate them to experimental systems.

  8. Potential for a Near Term Very Low Energy Antiproton Source at Brookhaven National Laboratory.

    DTIC Science & Technology

    1989-04-01

Table-of-contents excerpt: Table III-1: Cost Summary; IV. Lattice and Stretcher Properties; Fig. IV-1: Cell lattice functions; Fig. IV-2: Insertion region lattice; Fig. IV-3: Superperiod lattice functions; Table IV-1b: Parameters after lattice matching; Table IV-1c: Components specification; Table IV-2: Random multipoles.

  9. Issues and opportunities: beam simulations for heavy ion fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A

    1999-07-15

UCRL-JC-134975 PREPRINT code offering 3-D, axisymmetric, and ''transverse slice'' (steady flow) geometries, with a hierarchy of models for the ''lattice'' of focusing, bending, and accelerating elements. Interactive and script-driven code steering is afforded through an interpreter interface. The code runs with good parallel scaling on the T3E. Detailed simulations of machine segments and of complete small experiments, as well as simplified full-system runs, have been carried out, partially benchmarking the code. A magnetoinductive model, with module impedance and multi-beam effects, is under study. experiments, including an injector scalable to multi-beam arrays, a high-current beam transport and acceleration experiment, and a scaled final-focusing experiment. These ''phase I'' projects are laying the groundwork for the next major step in HIF development, the Integrated Research Experiment (IRE). Simulations aimed directly at the IRE must enable us to: design a facility with maximum power on target at minimal cost; set requirements for hardware tolerances, beam steering, etc.; and evaluate proposed chamber propagation modes. Finally, simulations must enable us to study all issues which arise in the context of a fusion driver, and must facilitate the assessment of driver options. In all of this, maximum advantage must be taken of emerging terascale computer architectures, requiring an aggressive code development effort. An organizing principle should be pursuit of the goal of integrated and detailed source-to-target simulation. methods for analysis of the beam dynamics in the various machine concepts, using moment-based methods for purposes of design, waveform synthesis, steering algorithm synthesis, etc. 
Three classes of discrete-particle models should be coupled: (1) electrostatic/magnetoinductive PIC simulations should track the beams from the source through the final-focusing optics, passing details of the time-dependent distribution function to (2) electromagnetic or magnetoinductive PIC or hybrid PIC/fluid simulations in the fusion chamber (which would finally pass their particle trajectory information to the radiation-hydrodynamics codes used for target design); in parallel, (3) detailed PIC, delta-f, core/test-particle, and perhaps continuum Vlasov codes should be used to study individual sections of the driver and chamber very carefully; consistency may be assured by linking data from the PIC sequence, and knowledge gained may feed back into that sequence.

  10. Toward lattice fractional vector calculus

    NASA Astrophysics Data System (ADS)

    Tarasov, Vasily E.

    2014-09-01

    An analog of fractional vector calculus for physical lattice models is suggested. We use an approach based on the models of three-dimensional lattices with long-range inter-particle interactions. The lattice analogs of fractional partial derivatives are represented by kernels of lattice long-range interactions, where the Fourier series transformations of these kernels have a power-law form with respect to wave vector components. In the continuum limit, these lattice partial derivatives give derivatives of non-integer order with respect to coordinates. In the three-dimensional description of the non-local continuum, the fractional differential operators have the form of fractional partial derivatives of the Riesz type. As examples of the applications of the suggested lattice fractional vector calculus, we give lattice models with long-range interactions for the fractional Maxwell equations of non-local continuous media and for the fractional generalization of the Mindlin and Aifantis continuum models of gradient elasticity.
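In the continuum limit described above, the long-range lattice kernels reduce to Riesz-type fractional derivatives whose Fourier symbol is a power law |k|^α in the wave vector. A minimal periodic-lattice sketch of that idea (my own illustration, not the paper's formalism) applies the power-law symbol with an FFT:

```python
import numpy as np

def lattice_riesz_derivative(f, alpha, length):
    """Riesz-type fractional derivative of a periodic 1D lattice field:
    multiply the discrete Fourier transform by the power-law symbol |k|^alpha.
    For alpha = 2 this is (minus) the ordinary spectral Laplacian."""
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    return np.real(np.fft.ifft((np.abs(k) ** alpha) * np.fft.fft(f)))

L = 2 * np.pi
x = np.linspace(0.0, L, 128, endpoint=False)
f = np.sin(3 * x)
# a single Fourier mode sin(kx) is an eigenfunction: it is multiplied by |k|^alpha
g = lattice_riesz_derivative(f, 0.5, L)
print(np.allclose(g, 3 ** 0.5 * np.sin(3 * x), atol=1e-10))  # prints True
```

The eigenfunction check mirrors the statement in the abstract that the kernels' Fourier transforms have a power-law form in the wave vector components.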

  11. Space of symmetry matrices with elements 0, ±1 and complete geometric description; its properties and application.

    PubMed

    Stróż, Kazimierz

    2011-09-01

A fixed set, that is the set of all lattice metrics corresponding to the arithmetic holohedry of a primitive lattice, is a natural tool for keeping track of the symmetry changes that may occur in a deformable lattice [Ericksen (1979). Arch. Rat. Mech. Anal. 72, 1-13; Michel (1995). Symmetry and Structural Properties of Condensed Matter, edited by T. Lulek, W. Florek & S. Walcerz. Singapore: Academic Press; Pitteri & Zanzotto (1996). Acta Cryst. A52, 830-838; and references quoted therein]. For practical applications it is desirable to limit the infinite number of arithmetic holohedries and to simplify their classification and the construction of the fixed sets. A space of 480 matrices with cyclic consecutive powers, determinant 1, elements from {0, ±1} and a complete geometric description was analyzed and offered as the framework for dealing with the symmetry of reduced lattices. This matrix space covers all arithmetic holohedries of primitive lattice descriptions related to the three shortest lattice translations in direct or reciprocal space, and corresponds to the unique list of 39 fixed points with integer coordinates in the six-dimensional space of lattice metrics. Matrices are represented by the introduced dual symbol, which sheds some light on the lattice and its symmetry-related properties without further digging into the matrices. Under orthogonal lattice distortion, the lattice group-subgroup relations are easily predicted. It was proven and exemplified that the new symbols enable classification of lattice groups on an absolute basis, without metric considerations. In contrast to long-established but sophisticated methods for assessing the metric symmetry of a lattice, simple filtering of the symmetry operations from the predefined set is proposed. 
It is concluded that the space of symmetry matrices with elements from {0, ±1} is the natural environment of lattice symmetries related to the reduced cells and that complete geometric characterization of matrices in the arithmetic holohedry provides a useful tool for solving practical lattice-related problems, especially in the context of lattice deformation. © 2011 International Union of Crystallography

  12. Multilayer DNA origami packed on hexagonal and hybrid lattices.

    PubMed

    Ke, Yonggang; Voigt, Niels V; Gothelf, Kurt V; Shih, William M

    2012-01-25

    "Scaffolded DNA origami" has been proven to be a powerful and efficient approach to construct two-dimensional or three-dimensional objects with great complexity. Multilayer DNA origami has been demonstrated with helices packing along either honeycomb-lattice geometry or square-lattice geometry. Here we report successful folding of multilayer DNA origami with helices arranged on a close-packed hexagonal lattice. This arrangement yields a higher density of helical packing and therefore higher resolution of spatial addressing than has been shown previously. We also demonstrate hybrid multilayer DNA origami with honeycomb-lattice, square-lattice, and hexagonal-lattice packing of helices all in one design. The availability of hexagonal close-packing of helices extends our ability to build complex structures using DNA nanotechnology. © 2011 American Chemical Society

  13. Real-space observation of magnetic excitations and avalanche behavior in artificial quasicrystal lattices

    DOE PAGES

    Brajuskovic, V.; Barrows, F.; Phatak, C.; ...

    2016-10-03

Artificial spin ice lattices have emerged as model systems for studying magnetic frustration in recent years. Most work to date has looked at periodic artificial spin ice lattices. In this paper, we observe frustration effects in quasicrystal artificial spin ice lattices that lack translational symmetry and contain vertices with different numbers of interacting elements. We find that as the lattice state changes following demagnetizing and annealing, specific vertex motifs retain low-energy configurations, which excites other motifs into higher energy configurations. In addition, we find that unlike the magnetization reversal process for periodic artificial spin ice lattices, which occurs through 1D avalanches, quasicrystal lattices undergo reversal through a dendritic 2D avalanche mechanism.

  14. Real-space observation of magnetic excitations and avalanche behavior in artificial quasicrystal lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brajuskovic, V.; Barrows, F.; Phatak, C.

Artificial spin ice lattices have emerged as model systems for studying magnetic frustration in recent years. Most work to date has looked at periodic artificial spin ice lattices. In this paper, we observe frustration effects in quasicrystal artificial spin ice lattices that lack translational symmetry and contain vertices with different numbers of interacting elements. We find that as the lattice state changes following demagnetizing and annealing, specific vertex motifs retain low-energy configurations, which excites other motifs into higher energy configurations. In addition, we find that unlike the magnetization reversal process for periodic artificial spin ice lattices, which occurs through 1D avalanches, quasicrystal lattices undergo reversal through a dendritic 2D avalanche mechanism.

  15. Unimodular lattices in dimensions 14 and 15 over the Eisenstein integers

    NASA Astrophysics Data System (ADS)

    Abdukhalikov, Kanat; Scharlau, Rudolf

    2009-03-01

All indecomposable unimodular hermitian lattices in dimensions 14 and 15 over the ring of integers in Q(√-3) are determined. Precisely one lattice in dimension 14 and two lattices in dimension 15 have minimal norm 3.

  16. Computing nucleon EDM on a lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramczyk, Michael; Izubuchi, Taku

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  17. Three-wave electron vortex lattices for measuring nanofields.

    PubMed

    Dwyer, C; Boothroyd, C B; Chang, S L Y; Dunin-Borkowski, R E

    2015-01-01

    It is demonstrated how an electron-optical arrangement consisting of two electron biprisms can be used to generate three-wave vortex lattices with effective lattice spacings between 0.1 and 1 nm. The presence of vortices in these lattices was verified by using a third biprism to perform direct phase measurements via off-axis electron holography. The use of three-wave lattices for nanoscale electromagnetic field measurements via vortex interferometry is discussed, including the accuracy of vortex position measurements and the interpretation of three-wave vortex lattices in the presence of partial spatial coherence. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Critical temperature of the Ising ferromagnet on the fcc, hcp, and dhcp lattices

    NASA Astrophysics Data System (ADS)

    Yu, Unjong

    2015-02-01

By an extensive Monte Carlo calculation together with finite-size scaling and the multiple-histogram method, the critical coupling constant Kc = J/(kB Tc) of the Ising ferromagnet on the fcc, hcp, and double-hcp (dhcp) lattices was obtained with unprecedented precision: Kc(fcc) = 0.1020707(2), Kc(hcp) = 0.1020702(1), and Kc(dhcp) = 0.1020706(2). The critical temperature Tc of the hcp lattice is found to be higher than those of the fcc and dhcp lattices. The dhcp lattice seems to have a higher Tc than the fcc lattice, but the difference is within error bars.
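The single-spin-flip Metropolis update at the heart of such a calculation can be sketched on the 2D square lattice, where the exact critical coupling ln(1 + √2)/2 ≈ 0.4407 provides a reference. This toy (hypothetical function names, tiny lattice) deliberately omits the finite-size-scaling and multi-histogram machinery the record relies on:

```python
import numpy as np

def mean_abs_magnetization(s, K, sweeps, burn, rng):
    """Metropolis sampling of the 2D square-lattice Ising model at coupling
    K = J/(kB T).  Mutates s in place; returns <|m|> per spin, averaged over
    the sweeps that follow `burn` equilibration sweeps."""
    L = s.shape[0]
    acc = 0.0
    for sweep in range(sweeps):
        for _ in range(L * L):             # one sweep = L*L attempted flips
            i, j = rng.integers(0, L, size=2)
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            # Metropolis acceptance: exp(-K * dE) with dE = 2 s_ij * sum(nn)
            if rng.random() < np.exp(-2.0 * K * s[i, j] * nn):
                s[i, j] = -s[i, j]
        if sweep >= burn:
            acc += abs(s.mean())
    return acc / (sweeps - burn)

rng = np.random.default_rng(2)
L = 10
m_hot = mean_abs_magnetization(rng.choice([-1, 1], size=(L, L)),
                               0.25, 400, 200, rng)     # K < Kc: disordered
m_cold = mean_abs_magnetization(np.ones((L, L), dtype=int),
                                0.60, 400, 200, rng)    # K > Kc: ordered
print(m_hot < m_cold)
```

Locating Kc to seven digits, as in the record, additionally requires large lattices, cluster updates or long runs, finite-size scaling of Binder-type cumulants, and multi-histogram reweighting.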

  19. Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Dey, Bishwajyoti

    2017-04-01

We study impurity-mediated vortex lattice melting in a rotating two-dimensional Bose-Einstein condensate (BEC). Impurities are introduced either through a protocol in which the vortex lattice is produced in an impurity potential, or by first creating the vortex lattice in the absence of random pinning and then cranking up the impurity potential. These two protocols are obvious analogs of the two commonly known protocols for creating a vortex lattice in a type-II superconductor: the zero-field-cooling protocol and the field-cooling protocol, respectively. The time-splitting Crank-Nicolson method has been used to numerically simulate the vortex lattice dynamics. It is shown that the vortex lattice follows a two-step melting via loss of positional and orientational order. This vortex lattice melting process in BEC closely mimics the recently observed two-step melting of vortex matter in the weakly pinned type-II superconductor Co-intercalated NbSe2. Also, using numerical perturbation analysis, we compare the states obtained in the two protocols and show that the vortex lattice states are metastable and more disordered when impurities are introduced after the formation of an ordered vortex lattice. The author would like to thank SERB, Govt. of India and BCUD-SPPU for financial support through research grants.

  20. Near integrability of kink lattice with higher order interactions

    NASA Astrophysics Data System (ADS)

    Jiang, Yun-Guo; Liu, Jia-Zhen; He, Song

    2017-11-01

    We make use of Manton’s analytical method to investigate the force between kinks and anti-kinks at large distances in 1+1 dimensional field theory. The related potential has infinite order corrections of exponential pattern, and the coefficients for each order are determined. These coefficients can also be obtained by solving the equation of the fluctuations around the vacuum. At the lowest order, the kink lattice represents the Toda lattice. With higher order correction terms, the kink lattice can represent one kind of generic Toda lattice. With only two sites, the kink lattice is classically integrable. If the number of sites of the lattice is larger than two, the kink lattice is not integrable but is a near integrable system. We make use of Flaschka’s variables to study the Lax pair of the kink lattice. These Flaschka’s variables have interesting algebraic relations and non-integrability can be manifested. We also discuss the higher Hamiltonians for the deformed open Toda lattice, which has a similar result to the ordinary deformed Toda. Supported by Shandong Provincial Natural Science Foundation (ZR2014AQ007), National Natural Science Foundation of China (11403015, U1531105), S. He is supported by Max-Planck fellowship in Germany and National Natural Science Foundation of China (11305235)
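Flaschka's change of variables for the ordinary (undeformed) open Toda chain, and the isospectral Lax flow it produces, can be sketched numerically; this checks only the textbook integrable case, not the deformed kink-lattice Hamiltonians studied in the record, and all function names are mine:

```python
import numpy as np

def lax_matrix(a, b):
    """Tridiagonal Lax matrix of the open Toda chain in Flaschka variables:
    diagonal b_1..b_N, off-diagonals a_1..a_{N-1}."""
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

def flaschka_rhs(a, b):
    # open-chain Flaschka equations:  a_i' = a_i (b_{i+1} - b_i),
    #                                 b_i' = 2 (a_i^2 - a_{i-1}^2),  a_0 = a_N = 0
    da = a * (b[1:] - b[:-1])
    a2 = np.concatenate(([0.0], a * a, [0.0]))
    db = 2.0 * (a2[1:] - a2[:-1])
    return da, db

def rk4_step(a, b, dt):
    """Classical fourth-order Runge-Kutta step for the (a, b) system."""
    ka1, kb1 = flaschka_rhs(a, b)
    ka2, kb2 = flaschka_rhs(a + dt / 2 * ka1, b + dt / 2 * kb1)
    ka3, kb3 = flaschka_rhs(a + dt / 2 * ka2, b + dt / 2 * kb2)
    ka4, kb4 = flaschka_rhs(a + dt * ka3, b + dt * kb3)
    return (a + dt / 6 * (ka1 + 2 * ka2 + 2 * ka3 + ka4),
            b + dt / 6 * (kb1 + 2 * kb2 + 2 * kb3 + kb4))

rng = np.random.default_rng(1)
a = 0.5 * rng.random(4) + 0.1          # 5-site open chain, random initial data
b = rng.standard_normal(5)
ev0 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
for _ in range(1000):
    a, b = rk4_step(a, b, dt=0.01)
ev1 = np.sort(np.linalg.eigvalsh(lax_matrix(a, b)))
# integrability: the flow is isospectral, so the spectrum of L is conserved
print(np.max(np.abs(ev1 - ev0)) < 1e-4)
```

The conserved eigenvalues of the Lax matrix play the role of the higher Hamiltonians mentioned above; for the deformed lattices of the paper this exact conservation is lost, leaving only near-integrability.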

  1. cis-trans Germanium chains in the intermetallic compounds ALi{sub 1-x}In{sub x}Ge{sub 2} and A{sub 2}(Li{sub 1-x}In{sub x}){sub 2}Ge{sub 3} (A=Sr, Ba, Eu)-experimental and theoretical studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Tae-Soo; Bobev, Svilen, E-mail: bobev@udel.ed

Two types of strontium-, barium- and europium-containing germanides have been synthesized using high temperature reactions and characterized by single-crystal X-ray diffraction. All reported compounds also contain mixed-occupied Li and In atoms, resulting in quaternary phases with narrow homogeneity ranges. The first type comprises EuLi{sub 0.91(1)}In{sub 0.09}Ge{sub 2}, SrLi{sub 0.95(1)}In{sub 0.05}Ge{sub 2} and BaLi{sub 0.99(1)}In{sub 0.01}Ge{sub 2}, which crystallize in the orthorhombic space group Pnma (BaLi{sub 0.9}Mg{sub 0.1}Si{sub 2} structure type, Pearson code oP16). The lattice parameters are a=7.129(4)-7.405(4) Å; b=4.426(3)-4.638(2) Å; and c=11.462(7)-11.872(6) Å. The second type includes Eu{sub 2}Li{sub 1.36(1)}In{sub 0.64}Ge{sub 3} and Sr{sub 2}Li{sub 1.45(1)}In{sub 0.55}Ge{sub 3}, which adopt the orthorhombic space group Cmcm (Ce{sub 2}Li{sub 2}Ge{sub 3} structure type, Pearson code oC28) with lattice parameters a=4.534(2)-4.618(2) Å; b=19.347(8)-19.685(9) Å; and c=7.164(3)-7.260(3) Å. The polyanionic sub-structures in both cases feature one-dimensional Ge chains with alternating Ge-Ge bonds in cis- and trans-conformation. Theoretical studies using the tight-binding linear muffin-tin orbital (LMTO) method provide the rationale for optimizing the overall bonding by diminishing the π-p delocalization along the Ge chains, accounting for the experimentally confirmed substitution of Li for In. -- Graphical abstract: Presented are the single-crystal structures of two types of closely related intermetallics, as well as their band structures, calculated using the tight-binding linear muffin-tin orbital (TB-LMTO-ASA) method.

  2. Random elements on lattices: Review and statistical applications

    NASA Astrophysics Data System (ADS)

    Potocký, Rastislav; Villarroel, Claudia Navarro; Sepúlveda, Maritza; Luna, Guillermo; Stehlík, Milan

    2017-07-01

We discuss important contributions to random elements on lattices, relating to both algebraic and probabilistic properties. Several applications and concepts are discussed, e.g. positive dependence, random walks and distributions on lattices, super-lattices, and learning. An application to Chilean ecology is given.

  3. GPU accelerated population annealing algorithm

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures. Program Files doi:http://dx.doi.org/10.17632/sgzt4b7b3m.1 Licensing provisions: Creative Commons Attribution license (CC BY 4.0) Programming language: C, CUDA External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β. Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. 
The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature steps and multi-histogram reweighting. Additional comments: Code repository at https://github.com/LevBarash/PAising. The system size and size of the population of replicas are limited depending on the memory of the GPU device used. For the default parameter values used in the sample programs, L = 64, θ = 100, β0 = 0, βf = 1, Δβ = 0.005, R = 20 000, a typical run time on an NVIDIA Tesla K80 GPU is 151 seconds for the single spin coded (SSC) and 17 seconds for the multi-spin coded (MSC) program (see Section 2 for a description of these parameters).
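The core loop of population annealing (reweight the population by exp(-Δβ E), resample, then apply θ equilibrating Metropolis sweeps at the new temperature) can be sketched in Python. The parameter names mirror those quoted above, but this multinomial-resampling toy on a tiny lattice is an independent illustration, not the PAising GPU code:

```python
import numpy as np

def sweep(s, beta, rng):
    """One checkerboard Metropolis sweep of a population of 2D Ising replicas.
    s has shape (R, L, L) with entries ±1 and periodic boundaries."""
    L = s.shape[1]
    ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity     # same-parity sites are independent
        nn = (np.roll(s, 1, 1) + np.roll(s, -1, 1)
              + np.roll(s, 1, 2) + np.roll(s, -1, 2))
        dE = 2 * s * nn
        flip = (rng.random(s.shape) < np.exp(-beta * dE)) & mask
        s[flip] *= -1

def energy(s):
    # each bond counted once via "up" and "left" neighbours
    return -np.sum(s * (np.roll(s, 1, 1) + np.roll(s, 1, 2)), axis=(1, 2))

def population_annealing(R=200, L=8, dbeta=0.05, beta_f=0.6, theta=5, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(R, L, L))   # β = 0: independent random replicas
    beta = 0.0
    while beta < beta_f - 1e-12:
        e = energy(s)
        w = np.exp(-dbeta * (e - e.min()))    # reweight by exp(-Δβ E), stabilized
        idx = rng.choice(R, size=R, p=w / w.sum())
        s = s[idx].copy()                      # multinomial resampling
        beta += dbeta
        for _ in range(theta):                 # θ equilibrating MC sweeps
            sweep(s, beta, rng)
    return s

s = population_annealing()
m = float(np.abs(s.mean(axis=(1, 2))).mean())  # population-averaged |m| at β_f
print(round(m, 2))
```

The production code instead uses nearest-integer resampling with a variable population size, and measures observables (energy, free energy, magnetization moments) at every temperature step.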

  4. Fractional-order difference equations for physical lattices and some applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru

    2015-10-15

Fractional-order operators for physical lattice models based on the Grünwald-Letnikov fractional differences are suggested. We use an approach based on the models of lattices with long-range particle interactions. The fractional-order operators of differentiation and integration on physical lattices are represented by kernels of lattice long-range interactions. In continuum limit, these discrete operators of non-integer orders give the fractional-order derivatives and integrals with respect to coordinates of the Grünwald-Letnikov types. As examples of the fractional-order difference equations for physical lattices, we give difference analogs of the fractional nonlocal Navier-Stokes equations and the fractional nonlocal Maxwell equations for lattices with long-range interactions. Continuum limits of these fractional-order difference equations are also suggested.
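A minimal sketch of the Grünwald-Letnikov difference itself, the building block named above, uses the standard weight recursion; the helper names are mine, and the α = 1 case is checked against the ordinary backward difference it must reproduce:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), generated by
    the standard recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h):
    """Left-sided GL fractional derivative of samples f on a uniform grid:
    D^alpha f(x_i) ≈ h^(-alpha) * sum_k w_k f(x_{i-k})."""
    w = gl_weights(alpha, len(f))
    out = np.empty_like(f)
    for i in range(len(f)):
        out[i] = np.dot(w[:i + 1], f[i::-1]) / h ** alpha
    return out

h = 0.01
x = np.arange(0.0, 1.0, h)
f = x ** 2
# sanity check: for alpha = 1 the weights reduce to (1, -1, 0, ...), i.e. the
# backward difference, so the result should approximate f'(x) = 2x
d1 = gl_derivative(f, 1.0, h)
print(bool(abs(d1[-1] - 2 * x[-1]) < 0.05))
```

Because the discrete GL operator is the formal power (1 - S)^α of the backward shift S, two half-derivatives compose to the first difference, mirroring the semigroup property of the continuum operators.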

  5. Northeast Parallel Architectures Center (NPAC) at Syracuse University

    DTIC Science & Technology

    1990-12-01

lattice models. On the CM-2 we will run a lattice gauge theory simulation of quantum chromodynamics (QCD), and on the CM-1 we will investigate the...into a three-dimensional grid with the stipulation that adjacent processors in the lattice correspond to proximate regions of space. Light paths will...be constrained to follow lattice links and the sum over all paths from light sources to each lattice site will be computed inductively by all

  6. Dynamic Behavior of Engineered Lattice Materials

    PubMed Central

    Hawreliak, J. A.; Lind, J.; Maddox, B.; Barham, M.; Messner, M.; Barton, N.; Jensen, B. J.; Kumar, M.

    2016-01-01

Additive manufacturing (AM) is enabling the fabrication of materials with engineered lattice structures at the micron scale. These mesoscopic structures fall between the length scale associated with the organization of atoms and the scale at which macroscopic structures are constructed. Dynamic compression experiments were performed to study the emergence of behavior owing to the lattice periodicity in AM materials on length scales that approach a single unit cell. For the lattice structures, both bend- and stretch-dominated, elastic deflection of the structure was observed ahead of the compaction of the lattice, whereas no elastic deformation was observed to precede the compaction in a stochastic, random structure. The material showed lattice characteristics in the elastic response, while the compaction was consistent with a model for compression of porous media. The experimental observations made on arrays of 4 × 4 × 6 lattice unit cells show excellent agreement with elastic wave velocity calculations for an infinite periodic lattice, as determined by Bloch wave analysis, and with finite element simulations. PMID:27321697

  7. Bulk diffusion in a kinetically constrained lattice gas

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  8. Mechanical and electrical strain response of a piezoelectric auxetic PZT lattice structure

    NASA Astrophysics Data System (ADS)

    Fey, Tobias; Eichhorn, Franziska; Han, Guifang; Ebert, Kathrin; Wegener, Moritz; Roosen, Andreas; Kakimoto, Ken-ichi; Greil, Peter

    2016-01-01

    A two-dimensional auxetic lattice structure was fabricated from a PZT piezoceramic. Tape-cast and sintered sheets with a thickness of 530 μm were laser cut into an inverted honeycomb lattice structure with re-entrant cell geometry (θ = -25°) and poling direction oriented perpendicular to the lattice plane. The in-plane strain response upon applying a uniaxial compression load as well as an electric field perpendicular to the lattice plane was analyzed by 2D image data detection analysis. The auxetic lattice structure exhibits orthotropic deformation behavior with a negative in-plane Poisson's ratio of -2.05. Compared to PZT bulk material, the piezoelectric auxetic lattice revealed a strain amplification by a factor of 30-70. Effective transversal coupling coefficients d31^(al) of the PZT lattice exceeding 4 × 10³ pm V⁻¹ were determined, which result in an effective hydrostatic coefficient dh^(al) 66 times larger than that of bulk PZT.

  9. Relationships between lattice energies of inorganic ionic solids

    NASA Astrophysics Data System (ADS)

    Kaya, Savaş

    2018-06-01

    Lattice energy, which is a measure of the stability of inorganic ionic solids, is the energy required to decompose a solid into its constituent independent gaseous ions. In the present work, the relationships between lattice energies of many diatomic and triatomic inorganic ionic solids are revealed and a simple rule that can be used for the prediction of the lattice energies of inorganic ionic solids is introduced. According to this rule, the lattice energy of an AB molecule can be predicted with the help of the lattice energies of AX, BY and XY molecules, in agreement with the experimental data. This rule is valid not only for diatomic molecules but also for triatomic molecules. The lattice energy equations proposed in this rule provide results compatible with previously published lattice energy equations by Jenkins, Kaya, Born-Lande, Born-Mayer, Kapustinskii and Reddy. For a large set of tested molecules, calculated percent standard deviation values with respect to experimental data and the results of the equations proposed in this work are generally between 1% and 2%.
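
    The abstract does not spell out the prediction rule itself, but one of the benchmark formulas it cites, the Kapustinskii equation, is simple enough to sketch. A hedged illustration (the constant, the damping length, and the NaCl radii are standard textbook values; the function name is illustrative):

```python
def kapustinskii(nu, z_plus, z_minus, r_plus_pm, r_minus_pm):
    """Kapustinskii estimate of the lattice energy in kJ/mol.
    nu: ions per formula unit; z_plus/z_minus: ionic charges;
    radii in picometres. K = 1.2025e5 kJ*pm/mol, d* = 34.5 pm."""
    K, d_star = 1.2025e5, 34.5
    r0 = r_plus_pm + r_minus_pm
    return K * nu * abs(z_plus * z_minus) / r0 * (1.0 - d_star / r0)

# NaCl: nu = 2, charges +1/-1, radii ~102 pm (Na+) and ~181 pm (Cl-)
u_nacl = kapustinskii(2, 1, -1, 102.0, 181.0)
```

    The estimate of roughly 746 kJ/mol for NaCl is within a few percent of the experimental lattice energy of about 787 kJ/mol, which is typical of the accuracy the abstract's comparison is concerned with.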

  10. Discrete-to-continuum modelling of weakly interacting incommensurate two-dimensional lattices.

    PubMed

    Español, Malena I; Golovaty, Dmitry; Wilber, J Patrick

    2018-01-01

    In this paper, we derive a continuum variational model for a two-dimensional deformable lattice of atoms interacting with a two-dimensional rigid lattice. The starting point is a discrete atomistic model for the two lattices which are assumed to have slightly different lattice parameters and, possibly, a small relative rotation. This is a prototypical example of a three-dimensional system consisting of a graphene sheet suspended over a substrate. We use a discrete-to-continuum procedure to obtain the continuum model which recovers both qualitatively and quantitatively the behaviour observed in the corresponding discrete model. The continuum model predicts that the deformable lattice develops a network of domain walls characterized by large shearing, stretching and bending deformation that accommodates the misalignment and/or mismatch between the deformable and rigid lattices. Two integer-valued parameters, which can be identified with the components of a Burgers vector, describe the mismatch between the lattices and determine the geometry and the details of the deformation associated with the domain walls.

  11. The kink-soliton and antikink-soliton in quasi-one-dimensional nonlinear monoatomic lattice

    NASA Astrophysics Data System (ADS)

    Xu, Quan; Tian, Qiang

    2005-04-01

    The quasi-one-dimensional nonlinear monoatomic lattice is analyzed. The kink-soliton and antikink-soliton are presented. When the interaction of the lattice is strong in the x-direction and weak in the y-direction, the two-dimensional (2D) lattice changes to a quasi-one-dimensional lattice. Taking nearest-neighbor interaction into account, the vibration equation can be transformed into the KPI, KPII and MKP equations. Considering the cubic nonlinear potential of the vibration in the lattice, the kink-soliton solution is presented. Considering the quartic nonlinear potential and the cubic interaction potential, the kink-soliton and antikink-soliton solutions are presented.

  12. Renal myofibroblasts contract collagen I matrix lattices in vitro.

    PubMed

    Kelynack, K J; Hewitson, T D; Pedagogos, E; Nicholls, K M; Becker, G J

    1999-01-01

    Myofibroblasts, cells with both fibroblastic and smooth muscle cell features, have been implicated in renal scarring. In addition to synthetic properties, contractile features and integrin expression may allow myofibroblasts to rearrange and contract interstitial collagenous proteins. Myofibroblasts from normal rat kidneys were grown in cell-populated collagen lattices to measure cell-generated contraction. Following detachment of the cell-populated collagen lattices, myofibroblasts progressively contracted the lattices, reducing lattice diameter by 42% at 24 h. Alignment of myofibroblasts, rearrangement of fibrils and beta(1) integrin expression were observed within lattices. We postulate that interstitial myofibroblasts contribute to renal scarring through manipulation of collagenous proteins. Copyright 1999 S. Karger AG, Basel

  13. Stripes and honeycomb lattice of quantized vortices in rotating two-component Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Kasamatsu, Kenichi; Sakashita, Kouhei

    2018-05-01

    We study numerically the structure of a vortex lattice in rotating two-component Bose-Einstein condensates with equal atomic masses and equal intra- and intercomponent coupling strengths. Numerical simulations of the Gross-Pitaevskii equation show that the quantized vortices in this situation form lattice configurations comprising vortex stripes, honeycomb lattices, and their complexes. This is a result of the degeneracy of the system under the SU(2) symmetry operation, which causes a continuous transformation between the above structures. In terms of the pseudospin representation, the complex lattice structures are identified as a hexagonal lattice of doubly winding half skyrmions.

  14. Ising antiferromagnet on the Archimedean lattices.

    PubMed

    Yu, Unjong

    2015-06-01

    Geometric frustration effects were studied systematically with the Ising antiferromagnet on the 11 Archimedean lattices using Monte Carlo methods. The Wang-Landau algorithm was adopted for static properties (specific heat and residual entropy) and the Metropolis algorithm for a freezing order parameter. The exact residual entropy was also found. Based on the degree of frustration and dynamic properties, their ground states were determined. The Shastry-Sutherland lattice and the trellis lattice are weakly frustrated and have two- and one-dimensional long-range-ordered ground states, respectively. The bounce, maple-leaf, and star lattices have the spin ice phase. The spin liquid phase appears in the triangular and kagome lattices.
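
    The Metropolis part of such a study can be sketched compactly. The toy below (my own illustration, on the unfrustrated square lattice rather than the 11 Archimedean lattices) starts an Ising antiferromagnet in the Néel state and checks that Metropolis dynamics at a temperature below the ordering transition keeps it long-range ordered:

```python
import math, random

def metropolis_ising_afm(L=8, T=1.0, sweeps=200, seed=1):
    """Metropolis dynamics for H = J * sum_<ij> s_i * s_j with J = +1
    (antiferromagnetic) on an L x L square lattice with periodic boundaries."""
    rng = random.Random(seed)
    # Néel (staggered) state: the AFM ground state on this bipartite lattice.
    s = [[1 if (i + j) % 2 == 0 else -1 for j in range(L)] for i in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = -2.0 * s[i][j] * nn  # energy cost of flipping s[i][j] (J = +1)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    # energy per site, counting each bond once
    E = sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
            for i in range(L) for j in range(L))
    return E / (L * L)
```

    On the bipartite square lattice the antiferromagnet maps onto the ferromagnet, so the energy per site stays near the ground-state value of -2J; on frustrated Archimedean lattices such as the kagome, no such ordered state exists, which is the point of the study above.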

  16. Exact diffusion constant in a lattice-gas wind-tree model on a Bethe lattice

    NASA Astrophysics Data System (ADS)

    Zhang, Guihua; Percus, J. K.

    1992-02-01

    Kong and Cohen [Phys. Rev. B 40, 4838 (1989)] obtained the diffusion constant of a lattice-gas wind-tree model in the Boltzmann approximation. The result is consistent with computer simulations for low tree concentration. In this Brief Report we find the exact diffusion constant of the model on a Bethe lattice, which turns out to be identical with the Kong-Cohen and Gunn-Ortuño results. Our interpretation is that the Boltzmann approximation is exact for this type of diffusion on a Bethe lattice in the same sense that the Bethe-Peierls approximation is exact for the Ising model on a Bethe lattice.

  18. The triangular kagomé lattices revisited

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyun; Yan, Weigen

    2013-11-01

    The dimer problem, Ising spins and bond percolation on the triangular kagomé lattice have been studied extensively by physicists. In this paper, based on the fact that the triangular kagomé lattice with toroidal boundary condition can be regarded as the line graph of the 3.12.12 lattice with toroidal boundary condition, we derive formulae for the number of spanning trees, the energy, and the Kirchhoff index of the triangular kagomé lattice with toroidal boundary condition.
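
    The spanning-tree count referred to above is, for any finite graph, given by Kirchhoff's matrix-tree theorem: the number of spanning trees equals any cofactor of the graph Laplacian. A small self-contained sketch with exact rational arithmetic (K4 is used only as a check, since Cayley's formula gives 4^{4-2} = 16 trees):

```python
from fractions import Fraction

def spanning_tree_count(adj):
    """Number of spanning trees via the matrix-tree theorem: determinant
    of the graph Laplacian with row 0 and column 0 deleted."""
    n = len(adj)
    lap = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
            for j in range(n)] for i in range(n)]
    m = [row[1:] for row in lap[1:]]  # delete first row and column
    det = Fraction(1)
    for col in range(n - 1):          # exact Gaussian elimination
        pivot = next((r for r in range(col, n - 1) if m[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        inv = 1 / m[col][col]
        for r in range(col + 1, n - 1):
            factor = m[r][col] * inv
            for c in range(col, n - 1):
                m[r][c] -= factor * m[col][c]
    return int(det)

# Complete graph K4: every pair of the 4 vertices connected
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
```

    The paper's contribution is a closed-form version of this count for the infinite triangular kagomé lattice; the function above is only the finite-graph primitive that such formulae generalize.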

  19. Direct Observation of Lattice Aluminum Environments in Li Ion Cathodes LiNi1-y-zCoyAlzO2 and Al-Doped LiNixMnyCozO2 via (27)Al MAS NMR Spectroscopy.

    PubMed

    Dogan, Fulya; Vaughey, John T; Iddir, Hakim; Key, Baris

    2016-07-06

    Direct observations of local lattice aluminum environments have been a major challenge for aluminum-bearing Li ion battery materials, such as LiNi1-y-zCoyAlzO2 (NCA) and aluminum-doped LiNixMnyCozO2 (NMC). (27)Al magic angle spinning (MAS) nuclear magnetic resonance (NMR) spectroscopy is the only structural probe currently available that can qualitatively and quantitatively characterize lattice and nonlattice (i.e., surface, coatings, segregation, secondary phase etc.) aluminum coordination and provide information that helps discern its effect in the lattice. In the present study, we use NMR to gain new insights into transition metal (TM)-O-Al coordination and evolution of lattice aluminum sites upon cycling. With the aid of first-principles DFT calculations, we show direct evidence of lattice Al sites, nonpreferential Ni/Co-O-Al ordering in NCA, and the lack of bulk lattice aluminum in aluminum-"doped" NMC. Aluminum coordination of the paramagnetic (lattice) and diamagnetic (nonlattice) nature is investigated for Al-doped NMC and NCA. For the latter, the evolution of the lattice site(s) upon cycling is also studied. A clear reordering of lattice aluminum environments due to nickel migration is observed in NCA upon extended cycling.

  20. Lattice-Induced Frequency Shifts in Sr Optical Lattice Clocks at the 10⁻¹⁷ Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westergaard, P. G.; Lodewyck, J.; Lecallier, A.

    2011-05-27

    We present a comprehensive study of the frequency shifts associated with the lattice potential in a Sr lattice clock by comparing two such clocks with a frequency stability reaching 5×10⁻¹⁷ after a 1 h integration time. We put the first experimental upper bound on the multipolar M1 and E2 interactions, significantly smaller than the recently predicted theoretical upper limit, and give a 30-fold improved upper limit on the effect of hyperpolarizability. Finally, we report on the first observation of the vector and tensor shifts in a Sr lattice clock. Combining these measurements, we show that all known lattice-related perturbations will not affect the clock accuracy down to the 10⁻¹⁷ level, even for lattices as deep as 150 recoil energies.

  1. The dissociation and recombination rates of CH4 through the Ni(111) surface: The effect of lattice motion

    NASA Astrophysics Data System (ADS)

    Wang, Wenji; Zhao, Yi

    2017-07-01

    Methane dissociation is a prototypical system for the study of surface reaction dynamics. The dissociation and recombination rates of CH4 through the Ni(111) surface are calculated by using the quantum instanton method with an analytical potential energy surface. The Ni(111) lattice is treated rigidly, classically, and quantum mechanically so as to reveal the effect of lattice motion. The results demonstrate that it is the lateral displacements, rather than the upward and downward movements of the surface nickel atoms, that significantly affect the rates. Compared with the rigid lattice, the classical relaxation of the lattice can increase the rates by lowering the free energy barriers. For instance, at 300 K, the dissociation and recombination rates with the classical lattice exceed the ones with the rigid lattice by 6 and 10 orders of magnitude, respectively. Compared with the classical lattice, the quantum delocalization rather than the zero-point energy of the Ni atoms further enhances the rates by widening the reaction path. For instance, the dissociation rate with the quantum lattice is about 10 times larger than that with the classical lattice at 300 K. On the rigid lattice, due to the zero-point energy difference between CH4 and CD4, the kinetic isotope effects are larger than 1 for the dissociation process, while they are smaller than 1 for the recombination process. The increasing kinetic isotope effect with decreasing temperature demonstrates that the quantum tunneling effect is remarkable for the dissociation process.

  2. Dimer covering and percolation frustration.

    PubMed

    Haji-Akbari, Amir; Haji-Akbari, Nasim; Ziff, Robert M

    2015-09-01

    Covering a graph or a lattice with nonoverlapping dimers is a problem that has received considerable interest in areas such as discrete mathematics, statistical physics, chemistry, and materials science. Yet, the problem of percolation on dimer-covered lattices has received little attention. In particular, percolation on lattices that are fully covered by nonoverlapping dimers has evidently not been considered. Here, we propose a procedure for generating random dimer coverings of a given lattice. We then compute the bond percolation threshold on random and ordered coverings of the square and the triangular lattices on the remaining bonds connecting the dimers. We obtain p_{c}=0.367713(2) and p_{c}=0.235340(1) for random coverings of the square and the triangular lattices, respectively. We observe that the percolation frustration induced as a result of dimer covering is larger in the low-coordination-number square lattice. There is also no relationship between the existence of long-range order in a covering of the square lattice and its percolation threshold. In particular, an ordered covering of the square lattice, denoted as the shifted covering in this paper, has an unusually low percolation threshold and is topologically identical to the triangular lattice. This is in contrast to the other ordered dimer coverings considered in this paper, which have higher percolation thresholds than the random covering. In the case of the triangular lattice, the percolation thresholds of the ordered and random coverings are very close, suggesting the lack of sensitivity of the percolation threshold to microscopic details of the covering in highly coordinated networks.
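
    The kind of bond-percolation measurement behind these thresholds can be sketched generically with a union-find spanning test. The toy below works on the plain square lattice rather than the dimer-covered lattices of the paper, and all names are illustrative:

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def spans(L, p, seed=0):
    """Open each bond of an L x L square lattice with probability p and
    test for a top-to-bottom spanning cluster via union-find."""
    rng = random.Random(seed)
    top, bottom = L * L, L * L + 1          # two virtual terminal nodes
    parent = list(range(L * L + 2))
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L):
        for j in range(L):
            site = i * L + j
            if i == 0:
                union(site, top)
            if i == L - 1:
                union(site, bottom)
            if j + 1 < L and rng.random() < p:   # horizontal bond
                union(site, site + 1)
            if i + 1 < L and rng.random() < p:   # vertical bond
                union(site, site + L)
    return find(parent, top) == find(parent, bottom)
```

    Averaging spans over many seeds while scanning p brackets the threshold, which for square-lattice bond percolation is exactly 1/2; the paper's thresholds come from the same idea applied to the bonds left between dimers.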

  3. Coupled Monte Carlo neutronics and thermal hydraulics for power reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernnat, W.; Buck, M.; Mattes, M.

    The availability of high performance computing resources increasingly enables the use of detailed Monte Carlo models even for full core power reactors. The detailed structure of the core can be described by lattices, modeled by so-called repeated structures, e.g. in Monte Carlo codes such as MCNP5 or MCNPX. For cores with mainly uniform material compositions and fuel and moderator temperatures, there is no problem in constructing core models. However, when the material composition and the temperatures vary strongly, a huge number of different material cells must be described, which complicates the input and in many cases exceeds code or memory limits. The second problem arises with the preparation of corresponding temperature dependent cross sections and thermal scattering laws. Only if these problems can be solved is a realistic coupling of Monte Carlo neutronics with an appropriate thermal-hydraulics model possible. In this paper a method for the treatment of detailed material and temperature distributions in MCNP5 is described, based on user-specified internal functions which assign distinct elements of the core cells to material specifications (e.g. water density) and temperatures from a thermal-hydraulics code. The core grid itself can be described with a uniform material specification. The temperature dependency of cross sections and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. Applications will be shown for the stationary part of the Purdue PWR benchmark using ATHLET for thermal-hydraulics and for a generic Modular High Temperature reactor using THERMIX for thermal-hydraulics. (authors)
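
    The temperature interpolation described in the last sentences can be sketched simply: tabulate cross sections at a few temperatures and blend the two bracketing tables. Interpolating linearly in sqrt(T) is a common choice for Doppler-broadened data, though the paper's exact scheme is not stated here; everything below is an illustrative assumption:

```python
import bisect, math

def interp_xs(temps, xs_tables, T):
    """Interpolate a cross-section table to temperature T, linearly in sqrt(T).
    temps: sorted temperatures [K]; xs_tables[i]: cross sections tabulated
    at temps[i], all on the same energy grid."""
    if T <= temps[0]:
        return list(xs_tables[0])
    if T >= temps[-1]:
        return list(xs_tables[-1])
    hi = bisect.bisect_right(temps, T)
    lo = hi - 1
    s0, s1, s = math.sqrt(temps[lo]), math.sqrt(temps[hi]), math.sqrt(T)
    w = (s - s0) / (s1 - s0)
    return [(1 - w) * a + w * b for a, b in zip(xs_tables[lo], xs_tables[hi])]
```

    This is why only "a limited number of data sets generated for different temperatures" is needed: any intermediate fuel or moderator temperature supplied by the thermal-hydraulics code is served from the bracketing tables.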

  4. Experimental evidence for the lattice instability of Bi-based superconducting systems

    NASA Astrophysics Data System (ADS)

    Yusheng, He; Jiong, Xiang; Hsin, Wang; Aisheng, He; Jincang, Zhang; Fanggao, Chang

    1989-11-01

    Ultrasonic measurements, specific heat and thermal analysis experiments, X-ray diffraction study and infrared investigation revealed that there are anomalous structural changes, or lattice instabilities, near 200 K in single 2212 or 2223 phase samples of the Bi(Pb)-Sr-Ca-Cu-O system. Detailed study showed that the anomalous changes or lattice instabilities are isothermal-like processes and have the characteristics of a structural phase transition, accompanied by increases in lattice constants. A possible mechanism for this lattice instability is discussed.

  5. Transmission Electron Microscope Measures Lattice Parameters

    NASA Technical Reports Server (NTRS)

    Pike, William T.

    1996-01-01

    Convergent-beam microdiffraction (CBM) in a thermionic-emission transmission electron microscope (TEM) is a technique for measuring lattice parameters of nanometer-sized specimens of crystalline materials. Lattice parameters determined by use of CBM are accurate to within a few parts in a thousand. The technique was developed especially for quantifying lattice parameters, and thus strains, in epitaxial mismatched-crystal-lattice multilayer structures in multiple-quantum-well and other advanced semiconductor electronic devices. The ability to determine strains in individual layers contributes to understanding of novel electronic behaviors of devices.

  6. Optical trapping via guided resonance modes in a Slot-Suzuki-phase photonic crystal lattice.

    PubMed

    Ma, Jing; Martínez, Luis Javier; Povinelli, Michelle L

    2012-03-12

    A novel photonic crystal lattice is proposed for trapping a two-dimensional array of particles. The lattice is created by introducing a rectangular slot in each unit cell of the Suzuki-Phase lattice to enhance the light confinement of guided resonance modes. Large quality factors on the order of 10⁵ are predicted in the lattice. A significant decrease of the optical power required for optical trapping can be achieved compared to our previous design.

  7. Nonlinear dust-lattice waves: a modified Toda lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cramer, N. F.

    Charged dust grains in a plasma interact via a Coulomb potential, but also with an exponential component to the potential due to Debye shielding in the background plasma. Here we investigate large-amplitude oscillations and waves in dust lattices, employing techniques used in Toda lattice analysis. The lattice consists of a linear chain of particles, or a periodic ring as occurs in experimentally observed dust particle clusters. The particle motion has a triangular waveform, and becomes chaotic for large-amplitude motion of a grain.

  8. The crystal structure of the new ternary antimonide Dy3Cu20+xSb11-x (x ≈ 2)

    NASA Astrophysics Data System (ADS)

    Fedyna, L. O.; Bodak, O. I.; Fedorchuk, A. O.; Tokaychuk, Ya. O.

    2005-06-01

    A new ternary antimonide Dy3Cu20+xSb11-x (x ≈ 2) was synthesized and its crystal structure was determined by direct methods from X-ray powder diffraction data (diffractometer DRON-3M, Cu Kα radiation, R = 6.99%, R = 12.27%, R = 11.55%). The compound crystallizes with its own cubic structure type: space group F-43m, Pearson code cF272, a = 16.6150(2) Å, Z = 8. The structure of Dy3Cu20+xSb11-x (x ≈ 2) can be obtained from the BaHg11 structure type by doubling the lattice parameter and subtracting 16 atoms. The studied structure is compared with the structures of known compounds that crystallize in the same space group with similar cell parameters.

  9. Structures and properties of materials recovered from high shock pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nellis, W.J.

    1994-03-01

    Shock compression produces high dynamic pressures, densities, temperatures, and their quench rates. Because of these extreme conditions, shock compression produces materials with novel crystal structures, microstructures, and physical properties. Using a 6.5-m-long two-stage gun, we perform experiments with specimens up to 10 mm in diameter and 0.001--1 mm thick. For example, oriented disks of melt-textured superconducting YBa2Cu3O7 were shocked to 7 GPa without macroscopic fracture. Lattice defects are deposited in the crystal, which improve magnetic hysteresis at ~1 kOe. A computer code has been developed to simulate shock compaction of 100 powder particles. Computations will be compared with experiments with 15--20 μm Cu powders. The method is applicable to other powders and dynamic conditions.

  10. Recognition of an obstacle in a flow using artificial neural networks.

    PubMed

    Carrillo, Mauricio; Que, Ulices; González, José A; López, Carlos

    2017-08-01

    In this work a series of artificial neural networks (ANNs) has been developed with the capacity to estimate the size and location of an obstacle obstructing the flow in a pipe. The ANNs learn the size and location of the obstacle by reading the profiles of the dynamic pressure q or the x component of the velocity v_{x} of the fluid at a certain distance from the obstacle. Data to train the ANNs were generated using numerical simulations with a two-dimensional lattice Boltzmann code. We analyzed various cases, varying both the diameter and the position of the obstacle on the y axis, obtaining good estimates, as measured by the R^{2} coefficient, for the cases under study. Although the ANN showed problems with the classification of very small obstacles, the general results show a very good capacity for prediction.
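
    A minimal version of such an ANN regression can be written in a few dozen lines of pure Python: a single hidden layer trained by stochastic gradient descent on synthetic velocity-deficit "profiles". The data model and all names are illustrative stand-ins for the paper's lattice-Boltzmann-generated training set:

```python
import math, random

def train_obstacle_net(n_hidden=8, epochs=500, lr=0.1, seed=0):
    """One-hidden-layer regression net mapping a 5-point velocity-deficit
    profile to an obstacle diameter. Synthetic data: the deficit depth is
    proportional to the diameter (an illustrative assumption)."""
    rng = random.Random(seed)
    xs_grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
    data = []
    for _ in range(200):
        d = rng.uniform(0.1, 1.0)
        profile = [1.0 - d * math.exp(-x * x) for x in xs_grid]
        data.append((profile, d))
    W1 = [[rng.uniform(-0.5, 0.5) for _ in xs_grid] for _ in range(n_hidden)]
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return h, sum(w2 * hk for w2, hk in zip(W2, h))

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    loss_before = mse()
    for _ in range(epochs):
        x, y = data[rng.randrange(len(data))]
        h, out = forward(x)
        err = out - y
        for k in range(n_hidden):
            grad_hidden = err * W2[k] * (1.0 - h[k] ** 2)  # backprop through tanh
            W2[k] -= lr * err * h[k]
            for i, xi in enumerate(x):
                W1[k][i] -= lr * grad_hidden * xi
    return loss_before, mse()
```

    The returned pair (initial loss, final loss) shows the fit improving; the real study trains on q or v_x profiles from lattice Boltzmann simulation and also regresses the obstacle position.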

  11. ISO Guest Observer Data Analysis and LWS Instrument Team Activities

    NASA Technical Reports Server (NTRS)

    Oliversen, Ronald J. (Technical Monitor); Smith, Howard A.

    2003-01-01

    We have designed and fabricated infrared filters for use at wavelengths greater than or equal to 15 microns. Unlike conventional dielectric filters used at the short wavelengths, ours are made from stacked metal grids, spaced at a very small fraction of the performance wavelengths. The individual lattice layers are gold, the spacers are polyimide, and they are assembled using integrated circuit processing techniques; they resemble some metallic photonic band-gap structures. We simulate the filter performance accurately, including the coupling of the propagating, near-field electromagnetic modes, using computer aided design codes. We find no anomalous absorption. The geometrical parameters of the grids are easily altered in practice, allowing for the production of tuned filters with predictable useful transmission characteristics. Although developed for astronomical instrumentation, the filters are broadly applicable in systems across infrared and terahertz bands.

  12. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) heterogeneous model for two-dimensional transport theory calculations; and (2) microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data of critical experiments and Monte Carlo calculations, verifying the high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.

  13. LEGO - A Class Library for Accelerator Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunhai

    1998-11-19

    An object-oriented class library for accelerator design and simulation is designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and non-linear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code is used to design and simulate the lattices of the PEP-II and SPEAR3. It is also used for the commissioning of the PEP-II. Some examples of how to use the library will be given.

  14. Surface passivation of InP solar cells with InAlAs layers

    NASA Technical Reports Server (NTRS)

    Jain, Raj K.; Flood, Dennis J.; Landis, Geoffrey A.

    1993-01-01

    The efficiency of indium phosphide solar cells is limited by high surface recombination. The effect of a lattice-matched In(0.52)Al(0.48)As window layer on InP solar cells is investigated using the numerical code PC-1D. It was found that the InAlAs layer significantly enhances p(+)n cell efficiency, while no appreciable improvement is seen for n(+)p cells. The conduction band energy discontinuity at the heterojunction helps reduce surface recombination. The efficiency of an optimally designed InP cell improves from 15.4 percent to 23 percent AM0 for a 10 nm thick InAlAs layer. The efficiency improvement diminishes with increasing InAlAs layer thickness, due to light absorption in the window layer.

  15. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focused on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  16. The influence of Thai culture on diabetes perceptions and management.

    PubMed

    Sowattanangoon, Napaporn; Kotchabhakdi, Naipinich; Petrie, Keith J

    2009-06-01

    To explore the way Thai patients perceive and manage their diabetes. Using a focused ethnographic approach, face-to-face interviews were conducted at two public hospitals in Bangkok. All interviews (n=27) were audio-taped and transcribed verbatim. Analysis of the interview transcripts was completed thematically. The findings showed that Thai patients manage their diabetes according to their beliefs about diabetes. These beliefs are constructed using both modern and traditional knowledge. For example, some patients explained the cause of their illness as being due to biomedical factors such as genetics, and also cultural factors such as karma from either previous or current lifetimes. The analysis also revealed that some aspects of Thai life facilitate diabetes self-management while other aspects hamper good control of the illness. For example, Buddhist values of moderation contribute positively to dietary change, while, on the other hand, the importance of rice in the Thai diet can impede successful self-management strategies. The results of this research indicate that Thai culture influences diabetes perceptions and management. Culturally appropriate treatment guidelines should be established for diabetes management that give special consideration to the significance and meaning of food and to Buddhist beliefs.

  17. Exploring Partonic Structure of Hadrons Using ab initio Lattice QCD Calculations.

    PubMed

    Ma, Yan-Qing; Qiu, Jian-Wei

    2018-01-12

    Following our previous proposal, we construct a class of good "lattice cross sections" (LCSs), from which we can study the partonic structure of hadrons from ab initio lattice QCD calculations. These good LCSs, on the one hand, can be calculated directly in lattice QCD, and on the other hand, can be factorized into parton distribution functions (PDFs) with calculable coefficients, in the same way as QCD factorization for factorizable hadronic cross sections. PDFs could be extracted from QCD global analysis of the lattice QCD generated data of LCSs. We also show that the proposed functions for lattice QCD calculation of PDFs in the literature are special cases of these good LCSs.

  18. Complex photonic lattices embedded with tailored intrinsic defects by a dynamically reconfigurable single step interferometric approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, Jolly, E-mail: jolly.xavierp@physics.iitd.ac.in; Joseph, Joby, E-mail: joby@physics.iitd.ac.in

    2014-02-24

    We report sculptured diverse photonic lattices simultaneously embedded with intrinsic defects of tunable type, number, shape, and position by a single-step dynamically reconfigurable fabrication approach based on a programmable phase spatial light modulator-assisted interference lithography. The presented results on controlled formation of intrinsic defects in periodic as well as transversely quasicrystallographic lattices, irrespective and independent of their designed lattice geometry, portray the flexibility and versatility of the approach. The defect formation in photonic lattices is also experimentally analyzed. Further, we demonstrate the feasibility of fabricating such defect-embedded photonic lattices in a photoresist, aiming at concrete integrated photonic applications.

  19. Upon Generating Discrete Expanding Integrable Models of the Toda Lattice Systems and Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Zhang, Xiangzhi; Wang, Yan; Liu, Jiangen

    2017-01-01

    With the help of the R-matrix approach, we present the Toda lattice systems that have extensive applications in statistical physics and quantum physics. By constructing a new discrete integrable formula by the R-matrix, the discrete expanding integrable models of the Toda lattice systems and their Lax pairs are generated, respectively. By following the constructing formula again, we obtain the corresponding (2+1)-dimensional Toda lattice systems and their Lax pairs, as well as their (2+1)-dimensional discrete expanding integrable models. Finally, some conservation laws of a (1+1)-dimensional generalised Toda lattice system and a new (2+1)-dimensional lattice system are generated, respectively.

  20. High-Precision Monte Carlo Simulation of the Ising Models on the Penrose Lattice and the Dual Penrose Lattice

    NASA Astrophysics Data System (ADS)

    Komura, Yukihiro; Okabe, Yutaka

    2016-04-01

    We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total size of N = 20633239 sites, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use a generalized single-GPU-based implementation of the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we definitively confirm the duality relation between the critical temperatures on the dual pair of quasilattices with a high degree of accuracy, sinh(2J/Tc) sinh(2J/Tc*) = 1.00000 ± 0.00004.
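    The quoted duality relation can be checked directly from the two critical temperatures reported in this abstract; a minimal numerical sketch (Python, not part of the original study):

```python
import math

# Critical temperatures (in units of J) reported for the Penrose lattice
# and its dual, from the Monte Carlo study summarized above.
Tc_penrose = 2.39781
Tc_dual = 2.14987

# Kramers-Wannier-type duality relation for the dual pair of quasilattices:
# sinh(2J/Tc) * sinh(2J/Tc*) should equal 1 at criticality (J = 1 here).
product = math.sinh(2.0 / Tc_penrose) * math.sinh(2.0 / Tc_dual)

print(f"sinh(2/Tc) * sinh(2/Tc*) = {product:.5f}")  # close to 1 within the quoted uncertainty
```

    The product lands on unity to within the quoted ± 0.00004, consistent with the duality claim.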

  1. Lattice matched crystalline substrates for cubic nitride semiconductor growth

    DOEpatents

    Norman, Andrew G; Ptak, Aaron J; McMahon, William E

    2015-02-24

    Disclosed embodiments include methods of fabricating a semiconductor layer or device and devices fabricated thereby. The methods include, but are not limited to, providing a substrate having a cubic crystalline surface with a known lattice parameter and growing a cubic crystalline group III-nitride alloy layer on the cubic crystalline substrate by coincident site lattice matched epitaxy. The cubic crystalline group III-nitride alloy may be prepared to have a lattice parameter (a') that is related to the lattice parameter of the substrate (a). The group III-nitride alloy may be a cubic crystalline In.sub.xGa.sub.yAl.sub.1-x-yN alloy. The lattice parameter of the In.sub.xGa.sub.yAl.sub.1-x-yN or other group III-nitride alloy may be related to the substrate lattice parameter by (a') = √2(a) or (a') = (a)/√2. The semiconductor alloy may be prepared to have a selected band gap.
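    The √2 relation in this patent abstract comes from coincident site lattice geometry: rotating the epilayer cell 45° in-plane aligns its cell diagonal with the substrate cell edge. A small illustrative sketch (the substrate value below is hypothetical, chosen only to demonstrate the arithmetic):

```python
import math

def csl_matched_parameters(a_substrate):
    """For coincident site lattice matching on a cubic substrate with lattice
    parameter a, a 45-degree-rotated cubic epilayer has coincident sites when
    its own parameter a' equals sqrt(2)*a or a/sqrt(2)."""
    return a_substrate * math.sqrt(2.0), a_substrate / math.sqrt(2.0)

# Hypothetical substrate lattice parameter in angstroms (illustrative only).
a_sub = 6.35
a_large, a_small = csl_matched_parameters(a_sub)
print(f"matched epilayer parameters: {a_large:.3f} A or {a_small:.3f} A")
```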

  2. Exploring photonic topological insulator states in a circuit-QED lattice

    NASA Astrophysics Data System (ADS)

    Li, Jing-Ling; Shan, Chuan-Jia; Zhao, Feng

    2018-04-01

    We propose a simple protocol to explore the topological properties of photonic integer quantum Hall states in a one-dimensional circuit-QED lattice. By periodically modulating the on-site photonic energies in such a lattice, we demonstrate that this one-dimensional lattice model can be mapped into a two-dimensional integer quantum Hall insulator model. Based on the lattice-based cavity input-output theory, we show that both the photonic topologically protected edge states and the topological invariants can be clearly measured from the final steady state of the resonator lattice after taking into account cavity dissipation. Interestingly, we also find that the measurement signals associated with the above topological features remain quite unambiguous even in five coupled dissipative resonators. Our work opens up a new prospect of exploring topological states with a small-size dissipative quantum artificial lattice, which is quite attractive to the current quantum optics community.

  3. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

    In this paper, the elementary moves of the BFACF-algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  4. Ultraviolet laser spectroscopy of neutral mercury in a one-dimensional optical lattice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mejri, S.; McFerran, J. J.; Yi, L.

    2011-09-15

    We present details on the ultraviolet lattice spectroscopy of the (6s²) ¹S₀ ↔ (6s6p) ³P₀ transition in neutral mercury, specifically ¹⁹⁹Hg. Mercury atoms are loaded into a one-dimensional vertically aligned optical lattice from a magneto-optical trap with an rms temperature of ≈60 μK. We describe aspects of the magneto-optical trapping, the lattice cavity design, and the techniques employed to trap and detect mercury in an optical lattice. The clock-line frequency dependence on lattice depth is measured at a range of lattice wavelengths. We confirm the magic wavelength to be 362.51(0.16) nm. Further observations to those reported by Yi et al. [Phys. Rev. Lett. 106, 073005 (2011)] are presented regarding the laser excitation of a Wannier-Stark ladder of states.

  5. A New Equivalence Theory Method for Treating Doubly Heterogeneous Fuel - I. Theory

    DOE PAGES

    Williams, Mark L.; Lee, Deokjung; Choi, Sooyoung

    2015-03-04

    A new methodology has been developed to treat resonance self-shielding in doubly heterogeneous very high temperature gas-cooled reactor systems in which the fuel compact region of a reactor lattice consists of small fuel grains dispersed in a graphite matrix. This new method first homogenizes the fuel grain and matrix materials using an analytically derived disadvantage factor from a two-region problem with equivalence theory and the intermediate resonance method. This disadvantage factor accounts for spatial self-shielding effects inside each grain within the framework of an infinite array of grains. Then the homogenized fuel compact is self-shielded using a Bondarenko method to account for interactions between the fuel compact regions in the fuel lattice. In the final form of the equations for actual implementations, the double-heterogeneity effects are accounted for by simply using a modified definition of a background cross section, which includes geometry parameters and cross sections for both the grain and fuel compact regions. With the new method, the doubly heterogeneous resonance self-shielding effect can be treated easily even with legacy codes programmed only for a singly heterogeneous system by simple modifications in the background cross section for resonance integral interpolations. This paper presents a detailed derivation of the new method and a sensitivity study of double-heterogeneity parameters introduced during the derivation. The implementation of the method and verification results for various test cases are presented in the companion paper.

  6. Engineering Room-temperature Superconductors Via ab-initio Calculations

    NASA Astrophysics Data System (ADS)

    Gulian, Mamikon; Melkonyan, Gurgen; Gulian, Armen

    The BCS, or bosonic, model of superconductivity, as Little and Ginzburg first argued, can bring in superconductivity at room temperature given a high-enough frequency of the bosonic mode. It was further elucidated by Kirzhnits et al. that the condition for the existence of high-temperature superconductivity is closely related to negative values of the real part of the dielectric function at finite values of the reciprocal lattice vectors. In view of these findings, the task is to calculate the dielectric function for real materials. Then the poles of this function will indicate the existence of bosonic excitations which can serve as a "glue" for Cooper pairing, and if the frequency is high enough, and the dielectric matrix is simultaneously negative, the material is a good candidate for very high-Tc superconductivity. Thus, our approach is to elaborate a methodology of ab-initio calculation of the dielectric function of various materials, and then point out appropriate candidates. We used powerful codes (TDDFT with the DP package in conjunction with ABINIT) for computing dielectric responses at finite values of the wave vectors in the reciprocal lattice space. Though our report is concerned with the particular problem of superconductivity, the application range of the data processing methodology is much wider. The ability to compute the dielectric function of existing and still non-existing (though predicted!) materials will have many more repercussions not only in fundamental sciences but also in technology and industry.

  7. Development of ORIGEN Libraries for Mixed Oxide (MOX) Fuel Assembly Designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mertyurek, Ugur; Gauld, Ian C.

    In this research, ORIGEN cross section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict nuclide inventories, radiation sources and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup-dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. Finally, the nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.

  8. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  9. Pathogenic mutations of TGFBI and CHST6 genes in Chinese patients with Avellino, lattice, and macular corneal dystrophies

    PubMed Central

    Huo, Ya-nan; Yao, Yu-feng; Yu, Ping

    2011-01-01

    Objective: To investigate gene mutations associated with three different types of corneal dystrophies (CDs), and to establish a phenotype-genotype correlation. Methods: Two patients with Avellino corneal dystrophy (ACD), four patients with lattice corneal dystrophy type I (LCD I) from one family, and three patients with macular corneal dystrophy type I (MCD I) were subjected to both clinical and genetic examinations. Slit lamp examination was performed for all the subjects to assess their corneal phenotypes. Genomic DNA was extracted from peripheral blood leukocytes. The coding regions of the human transforming growth factor β-induced (TGFBI) gene and carbohydrate sulfotransferase 6 (CHST6) gene were amplified by polymerase chain reaction (PCR) and subjected to direct sequencing. DNA samples from 50 healthy volunteers were used as controls. Results: Clinical examination showed three different phenotypes of CDs. Genetic examination identified that two ACD subjects were associated with homozygous R124H mutation of TGFBI, and four LCD I subjects were all associated with R124C heterozygous mutation. One MCD I subject was associated with a novel S51X homozygous mutation in CHST6, while the other two MCD I subjects harbored a previously reported W232X homozygous mutation. Conclusions: Our study highlights the prevalence of codon 124 mutations in the TGFBI gene among the Chinese ACD and LCD I patients. Moreover, we found a novel mutation among MCD I patients. PMID:21887843

  10. Development of ORIGEN Libraries for Mixed Oxide (MOX) Fuel Assembly Designs

    DOE PAGES

    Mertyurek, Ugur; Gauld, Ian C.

    2015-12-24

    In this research, ORIGEN cross section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict nuclide inventories, radiation sources and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup-dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. Finally, the nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.

  11. Model of wet chemical etching of swift heavy ions tracks

    NASA Astrophysics Data System (ADS)

    Gorbunov, S. A.; Malakhov, A. I.; Rymzhanov, R. A.; Volkov, A. E.

    2017-10-01

    A model of wet chemical etching of tracks of swift heavy ions (SHI) decelerated in solids in the electronic stopping regime is presented. This model takes into account both possible etching modes: etching controlled by diffusion of etchant molecules to the etching front, and etching controlled by the rate of a reaction of an etchant with a material. Olivine ((Mg0.88Fe0.12)2SiO4) crystals were chosen as a system for modeling. Two mechanisms of chemical activation of olivine around the SHI trajectory are considered. The first mechanism is activation stimulated by structural transformations in a nanometric track core, while the second one results from neutralization of metallic atoms by generated electrons spreading over micrometric distances. Monte Carlo simulations (TREKIS code) form the basis for the description of excitations of the electronic subsystem and the lattice of olivine in an SHI track at times up to 100 fs after the projectile passage. Molecular dynamics supplies the initial conditions for modeling of lattice relaxation at longer times. These simulations enable us to estimate the effects of the chemical activation of olivine governed by both mechanisms. The developed model was applied to describe the chemical activation and etching kinetics of tracks of 2.1 GeV Au ions in olivine. The estimated lengthwise etching rate (38 µm·h⁻¹) is in reasonable agreement with that detected in the experiments (24 µm·h⁻¹).

  12. Superalloy Lattice Block Structures

    NASA Technical Reports Server (NTRS)

    Whittenberger, J. D.; Nathal, M. V.; Hebsur, M. G.; Kraus, D. L.

    2003-01-01

    In their simplest form, lattice block panels are produced by direct casting and result in lightweight, fully triangulated truss-like configurations which provide strength and stiffness [2]. The earliest realizations of lattice block were made from Al and steels, primarily under funding from the US Navy [3]. This work also showed that the mechanical efficiency (e.g., specific stiffness) of lattice block structures approached that of honeycomb structures [2]. The lattice architectures are also less anisotropic, and the investment casting route should provide a large advantage in cost and temperature capability over honeycombs, which are limited to alloys that can be processed into foils. Based on this early work, a program was initiated to determine the feasibility of extending the lattice block concept to high temperature superalloys [3]. The objective of this effort was to provide an alternative to intermetallics and composites in achieving a lightweight high temperature structure without sacrificing the damage tolerance and moderate cost inherent in superalloys. To establish the feasibility of the superalloy lattice block concept, work was performed in conjunction with JAMCORP, Inc., Billerica, MA, to produce a number of lattice block panels from both IN718 and Mar-M247.

  13. A quest for 2D lattice materials for actuation

    NASA Astrophysics Data System (ADS)

    Pronk, T. N.; Ayas, C.; Tekõglu, C.

    2017-08-01

    In the last two decades, most studies in shape morphing technology have focused on the Kagome lattice materials, which have superior properties such as in-plane isotropy, high specific stiffness and strength, and a low energy requirement for actuation of their members. The Kagome lattice is a member of the family of semi-regular tessellations of the plane. Two fundamental questions naturally arise: i-) What makes a lattice material suitable for actuation? ii-) Are there other tessellations more effective than the Kagome lattice for actuation? The present paper tackles both questions, and provides a clear answer to the first one by comparing an alternative lattice material, the hexagonal cupola, with the Kagome lattice in terms of mechanical/actuation properties. The second question remains open, but is hopefully easier to tackle owing to a newly discovered criterion: for an n-dimensional (n = 2, 3) in-plane isotropic lattice material to be suitable for actuation, its pin-jointed equivalent must obey the generalised Maxwell's rule, and must possess M = 3(n - 1) non-strain-producing finite kinematic mechanisms.
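    The criterion quoted in this abstract rests on a simple count. In the standard 2D form of the generalised Maxwell rule (Calladine's extension), the numbers of bars b, joints j, self-stress states s and internal mechanisms m of a pin-jointed frame satisfy m - s = 2j - 3 - b. The sketch below applies that count and the M = 3(n - 1) criterion for n = 2; it is an illustration of the counting rule, not the paper's own analysis:

```python
def maxwell_count_2d(bars, joints, self_stresses=0):
    """Generalised Maxwell rule for a 2D pin-jointed frame:
    internal mechanisms m = 2j - 3 - b + s (3 rigid-body motions removed)."""
    return 2 * joints - 3 - bars + self_stresses

# A single triangle (3 joints, 3 bars) is statically and kinematically
# determinate: no internal mechanisms, no states of self-stress.
assert maxwell_count_2d(bars=3, joints=3) == 0

# Criterion quoted above: a 2D (n = 2) in-plane isotropic lattice suitable
# for actuation should possess M = 3(n - 1) = 3 finite mechanisms.
n = 2
M_required = 3 * (n - 1)
print(M_required)  # 3
```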

  14. Magnetic-film atom chip with 10 μm period lattices of microtraps for quantum information science with Rydberg atoms.

    PubMed

    Leung, V Y F; Pijn, D R M; Schlatter, H; Torralbo-Campo, L; La Rooij, A L; Mulder, G B; Naber, J; Soudijn, M L; Tauschinsky, A; Abarbanel, C; Hadad, B; Golan, E; Folman, R; Spreeuw, R J C

    2014-05-01

    We describe the fabrication and construction of a setup for creating lattices of magnetic microtraps for ultracold atoms on an atom chip. The lattice is defined by lithographic patterning of a permanent magnetic film. Patterned magnetic-film atom chips enable a large variety of trapping geometries over a wide range of length scales. We demonstrate an atom chip with a lattice constant of 10 μm, suitable for experiments in quantum information science employing the interaction between atoms in highly excited Rydberg energy levels. The active trapping region contains lattice regions with square and hexagonal symmetry, with the two regions joined at an interface. A structure of macroscopic wires, cutout of a silver foil, was mounted under the atom chip in order to load ultracold (87)Rb atoms into the microtraps. We demonstrate loading of atoms into the square and hexagonal lattice sections simultaneously and show resolved imaging of individual lattice sites. Magnetic-film lattices on atom chips provide a versatile platform for experiments with ultracold atoms, in particular for quantum information science and quantum simulation.

  15. Strong coupling constant from Adler function in lattice QCD

    NASA Astrophysics Data System (ADS)

    Hudspith, Renwick J.; Lewis, Randy; Maltman, Kim; Shintani, Eigo

    2016-09-01

    We compute the QCD coupling constant, αs, from the Adler function with the vector hadronic vacuum polarization (HVP) function. On the lattice, the Adler function can be measured from the difference of the HVP at two different momentum scales. The HVP is measured from the conserved-local vector current correlator using nf = 2 + 1 flavor Domain Wall lattice data with three different lattice cutoffs, up to a⁻¹ ≈ 3.14 GeV. To avoid the lattice artifact due to O(4) symmetry breaking, we apply a cylinder cut on the lattice momentum with reflection projection onto the vector current correlator, which then provides a smooth function of the momentum scale for the extracted HVP. We present a global fit of the lattice data at a justified momentum scale with three lattice cutoffs using continuum perturbation theory at O(αs⁴) to obtain the coupling in the continuum limit at arbitrary scale. We take the running to the Z boson mass through the appropriate thresholds, and obtain αs(5)(MZ) = 0.1191(24)(37), where the first error is statistical and the second systematic.

  16. New edge-centered photonic square lattices with flat bands

    NASA Astrophysics Data System (ADS)

    Zhang, Da; Zhang, Yiqi; Zhong, Hua; Li, Changbiao; Zhang, Zhaoyang; Zhang, Yanpeng; Belić, Milivoj R.

    2017-07-01

    We report a new class of edge-centered photonic square lattices with multiple flat bands, and consider in detail two examples: the Lieb-5 and Lieb-7 lattices. These lattices have 5 and 7 sites in the unit cell, respectively, and in general the number is restricted to odd integers. The number of flat bands m in the new Lieb lattices is related to the number of sites N in the unit cell by a simple formula m = (N - 1)/2. The flat bands reported here are independent of the pseudomagnetic field. The properties of lattices with even and odd numbers of flat bands are different. We consider the localization of light in such Lieb lattices. If the input beam excites the flat-band mode, it will not diffract during propagation, owing to the strong mode localization. In the Lieb-7 lattice, the beam will also oscillate during propagation and still not diffract. The period of oscillation is determined by the energy difference between the two flat bands. This study provides a new platform for investigating light trapping, photonic topological insulators, and pseudospin-mediated vortex generation.
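    The flat-band counting formula quoted in this abstract, m = (N - 1)/2 with N odd, can be tabulated directly; a trivial sketch for illustration:

```python
def flat_band_count(sites_per_cell):
    """Number of flat bands m in an edge-centered Lieb-N square lattice,
    using the formula m = (N - 1)/2 quoted above (N must be odd)."""
    if sites_per_cell % 2 == 0:
        raise ValueError("N is restricted to odd integers")
    return (sites_per_cell - 1) // 2

# The Lieb-5 and Lieb-7 lattices discussed in the abstract:
print(flat_band_count(5), flat_band_count(7))  # 2 3
```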

  17. Investigation of the Fermi-Hubbard model with 6Li in an optical lattice

    NASA Astrophysics Data System (ADS)

    Hart, R. A.; Duarte, P. M.; Yang, T.-L.; Hulet, R. G.

    2013-05-01

    We present results of our investigation of the physics of the Fermi-Hubbard model using an ultracold gas of 6Li loaded into an optical lattice. We use all-optical methods to efficiently cool and load the lattice, beginning with laser cooling on the 2S₁/₂ → 2P₃/₂ transition and then further cooling using the narrow 2S₁/₂ → 3P₃/₂ transition to T ~ 59 μK. The second stage of laser cooling greatly enhances loading into an optical dipole trap, where a two-spin-state mixture of atoms is evaporatively cooled to degeneracy. We then adiabatically load ~10⁶ degenerate fermions into a 3D optical lattice formed by three orthogonal standing waves of 1064 nm light. Overlapped with each of the three lattice beams is a non-retroreflected beam at 532 nm. This light cancels the harmonic trapping caused by the lattice beams, which extends the number of lattice sites over which a Néel phase can exist and may allow evaporative cooling in the lattice. Using Bragg scattering of light, we investigate the possibility of observing long-range antiferromagnetic ordering of spins in the lattice. Supported by NSF, ONR, DARPA, and the Welch Foundation.

  18. Electron capture and transport mediated by lattice solitons

    NASA Astrophysics Data System (ADS)

    Hennig, D.; Chetverikov, A.; Velarde, M. G.; Ebeling, W.

    2007-10-01

    We study electron transport in a one-dimensional molecular lattice chain. The molecules are linked by Morse interaction potentials. The electronic degree of freedom, expressed in terms of a tight binding system, is coupled to the longitudinal displacements of the molecules from their equilibrium positions along the axis of the lattice. More specifically, the distance between two sites influences in an exponential fashion the corresponding electronic transfer matrix element. We demonstrate that when an electron is injected in the undistorted lattice it causes a local deformation such that a compression results leading to a lowering of the electron’s energy below the lower edge of the band of linear states. This corresponds to self-localization of the electron due to a polaronlike effect. Then, if a traveling soliton lattice deformation is launched a distance apart from the electron’s position, upon encountering the polaronlike state it captures the latter dragging it afterwards along its path. Strikingly, even when the electron is initially uniformly distributed over the lattice sites a traveling soliton lattice deformation gathers the electronic amplitudes during its traversing of the lattice. Eventually, the electron state is strongly localized and moves coherently in unison with the soliton lattice deformation. This shows that for the achievement of coherent electron transport we need not start with the polaronic effect.

  19. LATTICE … a beam transport program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staples, J.

    1987-06-01

    LATTICE is a computer program that calculates the first order characteristics of synchrotrons and beam transport systems. The program uses matrix algebra to calculate the propagation of the betatron (Twiss) parameters along a beam line. The program draws on ideas from several older programs, notably Transport and Synch, adds many new ones and incorporates them into an interactive, user-friendly program. LATTICE will calculate the matched functions of a synchrotron lattice and display them in a number of ways, including a high resolution Tektronix graphics display. An optimizer is included to adjust selected element parameters so the beam meets a set of constraints. LATTICE is a first order program, but the effect of sextupoles on the chromaticity of a synchrotron lattice is included, and the optimizer will set the sextupole strengths for zero chromaticity. The program will also calculate the characteristics of beam transport systems. In this mode, the beam parameters, defined at the start of the transport line, are propagated through to the end. LATTICE has two distinct modes: the lattice mode which finds the matched functions of a synchrotron, and the transport mode which propagates a predefined beam through a beam line. However, each mode can be used for either type of problem: the transport mode may be used to calculate an insertion for a synchrotron lattice, and the lattice mode may be used to calculate the characteristics of a long periodic beam transport system.
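    The matrix algebra such first-order programs use for Twiss propagation can be sketched in a few lines. Writing the Twiss matrix Σ = [[β, -α], [-α, γ]], a first-order element with 2x2 transfer matrix M maps Σ → M Σ Mᵀ. The snippet below is a minimal illustration of this standard formalism (not LATTICE's actual code), propagating through a drift of length L:

```python
# Minimal sketch of first-order Twiss (betatron) parameter propagation,
# in the spirit of programs like LATTICE and Transport (not their code).

def propagate_twiss(beta, alpha, gamma, M):
    """Map Sigma = [[beta, -alpha], [-alpha, gamma]] through a 2x2
    transfer matrix M via Sigma' = M Sigma M^T; return (beta', alpha', gamma')."""
    (m11, m12), (m21, m22) = M
    beta2 = m11**2 * beta - 2 * m11 * m12 * alpha + m12**2 * gamma
    alpha2 = -m11 * m21 * beta + (m11 * m22 + m12 * m21) * alpha - m12 * m22 * gamma
    gamma2 = m21**2 * beta - 2 * m21 * m22 * alpha + m22**2 * gamma
    return beta2, alpha2, gamma2

# Drift of length L has transfer matrix M = [[1, L], [0, 1]].
L = 2.0
beta, alpha = 10.0, 0.0
gamma = (1 + alpha**2) / beta  # Twiss constraint: beta*gamma - alpha^2 = 1
b2, a2, g2 = propagate_twiss(beta, alpha, gamma, ((1.0, L), (0.0, 1.0)))
print(b2, a2, g2)  # beta grows quadratically with drift length; invariant preserved
```

    Through a drift the formulas reduce to β' = β - 2Lα + L²γ, α' = α - Lγ, γ' = γ, and the invariant βγ - α² = 1 is preserved.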

  20. Density functional calculation of activation energies for lattice and grain boundary diffusion in alumina

    NASA Astrophysics Data System (ADS)

    Lei, Yinkai; Gong, Yu; Duan, Zhiyao; Wang, Guofeng

    2013-06-01

    To acquire knowledge on the lattice and grain boundary diffusion processes in alumina, we have determined the activation energies of elementary O and Al diffusive jumps in the bulk crystal, Σ3(0001) grain boundaries, and Σ3(101̄0) grain boundaries of α-Al2O3 using the first-principles density functional theory method. Specifically, we calculated the activation energies for four elementary jumps of both O and Al lattice diffusion in alumina. It was predicted that the activation energy of O lattice diffusion varied from 3.58 to 5.03 eV, while the activation energy of Al lattice diffusion ranged from 1.80 to 3.17 eV. As compared with experimental measurements, the theoretical predictions of the activation energy for lattice diffusion were lower and thus implied that there might be other high-energy diffusive jumps in the experimental alumina samples. Moreover, our results suggested that the Al lattice diffusion was faster than the O lattice diffusion in alumina, in agreement with experimental observations. Furthermore, it was found from our calculations for α-Al2O3 that the activation energies of O and Al grain boundary diffusion in the high-energy Σ3(0001) grain boundaries were significantly lower than those of the lattice diffusion. In contrast, the activation energies of O and Al grain boundary diffusion in the low-energy Σ3(101̄0) grain boundaries could be even higher than those of the lattice diffusion.
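    The conclusion that Al lattice diffusion outpaces O lattice diffusion follows from the Arrhenius form D = D₀ exp(-Eₐ/k_BT) when the pre-exponential factors are comparable. A rough sketch using the midpoints of the activation-energy ranges quoted in this abstract (equal D₀ and the temperature are illustrative assumptions, not values from the paper):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def arrhenius_rate(E_a_eV, T_K, D0=1.0):
    """Arrhenius diffusivity D = D0 * exp(-Ea / (kB * T)).
    D0 is an assumed, purely illustrative pre-exponential factor."""
    return D0 * math.exp(-E_a_eV / (K_B * T_K))

# Midpoints of the activation-energy ranges quoted above (eV).
E_O = (3.58 + 5.03) / 2   # oxygen lattice diffusion
E_Al = (1.80 + 3.17) / 2  # aluminum lattice diffusion

T = 1800.0  # an illustrative high-temperature value (K)
ratio = arrhenius_rate(E_Al, T) / arrhenius_rate(E_O, T)
print(f"D_Al / D_O ~ {ratio:.2e}")  # Al diffuses orders of magnitude faster
```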

Top