Sample records for density functional code

  1. Study regarding the density evolution of messages and the characteristic functions associated with an LDPC code

    NASA Astrophysics Data System (ADS)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; Barboni Haţiegan, L.

    2017-01-01

    This paper presents a method that associates a set of characteristic functions with an LDPC code, together with functions that represent the density evolution of messages passed along the edges of a Tanner graph. Graphic representations of the density evolution are shown, and the likelihood threshold that marks the asymptotic boundary between decodable and undecodable codes was studied and simulated using MathCad V14 software.
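    The paper's simulations use MathCad; as a rough illustration of what density evolution computes, the sketch below implements the textbook recursion for a (3,6)-regular LDPC ensemble over the binary erasure channel (not the paper's code) and bisects for the decoding threshold separating decodable from undecodable channel parameters.

```python
# Density evolution for a (dv, dc)-regular LDPC ensemble over the
# binary erasure channel (BEC).  x is the probability that a
# variable-to-check message is an erasure; one decoding iteration maps
#     x  ->  eps * (1 - (1 - x)**(dc - 1))**(dv - 1)

def density_evolution(eps, dv=3, dc=6, iters=20000):
    """Return the erasure-probability fixed point for channel parameter eps."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def threshold(dv=3, dc=6, tol=1e-5):
    """Bisect for the largest eps whose fixed point vanishes (the threshold)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if density_evolution(mid, dv, dc) < 1e-7:
            lo = mid          # still decodable: erasures die out
        else:
            hi = mid          # stuck at a nonzero fixed point
    return 0.5 * (lo + hi)
```

    For the (3,6)-regular ensemble this recursion reproduces the well-known BEC threshold near 0.4294: below it the erasure probability iterates to zero, above it the iteration stalls at a nonzero fixed point.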

  2. A comparison of different methods to implement higher order derivatives of density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, Hubertus J.J.

    Density functional theory is the dominant approach in electronic structure methods today. To calculate properties, higher order derivatives of the density functionals are required. These derivatives might be implemented manually, by automatic differentiation, or by symbolic algebra programs. Different authors have cited different reasons for using the particular method of their choice. This paper presents work where all three approaches were used, and the strengths and weaknesses of each approach are considered. It is found that all three methods produce code that is sufficiently performant for practical applications, despite the fact that our symbolic-algebra-generated code and our automatic-differentiation code still have scope for significant optimization. The automatic differentiation approach is the best option for producing readable and maintainable code.
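    As a hedged illustration of the automatic-differentiation option (not the paper's implementation), the sketch below differentiates the Dirac/Slater LDA exchange energy density with forward-mode dual numbers; all names are mine.

```python
import math

# Forward-mode automatic differentiation with dual numbers, applied to
# the LDA (Dirac/Slater) exchange energy density
#     e_x(rho) = -Cx * rho**(4/3),  Cx = (3/4) * (3/pi)**(1/3)
# The first derivative de_x/drho is the exchange potential v_x.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot        # value and derivative

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

    def __pow__(self, p):                    # d(x^p) = p x^(p-1) dx
        return Dual(self.val ** p, p * self.val ** (p - 1) * self.dot)

    def __neg__(self):
        return Dual(-self.val, -self.dot)

CX = 0.75 * (3.0 / math.pi) ** (1.0 / 3.0)

def e_x(rho):
    """LDA exchange energy density; works on floats and Duals alike."""
    return -(CX * rho ** (4.0 / 3.0))

rho = Dual(0.5, 1.0)                         # seed d(rho)/d(rho) = 1
result = e_x(rho)
v_x_ad = result.dot                          # derivative by autodiff
v_x_exact = -(4.0 / 3.0) * CX * 0.5 ** (1.0 / 3.0)
```

    The same functional code evaluates both the energy density and, via the seeded dual number, its derivative, which is the maintainability advantage the abstract attributes to automatic differentiation.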

  3. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  4. Axial deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, R. Navarro; Schunck, N.; Lasseri, R.

    2017-03-09

    HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of the nuclear energy Density Functional Theory (DFT), where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the 3rd release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under GPL 3 Open Source License and was given release code LLNL-CODE-573953.

  5. LSMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisenbach, Markus; Li, Ying Wai; Liu, Xianglin

    2017-12-01

    LSMS is a first-principles electronic structure code based on density functional theory, targeted mainly at materials applications. LSMS calculates the local spin density approximation to the diagonal part of the electron Green's function. The electron/spin density and energy are easily determined once the Green's function is known. Linear scaling with system size is achieved in LSMS by using several unique properties of the real-space multiple scattering approach to the Green's function.

  6. JDFTx: Software for joint density-functional theory

    DOE PAGES

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...

    2017-11-14

    Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  7. JDFTx: Software for joint density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.

    Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  8. Recent developments in LIBXC - A comprehensive library of functionals for density functional theory

    NASA Astrophysics Data System (ADS)

    Lehtola, Susi; Steigemann, Conrad; Oliveira, Micael J. T.; Marques, Miguel A. L.

    2018-01-01

    LIBXC is a library of exchange-correlation functionals for density-functional theory. We are concerned with semi-local functionals (or the semi-local part of hybrid functionals), namely local-density approximations, generalized-gradient approximations, and meta-generalized-gradient approximations. Currently we include around 400 functionals for the exchange, correlation, and kinetic energy, spanning more than 50 years of research. Moreover, LIBXC is by now used by more than 20 codes, not only from the atomic, molecular, and solid-state physics communities, but also from quantum chemistry.

  9. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
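    A minimal sketch of the weight-window game the abstract refers to, combining splitting and Russian roulette so that expected total particle weight is conserved; the window bounds and survival weight below are illustrative, not McENL's.

```python
import random

# Weight-window variance reduction: particles below the window play
# Russian roulette (killed with probability 1 - w/w_survive, survivors
# promoted to weight w_survive); particles above the window are split
# into copies.  Both games conserve expected total weight.

def weight_window(w, w_low, w_high, w_survive, rng=random.random):
    """Return the list of post-game particle weights (possibly empty)."""
    if w < w_low:                                # Russian roulette
        if rng() < w / w_survive:
            return [w_survive]                   # survivor, boosted weight
        return []                                # killed
    if w > w_high:                               # splitting
        n = int(w / w_high) + 1
        return [w / n] * n                       # n copies, same total weight
    return [w]                                   # inside the window: unchanged
```

    In the roulette branch the expected weight is (w / w_survive) * w_survive = w, and in the splitting branch the n copies sum to w, which is what makes the game statistically unbiased.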

  10. Equilibrium Structures and Absorption Spectra for SixOy-nH2O Molecular Clusters using Density Functional Theory

    DTIC Science & Technology

    2017-05-04

    Naval Research Laboratory, Washington, DC 20375-5320, report NRL/MR/6390--17-9723: Equilibrium Structures and Absorption Spectra for SixOy-nH2O Molecular Clusters using Density Functional Theory, by L. Huang, S.G. Lambrakos, and L. Massa. The clusters are studied using density functional theory (DFT) and time-dependent density functional theory (TD-DFT). The size of the clusters considered is relatively large compared to those considered in ...

  11. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE PAGES

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain; ...

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  12. The AGORA High-resolution Galaxy Simulations Comparison Project II: Isolated disk test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from 9 state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt-Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly-formed stellar clump mass functions show more significant variation (difference by up to a factor of ~3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low density region, and between more diffusive and less diffusive schemes in the high density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Lastly, our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  13. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji-hoon; Agertz, Oscar; Teyssier, Romain

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  14. PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules

    NASA Astrophysics Data System (ADS)

    Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.

    2018-03-01

    The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
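    The analytic position-to-momentum transform of Gaussian basis functions mentioned above can be illustrated in one dimension; the sketch below (my own, not PAREMD code) checks the transformed Gaussian against Parseval's theorem.

```python
import math

# Momentum-space representation of a 1D Gaussian basis function.
# With the symmetric convention  ghat(p) = (2*pi)**-0.5 * Int g(x) e^{-ipx} dx,
# the Fourier transform of g(x) = exp(-a x^2) is again a Gaussian:
#     ghat(p) = (2a)**-0.5 * exp(-p^2 / (4a))

def g(x, a):
    return math.exp(-a * x * x)

def ghat(p, a):
    return math.exp(-p * p / (4.0 * a)) / math.sqrt(2.0 * a)

def norm2(f, a, lim=20.0, n=20001):
    """Riemann sum of |f|^2 over [-lim, lim] (numerical check only)."""
    h = 2.0 * lim / (n - 1)
    return sum(f(-lim + i * h, a) ** 2 for i in range(n)) * h

a = 0.7
pos_norm = norm2(g, a)       # integral of |g(x)|^2 dx
mom_norm = norm2(ghat, a)    # integral of |ghat(p)|^2 dp
# Parseval: both integrals equal sqrt(pi / (2a))
```

    The normalization check mirrors the one the abstract describes: if the momentum-space basis is transformed correctly, position-space and momentum-space norms must agree.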

  15. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    PubMed

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-08

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  16. Direct G-code manipulation for 3D material weaving

    NASA Astrophysics Data System (ADS)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by building a 3D model, converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The simplest and most popular 3D printing technique is Fused Deposition Modeling. In this method, however, the printing path that the printer head can take is restricted by the G-code, so printed models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position in the printed model are difficult to control. We realized a G-code editor, Fabrix, for making more precise and functional printed models with both single and multiple materials. Models with different stiffness are fabricated by controlling the printing density of the filament materials with our method. In addition, multi-material 3D printing has the potential to expand the attainable physical properties through material combination and G-code editing. These results demonstrate a new printing method that enables more creative and functional 3D printing techniques.
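    A minimal sketch of direct G-code manipulation in the spirit described above (not the Fabrix code itself): scale the extrusion parameter E of each G1 move to change the deposited material density without re-slicing.

```python
import re

# Direct G-code manipulation: scale the extrusion parameter E of every
# G1 move by `factor`, changing how densely material is deposited
# without rebuilding the model or re-running the slicer.

E_PATTERN = re.compile(r"\bE(-?\d+(?:\.\d+)?)")

def scale_extrusion(gcode_lines, factor):
    out = []
    for line in gcode_lines:
        if line.startswith("G1"):
            line = E_PATTERN.sub(
                lambda m: "E{:.5f}".format(float(m.group(1)) * factor), line)
        out.append(line)
    return out

sample = [
    "G28 ; home all axes",
    "G1 X10.0 Y10.0 E0.50000 F1800",
    "G1 X20.0 Y10.0 E1.00000",
]
denser = scale_extrusion(sample, 1.2)   # 20% more material per move
```

    Fabrix's actual edits are richer (per-region density, multi-material paths), but the principle is the same: the printed structure is controlled by rewriting move commands directly rather than through the slicer.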

  17. Plato: A localised orbital based density functional theory code

    NASA Astrophysics Data System (ADS)

    Kenny, S. D.; Horsfield, A. P.

    2009-12-01

    The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1] with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momenta and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation.

    Program summary
    Program title: Plato
    Catalogue identifier: AEFC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 219 974
    No. of bytes in distributed program, including test data, etc.: 1 821 493
    Distribution format: tar.gz
    Programming language: C/MPI and PERL
    Computer: Apple Macintosh, PC, Unix machines
    Operating system: Unix, Linux and Mac OS X
    Has the code been vectorised or parallelised?: Yes, up to 256 processors tested
    RAM: Up to 2 Gbytes per processor
    Classification: 7.3
    External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW
    Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces.
    Solution method: Localised orbital based density functional theory.
    Restrictions: Tight-binding and density functional theory only, no exact exchange.
    Unusual features: Both atom centred and uniform meshes available. Can deal with arbitrary angular momenta for orbitals, whilst still retaining Slater-Koster tables for accuracy.
    Running time: Test cases will run in a few minutes, large calculations may run for several days.

  18. Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.

    PubMed

    Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K

    2018-03-13

    Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
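    The projection that removes the source term can be illustrated on a single Fourier mode: divergence-freedom means k · B(k) = 0, so the longitudinal component is subtracted mode by mode. The sketch below is a generic illustration of that projection, not the authors' implementation.

```python
# Making a vector field source-free: in Fourier space, div B = 0 means
# k . B(k) = 0 for every mode, so we subtract the longitudinal part
#     B_T(k) = B(k) - k (k . B(k)) / |k|^2
# Applied mode by mode, this removes the source term from B_xc.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def transverse_part(k, B):
    """Project the Fourier mode B(k) onto the divergence-free subspace."""
    k2 = dot(k, k)
    c = dot(k, B) / k2
    return [b - c * ki for b, ki in zip(B, k)]

k = [1.0, 2.0, 2.0]          # example wavevector
B = [3.0, -1.0, 0.5]         # example Fourier amplitude of B_xc
B_T = transverse_part(k, B)  # k . B_T vanishes: mode carries no sources
```

    In a real code the same projection runs over every grid point of the FFT of B_xc; the single-mode version above is just the smallest self-contained demonstration.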

  19. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  20. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
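    A hedged sketch of the quasi-cyclic construction: the parity-check matrix is assembled from cyclically shifted identity blocks, and deleting block-rows alters the code rate as the abstract describes. The shift values below are arbitrary examples, not Hagiwara et al.'s matrices.

```python
# Quasi-cyclic LDPC parity-check matrices are built from circulant
# permutation blocks: block (i, j) is the p x p identity cyclically
# shifted by shifts[i][j].  Deleting block-rows raises the code rate.

def circulant(p, shift):
    """p x p identity matrix with each row's 1 cyclically shifted."""
    return [[1 if (r + shift) % p == c else 0 for c in range(p)]
            for r in range(p)]

def qc_parity_check(shifts, p):
    """Assemble H from a grid of circulant shift values."""
    H = []
    for row_of_shifts in shifts:
        blocks = [circulant(p, s) for s in row_of_shifts]
        for r in range(p):
            H.append([b[r][c] for b in blocks for c in range(p)])
    return H

p = 5
H = qc_parity_check([[0, 1, 2], [3, 4, 0]], p)   # 10 x 15 binary matrix
H_sub = H[:p]                                     # drop a block-row: fewer checks, higher rate
```

    Every row of H has weight equal to the number of block-columns and every column weight equal to the number of block-rows, which is the regular-LDPC structure that makes belief-propagation decoding efficient.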

  1. DFT study of CdS-PVA film

    NASA Astrophysics Data System (ADS)

    Bala, Vaneeta; Tripathi, S. K.; Kumar, Ranjan

    2015-02-01

    Density functional theory has been applied to study cadmium sulphide-polyvinyl alcohol nanocomposite film. A structural model of two isotactic-polyvinyl alcohol (I-PVA) chains around one cadmium sulphide nanoparticle is considered, in which each chain consists of three monomer units of [-(CH2CH(OH))-]. All of the hydroxyl groups in the I-PVA chains are directed toward the cadmium sulphide nanoparticle. Electronic and structural properties are investigated using the ab-initio density functional code SIESTA. Structural optimizations are done using the local density approximation (LDA). The exchange-correlation functional of LDA is parameterized by the Ceperley-Alder (CA) approach. The core electrons are represented by improved Troullier-Martins pseudopotentials. Densities of states clearly show the semiconducting nature of the cadmium sulphide-polyvinyl alcohol nanocomposite.

  2. Information preserving coding for multispectral data

    NASA Technical Reports Server (NTRS)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
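    A minimal sketch of the fold-over idea (illustrative, not the paper's coder): two reserved codewords extend the quantizer range on demand, so integer residuals are coded error-free even when the residual probability density is unknown.

```python
# DPCM with "fold-over": the codebook codes residuals in [-L, L]
# directly and reserves two extra codewords, FOLD_UP and FOLD_DOWN,
# each of which shifts the usable range by 2L+1.  Out-of-range
# residuals are therefore coded error-free without prior knowledge
# of the residual probability density function.

L = 3                      # codebook covers residuals -3..3 directly
FOLD = 2 * L + 1

def encode(samples):
    codes, pred = [], 0
    for s in samples:
        r = s - pred
        while r > L:                   # fold down into range
            codes.append("FOLD_UP")
            r -= FOLD
        while r < -L:                  # fold up into range
            codes.append("FOLD_DOWN")
            r += FOLD
        codes.append(r)
        pred = s                       # previous sample as predictor
    return codes

def decode(codes):
    samples, pred, offset = [], 0, 0
    for c in codes:
        if c == "FOLD_UP":
            offset += FOLD
        elif c == "FOLD_DOWN":
            offset -= FOLD
        else:
            s = pred + offset + c
            samples.append(s)
            pred, offset = s, 0
    return samples

data = [0, 2, 9, 8, -5, -4]
roundtrip = decode(encode(data))       # lossless reconstruction
```

    Large residuals simply cost extra fold codewords, so the coder trades rate for exactness instead of clipping, which is what makes it error-free.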

  3. Psi4 1.1: An Open-Source Electronic Structure Program Emphasizing Automation, Advanced Libraries, and Interoperability.

    PubMed

    Parrish, Robert M; Burns, Lori A; Smith, Daniel G A; Simmonett, Andrew C; DePrince, A Eugene; Hohenstein, Edward G; Bozkaya, Uğur; Sokolov, Alexander Yu; Di Remigio, Roberto; Richard, Ryan M; Gonthier, Jérôme F; James, Andrew M; McAlexander, Harley R; Kumar, Ashutosh; Saitow, Masaaki; Wang, Xiao; Pritchard, Benjamin P; Verma, Prakash; Schaefer, Henry F; Patkowski, Konrad; King, Rollin A; Valeev, Edward F; Evangelista, Francesco A; Turney, Justin M; Crawford, T Daniel; Sherrill, C David

    2017-07-11

    Psi4 is an ab initio electronic structure program providing methods such as Hartree-Fock, density functional theory, configuration interaction, and coupled-cluster theory. The 1.1 release represents a major update meant to automate complex tasks, such as geometry optimization using complete-basis-set extrapolation or focal-point methods. Conversion of the top-level code to a Python module means that Psi4 can now be used in complex workflows alongside other Python tools. Several new features have been added with the aid of libraries providing easy access to techniques such as density fitting, Cholesky decomposition, and Laplace denominators. The build system has been completely rewritten to simplify interoperability with independent, reusable software components for quantum chemistry. Finally, a wide range of new theoretical methods and analyses have been added to the code base, including functional-group and open-shell symmetry adapted perturbation theory, density-fitted coupled cluster with frozen natural orbitals, orbital-optimized perturbation and coupled-cluster methods (e.g., OO-MP2 and OO-LCCD), density-fitted multiconfigurational self-consistent field, density cumulant functional theory, algebraic-diagrammatic construction excited states, improvements to the geometry optimizer, and the "X2C" approach to relativistic corrections, among many other improvements.

  4. A Scalable Implementation of Van der Waals Density Functionals

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, Francois

    2010-03-01

    Recently developed Van der Waals density functionals [1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al. [1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [1] M. Dion et al., Phys. Rev. Lett. 92, 246401 (2004). [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).

  5. Prediction of three sigma maximum dispersed density for aerospace applications

    NASA Technical Reports Server (NTRS)

    Charles, Terri L.; Nitschke, Michael D.

    1993-01-01

    Free molecular heating (FMH) is caused by the transfer of energy during collisions between the upper atmosphere molecules and a space vehicle. The dispersed free molecular heating on a surface is an important constraint for space vehicle thermal analyses since it can be a significant source of heating. To reduce FMH to a spacecraft, the parking orbit is often designed to a higher altitude at the expense of payload capability. Dispersed FMH is a function of both space vehicle velocity and atmospheric density, however, the space vehicle velocity variations are insignificant when compared to the atmospheric density variations. The density of the upper atmosphere molecules is a function of altitude, but also varies with other environmental factors, such as solar activity, geomagnetic activity, location, and time. A method has been developed to predict three sigma maximum dispersed density for up to 15 years into the future. This method uses a state-of-the-art atmospheric density code, MSIS 86, along with 50 years of solar data, NASA and NOAA solar activity predictions for the next 15 years, and an Aerospace Corporation correlation to account for density code inaccuracies to generate dispersed maximum density ratios denoted as 'K-factors'. The calculated K-factors can be used on a mission unique basis to calculate dispersed density, and hence dispersed free molecular heating rates. These more accurate K-factors can allow lower parking orbit altitudes, resulting in increased payload capability.

  6. Information theoretical assessment of image gathering and coding for digital restoration

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; John, Sarah; Reichenbach, Stephen E.

    1990-01-01

    The process of image-gathering, coding, and restoration is presently treated in its entirety rather than as a catenation of isolated tasks, on the basis of the relationship between the spectral information density of a transmitted signal and the restorability of images from the signal. This 'information-theoretic' assessment accounts for the information density and efficiency of the acquired signal as a function of the image-gathering system's design and radiance-field statistics, as well as for the information efficiency and data compression that are obtainable through the combination of image gathering with coding to reduce signal redundancy. It is found that high information efficiency is achievable only through minimization of image-gathering degradation as well as signal redundancy.

  7. A density distribution algorithm for bone incorporating local orthotropy, modal analysis and theories of cellular solids.

    PubMed

    Impelluso, Thomas J

    2003-06-01

    An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.

  8. The journey from forensic to predictive materials science using density functional theory

    DOE PAGES

    Schultz, Peter A.

    2017-09-12

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.

  9. The journey from forensic to predictive materials science using density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter A.

    Approximate methods for electronic structure, implemented in sophisticated computer codes and married to ever-more powerful computing platforms, have become invaluable in chemistry and materials science. The maturing and consolidation of quantum chemistry codes since the 1980s, based upon explicitly correlated electronic wave functions, has made them a staple of modern molecular chemistry. Here, the impact of first principles electronic structure in physics and materials science had lagged owing to the extra formal and computational demands of bulk calculations.

  10. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  11. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  12. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  13. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  14. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    NASA Astrophysics Data System (ADS)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green's function (NEGF) code TRANSIESTA, based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10^6 atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.

  15. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread-spectrum system was considered that could be utilized in the DSN to combat radio frequency interference. In a sample case in which the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
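
    The mechanism behind the reduction can be sketched by discretizing the code-phase uncertainty into cells and sweeping them in order of decreasing prior probability. The cell count and Gaussian width below are arbitrary assumptions, so the exact percentage differs from the 41% of the sample case:

```python
import numpy as np

def expected_cells_searched(prior):
    """Expected number of cells examined when the cells are swept in
    order of decreasing a priori probability of containing the code phase."""
    p_sorted = np.sort(prior)[::-1]              # most likely cells first
    positions = np.arange(1, prior.size + 1)
    return float(np.sum(p_sorted * positions))

n = 1000                                         # discretized uncertainty cells
cells = np.arange(n)
center, sigma = n / 2.0, n / 10.0                # Gaussian a priori PDF
prior = np.exp(-0.5 * ((cells - center) / sigma) ** 2)
prior /= prior.sum()

t_prior = expected_cells_searched(prior)
t_uniform = expected_cells_searched(np.full(n, 1.0 / n))
print(f"expected search reduced by {100.0 * (1.0 - t_prior / t_uniform):.0f}%")
```

    For the uniform prior the expected search length is (n + 1)/2 cells; concentrating the prior lets the sweep spend most of its time where the code phase is actually likely to be.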

  16. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
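
    As a point of reference for the KDE step, a plain Gaussian kernel density estimator with a rule-of-thumb bandwidth (Silverman's rule here, standing in for the kernel-trick bandwidth estimation the authors propose) can be sketched as:

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth=None):
    """Estimate a 1-D PDF at points x from samples with a Gaussian kernel."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule-of-thumb bandwidth
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    u = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (n * bandwidth)

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=2000)
x = np.linspace(-4.0, 4.0, 81)
pdf = gaussian_kde(samples, x)

# Sanity check: the estimated PDF should integrate to ~1 over its support.
area = np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))
print(f"estimated PDF integrates to {area:.3f}")
```

    The bandwidth controls the bias/variance trade-off of the estimate, which is why the paper treats bandwidth estimation as a key issue.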

  17. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows systems of any kind to be treated. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed. A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributable processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Block Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased: the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. In parallel, a substantial effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability are described. They are based on the exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.

  18. Monte Carlo study of four dimensional binary hard hypersphere mixtures

    NASA Astrophysics Data System (ADS)

    Bishop, Marvin; Whitlock, Paula A.

    2012-01-01

    A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
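
    The elementary operation of such a hard-hypersphere Monte Carlo run is the overlap test for a trial move; a minimal 4-D sketch with minimum-image periodic boundaries (an illustrative assumption, not the authors' multithreaded code) is:

```python
import numpy as np

def overlaps(pos, diam, i, trial, box=1.0):
    """True if moving hypersphere i to `trial` would overlap any other
    particle, using the minimum-image convention in a periodic box."""
    delta = pos - trial
    delta -= box * np.rint(delta / box)           # minimum image per dimension
    dist2 = np.einsum('ij,ij->i', delta, delta)
    contact = 0.5 * (diam + diam[i])              # additive contact distances
    dist2[i] = np.inf                             # ignore self-interaction
    return bool(np.any(dist2 < contact ** 2))

rng = np.random.default_rng(2)
n, dim = 64, 4
pos = rng.random((n, dim))                        # random starting configuration
diam = np.where(np.arange(n) < n // 2, 0.10, 0.05)   # binary mixture, ratio 0.5

# A trial Metropolis move for particle 0 is accepted only if overlap-free.
trial = (pos[0] + rng.uniform(-0.02, 0.02, size=dim)) % 1.0
print("rejected" if overlaps(pos, diam, 0, trial) else "accepted")
```

    Repeating such moves and accumulating particle-separation histograms yields the pair correlation functions and, via the contact values, the equation of state.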

  19. Error-correcting codes on scale-free networks

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Hoon; Ko, Young-Jo

    2004-06-01

    We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p(k) ∼ k^(-γ) with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p(k) = C(k+α)^(-γ), with k ≥ 2 and suitable selection of the parameters α and γ, indeed have very good error-correction capabilities.
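
    Constructing a degree sequence from the quoted distribution is straightforward; this sketch normalizes p(k) = C(k+α)^(−γ) over a finite range k = 2…k_max (the cutoff is an assumption of the sketch) and samples node degrees from it:

```python
import numpy as np

def powerlaw_degrees(n_nodes, gamma=2.1, alpha=0.0, k_max=200, seed=0):
    """Sample node degrees from p(k) = C (k + alpha)^(-gamma) for k >= 2."""
    k = np.arange(2, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                                  # fixes the constant C
    rng = np.random.default_rng(seed)
    return rng.choice(k, size=n_nodes, p=p)

degrees = powerlaw_degrees(10_000, gamma=2.1)
print(f"min degree {degrees.min()}, mean degree {degrees.mean():.2f}")
```

    A degree sequence like this would then be wired into a bipartite Tanner graph to obtain an irregular LDPC code with a scale-free degree profile.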

  20. Model Description for the SOCRATES Contamination Code

    DTIC Science & Technology

    1988-10-21

    Only fragments of this report's front matter are recoverable. The illustrations include a schematic representation of the major elements of the Shuttle contamination problem, a diagram of the dependence of atmospherically scattered molecules on ambient number density for the 200, 250, and 300 km runs, and a plot of the chi-square probability density function. The scattered-molecule results are scaled with respect to the far-field ambient number density, nD, which leaves only the cross-section scaling factor to be determined.

  1. Weibull crack density coefficient for polydimensional stress states

    NASA Technical Reports Server (NTRS)

    Gross, Bernard; Gyekenyesi, John P.

    1989-01-01

    A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume- and surface-flaw-induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software computes the Weibull polydimensional stress state crack density coefficient from uniaxial-stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.
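
    Estimating two-parameter Weibull constants from uniaxial fracture data is commonly done by linearizing the failure probability; a hedged numpy sketch using median-rank estimates (not necessarily the procedure in the code described above):

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate Weibull modulus m and scale sigma0 from uniaxial fracture
    strengths by linear regression on the ln-ln failure-probability plot."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    pf = (np.arange(1, n + 1) - 0.5) / n          # median-rank estimate of Pf
    y = np.log(-np.log(1.0 - pf))                 # ln ln[1/(1 - Pf)]
    x = np.log(s)
    m, intercept = np.polyfit(x, y, 1)            # slope = Weibull modulus m
    sigma0 = np.exp(-intercept / m)               # since intercept = -m ln sigma0
    return m, sigma0

rng = np.random.default_rng(3)
strengths = 300.0 * rng.weibull(10.0, size=500)   # synthetic data: m=10, sigma0=300
m_hat, s0_hat = weibull_fit(strengths)
print(f"m ~ {m_hat:.1f}, sigma0 ~ {s0_hat:.0f}")
```

    The linearization follows from Pf = 1 − exp[−(σ/σ0)^m], so ln ln[1/(1 − Pf)] = m ln σ − m ln σ0 is a straight line in ln σ.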

  2. Predicting materials for sustainable energy sources: The key role of density functional theory

    NASA Astrophysics Data System (ADS)

    Galli, Giulia

    Climate change and the related need for sustainable energy sources replacing fossil fuels are pressing societal problems. The development of advanced materials is widely recognized as one of the key elements for new technologies that are required to achieve a sustainable environment and provide clean and adequate energy for our planet. We discuss the key role played by Density Functional Theory, and its implementations in high performance computer codes, in understanding, predicting and designing materials for energy applications.

  3. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research concerns the development of a code for uncertainty analysis based on a statistical approach to assessing uncertain input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The developed code is a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining ACE-format nuclear data from ENDF through dedicated NJOY calculations for temperature changes over a certain range.
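
    The statistical core of such a workflow, perturbing each input parameter according to an assumed probability density function before every MCNPX run, can be sketched as follows; the nominal values and standard deviations are placeholders, not the authors' data:

```python
import numpy as np

# Nominal input values and assumed relative 1-sigma uncertainties.
# These numbers are illustrative placeholders, not the authors' data.
nominal = {"fuel_density": 10.4,        # g/cm^3 (UO2)
           "coolant_density": 0.71,     # g/cm^3
           "fuel_temperature": 900.0}   # K
rel_sigma = {"fuel_density": 0.01,
             "coolant_density": 0.02,
             "fuel_temperature": 0.03}

rng = np.random.default_rng(4)
n_runs = 100

# Each sampled value would be substituted into an MCNPX input deck for an
# independent criticality/burn-up run; the spread of the resulting k-eff
# values then quantifies the output uncertainty.
samples = {name: rng.normal(mu, rel_sigma[name] * mu, size=n_runs)
           for name, mu in nominal.items()}

for name, vals in samples.items():
    print(f"{name}: mean={vals.mean():.3f} sd={vals.std():.3f}")
```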

  4. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research concerns the development of a code for uncertainty analysis based on a statistical approach to assessing uncertain input parameters. In the burn-up calculation of the fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The developed code is a Python script coupled with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining ACE-format nuclear data from ENDF through dedicated NJOY calculations for temperature changes over a certain range.

  5. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
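
    The fit-and-select workflow can be illustrated with numpy alone: closed-form maximum likelihood fits for two candidate probability density functions, compared by AIC. The authors' R implementation considers a broader set of candidates; the depth data below are synthetic:

```python
import numpy as np

def aic_normal(x):
    """AIC for a normal PDF with closed-form MLE parameters."""
    mu, sd = x.mean(), x.std()
    ll = np.sum(-0.5 * np.log(2.0 * np.pi * sd**2)
                - (x - mu) ** 2 / (2.0 * sd**2))
    return 2 * 2 - 2.0 * ll                      # k = 2 parameters

def aic_lognormal(x):
    """AIC for a lognormal PDF (normal MLE on the log scale)."""
    lx = np.log(x)
    mu, sd = lx.mean(), lx.std()
    ll = np.sum(-lx - 0.5 * np.log(2.0 * np.pi * sd**2)
                - (lx - mu) ** 2 / (2.0 * sd**2))
    return 2 * 2 - 2.0 * ll

rng = np.random.default_rng(5)
depths = rng.lognormal(mean=0.0, sigma=0.5, size=400)  # synthetic depth data

scores = {"normal": aic_normal(depths), "lognormal": aic_lognormal(depths)}
best = min(scores, key=scores.get)
print(f"selected HSC distribution: {best}")
```

    The fitted parameters of the winning family then define the HSC curve directly, and their sampling variability gives the estimation uncertainty the authors emphasize.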

  6. Addition of simultaneous heat and solute transport and variable fluid viscosity to SEAWAT

    USGS Publications Warehouse

    Thorne, D.; Langevin, C.D.; Sukop, M.C.

    2006-01-01

    SEAWAT is a finite-difference computer code designed to simulate coupled variable-density ground water flow and solute transport. This paper describes a new version of SEAWAT that adds the ability to simultaneously model energy and solute transport, which is necessary, for example, for simulating the transport of heat and salinity in coastal aquifers. This work extends the equation of state so that fluid density varies as a function of temperature and/or solute concentration. The program has also been modified to represent the effects of variable fluid viscosity as a function of temperature and/or concentration. The viscosity mechanism is verified against an analytical solution, and a test of temperature-dependent viscosity is provided. Finally, the classic Henry-Hilleke problem is solved with the new code. © 2006 Elsevier Ltd. All rights reserved.
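
    The extended equation of state has the flavor of a linear expansion of density about reference conditions. A minimal sketch, with slope coefficients that are typical seawater-like values assumed for illustration rather than taken from SEAWAT:

```python
def fluid_density(conc, temp, rho0=1000.0, c0=0.0, t0=25.0,
                  drho_dc=0.7, drho_dt=-0.375):
    """Linear equation of state: density (kg/m^3) as a function of solute
    concentration (kg/m^3) and temperature (deg C). The slopes are assumed,
    seawater-like values, not SEAWAT's calibrated coefficients."""
    return rho0 + drho_dc * (conc - c0) + drho_dt * (temp - t0)

print(f"{fluid_density(0.0, 25.0):.1f}")    # freshwater at reference T -> 1000.0
print(f"{fluid_density(35.0, 25.0):.1f}")   # seawater-like salinity    -> 1024.5
```

    Salinity increases density while warming decreases it, which is exactly the coupling that drives variable-density flow in coastal aquifers.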

  7. Temperature dependence of the symmetry energy and neutron skins in Ni, Sn, and Pb isotopic chains

    NASA Astrophysics Data System (ADS)

    Antonov, A. N.; Kadrev, D. N.; Gaidarov, M. K.; Sarriguren, P.; de Guerra, E. Moya

    2017-02-01

    The temperature dependence of the symmetry energy for isotopic chains of even-even Ni, Sn, and Pb nuclei is investigated in the framework of the local density approximation (LDA). The Skyrme energy density functional with two Skyrme-class effective interactions, SkM* and SLy4, is used in the calculations. The temperature-dependent proton and neutron densities are calculated with the HFBTHO code, which solves the nuclear Skyrme-Hartree-Fock-Bogoliubov problem using the cylindrical transformed deformed harmonic-oscillator basis. In addition, two other density distributions of 208Pb, namely the Fermi-type density determined within the extended Thomas-Fermi (TF) method and the symmetrized-Fermi local density obtained within the rigorous density functional approach, are used. The kinetic energy densities are calculated either by the HFBTHO code or, for comparison, by the extended TF method up to second order in temperature (with the T^2 term). Alternative ways to calculate the symmetry energy coefficient within the LDA are proposed. The results for the thermal evolution of the symmetry energy coefficient in the interval T = 0-4 MeV show that its values decrease with temperature. The temperature dependence of the neutron and proton root-mean-square radii and the corresponding neutron skin thickness is also investigated, showing that the effect of temperature leads mainly to a substantial increase of the neutron radii and skins, especially in the more neutron-rich nuclei, a feature that may have consequences for astrophysical processes and neutron stars.

  8. Tartarus: A relativistic Green's function quantum average atom code

    DOE PAGES

    Gill, Nathanael Matthew; Starrett, Charles Edward

    2017-06-28

    A relativistic Green's function quantum average atom model is implemented in the Tartarus code for the calculation of equation of state data in dense plasmas. We first present the relativistic extension of the quantum Green's function average atom model described by Starrett [1]. The Green's function approach addresses the numerical challenges arising from resonances in the continuum density of states without the need for resonance tracking algorithms or adaptive meshes, though there are still numerical challenges inherent to this algorithm. We discuss how these challenges are addressed in the Tartarus algorithm. The outputs of the calculation are shown in comparison to PIMC/DFT-MD simulations of the principal shock Hugoniot of silicon. Finally, we present the calculation of the Hugoniot for silver coming from both the relativistic and nonrelativistic modes of the Tartarus code.

  9. Tartarus: A relativistic Green's function quantum average atom code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, Nathanael Matthew; Starrett, Charles Edward

    A relativistic Green's function quantum average atom model is implemented in the Tartarus code for the calculation of equation of state data in dense plasmas. We first present the relativistic extension of the quantum Green's function average atom model described by Starrett [1]. The Green's function approach addresses the numerical challenges arising from resonances in the continuum density of states without the need for resonance tracking algorithms or adaptive meshes, though there are still numerical challenges inherent to this algorithm. We discuss how these challenges are addressed in the Tartarus algorithm. The outputs of the calculation are shown in comparison to PIMC/DFT-MD simulations of the principal shock Hugoniot of silicon. Finally, we present the calculation of the Hugoniot for silver coming from both the relativistic and nonrelativistic modes of the Tartarus code.

  10. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc.

  11. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.

  12. Multigrid based First-Principles Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fattebert, Jean-Luc; Osei-Kuffuor, Daniel; Dunn, Ian

    2017-06-01

    MGmol is a First-Principles Molecular Dynamics code. It relies on the Born-Oppenheimer approximation and models the electronic structure using Density Functional Theory, either LDA or PBE. Norm-conserving pseudopotentials are used to model atomic cores.

  13. EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code and a basic understanding of the EUPDF code structure as well as the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.

  14. Information theoretical assessment of digital imaging systems

    NASA Technical Reports Server (NTRS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-01-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  15. Information theoretical assessment of digital imaging systems

    NASA Astrophysics Data System (ADS)

    John, Sarah; Rahman, Zia-Ur; Huck, Friedrich O.; Reichenbach, Stephen E.

    1990-10-01

    The end-to-end performance of image gathering, coding, and restoration as a whole is considered. This approach is based on the pivotal relationship that exists between the spectral information density of the transmitted signal and the restorability of images from this signal. The information-theoretical assessment accounts for (1) the information density and efficiency of the acquired signal as a function of the image-gathering system design and the radiance-field statistics, and (2) the improvement in information efficiency and data compression that can be gained by combining image gathering with coding to reduce the signal redundancy and irrelevancy. It is concluded that images can be restored with better quality and from fewer data as the information efficiency of the data is increased. The restoration correctly explains the image gathering and coding processes and effectively suppresses the image-display degradations.

  16. First Principles Study of Chemically Functionalized Graphene

    NASA Astrophysics Data System (ADS)

    Jha, Sanjiv; Vasiliev, Igor

    2015-03-01

    The electronic, structural and vibrational properties of carbon nanomaterials can be affected by chemical functionalization. We applied ab initio computational methods based on density functional theory to study the covalent functionalization of graphene with benzyne, carboxyl groups and tetracyanoethylene oxide (TCNEO). Our calculations were carried out using the SIESTA and Quantum-ESPRESSO electronic structure codes combined with the local density and generalized gradient approximations for the exchange correlation functional and norm-conserving Troullier-Martins pseudopotentials. The simulated Raman and infrared spectra of graphene functionalized with carboxyl groups and TCNEO were consistent with the available experimental results. The computed vibrational spectra of graphene functionalized with carboxyl groups showed that the presence of point defects near the functionalization site affects the Raman and infrared spectroscopic signatures of functionalized graphene. Supported by NSF CHE-1112388.

  17. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes. (IPNPR Volume 42-185)
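
The min* operation referred to above arises in the check-node update of the decoder. A hedged sketch of the exact pairwise update on log-likelihood ratios, often written as a "box-plus": min-sum keeps only the sign-and-minimum term, while min*-style decoders retain the two logarithmic correction terms. This is a generic textbook form, not the article's implementation.

```python
import math

def box_plus(l1, l2):
    """Exact pairwise check-node (box-plus) update on LLRs.

    Equals 2*atanh(tanh(l1/2)*tanh(l2/2)); min-sum keeps only the
    first (sign * min-magnitude) term and drops the corrections.
    """
    s = math.copysign(1.0, l1) * math.copysign(1.0, l2)
    m = s * min(abs(l1), abs(l2))
    return (m + math.log1p(math.exp(-abs(l1 + l2)))
              - math.log1p(math.exp(-abs(l1 - l2))))
```

A check node of higher degree applies this operation pairwise across all incoming messages; the correction terms always shrink the output magnitude relative to plain min-sum, which is why min-sum needs scaling or offset fixes in practice.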

  18. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  19. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  20. Microscopically based energy density functionals for nuclei using the density matrix expansion: Implementation and pre-optimization

    NASA Astrophysics Data System (ADS)

    Stoitsov, M.; Kortelainen, M.; Bogner, S. K.; Duguet, T.; Furnstahl, R. J.; Gebremariam, B.; Schunck, N.

    2010-11-01

    In a recent series of articles, Gebremariam, Bogner, and Duguet derived a microscopically based nuclear energy density functional by applying the density matrix expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory two- and three-nucleon interactions. Owing to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Because the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present article is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction of our test χ2 function compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  1. A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding.

    PubMed

    Taillefumier, Thibaud; Magnasco, Marcelo O

    2013-04-16

    Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
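
A minimal Monte Carlo sketch of the first-passage problem for a discretized Brownian path and a given boundary function. All names and parameters here are illustrative, not the paper's method; a smooth deterministic boundary like the constant one below corresponds to the H > 1/2 regime, where the first-passage density is continuous.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(n_paths, n_steps, dt, boundary):
    """First time each discretized Brownian path exceeds boundary(t)."""
    t = np.arange(1, n_steps + 1) * dt
    # Brownian increments ~ N(0, dt); cumulative sum gives the paths
    paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    crossed = paths >= boundary(t)[None, :]
    first = np.argmax(crossed, axis=1).astype(float) * dt + dt
    first[~crossed.any(axis=1)] = np.inf   # never crossed in the window
    return first

# Example: constant boundary at 1.0, paths observed over t in (0, 1]
fpt = first_passage_times(2000, 1000, 1e-3, lambda t: np.ones_like(t))
```

Note that time discretization systematically misses crossings between grid points, so such estimates slightly undercount early passages; this bias is separate from the roughness effects the paper analyzes.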

  2. Dusty Plasmas in Planetary Magnetospheres Award

    NASA Technical Reports Server (NTRS)

    Horanyi, Mihaly

    2005-01-01

    This is my final report for the grant Dusty Plasmas in Planetary Magnetospheres. The funding from this grant supported our research on dusty plasmas to study: a) dust plasma interactions in general plasma environments, and b) dusty plasma processes in planetary magnetospheres (Earth, Jupiter and Saturn). We have developed a general purpose transport code in order to follow the spatial and temporal evolution of dust density distributions in magnetized plasma environments. The code allows the central body to be represented by a multipole expansion of its gravitational and magnetic fields. The density and the temperature of the possibly many-component plasma environment can be pre-defined as a function of coordinates and, if necessary, the time as well. The code simultaneously integrates the equations of motion with the equations describing the charging processes. The charging currents are dependent not only on the instantaneous plasma parameters but on the velocity, as well as on the previous charging history of the dust grains.

  3. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^-44. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
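
The contrast between the Poisson and geometric densities at the same mean can be made concrete with a short sketch (illustrative values, not the paper's data). The geometric, a limiting form of the negative binomial, puts far more probability on sites carrying many substitutions than a Poisson of equal mean does.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def geometric_pmf(k, lam):
    """Geometric pmf on k = 0, 1, 2, ... parameterized to have mean lam."""
    p = 1.0 / (1.0 + lam)
    return p * (1.0 - p)**k

# Same mean, very different tails: probability of 8 or more substitutions
lam = 2.0
tail_poisson = 1.0 - sum(poisson_pmf(k, lam) for k in range(8))
tail_geom = 1.0 - sum(geometric_pmf(k, lam) for k in range(8))
```

At mean 2, the geometric tail beyond 7 substitutions is more than an order of magnitude heavier than the Poisson tail, which is the qualitative feature the observed substitution data share.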

  4. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilizing the K20X Tesla GPUs on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  5. Identification of significantly mutated regions across cancer types highlights a rich landscape of functional molecular alterations

    PubMed Central

    Araya, Carlos L.; Cenik, Can; Reuter, Jason A.; Kiss, Gert; Pande, Vijay S.; Snyder, Michael P.; Greenleaf, William J.

    2015-01-01

    Cancer sequencing studies have primarily identified cancer-driver genes by the accumulation of protein-altering mutations. An improved method would be annotation-independent, sensitive to unknown distributions of functions within proteins, and inclusive of non-coding drivers. We employed density-based clustering methods in 21 tumor types to detect variably-sized significantly mutated regions (SMRs). SMRs reveal recurrent alterations across a spectrum of coding and non-coding elements, including transcription factor binding sites and untranslated regions mutated in up to ∼15% of specific tumor types. SMRs reveal spatial clustering of mutations at molecular domains and interfaces, often with associated changes in signaling. Mutation frequencies in SMRs demonstrate that distinct protein regions are differentially mutated among tumor types, as exemplified by a linker region of PIK3CA in which biophysical simulations suggest mutations affect regulatory interactions. The functional diversity of SMRs underscores both the varied mechanisms of oncogenic misregulation and the advantage of functionally-agnostic driver identification. PMID:26691984

  6. TRIQS: A toolbox for research on interacting quantum systems

    NASA Astrophysics Data System (ADS)

    Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka

    2015-11-01

    We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.

  7. Applications of the microdosimetric function implemented in the macroscopic particle transport simulation code PHITS.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji

    2012-01-01

    Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
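
As an illustration of the microdosimetric quantities involved, the frequency-mean and dose-mean lineal energies can be computed from a sampled probability density of lineal energy. This is a generic sketch under our own naming, not PHITS code; the test density is a toy exponential, not a physical spectrum.

```python
import numpy as np

def _trapz(g, y):
    """Trapezoidal rule: sum of (g[i] + g[i+1])/2 * (y[i+1] - y[i])."""
    return float(np.sum((g[1:] + g[:-1]) * np.diff(y)) / 2.0)

def lineal_energy_means(y, f):
    """Frequency-mean y_F and dose-mean y_D of the lineal energy.

    y  : grid of lineal energy values (e.g., keV/um)
    f  : sampled probability density f(y) on that grid
    """
    f = f / _trapz(f, y)                 # enforce normalization
    y_f = _trapz(y * f, y)               # y_F = int y f(y) dy
    y_d = _trapz(y**2 * f, y) / y_f      # y_D = int y^2 f(y) dy / y_F
    return y_f, y_d
```

The dose-mean y_D weights high-lineal-energy events more heavily than y_F, which is why it is the quantity typically fed into RBE models such as the microdosimetric kinetic model.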

  8. Polymerization of non-complementary RNA: systematic symmetric nucleotide exchanges mainly involving uracil produce mitochondrial RNA transcripts coding for cryptic overlapping genes.

    PubMed

    Seligmann, Hervé

    2013-03-01

    Usual DNA→RNA transcription exchanges T→U. Assuming different systematic symmetric nucleotide exchanges during translation, some GenBank RNAs match exactly human mitochondrial sequences (exchange rules listed in decreasing transcript frequencies): C↔U, A↔U, A↔U+C↔G (two nucleotide pairs exchanged), G↔U, A↔G, C↔G, none for A↔C, A↔G+C↔U, and A↔C+G↔U. Most unusual transcripts involve exchanging uracil. Independent measures of rates of rare replicational enzymatic DNA nucleotide misinsertions predict frequencies of RNA transcripts systematically exchanging the corresponding misinserted nucleotides. Exchange transcripts self-hybridize less than other gene regions, self-hybridization increases with length, suggesting endoribonuclease-limited elongation. Blast detects stop codon depleted putative protein coding overlapping genes within exchange-transcribed mitochondrial genes. These align with existing GenBank proteins (mainly metazoan origins, prokaryotic and viral origins underrepresented). These GenBank proteins frequently interact with RNA/DNA, are membrane transporters, or are typical of mitochondrial metabolism. Nucleotide exchange transcript frequencies increase with overlapping gene densities and stop densities, indicating finely tuned counterbalancing regulation of expression of systematic symmetric nucleotide exchange-encrypted proteins. Such expression necessitates combined activities of suppressor tRNAs matching stops, and nucleotide exchange transcription. Two independent properties confirm predicted exchanged overlap coding genes: discrepancy of third codon nucleotide contents from replicational deamination gradients, and codon usage according to circular code predictions. Predictions from both properties converge, especially for frequent nucleotide exchange types. Nucleotide exchanging transcription apparently increases coding densities of protein coding genes without lengthening genomes, revealing unsuspected functional DNA coding potential. 
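
A symmetric nucleotide exchange of the kind described (e.g., C↔U) is an involution on sequences: applying the same rule twice recovers the original transcript. A minimal sketch with illustrative names:

```python
def exchange_transcribe(seq, rule):
    """Apply a symmetric nucleotide exchange rule to an RNA sequence.

    Example rule for the C<->U exchange: {'C': 'U', 'U': 'C'};
    bases not named in the rule are left unchanged.
    """
    table = dict(rule)
    return ''.join(table.get(base, base) for base in seq)
```

Because each rule is its own inverse, a single exchange-transcription step suffices to encrypt or decrypt an overlapping reading frame of this type.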

  9. Objective Molecular Dynamics with Self-consistent Charge Density Functional Tight-Binding (SCC-DFTB) Method

    NASA Astrophysics Data System (ADS)

    Dumitrica, Traian; Hourahine, Ben; Aradi, Balint; Frauenheim, Thomas

    We discuss the coupling of the objective boundary conditions into the SCC density-functional-based tight-binding code DFTB+. The implementation is enabled by a generalization to the helical case of the classical Ewald method, specifically by Ewald-like formulas that do not rely on a unit cell with translational symmetry. The robustness of the method in addressing complex hetero-nuclear nano- and bio-fibrous systems is demonstrated with illustrative simulations on a helical boron nitride nanotube, a screw-dislocated zinc oxide nanowire, and an ideal double-strand DNA. Work supported by NSF CMMI 1332228.

  10. Study to investigate and evaluate means of optimizing the Ku-band communication function for the space shuttle

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Udalov, S.; Huth, G. K.

    1976-01-01

    The forward link of the overall Ku-band communication system consists of the ground-TDRS-orbiter communication path. Because the last segment of the link is directed towards a relatively low orbiting shuttle, a PN code is used to reduce the spectral density. A method is presented for incorporating code acquisition and tracking functions into the orbiter's Ku-band receiver. Optimization of a three-channel multiplexing technique is described. The importance of Costas loop parameters in providing false-lock immunity for the receiver, and the advantage of using a sinusoidal subcarrier waveform rather than a square wave, are discussed.
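
PN codes of the kind used here for spectrum spreading are commonly generated by maximal-length linear-feedback shift registers. A hedged, generic sketch follows; the 5-stage register and tap choice are illustrative only, not the shuttle system's actual code.

```python
def lfsr_pn(taps, nbits, length):
    """Fibonacci LFSR PN-sequence generator.

    Taps use the usual convention in which taps [5, 3] denote the
    feedback polynomial x^5 + x^3 + 1 (primitive, so the sequence is
    maximal-length with period 2^5 - 1 = 31).
    """
    state = 1                              # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state & 1)              # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# Two full periods of a length-31 m-sequence
seq = lfsr_pn([5, 3], 5, 62)
```

An m-sequence of period 2^n - 1 is balanced (2^(n-1) ones per period) and has a nearly flat spectrum, which is what spreads the transmitted power and lowers the spectral density.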

  11. A first principles study of the electronic structure, elastic and thermal properties of UB2

    NASA Astrophysics Data System (ADS)

    Jossou, Ericmoore; Malakkal, Linu; Szpunar, Barbara; Oladimeji, Dotun; Szpunar, Jerzy A.

    2017-07-01

    Uranium diboride (UB2) has been widely deployed for refractory use and is a proposed material for Accident Tolerant Fuel (ATF) due to its high thermal conductivity. However, the applicability of UB2 to high-temperature use in a nuclear reactor requires investigation of its thermomechanical properties, and recent studies have not addressed the relevant properties. In this work, we present an in-depth theoretical outlook on the structural and thermophysical properties of UB2, including but not limited to elastic, electronic, and thermal transport properties. These calculations were performed within the framework of the Density Functional Theory (DFT) + U approach, using the Quantum ESPRESSO (QE) code and accounting for Coulomb correlations on the uranium atom. The phonon spectra and elastic constant analysis show the dynamic and mechanical stability of the UB2 structure, respectively. The electronic structure of UB2 was investigated using the full-potential linear augmented plane waves plus local orbitals method (FP-LAPW+lo) as implemented in the WIEN2k code. The absence of a band gap in the total and partial density of states confirms the metallic nature, while the valence electron density plot reveals the presence of a covalent bond between adjacent B-B atoms. We predicted the lattice thermal conductivity (kL) by solving the Boltzmann Transport Equation (BTE) using ShengBTE. The second-order harmonic and third-order anharmonic interatomic force constants required as input to ShengBTE were calculated using density-functional perturbation theory (DFPT). We predicted the electronic thermal conductivity (kel) using the Wiedemann-Franz law as implemented in the Boltztrap code. We also show that the sound velocities along the 'a' and 'c' axes exhibit high anisotropy, which accounts for the anisotropic thermal conductivity of UB2.

  12. Self-generated zonal flows in the plasma turbulence driven by trapped-ion and trapped-electron instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouot, T.; Gravier, E.; Reveille, T.

    This paper presents a study of zonal flows generated by trapped-electron mode and trapped-ion mode microturbulence as a function of two plasma parameters: banana width and electron temperature. For this purpose, a gyrokinetic code considering only trapped particles is used. First, an analytical equation giving the predicted level of zonal flows is derived from the quasi-neutrality equation of our model, as a function of the density fluctuation levels and the banana widths. Then, the influence of the banana width on the number of zonal flows occurring in the system is studied using the gyrokinetic code. Finally, the impact of the temperature ratio Te/Ti on the reduction of zonal flows is shown, and a close link is highlighted between this reduction and the different gyro- and bounce-averaged ion and electron density fluctuation levels. This reduction is found to be due to the amplitudes of the gyro- and bounce-averaged density perturbations ne and ni gradually becoming closer, which is in agreement with the analytical results given by the quasi-neutrality equation.

  13. High-temperature high-pressure properties of silica from Quantum Monte Carlo and Density Functional Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Cohen, R. E.; Driver, K.; Wu, Z.; Militzer, B.; Rios, P. L.; Towler, M.; Needs, R.

    2009-03-01

    We have used diffusion quantum Monte Carlo (DMC) with the CASINO code, combined with thermal free energies from phonons computed using density functional perturbation theory (DFPT) with the ABINIT code, to obtain phase transition curves and thermal equations of state of silica phases under pressure. We obtain excellent agreement with experiments for the metastable phase transition from quartz to stishovite. The local density approximation (LDA) incorrectly gives stishovite as the ground state. The generalized gradient approximation (GGA) correctly gives quartz as the ground state, but does worse than LDA for the equations of state. DMC, variational quantum Monte Carlo (VMC), and DFT all give good results for the ferroelastic transition of stishovite to the CaCl2 structure, and LDA or the WC exchange-correlation potentials give good results within a given silica phase. The δV and δH from the CaCl2 structure to α-PbO2 are small, giving uncertainty in the theoretical transition pressure. It is interesting that DFT has trouble with silica transitions, although the electronic structures of silica are insulating, simple closed-shell with ionic/covalent bonding. The errors in DFT appear to arise from not precisely capturing the ion sizes.

  14. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing the memory density, but it introduces crosstalk that must be dealt with. We apply Low-Density Parity-Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A suitable method is applied to reduce the crosstalk, and simulation results show that this operation improves Bit Error Rate (BER) performance. We also conclude that LDPC codes outperform RS codes on the crosstalk channel.

  15. The detectability of brown dwarfs - Predictions and uncertainties

    NASA Technical Reports Server (NTRS)

    Nelson, L. A.; Rappaport, S.; Joss, P. C.

    1993-01-01

    In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
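
Under the simplest assumptions (a uniform space density with no extinction, which the paper's Galactic-disk models refine), the surface density per steradian down to a limiting apparent magnitude follows from the luminosity function as N = ∫ Φ(M) d_max(M)^3 / 3 dM, where d_max is the distance at which absolute magnitude M reaches the limit. A hedged sketch with toy numbers, not the paper's models:

```python
def d_max_pc(M, m_lim):
    """Distance (pc) at which absolute magnitude M appears at m_lim."""
    return 10.0 ** (0.2 * (m_lim - M + 5.0))

def surface_density(phi, M_lo, M_hi, m_lim, n=1000):
    """Objects per steradian brighter than m_lim, uniform space density.

    phi(M) is the luminosity function in pc^-3 mag^-1; midpoint rule.
    """
    dM = (M_hi - M_lo) / n
    total = 0.0
    for k in range(n):
        M = M_lo + (k + 0.5) * dM
        total += phi(M) * d_max_pc(M, m_lim) ** 3 / 3.0 * dM
    return total

# Toy luminosity function: constant 1e-3 pc^-3 mag^-1 for M in [10, 12]
n20 = surface_density(lambda M: 1e-3, 10.0, 12.0, 20.0)
n21 = surface_density(lambda M: 1e-3, 10.0, 12.0, 21.0)
```

In this uniform-density limit each extra magnitude of depth multiplies the counts by 10^0.6 ≈ 3.98, the classical Euclidean slope; a finite disk scale height flattens this, which is why the paper's Galactic-latitude dependence matters.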

  16. Dispersion interactions in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash

    2012-02-01

    Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement to quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al, JCP 135, 154105 (2011). [3] M. Dion et al, PRL 92, 246401 (2004). [4] G. Rom'an-P'erez, J.M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al, JCP 122, 084119 (2005).

  17. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
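
The copy-and-permute construction behind protograph lifting can be sketched as follows: each nonzero entry of the small base (proto)matrix is replaced by a z × z circulant permutation, and each zero by a z × z zero block. This is a generic illustration with an arbitrary base matrix and random shifts; it ignores parallel protograph edges and is not the authors' randomized search algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def lift_protograph(base, z):
    """Lift a 0/1 base matrix into a parity-check matrix of z-times the size.

    Each 1 becomes a random z x z circulant permutation block, each 0 a
    zero block, so row/column weights of the protograph are preserved.
    """
    m, n = base.shape
    H = np.zeros((m * z, n * z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                shift = rng.integers(z)
                block = np.roll(np.eye(z, dtype=np.uint8), shift, axis=1)
                H[i*z:(i+1)*z, j*z:(j+1)*z] = block
    return H

# Hypothetical 2 x 4 base matrix lifted by z = 8 -> 16 x 32 parity-check matrix
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]], dtype=np.uint8)
H = lift_protograph(base, 8)
```

Because lifting preserves the protograph's degree structure, density evolution run on the small base graph predicts the threshold of every code in the lifted family.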

  18. Performance of the density matrix functional theory in the quantum theory of atoms in molecules.

    PubMed

    García-Revilla, Marco; Francisco, E; Costales, A; Martín Pendás, A

    2012-02-02

    The generalization to arbitrary molecular geometries of the energetic partitioning provided by the atomic virial theorem of the quantum theory of atoms in molecules (QTAIM) leads to an exact and chemically intuitive energy partitioning scheme, the interacting quantum atoms (IQA) approach, that depends on the availability of second-order reduced density matrices (2-RDMs). This work explores the performance of this approach in particular and of the QTAIM in general with approximate 2-RDMs obtained from the density matrix functional theory (DMFT), which rests on the natural expansion (natural orbitals and their corresponding occupation numbers) of the first-order reduced density matrix (1-RDM). A number of these functionals have been implemented in the promolden code and used to perform QTAIM and IQA analyses on several representative molecules and model chemical reactions. Total energies, covalent intra- and interbasin exchange-correlation interactions, as well as localization and delocalization indices have been determined with these functionals from 1-RDMs obtained at different levels of theory. Results are compared to the values computed from the exact 2-RDMs, whenever possible.

  19. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. In contrast to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.

  20. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernels described here outperform tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  1. Chemical reactivity and spectroscopy explored from QM/MM molecular dynamics simulations using the LIO code

    NASA Astrophysics Data System (ADS)

    Marcolongo, Juan P.; Zeida, Ari; Semelak, Jonathan A.; Foglia, Nicolás O.; Morzan, Uriel N.; Estrin, Dario A.; González Lebrero, Mariano C.; Scherlis, Damián A.

    2018-03-01

    In this work we present the current advances in the development and the applications of LIO, a lab-made code designed for density functional theory calculations in graphical processing units (GPU), that can be coupled with different classical molecular dynamics engines. This code has been thoroughly optimized to perform efficient molecular dynamics simulations at the QM/MM DFT level, allowing for an exhaustive sampling of the configurational space. Selected examples are presented for the description of chemical reactivity in terms of free energy profiles, and also for the computation of optical properties, such as vibrational and electronic spectra in solvent and protein environments.

  2. The optimized effective potential and the self-interaction correction in density functional theory: Application to molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.

    2000-05-08

The Krieger, Li, and Iafrate approximation to the optimized effective potential including the self-interaction correction for density functional theory has been implemented in a molecular code, NWChem, that uses Gaussian functions to represent the Kohn and Sham spin-orbitals. The differences between the implementation of the self-interaction correction in codes where planewaves are used with an optimized effective potential are discussed. The importance of the localization of the spin-orbitals to maximize the exchange-correlation of the self-interaction correction is discussed. We carried out exchange-only calculations to compare the results obtained with these approximations, and those obtained with the local spin density approximation, the generalized gradient approximation and Hartree-Fock theory. Interesting results for the energy difference (GAP) between the highest occupied molecular orbital, HOMO, and the lowest unoccupied molecular orbital, LUMO, (spin-orbital energies of closed shell atoms and molecules) using the optimized effective potential and the self-interaction correction have been obtained. The effect of the diffuse character of the basis set on the HOMO and LUMO eigenvalues at the various levels is discussed. Total energies obtained with the optimized effective potential and the self-interaction correction show that the exchange energy with these approximations is overestimated and this will be an important topic for future work. (c) 2000 American Institute of Physics.

  3. DensToolKit: A comprehensive open-source package for analyzing the electron density and its derivative scalar and vector fields

    NASA Astrophysics Data System (ADS)

    Solano-Altamirano, J. M.; Hernández-Pérez, Julio M.

    2015-11-01

DensToolKit is a suite of cross-platform, optionally parallelized, programs for analyzing the molecular electron density (ρ) and several fields derived from it. Scalar and vector fields, such as the gradient of the electron density (∇ρ), electron localization function (ELF) and its gradient, localized orbital locator (LOL), region of slow electrons (RoSE), reduced density gradient, localized electrons detector (LED), information entropy, molecular electrostatic potential, kinetic energy densities K and G, among others, can be evaluated on zero-, one-, two-, and three-dimensional grids. The suite includes a program for searching critical points and bond paths of the electron density, under the framework of the Quantum Theory of Atoms in Molecules. DensToolKit also evaluates the momentum-space electron density on spatial grids, and the reduced density matrix of order one along lines joining two arbitrary atoms of a molecule. The source code is distributed under the GNU-GPLv3 license, and we release the code with the intent of establishing an open-source collaborative project. The style of DensToolKit's code follows some of the guidelines of object-oriented programming. This allows us to provide the user with a simple way to implement new scalar or vector fields, provided they are derived from any of the fields already implemented in the code. In this paper, we present some of the most salient features of the programs contained in the suite, some examples of how to run them, and the mathematical definitions of the implemented fields, along with hints of how we optimized their evaluation. We benchmarked our suite against both a freely available program and a commercial package. Speed-ups of ∼2×, and up to 12×, were obtained using a non-parallel compilation of DensToolKit for the evaluation of fields. DensToolKit takes similar times for finding critical points, compared to a commercial package. Finally, we present some perspectives for the future development and growth of the suite.

  4. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.

    PubMed

    Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  5. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, Daniel, E-mail: daniel.berger@ch.tum.de; Oberhofer, Harald; Reuter, Karsten

    2014-07-14

We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  6. Implicit solvation model for density-functional study of nanocrystal surfaces and reaction pathways

    NASA Astrophysics Data System (ADS)

    Mathew, Kiran; Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2014-02-01

Solid-liquid interfaces are at the heart of many modern-day technologies and provide a challenge to many materials simulation methods. A realistic first-principles computational study of such systems entails the inclusion of solvent effects. In this work, we implement an implicit solvation model that has a firm theoretical foundation into the widely used density-functional code VASP (Vienna Ab initio Simulation Package). The implicit solvation model follows the framework of joint density functional theory. We describe the framework, our algorithm and implementation, and benchmarks for small molecular systems. We apply the solvation model to study the surface energies of different facets of semiconducting and metallic nanocrystals and the SN2 reaction pathway. We find that solvation reduces the surface energies of the nanocrystals, especially the semiconducting ones, and increases the energy barrier of the SN2 reaction.

  7. Improvements and new features in the PDF module

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.

    1995-01-01

    This viewgraph presentation discusses what models are used in this package and what their advantages and disadvantages are, how the probability density function (PDF) model is implemented and the features of the program, and what can be expected in the future from the NASA Lewis PDF code.

  8. Recent advances in PDF modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Leonard, Andrew D.; Dai, F.

    1995-01-01

    This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite volume code; PDF solution method applied to turbulent reacting flows shows good agreement with data; and PDF methods must be run on parallel machines for practical use.

  9. PARAVT: Parallel Voronoi tessellation code

    NASA Astrophysics Data System (ADS)

    González, R. E.

    2016-10-01

In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open-source, parallel implementation is available to handle the large number of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented with MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes the neighbor list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. The code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
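The per-particle quantity at the heart of such codes, the Voronoi density as the inverse of the cell volume, can be sketched serially with SciPy. This is an illustration of the concept only, not the PARAVT implementation (which is parallel and calls Qhull directly):

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_densities(points):
    """Return 1/volume of each point's Voronoi cell (area in 2D);
    NaN for unbounded boundary cells, which PARAVT instead closes via
    consistent boundary/periodic handling across MPI tasks."""
    vor = Voronoi(points)
    dens = np.full(len(points), np.nan)
    for i, reg_idx in enumerate(vor.point_region):
        region = vor.regions[reg_idx]
        if -1 in region or len(region) == 0:
            continue  # unbounded cell: no finite volume
        vol = ConvexHull(vor.vertices[region]).volume  # .volume is area in 2D
        dens[i] = 1.0 / vol
    return dens

# 2D integer grid: every interior cell is a unit square, so density = 1.
pts = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
d = voronoi_densities(pts)
```

Interior grid points recover a density of exactly one per unit area, while corner and edge points have unbounded cells and return NaN.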

  10. Lattice dynamics of Ru2FeX (X = Si, Ge) Full Heusler alloys

    NASA Astrophysics Data System (ADS)

    Rizwan, M.; Afaq, A.; Aneeza, A.

    2018-05-01

In the present work, the lattice dynamics of Ru2FeX (X = Si, Ge) full Heusler alloys are investigated using density functional theory (DFT) within the generalized gradient approximation (GGA) in a plane wave basis, with norm-conserving pseudopotentials. Phonon dispersion curves and phonon densities of states are obtained using the first-principles linear response approach of density functional perturbation theory (DFPT) as implemented in the Quantum ESPRESSO code. The phonon dispersion curves indicate that, for both Heusler alloys, there are no imaginary phonon modes in the whole Brillouin zone, confirming the dynamical stability of these alloys in the L21-type structure. There is considerable overlap between the acoustic and optical phonon modes, indicating that no phonon band gap exists in the dispersion curves of these alloys. The same result is shown by the phonon density of states curves for both Heusler alloys. The reststrahlen band for Ru2FeSi is found to be smaller than that for Ru2FeGe.

  11. Tobacco outlet density and converted versus native non-daily cigarette use in a national US sample

    PubMed Central

    Kirchner, Thomas R; Anesetti-Rothermel, Andrew; Bennett, Morgane; Gao, Hong; Carlos, Heather; Scheuermann, Taneisha S; Reitzel, Lorraine R; Ahluwalia, Jasjit S

    2017-01-01

Objective Investigate whether non-daily smokers’ (NDS) cigarette price and purchase preferences, recent cessation attempts, and current intentions to quit are associated with the density of the retail cigarette product landscape surrounding their residential address. Participants Cross-sectional assessment of N=904 converted NDS (CNDS), who previously smoked every day, and N=297 native NDS (NNDS), who only smoked non-daily, drawn from a national panel. Outcome measures Kernel density estimation was used to generate a nationwide probability surface of tobacco outlets linked to participants’ residential ZIP code. Hierarchically nested log-linear models were compared to evaluate associations between outlet density, non-daily use patterns, price sensitivity and quit intentions. Results Overall, NDS in ZIP codes with greater outlet density were less likely than NDS in ZIP codes with lower outlet density to hold 6-month quit intentions when they also reported that price affected use patterns (G2=66.1, p<0.001) and purchase locations (G2=85.2, p<0.001). CNDS were more likely than NNDS to reside in ZIP codes with higher outlet density (G2=322.0, p<0.001). Compared with CNDS in ZIP codes with lower outlet density, CNDS in high-density ZIP codes were more likely to report that price influenced the amount they smoke (G2=43.9, p<0.001), and were more likely to look for better prices (G2=59.3, p<0.001). NDS residing in high-density ZIP codes were not more likely to report that price affected their cigarette brand choice compared with those in ZIP codes with lower density. Conclusions This paper provides initial evidence that the point-of-sale cigarette environment may be differentially associated with the maintenance of CNDS versus NNDS patterns. Future research should investigate how tobacco control efforts can be optimised to both promote cessation and curb the rising tide of non-daily smoking in the USA. PMID:26969172
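The kernel-density outlet surface used as the outcome measure above can be sketched with synthetic coordinates. The data, bandwidth rule, and Gaussian kernel below are illustrative assumptions, not the study's GIS pipeline:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical outlet coordinates: a dense urban cluster near the origin
# plus a few scattered outlets (synthetic data, not the national sample).
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.5, size=(200, 2))
sparse = rng.uniform(5.0, 15.0, size=(20, 2))
outlets = np.vstack([cluster, sparse]).T  # gaussian_kde expects shape (d, n)

# Smooth density surface over the outlet locations (Scott's rule bandwidth).
kde = gaussian_kde(outlets)
high = kde([[0.0], [0.0]])[0]    # estimated density at the cluster centre
low = kde([[10.0], [10.0]])[0]   # estimated density in the sparse region
```

Evaluating the fitted surface at each residential location (here, at two test points) yields the outlet-density exposure value that is then linked to ZIP codes.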

  12. Chemical insight from density functional modeling of molecular adsorption: Tracking the bonding and diffusion of anthracene derivatives on Cu(111) with molecular orbitals

    NASA Astrophysics Data System (ADS)

    Wyrick, Jonathan; Einstein, T. L.; Bartels, Ludwig

    2015-03-01

We present a method of analyzing the results of density functional modeling of molecular adsorption in terms of an analogue of molecular orbitals. This approach permits intuitive chemical insight into the adsorption process. Applied to a set of anthracene derivatives (anthracene, 9,10-anthraquinone, 9,10-dithioanthracene, and 9,10-diselenoanthracene), we follow the electronic states of the molecules that are involved in the bonding process and correlate them to both the molecular adsorption geometry and the species' diffusive behavior. We additionally provide computational code to easily repeat this analysis on any system.

  13. A fast algorithm for identifying friends-of-friends halos

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Modi, C.

    2017-07-01

We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from O[δ²] to O[δ]. We show that for cosmological data sets the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
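The core merge logic can be sketched with a union-find structure standing in for the paper's splay-based root finding, and a KD-tree for the pair enumeration. This is a simplified illustration under those substitutions, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(points, b):
    """Friends-of-friends: link every pair closer than the linking length b
    and return a group label per point. Union-find with path compression
    plays the role of the splay operation: it keeps root lookups near O[1]
    amortized without extra memory."""
    parent = list(range(len(points)))

    def find(i):
        root = i
        while parent[root] != root:       # walk up to the feature's root
            root = parent[root]
        while parent[i] != root:          # compress the visited path
            parent[i], i = root, parent[i]
        return root

    tree = cKDTree(points)
    for i, j in tree.query_pairs(b):      # enumerate all pairs within b
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj               # merge the two features
    return [find(i) for i in range(len(points))]

# Two tight pairs separated by far more than the linking length.
groups = fof_groups(np.array([[0.0, 0.0], [0.1, 0.0],
                              [5.0, 5.0], [5.1, 5.0]]), b=0.2)
```

The pruning step described in the abstract would additionally skip `query_pairs` work between KD-tree nodes that are already fully connected internally; that optimization is omitted here for brevity.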

  14. Community Alcohol Outlet Density and Underage Drinking

    PubMed Central

    Chen, Meng-Jinn; Grube, Joel W.; Gruenewald, Paul J.

    2009-01-01

    Aim This study examined how community alcohol outlet density may be associated with drinking among youths. Methods Longitudinal data were collected from 1091 adolescents (aged 14–16 at baseline) recruited from 50 zip codes in California with varying levels of alcohol outlet density and median household income. Hierarchical linear models were used to examine the associations between zip code alcohol outlet density and frequency rates of general alcohol use and excessive drinking, taking into account zip code median household income and individual-level variables (age, gender, race/ethnicity, personal income, mobility, and perceived drinking by parents and peers). Findings When all other factors were controlled, higher initial levels of drinking and excessive drinking were observed among youths residing in zip codes with higher alcohol outlet densities. Growth in drinking and excessive drinking was on average more rapid in zip codes with lower alcohol outlet densities. The relation of zip code alcohol outlet density with drinking appeared to be mitigated by having friends with access to a car. Conclusion Alcohol outlet density may play a significant role in initiation of underage drinking during early teen ages, especially when youths have limited mobility. Youth who reside in areas with low alcohol outlet density may overcome geographic constraints through social networks that increase their mobility and the ability to seek alcohol and drinking opportunities beyond the local community. PMID:20078485

  15. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
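A minimal binary switching loop over a small constellation illustrates the search strategy. The distance-weighted Hamming cost below is a generic stand-in, not the paper's exact Euclidean-distance/mutual-information combination, and 8-PSK is used instead of 32-APSK to keep the sketch short:

```python
import numpy as np
from itertools import combinations

def labeling_cost(const, labels, snr=10.0):
    """Generic labeling cost: bit labels on nearby constellation points
    should differ in few bits, so weight Hamming distance by a decaying
    function of squared Euclidean distance (an assumed proxy cost)."""
    cost = 0.0
    for i, j in combinations(range(len(const)), 2):
        d2 = abs(const[i] - const[j]) ** 2
        hamming = bin(labels[i] ^ labels[j]).count("1")
        cost += hamming * np.exp(-snr * d2 / 4)
    return cost

def binary_switching(const, labels, sweeps=20):
    """Greedily swap pairs of labels while any swap lowers the cost."""
    labels = list(labels)
    best = labeling_cost(const, labels)
    for _ in range(sweeps):
        improved = False
        for i, j in combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]   # try the swap
            c = labeling_cost(const, labels)
            if c < best:
                best, improved = c, True                  # keep improvement
            else:
                labels[i], labels[j] = labels[j], labels[i]  # undo
        if not improved:
            break
    return labels, best

# 8-PSK with a deliberately poor initial labeling.
const = np.exp(2j * np.pi * np.arange(8) / 8)
labels0 = [3, 0, 6, 1, 5, 2, 7, 4]
labels, cost = binary_switching(const, labels0)
```

The loop only ever accepts cost-reducing swaps, so the returned labeling is never worse than the initial one; the paper's modified version changes how candidate swaps are ordered and accepted.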

  16. In-orbit verification of small optical transponder (SOTA): evaluation of satellite-to-ground laser communication links

    NASA Astrophysics Data System (ADS)

    Takenaka, Hideki; Koyama, Yoshisada; Akioka, Maki; Kolev, Dimitar; Iwakiri, Naohiko; Kunimori, Hiroo; Carrasco-Casado, Alberto; Munemasa, Yasushi; Okamoto, Eiji; Toyoshima, Morio

    2016-03-01

Research and development of space optical communications is conducted at the National Institute of Information and Communications Technology (NICT). The NICT developed the Small Optical TrAnsponder (SOTA), which was embarked on a 50 kg-class satellite and launched into low earth orbit (LEO). Space-to-ground laser communication experiments have been conducted with the SOTA. Atmospheric turbulence causes signal fading and is an issue to be solved in satellite-to-ground laser communication links. Therefore, as error-correcting functions, a Reed-Solomon (RS) code and a Low-Density Generator Matrix (LDGM) code are implemented in the communication system onboard the SOTA. In this paper, we present the in-orbit verification results of the SOTA, including the characteristics of these functions, the communication performance with the LDGM code over satellite-to-ground atmospheric paths, the link budget analysis, and a comparison between theoretical and experimental results.

  17. Watching excitons move: the time-dependent transition density matrix

    NASA Astrophysics Data System (ADS)

    Ullrich, Carsten

    2012-02-01

    Time-dependent density-functional theory allows one to calculate excitation energies and the associated transition densities in principle exactly. The transition density matrix (TDM) provides additional information on electron-hole localization and coherence of specific excitations of the many-body system. We have extended the TDM concept into the real-time domain in order to visualize the excited-state dynamics in conjugated molecules. The time-dependent TDM is defined as an implicit density functional, and can be approximately obtained from the time-dependent Kohn-Sham orbitals. The quality of this approximation is assessed in simple model systems. A computational scheme for real molecular systems is presented: the time-dependent Kohn-Sham equations are solved with the OCTOPUS code and the time-dependent Kohn-Sham TDM is calculated using a spatial partitioning scheme. The method is applied to show in real time how locally created electron-hole pairs spread out over neighboring conjugated molecular chains. The coupling mechanism, electron-hole coherence, and the possibility of charge separation are discussed.

  18. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. Notably, the developed turbo code employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves error-correction capability beyond that of a conventional LDPC code, even when interpixel interference is present. Together, these two methods increase the data density 1.78-fold. Moreover, a data density of 2.4 Tbit/in.2 is confirmed by simulation and experiment.
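The d=1, k=∞ run-length constraint behind the modulation can be checked in a few lines: at least one 0 must separate any two 1s (for a pixel page, no two adjacent ON pixels along the constrained direction). Counting the admissible patterns, which grow as Fibonacci numbers, makes the rate cost of the constraint concrete; the brute-force count here is purely illustrative:

```python
from itertools import product

def is_rll_1_inf(bits):
    """RLL(1,∞): at least one 0 between any two 1s,
    i.e. no two adjacent 1s anywhere in the sequence."""
    return all(not (a and b) for a, b in zip(bits, bits[1:]))

def count_valid(n):
    """Brute-force count of admissible length-n binary sequences;
    equals the Fibonacci number F(n+2)."""
    return sum(is_rll_1_inf(bits) for bits in product([0, 1], repeat=n))
```

The count F(n+2) implies a capacity of log2 of the golden ratio, about 0.694 bits per pixel, which is the coding overhead that the optical density gain of the enlarged effective pixel pitch must outweigh.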

  19. Density Functional O(N) Calculations

    NASA Astrophysics Data System (ADS)

    Ordejón, Pablo

    1998-03-01

We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling [P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)]. The method uses arbitrarily flexible and complete Atomic Orbital (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code [D. Sánchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)], in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant-temperature and constant-pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.

  20. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. The simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose-mapping exercises for gamma irradiators.

  1. Ab initio density-functional calculations in materials science: from quasicrystals over microporous catalysts to spintronics.

    PubMed

    Hafner, Jürgen

    2010-09-29

During the last 20 years, computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have had an increasingly important impact on materials science, not only in promoting a deeper understanding of the fundamental physical phenomena, but also in enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT), which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.

  2. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    DOE PAGES

    Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...

    2016-04-20

Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.

  3. The queueing perspective of asynchronous network coding in two-way relay network

    NASA Astrophysics Data System (ADS)

    Liang, Yaping; Chang, Qing; Li, Xianxu

    2018-04-01

    Asynchronous network coding (NC) has the potential to improve wireless network performance compared with routing or synchronous network coding. Recent research concentrates on the trade-off between throughput/energy consumption and delay for a pair of independent input flows. However, the implementation of NC requires a thorough investigation of its impact on the relevant queueing systems, on which little work has focused. Moreover, few works study the probability density function (pdf) of the output in the network coding scenario. In this paper, the scenario with two independent Poisson input flows and one output flow is considered. The asynchronous NC-based strategy is that a new arrival evicts a head packet holding in its queue while waiting for another packet from the other flow to encode. The pdf for the output flow, which contains both coded and uncoded packets, is derived. In addition, the statistical characteristics of this strategy are analyzed. These results are verified by numerical simulations.
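The eviction strategy described above can be illustrated with a toy Monte-Carlo model. This is a sketch, not the authors' analytical derivation: the arrival rates, the assumption that an evicted head packet leaves the relay uncoded, and the function name are all choices made here for illustration.

```python
import random

def simulate_nc_relay(rate_a=1.0, rate_b=0.7, n_events=100_000, seed=1):
    """Toy simulation of the eviction strategy: each flow holds at most one
    head packet; an arrival pairs with a waiting packet from the OTHER flow
    (coded output) or evicts its own waiting head (assumed sent uncoded)."""
    rng = random.Random(seed)
    waiting = {"A": 0, "B": 0}          # packets currently held per flow (0 or 1)
    coded = uncoded = 0
    p_a = rate_a / (rate_a + rate_b)    # next arrival is from flow A w.p. p_a
    for _ in range(n_events):
        flow = "A" if rng.random() < p_a else "B"
        other = "B" if flow == "A" else "A"
        if waiting[other]:              # partner available -> XOR, send coded
            waiting[other] = 0
            coded += 1
        elif waiting[flow]:             # same-flow head waiting -> evict it,
            uncoded += 1                # forward uncoded; new arrival waits
        else:
            waiting[flow] = 1           # nothing waiting -> hold this packet
    return coded, uncoded
```

The empirical coded/uncoded split from such runs could then be compared against a derived output pdf.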

  4. On the Representation of Aquifer Compressibility in General Subsurface Flow Codes: How an Alternate Definition of Aquifer Compressibility Matches Results from the Groundwater Flow Equation

    NASA Astrophysics Data System (ADS)

    Birdsell, D.; Karra, S.; Rajaram, H.

    2016-12-01

    The governing equations for subsurface flow codes in deformable porous media are derived from the fluid mass balance equation. One class of these codes, which we call general subsurface flow (GSF) codes, does not explicitly track the motion of the solid porous media but does accept general constitutive relations for porosity, density, and fluid flux. Examples of GSF codes include PFLOTRAN, FEHM, STOMP, and TOUGH2. Meanwhile, analytical and numerical solutions based on the groundwater flow equation have assumed forms for porosity, density, and fluid flux. We review the derivation of the groundwater flow equation, which uses the form of Darcy's equation that accounts for the velocity of fluids with respect to solids and defines the soil matrix compressibility accordingly. We then show how GSF codes have a different governing equation if they use the form of Darcy's equation that is written only in terms of fluid velocity. The difference is seen in the porosity change, which is part of the specific storage term in the groundwater flow equation. We propose an alternative definition of soil matrix compressibility to correct for the untracked solid velocity. Simulation results show significantly less error for our new compressibility definition than the traditional compressibility when compared to analytical solutions from the groundwater literature. For example, the error in one calculation for a pumped sandstone aquifer goes from 940 to <70 Pa when the new compressibility is used. Code users and developers need to be aware of assumptions in the governing equations and constitutive relations in subsurface flow codes, and our newly-proposed compressibility function should be incorporated into GSF codes.
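The specific storage term discussed above combines matrix and fluid compressibility. A minimal numeric sketch of the textbook groundwater definition S_s = ρ_w g (α + n β) follows; it shows the quantity being discussed, not the paper's proposed alternative definition of α (parameter values are illustrative):

```python
def specific_storage(alpha, beta, porosity, rho=1000.0, g=9.81):
    """Specific storage S_s = rho*g*(alpha + n*beta), in 1/m.

    alpha    -- soil matrix compressibility [1/Pa]
    beta     -- fluid (water) compressibility [1/Pa]
    porosity -- n, dimensionless
    """
    return rho * g * (alpha + porosity * beta)

# Illustrative sandstone-like values
ss = specific_storage(alpha=1e-9, beta=4.4e-10, porosity=0.2)
```

Any redefinition of the matrix compressibility α shifts the α-term of S_s proportionally, which is why the choice matters for storage and drawdown predictions.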

  6. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with the Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel coding: binary convolutional code or LDPC; packet length: 0 to 2^16-1 bytes; coding rate: 1/2, 2/3, 3/4, 5/6; MIMO channel training length: 0 to 4 symbols.

  7. Hybrid density-functional calculations of phonons in LaCoO3

    NASA Astrophysics Data System (ADS)

    Gryaznov, Denis; Evarestov, Robert A.; Maier, Joachim

    2010-12-01

    Phonon frequencies at the Γ point in the nonmagnetic rhombohedral phase of LaCoO3 were calculated using density-functional theory with the hybrid exchange-correlation functional PBE0. The calculations involved a comparison of results for two types of basis functions commonly used in ab initio calculations, namely, the plane-wave approach and the linear combination of atomic orbitals, as implemented in the VASP and CRYSTAL computer codes, respectively. Good qualitative and, within an error margin of less than 30%, quantitative agreement was observed not only between the two formalisms but also between theoretical and experimental phonon frequencies. Moreover, the correlation between the phonon symmetries in the cubic and rhombohedral phases is discussed in detail on the basis of group-theoretical analysis. It is concluded that the hybrid PBE0 functional is able to predict correctly the phonon properties of LaCoO3.

  8. Periodic subsystem density-functional theory

    NASA Astrophysics Data System (ADS)

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2014-11-01

    By partitioning the electron density into subsystem contributions, the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) has recently emerged as a powerful tool for reducing the computational scaling of Kohn-Sham DFT. To date, however, FDE has been employed to molecular systems only. Periodic systems, such as metals, semiconductors, and other crystalline solids have been outside the applicability of FDE, mostly because of the lack of a periodic FDE implementation. To fill this gap, in this work we aim at extending FDE to treat subsystems of molecular and periodic character. This goal is achieved by a dual approach. On one side, the development of a theoretical framework for periodic subsystem DFT. On the other, the realization of the method into a parallel computer code. We find that periodic FDE is capable of reproducing total electron densities and (to a lesser extent) also interaction energies of molecular systems weakly interacting with metallic surfaces. In the pilot calculations considered, we find that FDE fails in those cases where there is appreciable density overlap between the subsystems. Conversely, we find FDE to be in semiquantitative agreement with Kohn-Sham DFT when the inter-subsystem density overlap is low. We also conclude that to make FDE a suitable method for describing molecular adsorption at surfaces, kinetic energy density functionals that go beyond the GGA level must be employed.

  9. Periodic subsystem density-functional theory.

    PubMed

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2014-11-07

    By partitioning the electron density into subsystem contributions, the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) has recently emerged as a powerful tool for reducing the computational scaling of Kohn-Sham DFT. To date, however, FDE has been employed to molecular systems only. Periodic systems, such as metals, semiconductors, and other crystalline solids have been outside the applicability of FDE, mostly because of the lack of a periodic FDE implementation. To fill this gap, in this work we aim at extending FDE to treat subsystems of molecular and periodic character. This goal is achieved by a dual approach. On one side, the development of a theoretical framework for periodic subsystem DFT. On the other, the realization of the method into a parallel computer code. We find that periodic FDE is capable of reproducing total electron densities and (to a lesser extent) also interaction energies of molecular systems weakly interacting with metallic surfaces. In the pilot calculations considered, we find that FDE fails in those cases where there is appreciable density overlap between the subsystems. Conversely, we find FDE to be in semiquantitative agreement with Kohn-Sham DFT when the inter-subsystem density overlap is low. We also conclude that to make FDE a suitable method for describing molecular adsorption at surfaces, kinetic energy density functionals that go beyond the GGA level must be employed.

  10. Periodic subsystem density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genova, Alessandro; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu; Ceresoli, Davide

    2014-11-07

    By partitioning the electron density into subsystem contributions, the Frozen Density Embedding (FDE) formulation of subsystem Density Functional Theory (DFT) has recently emerged as a powerful tool for reducing the computational scaling of Kohn–Sham DFT. To date, however, FDE has been employed to molecular systems only. Periodic systems, such as metals, semiconductors, and other crystalline solids have been outside the applicability of FDE, mostly because of the lack of a periodic FDE implementation. To fill this gap, in this work we aim at extending FDE to treat subsystems of molecular and periodic character. This goal is achieved by a dual approach. On one side, the development of a theoretical framework for periodic subsystem DFT. On the other, the realization of the method into a parallel computer code. We find that periodic FDE is capable of reproducing total electron densities and (to a lesser extent) also interaction energies of molecular systems weakly interacting with metallic surfaces. In the pilot calculations considered, we find that FDE fails in those cases where there is appreciable density overlap between the subsystems. Conversely, we find FDE to be in semiquantitative agreement with Kohn–Sham DFT when the inter-subsystem density overlap is low. We also conclude that to make FDE a suitable method for describing molecular adsorption at surfaces, kinetic energy density functionals that go beyond the GGA level must be employed.

  11. Reactivity Coefficient Calculation for AP1000 Reactor Using the NODAL3 Code

    NASA Astrophysics Data System (ADS)

    Pinem, Surian; Malem Sembiring, Tagor; Tukiran; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    The reactivity coefficient is a very important parameter for the inherent safety and stability of nuclear reactor operation. To support the safety analysis of the reactor, the calculation of reactivity changes caused by temperature is necessary because it is related to reactor operation. In this paper, the fuel and moderator temperature reactivity coefficients of the AP1000 core are calculated, as well as the moderator density and boron concentration coefficients. All of these coefficients are calculated at the hot full power (HFP) condition. All neutron diffusion constants as functions of temperature, water density and boron concentration were generated by the SRAC2006 code. The core calculations for determining the reactivity coefficient parameters are done using the NODAL3 code. The calculation results show that the fuel temperature, moderator temperature and boron reactivity coefficients are in the ranges of -2.613 pcm/°C to -4.657 pcm/°C, -1.00518 pcm/°C to 1.00649 pcm/°C and -9.11361 pcm/ppm to -8.0751 pcm/ppm, respectively. For the water density reactivity coefficients, positive reactivity occurs at water temperatures below 190 °C. The reactivity coefficients agree very well with the design values, indicating that the calculations are accurate.
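A temperature reactivity coefficient of this kind is commonly obtained as a finite difference of reactivities computed from two k_eff values at two temperatures. The following sketch shows only that arithmetic, with illustrative k_eff values rather than SRAC2006/NODAL3 output:

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k_eff - 1)/k_eff, expressed in pcm."""
    return (k_eff - 1.0) / k_eff * 1e5

def temperature_coefficient(k_cold, k_hot, delta_t):
    """Finite-difference reactivity coefficient d(rho)/dT in pcm/degC."""
    return (reactivity_pcm(k_hot) - reactivity_pcm(k_cold)) / delta_t

# Illustrative: k_eff drops from 1.002 to 1.000 over a 50 degC fuel
# temperature rise -> a negative (inherently safe) coefficient.
alpha_f = temperature_coefficient(1.002, 1.000, 50.0)
```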

  12. Tobacco outlet density and converted versus native non-daily cigarette use in a national US sample.

    PubMed

    Kirchner, Thomas R; Anesetti-Rothermel, Andrew; Bennett, Morgane; Gao, Hong; Carlos, Heather; Scheuermann, Taneisha S; Reitzel, Lorraine R; Ahluwalia, Jasjit S

    2017-01-01

    Investigate whether non-daily smokers' (NDS) cigarette price and purchase preferences, recent cessation attempts, and current intentions to quit are associated with the density of the retail cigarette product landscape surrounding their residential address. Cross-sectional assessment of N=904 converted NDS (CNDS), who previously smoked every day, and N=297 native NDS (NNDS), who only smoked non-daily, drawn from a national panel. Kernel density estimation was used to generate a nationwide probability surface of tobacco outlets linked to participants' residential ZIP code. Hierarchically nested log-linear models were compared to evaluate associations between outlet density, non-daily use patterns, price sensitivity and quit intentions. Overall, NDS in ZIP codes with greater outlet density were less likely than NDS in ZIP codes with lower outlet density to hold 6-month quit intentions when they also reported that price affected use patterns (G²=66.1, p<0.001) and purchase locations (G²=85.2, p<0.001). CNDS were more likely than NNDS to reside in ZIP codes with higher outlet density (G²=322.0, p<0.001). Compared with CNDS in ZIP codes with lower outlet density, CNDS in high-density ZIP codes were more likely to report that price influenced the amount they smoked (G²=43.9, p<0.001), and were more likely to look for better prices (G²=59.3, p<0.001). NDS residing in high-density ZIP codes were not more likely to report that price affected their cigarette brand choice compared with those in ZIP codes with lower density. This paper provides initial evidence that the point-of-sale cigarette environment may be differentially associated with the maintenance of CNDS versus NNDS patterns. Future research should investigate how tobacco control efforts can be optimised to both promote cessation and curb the rising tide of non-daily smoking in the USA. Published by the BMJ Publishing Group Limited.
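The kernel density estimation step described above can be sketched with a plain two-dimensional Gaussian kernel. This is a toy version, not the study's geospatial implementation: the outlet coordinates, units, and bandwidth below are hypothetical.

```python
import math

def gaussian_kde_2d(points, x, y, bandwidth=1.0):
    """Plain 2-D Gaussian kernel density estimate at (x, y).

    points    -- iterable of (px, py) outlet coordinates (hypothetical units)
    bandwidth -- kernel standard deviation, same units as the coordinates
    """
    h2 = bandwidth ** 2
    norm = 1.0 / (2.0 * math.pi * h2 * len(points))   # integrates to 1
    return norm * sum(
        math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2.0 * h2))
        for px, py in points
    )

# Density near a cluster of outlets vs. a distant residential location
outlets = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
near = gaussian_kde_2d(outlets, 0.0, 0.0)
far = gaussian_kde_2d(outlets, 5.0, 5.0)
```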

  13. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro, E-mail: stener@univ.trieste.it

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea, which makes the present scheme very efficient, consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, allowing an easy calculation of such a matrix as a linear combination of constant matrices with photon-energy-dependent coefficients. The method has been applied to systems very different in nature and size (from H₂ to [Au₁₄₇]⁻). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam Density Functional code are below 0.2 eV. The new algorithm has the merit not only of calculating the spectrum at any photon energy, but also of allowing a deep analysis of the results in terms of transition contribution maps, the Jacob plasmon scaling factor, and induced density analysis, all of which have been implemented.
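As a toy illustration of extracting a spectrum from the imaginary part of the polarizability, the sketch below evaluates Im α(ω) for a simple sum-over-states model with Lorentzian broadening. This stands in for the idea only; it is not the density-fitting algorithm of the paper, and the pole positions, strengths, and broadening η are assumed values.

```python
def imag_polarizability(omega, poles, strengths, eta=0.1):
    """Im alpha(omega) for a sum-over-states model,
    alpha(w) = sum_i f_i / (w_i^2 - w^2 - i*eta*w),
    whose imaginary part is f_i*eta*w / ((w_i^2 - w^2)^2 + (eta*w)^2)."""
    return sum(
        f * eta * omega / ((w0 ** 2 - omega ** 2) ** 2 + (eta * omega) ** 2)
        for w0, f in zip(poles, strengths)
    )

# The absorption spectrum is then proportional to omega * Im alpha(omega),
# evaluated on any photon-energy grid without diagonalization.
spectrum = [w * imag_polarizability(w, [2.0], [1.0]) for w in (1.0, 2.0, 3.0)]
```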

  14. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  15. Application of CORSIKA Simulation Code to Study Lateral and Longitudinal Distribution of Fluorescence Light in Cosmic Ray Extensive Air Showers

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra; Davoudifar, Pantea; Rastegarzadeh, Gohar; Shayan, Milad

    2017-03-01

    In this paper, we used the CORSIKA code to understand the characteristics of cosmic-ray-induced showers at extremely high energy as a function of energy, detector distance from the shower axis, number and density of secondary charged particles, and the nature of the particle producing the shower. Based on the standard properties of the atmosphere, the lateral and longitudinal development of the shower has been investigated for photons and electrons. Fluorescent light has been collected by the detector for proton, helium, oxygen, silicon, calcium and iron primary cosmic rays at different energies. We have thus obtained the number of electrons per unit area, the distance to the shower axis, the shape function of particle density, the percentage of fluorescent light, the lateral distribution of energy dissipated in the atmosphere, the visual field angle of the detector, and the size of the shower image. We have also shown that the location of the highest percentage of fluorescence light is directly proportional to the atomic number of the element, and that the shape function of particle density decreases sharply as the distance from the shower axis increases. At the first stages of development, the distance of the shower axis from the detector is large and the visual field angle is small; then, as the shower moves toward the Earth, the angle increases. Overall, at higher energies the fluorescent-light method is more efficient. The paper provides standard calibration lines for high-energy showers which can be used to determine the nature of the particles.
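Lateral particle densities of the kind discussed above are conventionally described by the Nishimura-Kamata-Greisen (NKG) function; a minimal sketch follows. The NKG form is standard, but the shower size, age parameter, and Molière radius used below are illustrative values, not fitted ones from this study.

```python
import math

def nkg_density(r, n_e, s, r_m=79.0):
    """NKG lateral particle density [m^-2] at core distance r [m].

    n_e -- total shower size (number of electrons)
    s   -- shower age parameter (0 < s < 2.25 for a positive density)
    r_m -- Moliere radius [m] (illustrative sea-level-ish value)
    """
    c = math.gamma(4.5 - s) / (
        2.0 * math.pi * math.gamma(s) * math.gamma(4.5 - 2.0 * s)
    )
    x = r / r_m
    return n_e / r_m ** 2 * c * x ** (s - 2.0) * (1.0 + x) ** (s - 4.5)

# Density falls steeply with distance from the shower axis:
profile = [nkg_density(r, n_e=1e6, s=1.2) for r in (10.0, 100.0, 1000.0)]
```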

  16. Method of Error Floor Mitigation in Low-Density Parity-Check Codes

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor)

    2014-01-01

    A digital communication decoding method for low-density parity-check coded messages. The decoding method decodes the low-density parity-check coded messages within a bipartite graph having check nodes and variable nodes. Messages from check nodes are partially hard limited, so that every message which would otherwise have a magnitude at or above a certain level is re-assigned to a maximum magnitude.
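The partial hard limiting described above can be sketched in a few lines: any check-node message at or above a threshold magnitude is re-assigned to a fixed maximum magnitude of the same sign. The threshold and maximum values below are hypothetical, not those of the patented method.

```python
import math

def partially_hard_limit(messages, threshold=15.0, max_mag=15.75):
    """Re-assign any message with |m| >= threshold to +/-max_mag;
    smaller-magnitude messages pass through unchanged."""
    return [
        math.copysign(max_mag, m) if abs(m) >= threshold else m
        for m in messages
    ]
```

In a belief-propagation decoder this step would be applied to the check-to-variable messages on each iteration, capping the overconfident magnitudes implicated in error-floor behavior.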

  17. Adiabatic corrections to density functional theory energies and wave functions.

    PubMed

    Mohallem, José R; Coura, Thiago de O; Diniz, Leonardo G; de Castro, Gustavo; Assafrão, Denise; Heine, Thomas

    2008-09-25

    The adiabatic finite-nuclear-mass correction (FNMC) to the electronic energies and wave functions of atoms and molecules is formulated for density-functional theory and implemented in the deMon code. The approach is tested for a series of local and gradient-corrected density functionals, using MP2 results and diagonal Born-Oppenheimer corrections from the literature for comparison. In the evaluation of absolute energy corrections of nonorganic molecules, the LDA PZ81 functional works surprisingly better than the others. For organic molecules, the GGA BLYP functional has the best performance. FNMC with GGA functionals, mainly BLYP, shows a good performance in the evaluation of relative corrections, except for nonorganic molecules containing H atoms. The PW86 functional stands out with the best evaluation of the barrier to linearity of H2O and the isotopic dipole moment of HDO. In general, DFT functionals display an accuracy superior to the common belief, and because the corrections are based on a change of the electronic kinetic energy, they are here ranked in a new, appropriate way. The approach is applied to obtain the adiabatic correction for full atomization of the alkanes C(n)H(2n+2), n = 4-10. The barrier of 1 mHartree is approached for adiabatic corrections, justifying their insertion into DFT.

  18. Time-dependent transition density matrix for visualizing charge-transfer excitations in photoexcited organic donor-acceptor systems

    NASA Astrophysics Data System (ADS)

    Li, Yonghui; Ullrich, Carsten

    2013-03-01

    The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes, which give information on excitations, in particular those with charge-transfer character, that cannot be obtained from the density alone. Illustrations of the TDM and comparisons with standard density-difference plots are shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651.

  19. Influence of temperature fluctuations on infrared limb radiance: a new simulation code

    NASA Astrophysics Data System (ADS)

    Rialland, Valérie; Chervet, Patrick

    2006-08-01

    Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performance is limited by the inhomogeneous background in the sensor field of view, which strongly impacts the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line of sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background that would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines of sight close to the horizon. Recently, we developed a new code called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.

  20. The Most Massive Galaxies and Black Holes Allowed by ΛCDM

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Silk, Joseph

    2018-04-01

    Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected ΛCDM halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on the number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST and WFIRST will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black-hole-mass to stellar-mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
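The lower limit on host halo mass follows from requiring that the stellar mass not exceed the available baryons times a conversion efficiency, M* ≤ f_b ε M_halo. A one-line sketch, with an assumed cosmic baryon fraction f_b ≈ 0.16 (the exact value used by the authors may differ):

```python
def min_halo_mass(m_star, f_baryon=0.16, efficiency=1.0):
    """Lower bound on host halo mass [same units as m_star] implied by
    M_star <= f_baryon * efficiency * M_halo."""
    return m_star / (f_baryon * efficiency)

# A 1e11 Msun stellar mass requires a halo of at least ~6e11 Msun even
# at 100% conversion efficiency; lower efficiency raises the bound.
bound = min_halo_mass(1e11)
```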

  1. Photoionization and High Density Gas

    NASA Technical Reports Server (NTRS)

    Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code, which has been available for public use for some time. However, it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and an improved physical description of ionization/excitation. In particular, it is now applicable to high-density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high-density situations.

  2. Comparison of deterministic and stochastic approaches for isotopic concentration and decay heat uncertainty quantification on elementary fission pulse

    NASA Astrophysics Data System (ADS)

    Lahaye, S.; Huynh, T. D.; Tsilanizara, A.

    2016-03-01

    Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term repositories. Most of those outputs are functions of the isotopic density vector, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate the uncertainty from nuclear data inputs to isotopic concentrations and decay heat by two different methods. This paper shows comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also inspected. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.
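The two propagation approaches compared above, deterministic first-order propagation versus stochastic sampling, can be illustrated on a single-nuclide decay law N(t) = N₀e^(−λt). This toy sketch is not the DARWIN/PEPIN2 or MENDEL implementation; the decay constant, its uncertainty, and the sample count are assumptions for illustration.

```python
import math
import random

def decay_uncertainty(n0, lam, sig_lam, t, n_samples=20_000, seed=2):
    """Propagate the decay-constant uncertainty sig_lam to N(t) = n0*exp(-lam*t)
    by (a) the first-order (sandwich) formula |dN/dlam| * sig_lam, and
    (b) Monte-Carlo sampling of lam ~ Normal(lam, sig_lam)."""
    linear = n0 * t * math.exp(-lam * t) * sig_lam      # |dN/dlam| * sigma
    rng = random.Random(seed)
    samples = [
        n0 * math.exp(-rng.gauss(lam, sig_lam) * t) for _ in range(n_samples)
    ]
    mean = sum(samples) / n_samples
    mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n_samples - 1))
    return linear, mc
```

For small relative uncertainties the two estimates agree closely, which mirrors the good deterministic/stochastic agreement reported above.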

  3. Validity of virial theorem in all-electron mixed basis density functional, Hartree–Fock, and GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuwahara, Riichi; Accelrys K. K., Kasumigaseki Tokyu Building 17F, 3-7-1 Kasumigaseki, Chiyoda-ku, Tokyo 100-0013; Tadokoro, Yoichi

    In this paper, we calculate kinetic and potential energy contributions to the electronic ground-state total energy of several isolated atoms (He, Be, Ne, Mg, Ar, and Ca) by using the local density approximation (LDA) in density functional theory, the Hartree–Fock approximation (HFA), and the self-consistent GW approximation (GWA). To this end, we have implemented self-consistent HFA and GWA routines in our all-electron mixed basis code, TOMBO. We confirm that the virial theorem is fairly well satisfied in all of these approximations, although the resulting eigenvalue of the highest occupied molecular orbital level, i.e., the negative of the ionization potential, is in excellent agreement only in the case of the GWA. We find that the wave function of the lowest unoccupied molecular orbital level of noble gas atoms is a resonating virtual bound state, and that of the GWA spreads wider than that of the LDA and thinner than that of the HFA.

  4. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve an areal density of 1 Tbit/in² and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity-check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach with extrinsic information and log-likelihood ratio values exchanged between the iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in², respectively, at a bit error rate of 10⁻⁶.

  5. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects. These are ignored in, or impossible for, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material.
The GERM code makes the numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.

  6. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    DOE PAGES

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...

    2014-09-03

Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant-temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using a spin cutoff parameter with a much weaker excitation-energy dependence than predicted by the Fermi-gas model.
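The two ingredients discussed above, a total level density and a spin distribution governed by the spin cutoff parameter, can be sketched numerically. This is a generic back-shifted Fermi-gas expression, not the authors' fitted parameterization; the parameter values below (a, sigma) are illustrative placeholders.

```python
import math

def fermi_gas_density(E, a=6.0, delta=0.0, sigma=3.0):
    """Back-shifted Fermi-gas total level density (levels per MeV).
    a: level-density parameter (MeV^-1), delta: back-shift (MeV),
    sigma: spin cutoff parameter (illustrative values only)."""
    U = E - delta
    if U <= 0.0:
        return 0.0
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

def spin_distribution(J, sigma):
    """Fraction of levels carrying spin J for a given spin cutoff sigma;
    approximately normalized when summed over integer J."""
    return ((2 * J + 1) / (2.0 * sigma ** 2)
            * math.exp(-(J + 0.5) ** 2 / (2.0 * sigma ** 2)))
```

A weaker energy dependence of sigma, as the abstract reports, would simply mean passing a flatter sigma(E) into `spin_distribution` when assembling ρ(E, J) = ρ(E)·f(J).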

  7. MODTOHAFSD — A GUI based JAVA code for gravity analysis of strike limited sedimentary basins by means of growing bodies with exponential density contrast-depth variation: A space domain approach

    NASA Astrophysics Data System (ADS)

    Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.

    2013-07-01

Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based JAVA code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all of equal width but each with its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to the basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates the initial depths of a sedimentary basin and improves them iteratively, based on the differences between the observed and modeled gravity anomalies, within the specified convergence criteria. The code, built on the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs/modifies the regional gravity background interactively, estimates residual gravity anomalies, and performs automatic modeling or inversion of basement topography according to user specification. Besides generating output in both ASCII and graphical forms, the code displays, in animated form, (i) the changes in the depth structure, (ii) the fit between the observed and modeled gravity anomalies, (iii) the changes in misfit, and (iv) the variation of density contrast with iteration. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example.
The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
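The iterate-until-the-modeled-anomaly-matches idea behind such inversions can be sketched in one dimension. This is not the MODTOHAFSD algorithm (which uses prisms of finite strike length); it is a minimal infinite-slab sketch with the same exponential density contrast-depth function, and all parameter values are hypothetical.

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def slab_gravity(t, drho0, lam):
    """Gravity effect (m/s^2) of an infinite horizontal slab of thickness t
    whose density contrast decays with depth as drho(z) = drho0*exp(-lam*z)."""
    return 2.0 * math.pi * G * drho0 * (1.0 - math.exp(-lam * t)) / lam

def invert_depth(g_obs, drho0, lam, n_iter=60):
    """Newton-style iteration recovering the slab thickness (basement depth)
    from an observed anomaly, updating the depth by the current misfit."""
    t = 1.0  # initial depth guess (m)
    for _ in range(n_iter):
        misfit = g_obs - slab_gravity(t, drho0, lam)
        # local sensitivity dg/dt = 2*pi*G*drho(t) at the current depth
        t += misfit / (2.0 * math.pi * G * drho0 * math.exp(-lam * t))
    return t
```

For a sedimentary basin drho0 is negative (sediments are lighter than basement), so the observed anomaly is negative; the update formula is sign-consistent either way.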

  8. Aerodynamic characteristics of the upper stages of a launch vehicle in low-density regime

    NASA Astrophysics Data System (ADS)

    Oh, Bum Seok; Lee, Joon Ho

    2016-11-01

Aerodynamic characteristics of the orbital block (the configuration remaining after separation of the nose fairing and the 1st and 2nd stages) and of the upper 2-3 stage (the configuration after separation of the 1st stage) of the three-stage launch vehicle KSLV-II (Korea Space Launch Vehicle) at high altitudes in the low-density regime are analyzed with the SMILE code, which is based on the DSMC (Direct Simulation Monte Carlo) method. To validate the SMILE code, axial- and normal-force coefficients of the Apollo capsule are also calculated, and the results agree very well with data predicted by others. For additional validation and application of the DSMC code, aerodynamic calculations for simple plate and wedge shapes in the low-density regime are also introduced. Aerodynamic characteristics in the low-density regime generally differ from those in the continuum regime. To understand these differences, aerodynamic coefficients of the upper stages (including the upper 2-3 stage and the orbital block) of the launch vehicle in the low-density regime are analyzed as functions of Mach number and altitude. The predicted axial-force coefficients of the upper stages are very high compared to those in the continuum regime. For the orbital block, which flies at very high altitude (above 250 km), all aerodynamic coefficients depend more on velocity variations than on altitude variations. For the upper 2-3 stage, which flies at high altitude (80-150 km), the axial-force coefficients and the locations of the center of pressure change little with Knudsen number (altitude), while the normal-force and pitching-moment coefficients are more strongly affected.

  9. Light-scattering efficiency of starch acetate pigments as a function of size and packing density.

    PubMed

    Penttilä, Antti; Lumme, Kari; Kuutti, Lauri

    2006-05-20

We study theoretically the light-scattering efficiency of paper coatings made of starch acetate pigments. For the light-scattering code we use a discrete dipole approximation method. The coating layer is assumed to consist of roughly equal-sized spherical pigments packed either at a packing density of 50% (large cylindrical slabs) or at 37% or 57% (large spheres). Because the scanning electron microscope images of starch acetate samples show either a particulate or a porous structure, we model the coatings in two complementary ways: the material can be either inside the constituent spheres (particulate case) or outside of them (cheeselike, porous medium). For the packing of our spheres we use either a simulated annealing or a dropping code. We can estimate, among other things, that the ideal sphere diameter is in the range 0.25-0.4 microm.

  10. Light-scattering efficiency of starch acetate pigments as a function of size and packing density

    NASA Astrophysics Data System (ADS)

    Penttilä, Antti; Lumme, Kari; Kuutti, Lauri

    2006-05-01

We study theoretically the light-scattering efficiency of paper coatings made of starch acetate pigments. For the light-scattering code we use a discrete dipole approximation method. The coating layer is assumed to consist of roughly equal-sized spherical pigments packed either at a packing density of 50% (large cylindrical slabs) or at 37% or 57% (large spheres). Because the scanning electron microscope images of starch acetate samples show either a particulate or a porous structure, we model the coatings in two complementary ways: the material can be either inside the constituent spheres (particulate case) or outside of them (cheeselike, porous medium). For the packing of our spheres we use either a simulated annealing or a dropping code. We can estimate, among other things, that the ideal sphere diameter is in the range 0.25-0.4 μm.

  11. Simulations of Turbulent Momentum and Scalar Transport in Non-Reacting Confined Swirling Coaxial Jets

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey; Moder, Jeffrey P.

    2015-01-01

    This paper presents the numerical simulations of confined three-dimensional coaxial water jets. The objectives are to validate the newly proposed nonlinear turbulence models of momentum and scalar transport, and to evaluate the newly introduced scalar APDF and DWFDF equation along with its Eulerian implementation in the National Combustion Code (NCC). Simulations conducted include the steady RANS, the unsteady RANS (URANS), and the time-filtered Navier-Stokes (TFNS); both without and with invoking the APDF or DWFDF equation. When the APDF (ensemble averaged probability density function) or DWFDF (density weighted filtered density function) equation is invoked, the simulations are of a hybrid nature, i.e., the transport equations of energy and species are replaced by the APDF or DWFDF equation. Results of simulations are compared with the available experimental data. Some positive impacts of the nonlinear turbulence models and the Eulerian scalar APDF and DWFDF approach are observed.

  12. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
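The hypergraph-product construction the abstract generalizes can be written down compactly: from two classical parity-check matrices one builds a pair of CSS stabilizer matrices whose rows automatically commute. This sketch implements the standard Tillich-Zémor product, not the hyperbicycle ansatz of the paper itself.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS stabilizer matrices (Hx, Hz) of the hypergraph-product code of
    two classical parity-check matrices, in the Tillich-Zemor form
    Hx = [H1 x I | I x H2^T], Hz = [I x H2 | H1^T x I] (x = Kronecker)."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return Hx, Hz
```

Taking both inputs to be the parity-check matrix of the length-3 repetition code yields a small surface-code instance on 13 qubits; the CSS condition Hx·Hz^T = 0 (mod 2) holds for any input pair by construction.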

  13. A 3D particle Monte Carlo approach to studying nucleation

    NASA Astrophysics Data System (ADS)

    Köhn, Christoph; Enghoff, Martin Bødker; Svensmark, Henrik

    2018-06-01

The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We present a three-dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate a swarm of sulphuric acid-water clusters with a size of 0.329 nm, with densities between 10⁷ and 10⁸ cm⁻³, at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the positions of the particles using size-dependent diffusion coefficients. If two particles encounter each other, we merge them, adding their volumes and masses. Conversely, we check after every time step whether a polymer evaporates, liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate of clusters with a radius of 0.85 nm as a function of time, initial particle density, and temperature. The nucleation rates obtained from the presented model agree well with experimentally obtained values and with those of a numerical model that serves as a benchmark of our code. In contrast to previous nucleation models, we here present for the first time a code capable of tracing individual particles and thus of capturing the physics related to the discrete nature of particles.
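The move-then-merge update described above can be sketched in a few lines. This is not the authors' code; it is a minimal Brownian-coagulation step with an assumed D ~ 1/r diffusion scaling, a naive O(N²) overlap check, and arbitrary units.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_step(pos, radii, D0, dt, box):
    """One Monte Carlo step: Gaussian Brownian displacement with a
    size-dependent diffusion coefficient D = D0/r, then pairwise merging
    of overlapping clusters, conserving total volume (and hence mass)."""
    D = D0 / radii
    pos = (pos + rng.normal(size=pos.shape) * np.sqrt(2.0 * D * dt)[:, None]) % box
    alive = np.ones(len(radii), dtype=bool)
    for i in range(len(radii)):
        if not alive[i]:
            continue
        for j in range(i + 1, len(radii)):
            if alive[j] and np.linalg.norm(pos[i] - pos[j]) < radii[i] + radii[j]:
                radii[i] = (radii[i] ** 3 + radii[j] ** 3) ** (1.0 / 3.0)  # add volumes
                alive[j] = False
    return pos[alive], radii[alive]
```

A production version would add evaporation, minimum-image distances for the periodic box, and a neighbor list to avoid the quadratic pair search.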

  14. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  15. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, time-harmonic Helmholtz field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions, providing temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by the time-changing current density in the output cavity gap. The goal function for optimizing an IOT cavity was also formulated, and optimization methodologies were investigated.

  16. A full-potential approach to the relativistic single-site Green's function

    DOE PAGES

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...

    2016-07-07

One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, single-site scattering is itself appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Lastly, the code is rigorously tested and, with the help of Krein's theorem, the relativistic and full-potential effects in group V elements and noble metals are thoroughly investigated.

  17. Polystyrene Foam EOS as a Function of Porosity and Fill Gas

    NASA Astrophysics Data System (ADS)

    Mulford, Roberta; Swift, Damian

    2009-06-01

An accurate EOS for polystyrene foam is necessary for the analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary between foam samples according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of the foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, nitrogen-blown, and CO2-blown foams are investigated to estimate the importance of allowing air to react with plastic products during decomposition. Results differ somewhat from the conventional EOS, which are generated from values for plastic extrapolated to low densities.

  18. Self-consistent DFT +U method for real-space time-dependent density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel

    2017-12-01

    We implemented various DFT+U schemes, including the Agapito, Curtarolo, and Buongiorno Nardelli functional (ACBN0) self-consistent density-functional version of the DFT +U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006] within the massively parallel real-space time-dependent density functional theory (TDDFT) code octopus. We further extended the method to the case of the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistent calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework to address the nonequilibrium properties of correlated systems.
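The "+U" correction at the heart of any DFT+U scheme is a simple functional of the occupation matrix of the localized subspace. As an illustration only, here is the widely used rotationally invariant (Dudarev-form) energy penalty; the ACBN0 functional discussed above additionally computes U and J self-consistently from the density, which this sketch does not attempt.

```python
import numpy as np

def hubbard_energy(n_occ, U, J=0.0):
    """Dudarev-form DFT+U energy correction for one spin channel:
    E_U = (U - J)/2 * Tr[n - n@n], where n_occ is the occupation matrix
    of the localized (e.g. d or f) subspace. Zero for idempotent n
    (integer occupations); positive for fractional occupations."""
    n = np.asarray(n_occ, dtype=float)
    return 0.5 * (U - J) * np.trace(n - n @ n)
```

The correction therefore penalizes fractional occupations, which is what pushes localized states away from the delocalized solutions plain semilocal DFT favors.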

  19. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
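Block matrix inversion of the kind mentioned above rests on the Schur complement identity, which lets a large inverse be assembled from inverses of smaller blocks, one block in accelerator memory at a time. This is a plain NumPy sketch of the identity itself, not the LSMS GPU kernel.

```python
import numpy as np

def block_invert(M, k):
    """Invert M via a 2x2 block partition at row/column k using the
    Schur complement S = D - C A^{-1} B:
        M = [[A, B], [C, D]],
        M^{-1} = [[A^{-1} + A^{-1} B S^{-1} C A^{-1}, -A^{-1} B S^{-1}],
                  [-S^{-1} C A^{-1},                   S^{-1}]]."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B          # Schur complement of A in M
    Sinv = np.linalg.inv(S)
    TL = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    TR = -Ainv @ B @ Sinv
    BL = -Sinv @ C @ Ainv
    return np.block([[TL, TR], [BL, Sinv]])
```

Applied recursively, the same identity gives a divide-and-conquer inversion whose working set per step is one block, which is what makes it attractive when only part of the matrix fits in GPU memory.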

  20. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  1. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  2. Neoclassical orbit calculations with a full-f code for tokamak edge plasmas

    NASA Astrophysics Data System (ADS)

    Rognlien, T. D.; Cohen, R. H.; Dorr, M.; Hittinger, J.; Xu, X. Q.; Collela, P.; Martin, D.

    2008-11-01

    Ion distribution function modifications are considered for the case of neoclassical orbit widths comparable to plasma radial-gradient scale-lengths. Implementation of proper boundary conditions at divertor plates in the continuum TEMPEST code, including the effect of drifts in determining the direction of total flow, enables such calculations in single-null divertor geometry, with and without an electrostatic potential. The resultant poloidal asymmetries in densities, temperatures, and flows are discussed. For long-time simulations, a slow numerical instability develops, even in simplified (circular) geometry with no endloss, which aids identification of the mixed treatment of parallel and radial convection terms as the cause. The new Edge Simulation Laboratory code, expected to be operational, has algorithmic refinements that should address the instability. We will present any available results from the new code on this problem as well as geodesic acoustic mode tests.

  3. Optimal nonlinear codes for the perception of natural colours.

    PubMed

    von der Twer, T; MacLeod, D I

    2001-08-01

We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function whose gradient is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
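The cube-root matching rule can be made concrete: the optimal response is the cumulative integral of p(x)^(1/3), whereas histogram equalization integrates p(x) itself. A minimal numerical sketch of that construction, on an assumed input grid:

```python
import numpy as np

def optimal_response(x, pdf):
    """Nonlinear response whose local slope is proportional to pdf**(1/3),
    normalized to span [0, 1]. Integration uses the trapezoid rule.
    (Histogram equalization would integrate pdf itself instead.)"""
    slope = pdf ** (1.0 / 3.0)
    r = np.concatenate([[0.0],
                        np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
    return r / r[-1]
```

Because the exponent 1/3 is less than 1, the slope is spread more evenly across rare and common input values than under histogram equalization, yielding the gentler compressive nonlinearity the abstract attributes to P cells.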

  4. Corrigendum: First principles calculation of field emission from nanostructures using time-dependent density functional theory: A simplified approach

    NASA Astrophysics Data System (ADS)

    Tawfik, Sherif A.; El-Sheikh, S. M.; Salem, N. M.

    2016-09-01

    Recently we have become aware that the description of the quantum wave functions in Sec. 2.1 is incorrect. In the published version of the paper, we have stated that the states are expanded in terms of plane waves. However, the correct description of the quantum states in the context of the real space implementation (using the Octopus code) is that states are represented by discrete points in a real space grid.

  5. Ab initio MD simulations of Mg2SiO4 liquid at high pressures and temperatures relevant to the Earth's mantle

    NASA Astrophysics Data System (ADS)

    Martin, G. B.; Kirtman, B.; Spera, F. J.

    2010-12-01

Computational studies implementing Density Functional Theory (DFT) methods have become very popular in the Materials Sciences in recent years. DFT codes are now used routinely to simulate properties of geomaterials, mainly silicates and geochemically important metals such as Fe. These materials are ubiquitous in the Earth's mantle and core and in terrestrial exoplanets. Because of computational limitations, most First Principles Molecular Dynamics (FPMD) calculations are done on systems of only 100 atoms for a few picoseconds. While this approach can be useful for calculating physical quantities related to crystal structure, vibrational frequency, and other lattice-scale properties (especially in crystals), it would be useful to be able to compute larger systems, especially for extracting transport properties and coordination statistics. Previous studies have used codes such as VASP, where CPU time increases as N2, making calculations on systems of more than 100 atoms computationally very taxing. SIESTA (Soler et al. 2002) is an order-N (linear-scaling) DFT code that enables electronic structure and MD computations on larger systems (N ≈ 1000) by making approximations such as localized numerical orbitals. Here we test the applicability of SIESTA to simulate geosilicates in the liquid and glass state. We have used SIESTA for MD simulations of liquid Mg2SiO4 at various state points pertinent to the Earth's mantle and consistent with those calculated in a previous DFT study using the VASP code (DeKoker et al. 2008). The core electronic wave functions of Mg, Si, and O were approximated using pseudopotentials with core cutoff radii of 1.38, 1.0, and 0.61 Angstroms, respectively. The Ceperley-Alder parameterization of the Local Density Approximation (LDA) was used as the exchange-correlation functional.
Known systematic overbinding of LDA was corrected by adding a pressure term, P ≈ 1.6 GPa, which is the pressure calculated by SIESTA at the experimental zero-pressure volume of forsterite under static conditions (Stixrude and Lithgow-Bertelloni 2005). Results reported here show that SIESTA calculations of T and P at densities in the range 2.7-5.0 g/cc of liquid Mg2SiO4 are similar to the VASP calculations of DeKoker et al. (2008), which used the same functional. This opens the possibility of conducting fast ab initio MD simulations of geomaterials with hundreds of atoms.

  6. The most massive galaxies and black holes allowed by ΛCDM

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Silk, Joseph

    2018-07-01

Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected Lambda Cold Dark Matter halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST (James Webb Space Telescope) and WFIRST (Wide-Field InfraRed Survey Telescope) will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass to stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
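The core bound in the argument above is simple bookkeeping: a galaxy's stars cannot outweigh the baryons its halo ever contained. A minimal sketch, assuming an approximate cosmic baryon fraction of 0.16 (the exact value depends on the adopted cosmology) and a conversion efficiency ≤ 1:

```python
F_BARYON = 0.16  # approximate cosmic baryon fraction Omega_b / Omega_m

def min_halo_mass(stellar_mass, efficiency=1.0):
    """Lower limit on the host halo mass implied by an observed stellar
    mass: M_halo >= M_star / (f_b * efficiency), with efficiency <= 1."""
    return stellar_mass / (F_BARYON * efficiency)

def max_stellar_mass(halo_mass, efficiency=1.0):
    """The complementary upper limit on stellar mass for a given halo."""
    return F_BARYON * efficiency * halo_mass
```

Comparing `min_halo_mass` of observed high-z galaxies against the ΛCDM halo mass function is what turns stellar mass functions into halo number-density constraints.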

  7. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allows for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which yields power and size benefits. These codes also have a large minimum distance, as much as dmin = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications, and status.

  8. Flexible configuration-interaction shell-model many-body solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; Ormand, W. Erich; McElvain, Kenneth S.

BIGSTICK is a flexible configuration-interaction open-source shell-model code for the many-fermion problem in a shell model (occupation representation) framework. BIGSTICK can generate energy spectra, static and transition one-body densities, and expectation values of scalar operators. Using the built-in Lanczos algorithm one can compute transition probability distributions and decompose wave functions into components defined by group theory.
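The Lanczos algorithm mentioned above reduces a huge symmetric Hamiltonian to a small tridiagonal matrix whose extreme eigenvalues converge quickly. A dense-matrix textbook sketch (no reorthogonalization, nothing BIGSTICK-specific):

```python
import numpy as np

def lanczos_ground_state(H, n_iter=30, seed=0):
    """Plain Lanczos: build a tridiagonal projection T of the symmetric
    matrix H onto a Krylov subspace and return the lowest eigenvalue of T
    (the ground-state energy estimate)."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(min(n_iter, n)):
        w = H @ v - beta * v_prev           # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:                    # invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    m = len(alphas)
    T = (np.diag(alphas) + np.diag(betas[:m - 1], 1)
         + np.diag(betas[:m - 1], -1))
    return np.linalg.eigvalsh(T)[0]
```

In a production shell-model code H is never stored densely; only the matrix-vector product `H @ v` is implemented, which is exactly what the recurrence needs.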

  9. Electron cyclotron thruster: new modeling results and preparation for initial experiments

    NASA Technical Reports Server (NTRS)

    Hooper, E. Bickford

    1993-01-01

    The following topics are discussed: a whistler-based electron cyclotron resonance heating (ECRH) thruster; cross-field coupling in the helicon approximation; wave propagation; wave structure; plasma density; wave absorption; the electron distribution function; isothermal and adiabatic plasma flow; ECRH thruster modeling; a PIC code model; electron temperature; electron energy; and initial experimental tests. The discussion is presented in vugraph form.

  10. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  11. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new use using emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities as a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  12. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    PubMed

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.
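
    As a rough illustration of what "directly propagating the time-dependent Kohn-Sham equations" means, here is a toy Crank-Nicolson step for i dψ/dt = Hψ on a small Hermitian matrix. This is a generic propagator sketch, not octopus's actual implementation (octopus offers several propagators and works on real-space grids):

```python
import numpy as np

def crank_nicolson_step(H, psi, dt):
    """One time step of i dpsi/dt = H psi (atomic units) using the
    Crank-Nicolson propagator, which is exactly unitary for Hermitian H."""
    n = H.shape[0]
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

# Toy 3-level Hermitian "Hamiltonian" (not a real Kohn-Sham operator).
H = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.5, 0.2],
              [0.0, 0.2, 1.0]])
psi = np.array([1.0, 0.0, 0.0], dtype=complex)

for _ in range(100):
    psi = crank_nicolson_step(H, psi, dt=0.05)

print(round(float(np.linalg.norm(psi)), 10))  # norm preserved: → 1.0
```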

  13. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in the physical space, for a given discrete ordinate, using the time-marching technique with four-stage Runge-Kutta time integration. Roe's second-order upwind difference scheme is used to discretize the convective terms, and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in the physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Steps 2 and 3 are repeated until the convergence criterion is met. A low-density nozzle flow field has been calculated by this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be validated experimentally.
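
    Step (3), recovering macroscopic parameters as velocity moments of the distribution function, can be sketched with Gauss-Hermite quadrature, here applied to a 1-D Maxwellian whose density and second moment are known analytically. This is a generic illustration, not the paper's modified quadrature formula:

```python
import numpy as np

def maxwellian_moments(n, sigma, order=20):
    """Recover the density and second velocity moment of a 1-D
    Maxwellian by Gauss-Hermite quadrature (weighting factors 1 and u^2),
    using the substitution u = sqrt(2)*sigma*x."""
    x, w = np.polynomial.hermite.hermgauss(order)
    u = np.sqrt(2.0) * sigma * x
    density = n / np.sqrt(np.pi) * np.sum(w)        # weight 1 -> n
    m2 = n / np.sqrt(np.pi) * np.sum(w * u**2)      # weight u^2 -> n*sigma^2
    return density, m2

rho, m2 = maxwellian_moments(n=2.5, sigma=0.7)
print(round(rho, 6), round(m2, 6))  # → 2.5 1.225 (= 2.5 * 0.7**2)
```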

  14. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  15. EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2004-01-01

    EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for the PDF (probability density function) and is developed mainly for applications involving sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. The source code of EUPDF-II will be available with the National Combustion Code (NCC) as a complete package.

  16. Structural and electronic properties of GaAs and GaP semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rani, Anita; Kumar, Ranjan

    2015-05-15

    The structural and electronic properties of the zinc-blende phase of the GaAs and GaP compounds are studied using the self-consistent SIESTA code, pseudopotentials, and density functional theory (DFT) in the local density approximation (LDA). The lattice constant, equilibrium volume, cohesive energy per pair, compressibility, and band gap are calculated. The band gaps calculated with DFT using the LDA are smaller than the experimental values. The P-V data, fitted to the third-order Birch-Murnaghan equation of state, provide the bulk modulus and its pressure derivatives. Our estimates of the structural and electronic properties are in agreement with available experimental and theoretical data.
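
    The equation-of-state fitting step can be sketched as follows. Here synthetic E(V) data (arbitrary units, not the paper's GaAs/GaP P-V data) are fitted to the third-order Birch-Murnaghan energy equation of state to recover the equilibrium volume and bulk modulus:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan_energy(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Synthetic, noiseless E-V data (illustrative numbers only).
true = dict(E0=-10.0, V0=45.0, B0=0.5, B0p=4.5)
V = np.linspace(38.0, 52.0, 15)
E = birch_murnaghan_energy(V, **true)

popt, _ = curve_fit(birch_murnaghan_energy, V, E,
                    p0=[-9.0, 44.0, 0.4, 4.0])
E0, V0, B0, B0p = popt
print(round(V0, 3), round(B0, 3))  # recovers V0 = 45.0 and B0 = 0.5
```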

  17. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach densities as high as those in the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate in the RAMSES simulations than in the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs, and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Even so, SPHNG displays density distributions and vertical mass profiles different from all modes of GIZMO (including its smoothed particle hydrodynamics mode). This suggests that differences also arise that are not intrinsic to the particular method but are instead due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
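
    The refinement criterion discussed above hinges on how many grid cells resolve the local Jeans length. A back-of-the-envelope check, with illustrative numbers (not the paper's actual setup):

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def jeans_length(c_s, rho):
    """Jeans length lambda_J = c_s * sqrt(pi / (G * rho)), cgs units."""
    return c_s * np.sqrt(np.pi / (G * rho))

def cells_per_jeans_length(c_s, rho, dx):
    """How many grid cells of size dx span the local Jeans length."""
    return jeans_length(c_s, rho) / dx

# Illustrative cold-ISM values: c_s ~ 0.2 km/s, n_H ~ 100 cm^-3,
# and a 1-pc grid cell.
c_s = 2.0e4            # cm/s
rho = 100 * 1.67e-24   # g/cm^3
dx = 3.086e18          # 1 pc in cm
print(round(cells_per_jeans_length(c_s, rho, dx), 2))  # roughly 3-4 cells
```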

  18. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for the simulation of the production of these radioisotopes in the TALYS, EMPIRE, and LISE++ reaction codes; then, parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are adjusted to optimize production of the desired radioactive yields.

  19. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for the simulation of the production of these radioisotopes in the TALYS, EMPIRE, and LISE++ reaction codes; then, parameters and different models of nuclear level density, one of the most important components in statistical reaction models, are adjusted to optimize production of the desired radioactive yields.

  20. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE PAGES

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto; ...

    2017-09-15

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  1. ELSI: A unified software interface for Kohn-Sham electronic structure solvers

    NASA Astrophysics Data System (ADS)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; García, Alberto; Huhn, William P.; Jacquelin, Mathias; Jia, Weile; Lange, Björn; Lin, Lin; Lu, Jianfeng; Mi, Wenhui; Seifitokaldani, Ali; Vázquez-Mayagoitia, Álvaro; Yang, Chao; Yang, Haizhao; Blum, Volker

    2018-01-01

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.
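
    The unified-interface idea can be sketched as a single entry point for the generalized eigenproblem H c = εS c that dispatches to interchangeable backends. This toy analogue uses SciPy's dense solver in place of ELPA/libOMM/PEXSI and is not the ELSI API:

```python
import numpy as np
from scipy.linalg import eigh

class EigenSolverInterface:
    """Toy analogue of a unified solver interface: one entry point for
    the generalized Kohn-Sham eigenproblem H c = eps * S c, dispatching
    to an interchangeable backend (here only 'dense' is implemented)."""

    def __init__(self, solver="dense"):
        self.solver = solver

    def solve(self, H, S):
        if self.solver == "dense":
            return eigh(H, S)  # direct dense generalized eigensolver
        raise NotImplementedError(self.solver)

# Small Hermitian test problem with a positive-definite overlap S.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2
B = rng.standard_normal((4, 4))
S = B @ B.T + 4 * np.eye(4)

eps, C = EigenSolverInterface().solve(H, S)
# The residual of the generalized eigenproblem should vanish.
print(float(np.max(np.abs(H @ C - S @ C * eps))) < 1e-8)  # → True
```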

  2. ELSI: A unified software interface for Kohn–Sham electronic structure solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victor Wen-zhe; Corsetti, Fabiano; Garcia, Alberto

    Solving the electronic structure from a generalized or standard eigenproblem is often the bottleneck in large scale calculations based on Kohn-Sham density-functional theory. This problem must be addressed by essentially all current electronic structure codes, based on similar matrix expressions, and by high-performance computation. We here present a unified software interface, ELSI, to access different strategies that address the Kohn-Sham eigenvalue problem. Currently supported algorithms include the dense generalized eigensolver library ELPA, the orbital minimization method implemented in libOMM, and the pole expansion and selected inversion (PEXSI) approach with lower computational complexity for semilocal density functionals. The ELSI interface aims to simplify the implementation and optimal use of the different strategies, by offering (a) a unified software framework designed for the electronic structure solvers in Kohn-Sham density-functional theory; (b) reasonable default parameters for a chosen solver; (c) automatic conversion between input and internal working matrix formats, and in the future (d) recommendation of the optimal solver depending on the specific problem. As a result, comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800 basis functions) on distributed memory supercomputing architectures.

  3. On the continuity of the stationary state distribution of DPCM

    NASA Astrophysics Data System (ADS)

    Naraghi-Pour, Morteza; Neuhoff, David L.

    1990-03-01

    Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
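
    The stationary state studied above arises from the recursion x_{k+1} = a*x_k ± Δ, i.e., a symmetric Bernoulli convolution. A small simulation (illustrative, not from the paper) makes the singular, Cantor-like support visible for a prediction coefficient below one-half: a gap opens around zero.

```python
import numpy as np

def dpcm_states(a, delta, n, seed=0):
    """Iterate the two-level DPCM state recursion
    x_{k+1} = a*x_k + delta*s_k, with s_k = +/-1 equiprobable,
    whose stationary law is a symmetric Bernoulli convolution."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)
    x = np.empty(n)
    xk = 0.0
    for k in range(n):
        xk = a * xk + delta * signs[k]
        x[k] = xk
    return x

# Prediction coefficient a = 1/3 < 1/2: the support lies in
# [-delta/(1-a), delta/(1-a)] = [-1.5, 1.5] and has a gap at 0,
# since |a*x + s| >= delta - a*|x| >= 0.5 once |x| <= 1.5.
x = dpcm_states(a=1.0 / 3.0, delta=1.0, n=50_000)[10:]
print(float(np.min(np.abs(x))) >= 0.5)  # → True (Cantor-like gap)
```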

  4. Different evolutionary patterns of SNPs between domains and unassigned regions in human protein-coding sequences.

    PubMed

    Pang, Erli; Wu, Xiaomei; Lin, Kui

    2016-06-01

    Protein evolution plays an important role in the evolution of each genome. Because of their functional nature, most parts or sites of proteins are, in general, under different selective constraints, particularly purifying selection. Most previous studies on protein evolution considered individual proteins in their entirety or compared protein-coding sequences with non-coding sequences. Less attention has been paid to the evolution of different parts within each protein of a given genome. To this end, based on the PfamA annotation of all human proteins, each protein sequence can be split into two parts: domains and unassigned regions. Using this rationale, single nucleotide polymorphisms (SNPs) in protein-coding sequences from the 1000 Genomes Project were mapped according to two classifications: SNPs occurring within protein domains and those within unassigned regions. With these classifications, we found that the density of synonymous SNPs within domains is significantly greater than that within unassigned regions; however, the density of non-synonymous SNPs shows the opposite pattern. We also found signatures of purifying selection on both the domains and the unassigned regions. Furthermore, the selective strength on domains is significantly greater than that on unassigned regions. In addition, among all of the human protein sequences, there are 117 PfamA domains in which no SNPs are found. Our results highlight an important aspect of protein domains and may contribute to our understanding of protein evolution.
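
    The core measurement, SNP density within each class of region, reduces to counting SNPs per base pair of annotated intervals. A toy sketch with hypothetical coordinates (not 1000 Genomes data):

```python
def snp_density(snp_positions, regions):
    """SNPs per base pair within a set of half-open [start, end)
    intervals (toy version of the domain vs unassigned comparison)."""
    length = sum(end - start for start, end in regions)
    hits = sum(any(s <= p < e for s, e in regions) for p in snp_positions)
    return hits / length

# Hypothetical 300-bp coding sequence: one Pfam-like domain at
# [50, 200); the rest is unassigned.
domains = [(50, 200)]
unassigned = [(0, 50), (200, 300)]
snps = [10, 60, 70, 150, 210, 220, 230, 290]

print(snp_density(snps, domains))     # 3 SNPs / 150 bp = 0.02
print(snp_density(snps, unassigned))  # 5 SNPs / 150 bp ≈ 0.0333
```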

  5. Phonological Codes Constrain Output of Orthographic Codes via Sublexical and Lexical Routes in Chinese Written Production

    PubMed Central

    Wang, Cheng; Zhang, Qingfang

    2015-01-01

    To what extent do phonological codes constrain orthographic output in handwritten production? We investigated how phonological codes constrain the selection of orthographic codes via sublexical and lexical routes in Chinese written production. Participants wrote down picture names in a picture-naming task in Experiment 1 or response words in a symbol-word associative writing task in Experiment 2. A sublexical phonological property of picture names (phonetic regularity: regular vs. irregular) in Experiment 1 and a lexical phonological property of response words (homophone density: dense vs. sparse) in Experiment 2, as well as word frequency of the targets in both experiments, were manipulated. A facilitatory effect of word frequency was found in both experiments, in which words with high frequency were produced faster than those with low frequency. More importantly, we observed an inhibitory phonetic regularity effect, in which low-frequency picture names with regular first characters were slower to write than those with irregular ones, and an inhibitory homophone density effect, in which characters with dense homophone density were produced more slowly than those with sparse homophone density. Results suggested that phonological codes constrain handwritten production via both lexical and sublexical routes. PMID:25879662

  6. ECON-KG: A Code for Computation of Electrical Conductivity Using Density Functional Theory

    DTIC Science & Technology

    2017-10-01

    is presented. Details of the implementation and instructions for execution are presented, and an example calculation of the frequency-dependent ...shown to depend on carbon content,3 and electrical conductivity models have become a requirement for input into continuum-level simulations being... dependent electrical conductivity is computed as a weighted sum over k-points: σ(ω) = ∑_k W(k) σ_k(ω), (2) where W(k) is
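
    Equation (2) above, the weighted sum over k-points, is straightforward to sketch (the numbers below are illustrative, not the report's data):

```python
import numpy as np

def total_conductivity(weights, sigma_k):
    """Frequency-dependent conductivity as a weighted sum over
    k-points: sigma(omega) = sum_k W(k) * sigma_k(omega)."""
    weights = np.asarray(weights)
    sigma_k = np.asarray(sigma_k)  # shape (n_kpoints, n_frequencies)
    return weights @ sigma_k

# Two k-points, three frequencies (made-up values).
W = [0.25, 0.75]
sigma_k = [[1.0, 2.0, 3.0],
           [2.0, 4.0, 6.0]]
print(total_conductivity(W, sigma_k).tolist())  # → [1.75, 3.5, 5.25]
```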

  7. First-principles calculations on the four phases of BaTiO3.

    PubMed

    Evarestov, Robert A; Bandura, Andrei V

    2012-04-30

    Calculations based on a linear combination of atomic orbitals basis set, as implemented in the CRYSTAL09 computer code, have been performed for the cubic, tetragonal, orthorhombic, and rhombohedral modifications of the BaTiO(3) crystal. Structural and electronic properties as well as phonon frequencies were obtained using local density approximation, generalized gradient approximation, and hybrid exchange-correlation density functional theory (DFT) functionals for the four stable phases of BaTiO(3). A comparison was made between the results of the different DFT techniques. It is concluded that the hybrid PBE0 [J. P. Perdew, K. Burke, M. Ernzerhof, J. Chem. Phys. 1996, 105, 9982.] functional is able to correctly predict the structural stability and phonon properties for both the cubic and the ferroelectric phases of BaTiO(3). A comparative phonon symmetry analysis in the four phases of BaTiO(3) has been made, based on the site symmetry and irreducible representation indexes, for the first time. Copyright © 2012 Wiley Periodicals, Inc.

  8. Electron dynamics in complex environments with real-time time dependent density functional theory in a QM-MM framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzan, Uriel N.; Ramírez, Francisco F.; Scherlis, Damián A., E-mail: damian@qi.fcen.uba.ar, E-mail: mcgl@qb.ffyb.uba.ar

    2014-04-28

    This article presents a time dependent density functional theory (TDDFT) implementation to propagate the Kohn-Sham equations in real time, including the effects of a molecular environment through a Quantum-Mechanics Molecular-Mechanics (QM-MM) Hamiltonian. The code delivers an all-electron description employing Gaussian basis functions, and incorporates the Amber force field in the QM-MM treatment. The most expensive parts of the computation, comprising the commutators between the Hamiltonian and the density matrix (required to propagate the electron dynamics) and the evaluation of the exchange-correlation energy, were migrated to the CUDA platform to run on graphics processing units, which remarkably accelerates the performance of the code. The method was validated by reproducing linear-response TDDFT results for the absorption spectra of several molecular species. Two different schemes were tested to propagate the quantum dynamics: (i) a leap-frog Verlet algorithm, and (ii) the Magnus expansion to first order. When these two approaches were compared, the Magnus scheme was found to be more efficient by a factor of six in small molecules. Interestingly, the presence of iron was found to severely limit the length of the integration time step, due to the high frequencies associated with the core electrons. This highlights the importance of pseudopotentials in alleviating the cost of propagating the inner states when heavy nuclei are present. Finally, the methodology was applied to investigate the shifts induced by the chemical environment on the most intense UV absorption bands of two model systems of general relevance: the formamide molecule in water solution, and the carboxy-heme group in Flavohemoglobin. In both cases, shifts of several nanometers are observed, consistent with the available experimental data.

  9. Electron dynamics in complex environments with real-time time dependent density functional theory in a QM-MM framework

    NASA Astrophysics Data System (ADS)

    Morzan, Uriel N.; Ramírez, Francisco F.; Oviedo, M. Belén; Sánchez, Cristián G.; Scherlis, Damián A.; Lebrero, Mariano C. González

    2014-04-01

    This article presents a time dependent density functional theory (TDDFT) implementation to propagate the Kohn-Sham equations in real time, including the effects of a molecular environment through a Quantum-Mechanics Molecular-Mechanics (QM-MM) Hamiltonian. The code delivers an all-electron description employing Gaussian basis functions, and incorporates the Amber force field in the QM-MM treatment. The most expensive parts of the computation, comprising the commutators between the Hamiltonian and the density matrix (required to propagate the electron dynamics) and the evaluation of the exchange-correlation energy, were migrated to the CUDA platform to run on graphics processing units, which remarkably accelerates the performance of the code. The method was validated by reproducing linear-response TDDFT results for the absorption spectra of several molecular species. Two different schemes were tested to propagate the quantum dynamics: (i) a leap-frog Verlet algorithm, and (ii) the Magnus expansion to first order. When these two approaches were compared, the Magnus scheme was found to be more efficient by a factor of six in small molecules. Interestingly, the presence of iron was found to severely limit the length of the integration time step, due to the high frequencies associated with the core electrons. This highlights the importance of pseudopotentials in alleviating the cost of propagating the inner states when heavy nuclei are present. Finally, the methodology was applied to investigate the shifts induced by the chemical environment on the most intense UV absorption bands of two model systems of general relevance: the formamide molecule in water solution, and the carboxy-heme group in Flavohemoglobin. In both cases, shifts of several nanometers are observed, consistent with the available experimental data.

  10. Electron dynamics in complex environments with real-time time dependent density functional theory in a QM-MM framework.

    PubMed

    Morzan, Uriel N; Ramírez, Francisco F; Oviedo, M Belén; Sánchez, Cristián G; Scherlis, Damián A; Lebrero, Mariano C González

    2014-04-28

    This article presents a time dependent density functional theory (TDDFT) implementation to propagate the Kohn-Sham equations in real time, including the effects of a molecular environment through a Quantum-Mechanics Molecular-Mechanics (QM-MM) Hamiltonian. The code delivers an all-electron description employing Gaussian basis functions, and incorporates the Amber force field in the QM-MM treatment. The most expensive parts of the computation, comprising the commutators between the Hamiltonian and the density matrix (required to propagate the electron dynamics) and the evaluation of the exchange-correlation energy, were migrated to the CUDA platform to run on graphics processing units, which remarkably accelerates the performance of the code. The method was validated by reproducing linear-response TDDFT results for the absorption spectra of several molecular species. Two different schemes were tested to propagate the quantum dynamics: (i) a leap-frog Verlet algorithm, and (ii) the Magnus expansion to first order. When these two approaches were compared, the Magnus scheme was found to be more efficient by a factor of six in small molecules. Interestingly, the presence of iron was found to severely limit the length of the integration time step, due to the high frequencies associated with the core electrons. This highlights the importance of pseudopotentials in alleviating the cost of propagating the inner states when heavy nuclei are present. Finally, the methodology was applied to investigate the shifts induced by the chemical environment on the most intense UV absorption bands of two model systems of general relevance: the formamide molecule in water solution, and the carboxy-heme group in Flavohemoglobin. In both cases, shifts of several nanometers are observed, consistent with the available experimental data.
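
    The first-order Magnus propagator mentioned in scheme (ii) amounts to applying U = exp(-iHΔt) to the density matrix at each step, with H taken as constant over the step. A toy two-level sketch (not the article's CUDA implementation):

```python
import numpy as np
from scipy.linalg import expm

def magnus1_step(H, rho, dt):
    """First-order Magnus step for the Liouville-von Neumann equation:
    rho(t+dt) = U rho U^dagger with U = exp(-i H dt), assuming H is
    constant over the step."""
    U = expm(-1j * H * dt)
    return U @ rho @ U.conj().T

# Toy two-level Hermitian Hamiltonian and a pure-state density matrix.
H = np.array([[0.0, 0.3], [0.3, 1.0]])
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)

for _ in range(200):
    rho = magnus1_step(H, rho, dt=0.02)

# Unitary propagation preserves the trace of the density matrix.
print(round(float(np.trace(rho).real), 10))  # → 1.0
```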

  11. SOAP and the Interstellar Froth

    NASA Astrophysics Data System (ADS)

    Tüllmann, R.; Rosa, M. R.; Dettmar, R.-J.

    2005-06-01

    We investigate whether the alleged failure of standard photoionization codes to match the Diffuse Ionized Gas (DIG) is simply caused by geometrical effects and the insufficient treatment of the radiative transfer. Standard photoionization models are applicable only to homogeneous and spherically symmetric nebulae with central ionizing stars, whereas the geometry of disk galaxies requires a 3D distribution of ionizing sources in the disk which illuminate the halo. This change in geometry, together with a proper radiative transfer model, is expected to substantially influence ionization conditions. Therefore, we developed a new and sophisticated 3D Monte Carlo photoionization code, called SOAP (Simulations Of Astrophysical Plasmas), by adapting an existing 1D code for HII regions such that it self-consistently models a 3D disk galaxy with a gaseous DIG halo. First results from a simple (dust-free) model with exponentially decreasing gas densities are presented, and the predicted ionization structure of disk and halo is discussed. Theoretical line ratios agree well with observed ones, e.g., for the halo of NGC 891. Moreover, the fraction of ionizing photons leaving the halo of the galaxy is plotted as a function of varying gas densities. This quantity will be of particular importance for forthcoming studies, because rough estimates indicate that about 7% of ionizing photons escape from the halo and contribute to the ionization of the IGM. Given the relatively large number density of normal spiral galaxies, OB stars could have a much stronger impact on the ionization of the IGM than AGN or QSOs.

  12. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix ofmore » the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.« less

  13. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber-optic communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that, with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution for next-generation high-speed fiber-optic communications.
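
    In a GLDPC code, each super check node runs a decoder for the local code; for a (7,4) Hamming local code, hard-decision decoding is just a syndrome lookup. A minimal sketch (not the paper's FPGA soft-decision architecture):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the
# binary representation of i (1-based), so the syndrome directly
# encodes the position of a single bit error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming_correct(word):
    """Single-error correction: the syndrome, read as a binary
    number, is the 1-based position of the flipped bit (0 = clean)."""
    w = np.array(word) % 2
    syndrome = (H @ w) % 2
    pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    if pos:
        w[pos - 1] ^= 1
    return w

codeword = np.zeros(7, dtype=int)          # all-zero codeword
received = codeword.copy()
received[4] ^= 1                           # flip bit 5 in transit
print(hamming_correct(received).tolist())  # → [0, 0, 0, 0, 0, 0, 0]
```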

  14. Overview and evaluation of different nuclear level density models for the 123I radionuclide production.

    PubMed

    Nikjou, A; Sadeghi, M

    2018-06-01

    The 123I radionuclide (T1/2 = 13.22 h, β+ = 100%) is one of the most potent gamma emitters for nuclear medicine. In this study, the cyclotron production of this radionuclide via different nuclear reactions, namely 121Sb(α,2n), 122Te(d,n), 123Te(p,n), 124Te(p,2n), 124Xe(p,2n), 127I(p,5n), and 127I(d,6n), was investigated. The effects of various phenomenological nuclear level density models, such as the Fermi gas model (FGM), the back-shifted Fermi gas model (BSFGM), the generalized superfluid model (GSM), and the enhanced generalized superfluid model (EGSM), as well as three microscopic level density models, were evaluated for predicting cross sections and production yields. The SRIM code was used to obtain the target thickness. The excitation functions of the 123I production reactions were calculated using the TALYS-1.8 and EMPIRE-3.2 nuclear codes and with data taken from the TENDL-2015 database; finally, the theoretical calculations were compared with experimental measurements taken from the EXFOR database. Copyright © 2018 Elsevier Ltd. All rights reserved.
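
    For orientation, the Fermi gas family of level density models is built around expressions like the one sketched below. The parameters are purely illustrative, and the spin-cutoff factor used in practice is omitted:

```python
import numpy as np

def fgm_state_density(E, a, delta):
    """Fermi-gas-model state density (spin-cutoff factor omitted):
    omega(U) ~ sqrt(pi)/12 * exp(2*sqrt(a*U)) / (a**0.25 * U**1.25),
    with effective excitation energy U = E - delta (energies in MeV)."""
    U = E - delta
    return np.sqrt(np.pi) / 12.0 * np.exp(2.0 * np.sqrt(a * U)) \
        / (a ** 0.25 * U ** 1.25)

# Illustrative parameters only (a ~ A/8 MeV^-1 is a common rule of thumb).
a, delta = 15.0, 1.0  # MeV^-1, MeV
for E in (5.0, 10.0, 20.0):
    print(f"E = {E:5.1f} MeV   omega ~ {fgm_state_density(E, a, delta):.3e} /MeV")
```

The steep growth of the level density with excitation energy is what makes the choice of model and parameters so influential in the statistical (Hauser-Feshbach) cross-section calculations discussed above.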

  15. 3D Field Modifications of Core Neutral Fueling In the EMC3-EIRENE Code

    NASA Astrophysics Data System (ADS)

    Waters, Ian; Frerichs, Heinke; Schmitz, Oliver; Ahn, Joon-Wook; Canal, Gustavo; Evans, Todd; Feng, Yuehe; Kaye, Stanley; Maingi, Rajesh; Soukhanovskii, Vsevolod

    2017-10-01

    The application of 3-D magnetic field perturbations to the edge plasmas of tokamaks has long been seen as a viable way to control damaging Edge Localized Modes (ELMs). These 3-D fields have also been correlated with a density drop in the core plasmas of tokamaks, known as `pump-out'. While pump-out is typically explained as the result of enhanced outward transport, degraded fueling of the core may also play a role. By altering the temperature and density of the plasma edge, 3-D fields will impact the distribution function of high energy neutral particles produced through ion-neutral energy exchange processes. Starved of the deeply penetrating neutral source, the core density will decrease. Numerical studies carried out with the EMC3-EIRENE code on National Spherical Tokamak eXperiment-Upgrade (NSTX-U) equilibria show that this change to core fueling by high energy neutrals may be a significant contributor to the overall particle balance in the NSTX-U tokamak: deep core (Ψ < 0.5) fueling from neutral ionization sources is decreased by 40-60% with RMPs. This work was funded by the US Department of Energy under Grant DE-SC0012315.

  16. Simulation of Mach Probes in Non-Uniform Magnetized Plasmas: the Influence of a Background Density Gradient

    NASA Astrophysics Data System (ADS)

    Haakonsen, Christian Bernt; Hutchinson, Ian H.

    2013-10-01

    Mach probes can be used to measure transverse flow in magnetized plasmas, but what they actually measure in strongly non-uniform plasmas has not been definitively established. A fluid treatment in previous work has suggested that the diamagnetic drifts associated with background density and temperature gradients affect transverse flow measurements, but detailed computational study is required to validate and elaborate on those results; it is really a kinetic problem, since the probe deforms and introduces voids in the ion and electron distribution functions. A new code, the Plasma-Object Simulator with Iterated Trajectories (POSIT) has been developed to self-consistently compute the steady-state six-dimensional ion and electron distribution functions in the perturbed plasma. Particle trajectories are integrated backwards in time to the domain boundary, where arbitrary background distribution functions can be specified. This allows POSIT to compute the ion and electron density at each node of its unstructured mesh, update the potential based on those densities, and then iterate until convergence. POSIT is used to study the impact of a background density gradient on transverse Mach probe measurements, and the results compared to the previous fluid theory. C.B. Haakonsen was supported in part by NSF/DOE Grant No. DE-FG02-06ER54512, and in part by an SCGF award administered by ORISE under DOE Contract No. DE-AC05-06OR23100.

  17. Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More

    NASA Technical Reports Server (NTRS)

    Kou, Yu; Lin, Shu; Fossorier, Marc

    1999-01-01

    Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
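
    Gallager's hard-decision bit-flipping idea mentioned above can be sketched in a few lines. This minimal single-bit-flip variant uses a small (7,4) Hamming parity-check matrix as a stand-in; the finite-geometry matrices in the paper are far larger and sparser.

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=100):
    """Hard-decision bit-flipping decoding: while some parity checks fail,
    flip the bit involved in the largest number of failed checks."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            break                    # all checks satisfied
        fails = syndrome.dot(H)      # failed-check count per bit
        x[int(np.argmax(fails))] ^= 1
    return x

# Stand-in parity-check matrix (the (7,4) Hamming code, illustration only).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)
```

    On this toy matrix any single-bit error on the all-zero codeword is corrected in one flip; real LDPC matrices need many iterations and benefit from the sparse, well-spread checks the finite-geometry construction provides.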

  18. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.

  19. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS) which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high density floppy disc. Examples of all program capabilities are presented and discussed.

  20. A high burnup model developed for the DIONISIO code

    NASA Astrophysics Data System (ADS)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burn up, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank can be easily recognized.

  1. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  2. Ab-initio study on electronic properties of rocksalt SnAs

    NASA Astrophysics Data System (ADS)

    Babariya, Bindiya; Vaghela, M. V.; Gajjar, P. N.

    2018-05-01

    Within the framework of the local density approximation for exchange and correlation, the ab-initio method of density functional theory with the Abinit code is used to compute the electronic energy band structure, density of states and charge density of SnAs in the rocksalt phase. Our optimized lattice constant agrees with the experimental value within a 0.59% deviation. The computed electronic energy bands along the high-symmetry directions Γ→K→X→Γ→L→X→W→L→U show metallic nature. The lowest band in the electronic band structure lies approximately 1.70 eV below the next higher band, and no crossing between the lowest two bands is seen. The density of states reveals p-p orbital hybridization between the Sn and As atoms. The spherical contours around Sn and As in the charge density plot represent partly ionic and partly covalent bonding. The Fermi surface topology is the resultant effect of the single band crossing along the L direction at EF.

  3. Simulations of nanocrystals under pressure: combining electronic enthalpy and linear-scaling density-functional theory.

    PubMed

    Corsini, Niccolò R C; Greco, Andrea; Hine, Nicholas D M; Molteni, Carla; Haynes, Peter D

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  5. Computing partial traces and reduced density matrices

    NASA Astrophysics Data System (ADS)

    Maziero, Jonas

    Taking partial traces (PTrs) for computing reduced density matrices, or related functions, is a ubiquitous procedure in the quantum mechanics of composite systems. In this paper, we present a thorough description of this function and analyze the number of elementary operations (ops) needed, under some possible alternative implementations, to compute it on a classical computer. As we note, it is worthwhile doing some analytical development in order to avoid performing null multiplications and sums, which can considerably reduce the ops. For instance, for a bipartite system ℋa⊗ℋb with dimensions da=dimℋa and db=dimℋb and for da,db≫1, a direct use of the PTr definition applied to ℋb requires 𝒪(da^6 db^6) ops, while its optimized implementation entails 𝒪(da^2 db) ops. We then consider the computation of PTrs for general multipartite systems and describe Fortran code provided to implement it numerically. We also consider the calculation of reduced density matrices via Bloch's parametrization with generalized Gell-Mann matrices.
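
    The operation-count argument above can be made concrete: the optimized bipartite partial trace is just a reshape followed by a trace over the two B indices. This NumPy sketch illustrates the idea; it is not the Fortran code provided with the paper.

```python
import numpy as np

def partial_trace_b(rho, da, db):
    """Reduced density matrix on subsystem A, tracing out B.
    Viewing rho as a (da, db, da, db) tensor and summing only the diagonal
    B entries touches each of the da^2 * db relevant elements once, matching
    the O(da^2 db) optimized operation count quoted above."""
    return rho.reshape(da, db, da, db).trace(axis1=1, axis2=3)
```

    For a product state ρA ⊗ ρB with tr(ρB) = 1, this returns ρA exactly, which makes a convenient sanity check for any implementation.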

  6. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  7. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  8. Coded acoustic wave sensors and system using time diversity

    NASA Technical Reports Server (NTRS)

    Solie, Leland P. (Inventor); Hines, Jacqueline H. (Inventor)

    2012-01-01

    An apparatus and method for distinguishing between sensors that are to be wirelessly detected is provided. An interrogator device uses different, distinct time delays in the sensing signals when interrogating the sensors. The sensors are provided with different distinct pedestal delays. Sensors that have the same pedestal delay as the delay selected by the interrogator are detected by the interrogator whereas other sensors with different pedestal delays are not sensed. Multiple sensors with a given pedestal delay are provided with different codes so as to be distinguished from one another by the interrogator. The interrogator uses a signal that is transmitted to the sensor and returned by the sensor for combination and integration with the reference signal that has been processed by a function. The sensor may be a surface acoustic wave device having a differential impulse response with a power spectral density consisting of lobes. The power spectral density of the differential response is used to determine the value of the sensed parameter or parameters.

  9. Metalloid Aluminum Clusters with Fluorine

    DTIC Science & Technology

    2016-12-01

    Keywords: molecular dynamics, binding energy, SIESTA code, density of states, projected density of states. ... high energy density compared to explosives, but typically release this energy slowly via diffusion-limited combustion. There is recent interest in using ... examine the cluster binding energy and electronic structure. Partial fluorine substitution in a prototypical aluminum-cyclopentadienyl cluster results

  10. Tight-binding calculation of single-band and generalized Wannier functions of graphene

    NASA Astrophysics Data System (ADS)

    Ribeiro, Allan Victor; Bruno-Alfonso, Alexys

    Recent work has shown that a tight-binding approach associated with Wannier functions (WFs) provides an intuitive physical image of the electronic structure of graphene. For graphene, Marzari et al. displayed the calculated WFs and presented a comparison between the Wannier-interpolated bands and the bands generated by a density-functional code. Jung and MacDonald provided a tight-binding model for the π-bands of graphene that involves maximally localized Wannier functions (MLWFs). The mixing of the bands yields better localized WFs. In the present work, the MLWFs of graphene are calculated by combining the Quantum-ESPRESSO code with a tight-binding approach. The MLWFs of graphene are calculated from the Bloch functions obtained through a tight-binding approach that includes interactions and overlaps obtained by partially fitting the DFT bands. The phase of the Bloch functions of each band is appropriately chosen to produce MLWFs; the same applies to the coefficients of their linear combination in the generalized case. The method allows for an intuitive understanding of the maximally localized WFs of graphene and shows excellent agreement with the literature. Moreover, it provides accurate results at reduced computational cost.

  11. Global multi-dimensional modeling of ionospheric electron density using GNSS measurements and IRI model

    NASA Astrophysics Data System (ADS)

    Alizadeh, M.; Schuh, H.; Schmidt, M. G.

    2012-12-01

    In recent decades the Global Navigation Satellite System (GNSS) has turned into a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination, which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Because IGS stations are inhomogeneously distributed around the world and the accuracy and reliability of the developed models are considerably lower in areas not well covered by IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonics expansion to the GNSS ionospheric observable. Within this study this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere. The bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere is represented using the newly proposed Vary-Chap function. (Figure captions: maximum electron density, IRI background model (elec/m3), day 202, 2010, 0 UT; height of maximum electron density, IRI background model (km), day 202, 2010, 0 UT.)
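
    The bottom-side representation mentioned above can be written down directly: an alpha-Chapman layer peaks at the F2-peak height hmF2 with density NmF2 and decays on a scale height. The sketch below uses standard Chapman-layer notation; the parameter names and the numerical values in the usage are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def alpha_chapman(h, NmF2, hmF2, H_scale, alpha=0.5):
    """Alpha-Chapman electron-density profile: peaks at h = hmF2 with value
    NmF2 and decays on the scale height H_scale (km); alpha = 0.5 gives the
    classical Chapman-alpha layer."""
    z = (h - hmF2) / H_scale
    return NmF2 * np.exp(alpha * (1.0 - z - np.exp(-z)))
```

    Evaluating the profile on an altitude grid for assumed values NmF2 = 1e12 el/m3, hmF2 = 300 km and H_scale = 60 km reproduces the expected single-peaked layer shape.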

  12. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.

    PubMed

    Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M

    2009-09-30

    QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.

  13. Low-Density Parity-Check Code Design Techniques to Simplify Encoding

    NASA Astrophysics Data System (ADS)

    Perez, J. M.; Andrews, K.

    2007-11-01

    This work describes a method for encoding low-density parity-check (LDPC) codes based on the accumulate-repeat-4-jagged-accumulate (AR4JA) scheme, using the low-density parity-check matrix H instead of the dense generator matrix G. Using the H matrix to encode allows a significant reduction in memory consumption and gives the encoder design great flexibility. Also described are new hardware-efficient codes, based on the same kind of protographs, which require less memory storage and area, while also allowing a reduction in the encoding delay.
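
    The benefit of encoding from a sparse H can be seen in a small sketch: if the columns multiplying the parity bits form a lower-triangular submatrix, the parity bits follow by substitution without ever forming the dense G. This simplified triangular structure is assumed here for illustration; the AR4JA protographs in the article require the richer manipulations it describes.

```python
import numpy as np

def encode_with_H(H, msg):
    """Systematic encoding from H = [A | T] over GF(2), where the m x m block
    T is lower triangular with a unit diagonal. Solving T p = A m by forward
    substitution uses only the sparse H, never the dense generator matrix."""
    m, n = H.shape
    k = n - m
    A, T = H[:, :k], H[:, k:]
    s = A.dot(msg) % 2
    p = np.zeros(m, dtype=int)
    for i in range(m):
        # T[i,i] = 1, so p[i] is determined by earlier parity bits (mod 2)
        p[i] = (s[i] + T[i, :i].dot(p[:i])) % 2
    return np.concatenate([msg, p])
```

    Any codeword produced this way satisfies H·c = 0 (mod 2) by construction, which is the property the decoder relies on.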

  14. Sequence and analysis of chromosome 4 of the plant Arabidopsis thaliana.

    PubMed

    Mayer, K; Schüller, C; Wambutt, R; Murphy, G; Volckaert, G; Pohl, T; Düsterhöft, A; Stiekema, W; Entian, K D; Terryn, N; Harris, B; Ansorge, W; Brandt, P; Grivell, L; Rieger, M; Weichselgartner, M; de Simone, V; Obermaier, B; Mache, R; Müller, M; Kreis, M; Delseny, M; Puigdomenech, P; Watson, M; Schmidtheini, T; Reichert, B; Portatelle, D; Perez-Alonso, M; Boutry, M; Bancroft, I; Vos, P; Hoheisel, J; Zimmermann, W; Wedler, H; Ridley, P; Langham, S A; McCullagh, B; Bilham, L; Robben, J; Van der Schueren, J; Grymonprez, B; Chuang, Y J; Vandenbussche, F; Braeken, M; Weltjens, I; Voet, M; Bastiaens, I; Aert, R; Defoor, E; Weitzenegger, T; Bothe, G; Ramsperger, U; Hilbert, H; Braun, M; Holzer, E; Brandt, A; Peters, S; van Staveren, M; Dirske, W; Mooijman, P; Klein Lankhorst, R; Rose, M; Hauf, J; Kötter, P; Berneiser, S; Hempel, S; Feldpausch, M; Lamberth, S; Van den Daele, H; De Keyser, A; Buysshaert, C; Gielen, J; Villarroel, R; De Clercq, R; Van Montagu, M; Rogers, J; Cronin, A; Quail, M; Bray-Allen, S; Clark, L; Doggett, J; Hall, S; Kay, M; Lennard, N; McLay, K; Mayes, R; Pettett, A; Rajandream, M A; Lyne, M; Benes, V; Rechmann, S; Borkova, D; Blöcker, H; Scharfe, M; Grimm, M; Löhnert, T H; Dose, S; de Haan, M; Maarse, A; Schäfer, M; Müller-Auer, S; Gabel, C; Fuchs, M; Fartmann, B; Granderath, K; Dauner, D; Herzl, A; Neumann, S; Argiriou, A; Vitale, D; Liguori, R; Piravandi, E; Massenet, O; Quigley, F; Clabauld, G; Mündlein, A; Felber, R; Schnabl, S; Hiller, R; Schmidt, W; Lecharny, A; Aubourg, S; Chefdor, F; Cooke, R; Berger, C; Montfort, A; Casacuberta, E; Gibbons, T; Weber, N; Vandenbol, M; Bargues, M; Terol, J; Torres, A; Perez-Perez, A; Purnelle, B; Bent, E; Johnson, S; Tacon, D; Jesse, T; Heijnen, L; Schwarz, S; Scholler, P; Heber, S; Francs, P; Bielke, C; Frishman, D; Haase, D; Lemcke, K; Mewes, H W; Stocker, S; Zaccaria, P; Bevan, M; Wilson, R K; de la Bastide, M; Habermann, K; Parnell, L; Dedhia, N; Gnoj, L; Schutz, K; Huang, E; Spiegel, L; Sehkon, M; 
Murray, J; Sheet, P; Cordes, M; Abu-Threideh, J; Stoneking, T; Kalicki, J; Graves, T; Harmon, G; Edwards, J; Latreille, P; Courtney, L; Cloud, J; Abbott, A; Scott, K; Johnson, D; Minx, P; Bentley, D; Fulton, B; Miller, N; Greco, T; Kemp, K; Kramer, J; Fulton, L; Mardis, E; Dante, M; Pepin, K; Hillier, L; Nelson, J; Spieth, J; Ryan, E; Andrews, S; Geisel, C; Layman, D; Du, H; Ali, J; Berghoff, A; Jones, K; Drone, K; Cotton, M; Joshu, C; Antonoiu, B; Zidanic, M; Strong, C; Sun, H; Lamar, B; Yordan, C; Ma, P; Zhong, J; Preston, R; Vil, D; Shekher, M; Matero, A; Shah, R; Swaby, I K; O'Shaughnessy, A; Rodriguez, M; Hoffmann, J; Till, S; Granat, S; Shohdy, N; Hasegawa, A; Hameed, A; Lodhi, M; Johnson, A; Chen, E; Marra, M; Martienssen, R; McCombie, W R

    1999-12-16

    The higher plant Arabidopsis thaliana (Arabidopsis) is an important model for identifying plant genes and determining their function. To assist biological investigations and to define chromosome structure, a coordinated effort to sequence the Arabidopsis genome was initiated in late 1996. Here we report one of the first milestones of this project, the sequence of chromosome 4. Analysis of 17.38 megabases of unique sequence, representing about 17% of the genome, reveals 3,744 protein coding genes, 81 transfer RNAs and numerous repeat elements. Heterochromatic regions surrounding the putative centromere, which has not yet been completely sequenced, are characterized by an increased frequency of a variety of repeats, new repeats, reduced recombination, lowered gene density and lowered gene expression. Roughly 60% of the predicted protein-coding genes have been functionally characterized on the basis of their homology to known genes. Many genes encode predicted proteins that are homologous to human and Caenorhabditis elegans proteins.

  15. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrarily sized) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and any particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
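
    The conflict described above arises because many particles in the same cell write to the same grid node. A minimal CPU-side sketch of the conflict-free alternative is a segmented reduction over particles keyed by cell index, standing in for the GPU sort-plus-segmented-sum; this is an illustration, not jasmine's actual kernel.

```python
import numpy as np

def deposit(cell_index, charge, n_cells):
    """Accumulate per-particle charge onto grid cells without atomic adds:
    np.bincount performs a segmented sum keyed by cell index, the same
    access pattern a GPU realizes with a particle sort plus segmented
    reduction, so no two threads ever race on one grid node."""
    return np.bincount(cell_index, weights=charge, minlength=n_cells)
```

    Grouping particles by cell first is exactly what makes the scheme insensitive to how many particles share a cell, which is the regime where atomic adds degrade worst.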

  16. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude, Essink G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion. © 2003 Elsevier B.V. All rights reserved.

  17. Characterization of the thermal conductivity for Advanced Toughened Uni-piece Fibrous Insulations

    NASA Technical Reports Server (NTRS)

    Stewart, David A.; Leiser, Daniel B.

    1993-01-01

    Advanced Toughened Uni-piece Fibrous Insulations (TUFI) are discussed in terms of their thermal response to an arc-jet air stream. A modification of the existing Ames thermal conductivity program to predict the thermal response of these functionally graded materials is described. The modified program was used to evaluate the effect of density, surface porosity, and the density gradient through the TUFI materials on the thermal response of these insulations. Predictions using a finite-difference code and calculated thermal conductivity values from the modified program were compared with in-depth temperature measurements taken from TUFI insulations during short exposures to arc-jet hypersonic air streams.

  18. Investigation of structural, electronic, elastic and optical properties of Cd1-x-yZnxHgyTe alloys

    NASA Astrophysics Data System (ADS)

    Tamer, M.

    2016-06-01

    The structural, optical and electronic properties and elastic constants of Cd1-x-yZnxHgyTe alloys have been studied with the commercial code CASTEP, based on density functional theory. The generalized gradient approximation and the local density approximation were used for the exchange-correlation functional. From the calculations, the elastic constants, bulk modulus, band gap and Fermi energy were obtained, and the dielectric constants and refractive index were derived through the Kramers-Kronig relations. In addition, X-ray measurements yielded the elastic constants and confirmed Vegard's law. The theoretical and experimental results are in good agreement.
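Vegard's law, invoked above, is a linear interpolation of the alloy lattice parameter between the binary end members. A minimal sketch for the quaternary composition, using approximate literature lattice constants as illustrative defaults (assumptions, not values from the paper):

```python
def vegard_lattice(x, y, a_CdTe=6.482, a_ZnTe=6.103, a_HgTe=6.460):
    """Vegard's-law lattice parameter of Cd(1-x-y)Zn(x)Hg(y)Te.

    x, y are the Zn and Hg fractions; the default lattice constants
    (angstroms) are approximate literature values, used here only
    for illustration."""
    return (1.0 - x - y) * a_CdTe + x * a_ZnTe + y * a_HgTe

a = vegard_lattice(0.2, 0.1)   # -> about 6.404 angstroms
```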

  19. The first principles study of elastic and thermodynamic properties of ZnSe

    NASA Astrophysics Data System (ADS)

    Khatta, Swati; Kaur, Veerpal; Tripathi, S. K.; Prakash, Satya

    2018-05-01

    The elastic and thermodynamic properties of ZnSe are investigated using the thermo_pw package implemented in the Quantum ESPRESSO code within the framework of density functional theory. The pseudopotential method within the local density approximation is used for the exchange-correlation potential. The physical parameters of ZnSe are calculated: bulk modulus, shear modulus, anisotropy factor, Young's modulus, Poisson's ratio, Pugh's ratio and Frantsevich's ratio. The sound velocity and Debye temperature are obtained from the elastic constants. The Helmholtz free energy and internal energy of ZnSe are also calculated. The results are compared with available theoretical calculations and experimental data.
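The polycrystalline moduli listed above follow from the cubic elastic constants through the standard Voigt-Reuss-Hill relations. A minimal sketch (the numerical constants in the usage example are illustrative assumptions, not the paper's results):

```python
def cubic_moduli(C11, C12, C44):
    """Voigt-Reuss-Hill polycrystalline averages for a cubic crystal.

    Input: the three independent cubic elastic constants (GPa).
    Output: bulk modulus B, Hill shear modulus G, Young's modulus E,
    Poisson's ratio nu, and the Zener anisotropy factor A."""
    Cp = C11 - C12
    B = (C11 + 2.0 * C12) / 3.0                    # Voigt = Reuss for cubic
    G_V = (Cp + 3.0 * C44) / 5.0                   # Voigt (upper) bound
    G_R = 5.0 * C44 * Cp / (4.0 * C44 + 3.0 * Cp)  # Reuss (lower) bound
    G = 0.5 * (G_V + G_R)                          # Hill average
    E = 9.0 * B * G / (3.0 * B + G)
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))
    A = 2.0 * C44 / Cp
    return B, G, E, nu, A

# Illustrative ZnSe-like constants (GPa); not the paper's values.
B, G, E, nu, A = cubic_moduli(85.9, 50.6, 40.6)
```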

  20. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  1. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in GPAW and other density functional theory codes through Libxc, version 3.0.0.
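For orientation, the bootstrap 0.632 estimator that the authors generalize combines the training error and the out-of-bag error with fixed weights. A minimal sketch of the classic (non-hierarchical, arithmetic-mean) form, with a toy constant-mean predictor; the hierarchical sampling and geometric-mean variant of the paper is not reproduced here:

```python
import numpy as np

def bootstrap_632(X, y, fit, loss, n_boot=200, seed=None):
    """Classic 0.632 bootstrap estimate of prediction error:
    err_632 = 0.368 * err_train + 0.632 * err_out_of_bag."""
    rng = np.random.default_rng(seed)
    n = len(y)
    model = fit(X, y)
    err_train = loss(y, model(X))
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample with replacement
        oob = np.setdiff1d(np.arange(n), idx)   # points the resample missed
        if oob.size:
            m = fit(X[idx], y[idx])
            oob_errs.append(loss(y[oob], m(X[oob])))
    return 0.368 * err_train + 0.632 * np.mean(oob_errs)

# Toy usage: a constant-mean predictor under squared-error loss.
fit = lambda X, y: (lambda Z: np.full(len(Z), y.mean()))
loss = lambda y, p: float(np.mean((y - p) ** 2))
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 1)), rng.normal(size=50)
err = bootstrap_632(X, y, fit, loss, seed=1)
```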

  2. mBEEF-vdW: Robust fitting of error estimation density functionals

    DOE PAGES

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; ...

    2016-06-15

    Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in GPAW and other density functional theory codes through Libxc, version 3.0.0.

  3. a Case Study: Exploring Industrial Agglomeration of Manufacturing Industries in Shanghai Using Duranton and Overman's K-Density Function

    NASA Astrophysics Data System (ADS)

    Tian, S.; Wang, J.; Gui, Z.; Wu, H.; Wang, Y.

    2017-09-01

    The issue of scale economies and industrial agglomeration has received wide academic and policy attention, most of it focused on the geographic concentration of industry. This paper adopts a scale-independent, distance-based measure, the K-density function, also known as the Duranton and Overman (DO) index, to study the localization of manufacturing industries in Shanghai, which is among the most representative economic development zones in China and East Asia. The results indicate that industry shows a growing tendency towards localization, with varying spatial distribution patterns at different distances. The industry class also has a significant influence on the concentration pattern. In addition, the method has been implemented and published on GeoCommerce, a visualization and analysis portal for industrial big data, to provide geoprocessing and spatial decision support.
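The core of the DO index is a kernel-smoothed density of all pairwise distances between plants of an industry, which is then compared against counterfactual location draws. A minimal sketch of the density step only (Gaussian kernel; uniform random plant locations stand in for real data, and the counterfactual comparison is omitted):

```python
import numpy as np

def k_density(points, d_grid, bandwidth):
    """Gaussian-kernel density of pairwise inter-plant distances, the
    core quantity of the Duranton-Overman K-density.

    points: (n, 2) plant coordinates; d_grid: evaluation distances;
    bandwidth: kernel bandwidth in the same units."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    pairs = d[np.triu_indices(len(points), k=1)]   # unique plant pairs
    u = (d_grid[:, None] - pairs[None, :]) / bandwidth
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return kern.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(100, 2))        # stand-in plant locations
grid = np.linspace(0.0, 15.0, 50)
K = k_density(pts, grid, bandwidth=1.0)            # density over distance
```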

  4. The Distribution and Annihilation of Dark Matter Around Black Holes

    NASA Technical Reports Server (NTRS)

    Schnittman, Jeremy D.

    2015-01-01

    We use a Monte Carlo code to calculate the geodesic orbits of test particles around Kerr black holes, generating a distribution function of both bound and unbound populations of dark matter (DM) particles. From this distribution function, we calculate annihilation rates and observable gamma-ray spectra for a few simple DM models. The features of these spectra are sensitive to the black hole spin, observer inclination, and detailed properties of the DM annihilation cross-section and density profile. Confirming earlier analytic work, we find that for rapidly spinning black holes, the collisional Penrose process can reach efficiencies exceeding 600%, leading to a high-energy tail in the annihilation spectrum. The high particle density and large proper volume of the region immediately surrounding the horizon ensures that the observed flux from these extreme events is non-negligible.

  5. The detailed balance requirement and general empirical formalisms for continuum absorption

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.

    1994-01-01

    Two general empirical formalisms are presented for the spectral density which take into account the deviations from the Lorentz line shape in the wing regions of resonance lines. These formalisms satisfy the detailed balance requirement. Empirical line shape functions, which are essential to provide the continuum absorption at different temperatures in various frequency regions for atmospheric transmission codes, can be obtained by fitting to experimental data.
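For reference, the detailed balance requirement on the spectral density has the standard textbook form (quoted for orientation; the paper's specific empirical formalisms are not reproduced here):

```latex
S(-\omega) = e^{-\hbar\omega / k_B T}\, S(\omega)
```

A line shape that is symmetric about line center, such as the Lorentz profile, cannot satisfy this relation far into the wings, which is one motivation for the corrected empirical line shape functions described above.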

  6. CoFFEE: Corrections For Formation Energy and Eigenvalues for charged defect simulations

    NASA Astrophysics Data System (ADS)

    Naik, Mit H.; Jain, Manish

    2018-05-01

    Charged point defects in materials are widely studied using Density Functional Theory (DFT) packages with periodic boundary conditions. The formation energy and defect level computed from these simulations need to be corrected to remove the contributions from the spurious long-range interaction between the defect and its periodic images. To this end, the CoFFEE code implements the Freysoldt-Neugebauer-Van de Walle (FNV) correction scheme. The corrections can be applied to charged defects in a complete range of material shapes and sizes: bulk, slab (or two-dimensional), wire and nanoribbon geometries. The code is written in Python and features MPI parallelization, with the slow steps optimized using the Cython package.
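For orientation, the quantity being corrected enters the standard formation-energy expression below; in the isotropic bulk limit the leading image-charge term reduces to the familiar Makov-Payne form (shown only as an approximation; the FNV scheme implemented in CoFFEE computes the correction from a model charge distribution plus a potential-alignment term):

```latex
E_f[X^q] = E_{\mathrm{tot}}[X^q] - E_{\mathrm{tot}}[\mathrm{bulk}]
  - \sum_i n_i \mu_i + q E_F + E_{\mathrm{corr}}(q),
\qquad
E_{\mathrm{corr}}(q) \approx \frac{q^2 \alpha_M}{2 \varepsilon L}
```

Here \(\alpha_M\) is the Madelung constant of the supercell lattice, \(\varepsilon\) the dielectric constant, and \(L\) the linear supercell size; the correction therefore decays only as \(1/L\), which is why it cannot be ignored in practical cell sizes.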

  7. Design criteria for noncoherent Gaussian channels with MFSK signaling and coding

    NASA Technical Reports Server (NTRS)

    Butman, S. A.; Levitt, B. K.; Bar-David, I.; Lyon, R. F.; Klass, M. J.

    1976-01-01

    This paper presents data and criteria to assess and guide the design of modems for coded noncoherent communication systems subject to practical system constraints of power, bandwidth, noise spectral density, coherence time, and number of orthogonal signals M. Three basic receiver types are analyzed for the noncoherent multifrequency-shift keying (MFSK) additive white Gaussian noise channel: hard decision, unquantized (optimum), and quantized (soft decision). Channel capacity and computational cutoff rate are computed for each type and presented as functions of the predetection signal-to-noise ratio and the number of orthogonal signals. This relates the channel constraints of power, bandwidth, coherence time, and noise power to the optimum choice of signal duration and signal number.
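As an illustration of the hard-decision case, the channel seen after noncoherent MFSK detection with symbol errors spread uniformly over the wrong symbols is an M-ary symmetric channel, whose capacity has a closed form. A sketch (this is the standard M-ary symmetric channel capacity, not a formula reproduced from the paper):

```python
import numpy as np

def mary_symmetric_capacity(M, p):
    """Capacity (bits/channel use) of an M-ary symmetric channel with
    symbol error probability p spread uniformly over the M-1 wrong
    symbols -- the channel after hard-decision MFSK detection."""
    if p == 0.0:
        return float(np.log2(M))
    return float(np.log2(M) + (1.0 - p) * np.log2(1.0 - p)
                 + p * np.log2(p / (M - 1)))

C = mary_symmetric_capacity(8, 0.1)   # -> about 2.25 bits/symbol
```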

  8. DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei

    NASA Astrophysics Data System (ADS)

    Gontchar, I. I.; Chushnyakova, M. V.

    2016-09-01

    This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM), and in particular to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. We have also added an option for fitting the DFM potential with the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of the colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified.
    Catalog identifier: AEFH_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 7211
    No. of bytes in distributed program, including test data, etc.: 114404
    Distribution format: tar.gz
    Programming language: C
    Computer: PC and Mac
    Operating system: Windows XP and higher, MacOS, Unix/Linux
    Memory required to execute with typical data: below 10 Mbyte
    Classification: 17.9
    Catalog identifier of previous version: AEFH_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 181 (2010) 168
    Does the new version supersede the previous version?: Yes
    Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center-of-mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. The dependence of the barrier parameters on the characteristics of the effective NN forces (e.g. the range of the exchange part of the nuclear term) can be investigated.
    Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied.
    Typical running time: less than 1 minute.
    Reason for new version: Many users asked us how to implement their own density distributions in DFMSPH; this option has now been added. We also found that the calculated double-folding potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile, so this option has been added as well.
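The Woods-Saxon profile used by the code to approximate the potential near the barrier has a simple closed form. A sketch with illustrative parameters (assumptions, not DFMSPH-fitted values):

```python
import numpy as np

def woods_saxon(r, V0, R0, a):
    """Woods-Saxon profile V(r) = -V0 / (1 + exp((r - R0)/a)), one of
    the two analytic forms the code fits near the barrier."""
    return -V0 / (1.0 + np.exp((r - R0) / a))

# Illustrative parameters (not DFMSPH output): depth 60 MeV,
# radius 10 fm, diffuseness 0.65 fm.
r = np.linspace(5.0, 15.0, 101)
V = woods_saxon(r, 60.0, 10.0, 0.65)   # V(R0) = -V0/2 by construction
```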

  9. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    NASA Astrophysics Data System (ADS)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
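The kernel whose cell-volume integral the paper derives analytically is the standard cubic spline (M4) SPH kernel. A sketch of the 3D kernel with a numerical check of its normalization (the analytic Voronoi-cell integral itself is not reproduced here):

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic spline (M4) SPH kernel with support radius 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)   # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Numerical check of normalization: 4*pi * int_0^{2h} W(r) r^2 dr = 1.
h = 1.3
r = np.linspace(0.0, 2.0 * h, 20001)
dr = r[1] - r[0]
norm = 4.0 * np.pi * np.sum(cubic_spline_W(r, h) * r ** 2) * dr
```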

  10. Linear calculations of edge current driven kink modes with BOUT++ code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y.; Lawrence Livermore National Laboratory, Livermore, California 94550

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping the total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes change from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.

  11. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
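For reference, the kinetic energy density that distinguishes meta-GGAs from GGAs has the standard form below (atomic units, orbital occupancies f_i; as described above, ONETEP re-expresses the orbitals in terms of its strictly localized, self-consistently optimized functions):

```latex
\tau(\mathbf{r}) = \tfrac{1}{2} \sum_i f_i \,\bigl|\nabla \psi_i(\mathbf{r})\bigr|^2,
\qquad
E_{xc}^{\mathrm{mGGA}} = \int \epsilon_{xc}\bigl(\rho, \nabla\rho, \tau\bigr)\, \mathrm{d}\mathbf{r}
```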

  12. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  13. The effect of gas physics on the halo mass function

    NASA Astrophysics Data System (ADS)

    Stanek, R.; Rudd, D.; Evrard, A. E.

    2009-03-01

    Cosmological tests based on cluster counts require accurate calibration of the space density of massive haloes, but most calibrations to date have ignored complex gas physics associated with halo baryons. We explore the sensitivity of the halo mass function to baryon physics using two pairs of gas-dynamic simulations that are likely to bracket the true behaviour. Each pair consists of a baseline model involving only gravity and shock heating, and a refined physics model aimed at reproducing the observed scaling of the hot, intracluster gas phase. One pair consists of billion-particle resimulations of the original 500h-1Mpc Millennium Simulation of Springel et al., run with the smoothed particle hydrodynamics (SPH) code GADGET-2 and using a refined physics treatment approximated by pre-heating (PH) at high redshift. The other pair are high-resolution simulations from the adaptive-mesh refinement code ART, for which the refined treatment includes cooling, star formation and supernova feedback (CSF). We find that, although the mass functions of the gravity-only (GO) treatments are consistent with the recent calibration of Tinker et al. (2008), both pairs of simulations with refined baryon physics show significant deviations. Relative to the GO case, the masses of ~1014h-1Msolar haloes in the PH and CSF treatments are shifted by the averages of -15 +/- 1 and +16 +/- 2 per cent, respectively. These mass shifts cause ~30 per cent deviations in number density relative to the Tinker function, significantly larger than the 5 per cent statistical uncertainty of that calibration.

  14. Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software

    NASA Astrophysics Data System (ADS)

    Hellekson, Ron; Campbell, Scott

    1988-06-01

    Many optical systems face demanding requirements to package the system in a small three-dimensional space. Computer graphics tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra-Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. Using a specially written program which interfaces with a computer-aided design system, we have simulated many of the functions of this complex optical system. In this paper we illustrate how a recent version of the scanner was designed. We discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of the collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.

  15. Use of Fluka to Create Dose Calculations

    NASA Technical Reports Server (NTRS)

    Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John

    2012-01-01

    Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to perform calculations with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon were simulated. Dose, dose equivalent, and fluence as functions of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
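The lookup-table idea can be illustrated with a one-dimensional interpolation over areal density; the tabulated values below are placeholders, not FLUKA results:

```python
import numpy as np

def dose_from_table(table_depth, table_dose, depth):
    """Linear interpolation in a dose-vs-areal-density lookup table
    (depths in g/cm^2, ascending)."""
    return float(np.interp(depth, table_depth, table_dose))

# Placeholder table; real entries would come from the Monte Carlo runs.
table_depth = np.array([0.0, 10.0, 30.0, 100.0])
table_dose = np.array([1.00, 0.80, 0.55, 0.20])      # relative dose
d = dose_from_table(table_depth, table_dose, 20.0)   # -> 0.675
```

A production parameterization would interpolate over particle species, energy, and angle as well, but the principle is the same: pay the Monte Carlo cost once, then query the table.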

  16. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.

  17. Study of transmission function and electronic transport in one dimensional silver nanowire: Ab-initio method using density functional theory (DFT)

    NASA Astrophysics Data System (ADS)

    Thakur, Anil; Kashyap, Rajinder

    2018-05-01

    Single-nanowire electrode devices have applications in a variety of fields, from information technology to solar energy. Silver nanowires, made in an aqueous chemical reduction process, can be reacted with a gold salt to create bimetallic nanowires; they can also be used as electrodes in batteries and have many other applications. In this paper we investigate the structural and electronic transport properties of an Ag nanowire using density functional theory (DFT) with the SIESTA code. First, an optimized geometry for the Ag nanowire is obtained from DFT calculations; the transport properties are then obtained using the NEGF approach. The SIESTA and TranSIESTA simulation codes are used for these calculations, respectively. The electrodes are chosen to be the same as the central region where transport is studied, eliminating current quantization effects due to the contacts and focusing the electronic transport study on the intrinsic structure of the material. By varying the chemical potential in the electrode regions, an I-V curve is traced which is in agreement with the predicted behaviour. The calculated bulk properties of Ag are in agreement with experimental values, which makes silver nanowires promising materials as bridging pieces in nanoelectronics. The transmission coefficient and I-V characteristic of the Ag nanowire indicate that it can be used as an electrode device.
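The NEGF transport step described above ultimately evaluates a Landauer-type current integral over the transmission function. A minimal sketch with an idealized flat single-channel transmission (illustrative, not the SIESTA/TranSIESTA output):

```python
import numpy as np

def landauer_current(E, Tr, mu_L, mu_R, kT=0.025):
    """Landauer current integral I ~ int T(E) [f_L(E) - f_R(E)] dE,
    evaluated by the trapezoidal rule; result is in units of (2e/h) eV.

    E: energy grid (eV); Tr: transmission T(E) on that grid;
    mu_L, mu_R: electrode chemical potentials (eV); kT: thermal
    energy (eV)."""
    f = lambda mu: 1.0 / (1.0 + np.exp((E - mu) / kT))
    y = Tr * (f(mu_L) - f(mu_R))
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))

E = np.linspace(-1.0, 1.0, 2001)
Tr = np.ones_like(E)            # ideal single-channel transmission
I = landauer_current(E, Tr, mu_L=0.2, mu_R=-0.2)   # -> about 0.4
```

Sweeping the bias (mu_L - mu_R) and repeating the integral traces out the I-V curve, which is conceptually what the varying-chemical-potential procedure above does.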

  18. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We show that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
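The 4-cycle condition mentioned above has a compact matrix test: two check rows that share more than one variable node close a length-4 cycle in the Tanner graph. A minimal sketch (toy matrices for illustration, not a QC-LDPC construction):

```python
import numpy as np

def has_four_cycles(H):
    """True if the Tanner graph of parity-check matrix H contains a
    length-4 cycle, i.e. two rows share more than one common 1-column."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap > 1).any())

# Toy matrices (illustrative, not a QC-LDPC construction).
H_bad = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0]])   # rows share columns 0 and 1 -> 4-cycle
H_ok = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1]])    # any two rows share at most one column
```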

  19. [Population density, age distribution and urbanisation as factors influencing the frequency of home visits--an analysis for Mecklenburg-West Pomerania].

    PubMed

    Heymann, R; Weitmann, K; Weiss, S; Thierfelder, D; Flessa, S; Hoffmann, W

    2009-07-01

    This study examines and compares the frequency of home visits by general practitioners in regions with lower and with higher population density. The discussion centres on the hypothesis that the number of home visits in rural and remote areas with a low population density is higher than in urbanised areas with a higher population density. The average age of the population has been considered in both cases. The communities of Mecklenburg-Western Pomerania were aggregated into postal code regions, on which the analysis is based. The average frequency of home visits per 100 inhabitants/km2 was calculated via a bivariate, linear regression model with the population density and the average age of the postal code region as independent variables. The results are based on billing data for the year 2006 as provided by the Association of Statutory Health Insurance Physicians of Mecklenburg-Western Pomerania. In a second step, a variable clustering the postal codes of urbanised areas was added to a multivariate model. The hypothesis of a negative correlation between the frequency of home visits and the population density of the areas examined cannot be confirmed for Mecklenburg-Western Pomerania. Following the dichotomisation of the postal code regions into sparsely and densely populated areas, only the very sparsely populated postal code regions (less than 100 inhabitants/km2) show a tendency towards a higher frequency of home visits. Overall, the frequency of home visits in sparsely populated postal code regions is 28.9% higher than in the densely populated postal code regions (more than 100 inhabitants/km2), although the number of general practitioners is approximately the same in both groups. In part this association seems to be explained by a positive correlation between the average age in the individual postal code regions and the number of home visits carried out in the area.
As calculated on the basis of the data at hand, only the very sparsely populated areas with a still gradually decreasing population show a tendency towards a higher frequency of home visits. According to the data of 2006, the number of home visits remains high in sparsely populated areas. It may increase in the near future as the number of general practitioners in these areas will gradually decrease while the number of immobile and older inhabitants will increase.

  20. A photoemission moments model using density functional and transfer matrix methods applied to coating layers on surfaces: Theory

    NASA Astrophysics Data System (ADS)

    Jensen, Kevin L.; Finkenstadt, Daniel; Shabaev, Andrew; Lambrakos, Samuel G.; Moody, Nathan A.; Petillo, John J.; Yamaguchi, Hisato; Liu, Fangze

    2018-01-01

Recent experimental measurements of a bulk material covered with a small number of graphene layers reported by Yamaguchi et al. [NPJ 2D Mater. Appl. 1, 12 (2017)] (on bialkali) and Liu et al. [Appl. Phys. Lett. 110, 041607 (2017)] (on copper) and the needs of emission models in beam optics codes have led to substantial changes in a Moments model of photoemission. The changes account for (i) a barrier profile and density of states factor based on density functional theory (DFT) evaluations, (ii) a Drude-Lorentz model of the optical constants and laser penetration depth, and (iii) a transmission probability evaluated by an Airy Transfer Matrix Approach. Importantly, the DFT results lead to a surface barrier profile of a shape similar to both resonant barriers and reflectionless wells: the associated quantum mechanical transmission probabilities are shown to be comparable to those recently required to enable the Moments (and Three Step) model to match experimental data, but for reasons very different from the conventional assumption that a barrier is responsible. The substantial modifications of the Moments model components, motivated by computational materials methods, are developed. The results prepare the Moments model for use in treating heterostructures and discrete energy level systems (e.g., quantum dots) proposed for decoupling the opposing metrics of performance that undermine the performance of advanced light sources like the x-ray Free Electron Laser. The consequences of the modified components on quantum yield, emittance, and emission models needed by beam optics codes are discussed.

  1. Development And Characterization Of A Liner-On-Target Injector For Staged Z-Pinch Experiments

    NASA Astrophysics Data System (ADS)

    Valenzuela, J. C.; Conti, F.; Krasheninnikov, I.; Narkis, J.; Beg, F.; Wessel, F. J.; Rahman, H. U.

    2016-10-01

    We present the design and optimization of a liner-on-target injector for Staged Z-pinch experiments. The injector is composed of an annular high atomic number (e.g. Ar, Kr) gas-puff and an on-axis plasma gun that delivers the ionized deuterium target. The liner nozzle injector has been carefully studied using Computational Fluid Dynamics (CFD) simulations to produce a highly collimated 1 cm radius gas profile that satisfies the theoretical requirement for best performance on the 1 MA Zebra current driver. The CFD simulations produce density profiles as a function of the nozzle shape and gas. These profiles are initialized in the MHD MACH2 code to find the optimal liner density for a stable, uniform implosion. We use a simple Snowplow model to study the plasma sheath acceleration in a coaxial plasma gun to help us properly design the target injector. We have performed line-integrated density measurements using a CW He-Ne laser to characterize the liner gas and the plasma gun density as a function of time. The measurements are compared with models and calculations and benchmarked accordingly. Advanced Research Projects Agency - Energy, DE-AR0000569.

  2. Stochastic density functional theory at finite temperatures

    NASA Astrophysics Data System (ADS)

    Cytter, Yael; Rabani, Eran; Neuhauser, Daniel; Baer, Roi

    2018-03-01

Simulations in the warm dense matter regime using finite temperature Kohn-Sham density functional theory (FT-KS-DFT), while frequently used, are computationally expensive due to the partial occupation of a very large number of high-energy KS eigenstates which are obtained from subspace diagonalization. We have developed a stochastic method for applying FT-KS-DFT that overcomes the bottleneck of calculating the occupied KS orbitals by directly obtaining the density from the KS Hamiltonian. The proposed algorithm scales as O(NT⁻¹) and is compared with the O(N³T³) high-temperature limit scaling of the deterministic approach.
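
    The core idea of bypassing the occupied orbitals can be illustrated with a stochastic trace estimator. The toy Python sketch below is not the authors' code: it uses a small tridiagonal stand-in for the KS Hamiltonian and, for brevity, applies the Fermi function through an explicit diagonalization, precisely the step a production stochastic DFT code replaces with a Chebyshev expansion acting on vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# small tridiagonal stand-in for a KS Hamiltonian (arbitrary energy units)
n = 400
H = (np.diag(np.linspace(-2.0, 2.0, n))
     + np.diag(0.1 * np.ones(n - 1), 1)
     + np.diag(0.1 * np.ones(n - 1), -1))

mu, kT = 0.0, 0.1
fermi = lambda e: 1.0 / (1.0 + np.exp((e - mu) / kT))

# reference: deterministic evaluation through full diagonalization,
# exactly the step stochastic DFT is designed to avoid
w, U = np.linalg.eigh(H)
exact = fermi(w).sum()                 # electron number, Tr f(H)
fH = (U * fermi(w)) @ U.T              # f(H) as a matrix (for brevity only;
                                       # production codes expand f in
                                       # Chebyshev polynomials of H)

# Hutchinson estimator:  Tr f(H) ~ (1/R) sum_r chi_r^T f(H) chi_r
R = 2000
chi = rng.choice([-1.0, 1.0], size=(n, R))   # random +/-1 vectors
est = np.einsum("ir,ir->", chi, fH @ chi) / R
```

    The estimator converges as the inverse square root of the number of random vectors, and its self-averaging improves for larger systems and higher temperatures, which is the origin of the favorable scaling quoted above.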

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popovich, P.; Carter, T. A.; Friedman, B.

Numerical simulation of plasma turbulence in the Large Plasma Device (LAPD) [W. Gekelman, H. Pfister, Z. Lucky et al., Rev. Sci. Instrum. 62, 2875 (1991)] is presented. The model, implemented in the BOUndary Turbulence (BOUT) code [M. Umansky, X. Xu, B. Dudson et al., Comput. Phys. Commun. 180, 887 (2009)], includes three-dimensional (3D) collisional fluid equations for plasma density, electron parallel momentum, and current continuity, as well as the effects of ion-neutral collisions. In nonlinear simulations using measured LAPD density profiles but assuming a constant temperature profile for simplicity, self-consistent evolution of instabilities and nonlinearly generated zonal flows results in a saturated turbulent state. Comparisons of these simulations with measurements in LAPD plasmas reveal good qualitative and reasonable quantitative agreement, in particular in the frequency spectrum, spatial correlation, and amplitude probability distribution function of density fluctuations. For comparison with LAPD measurements, the plasma density profile in simulations is maintained either by direct azimuthal averaging on each time step or by adding a particle source/sink function. The inferred source/sink values are consistent with the estimated ionization source and parallel losses in LAPD. These simulations lay the groundwork for a more comprehensive effort to test fluid turbulence simulation against LAPD data.

  3. C library for topological study of the electronic charge density.

    PubMed

    Vega, David; Aray, Yosslen; Rodríguez, Jesús

    2012-12-05

The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every point in space, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on a multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas were obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were programmed. Because in some crystals the unit cell has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-data files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study in any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
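
    The critical-point search can be sketched in a few lines of Python. The example below is a 2D toy with an invented Gaussian density, and it substitutes central finite differences for the analytic differentiation of the Lagrange interpolating polynomial used by the library; the Newton-Raphson iteration on the interpolated gradient is the same idea.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# model electron density on a regular grid, with a maximum at (0.3, -0.2)
x = np.linspace(-2.0, 2.0, 81)
y = np.linspace(-2.0, 2.0, 81)
X, Y = np.meshgrid(x, y, indexing="ij")
rho = np.exp(-((X - 0.3) ** 2 + (Y + 0.2) ** 2))

# derivative grids by central differences, standing in for the analytic
# differentiation of the Lagrange interpolating polynomial
gx, gy = np.gradient(rho, x, y)
gxx, gxy = np.gradient(gx, x, y)
gyx, gyy = np.gradient(gy, x, y)

grad = [RegularGridInterpolator((x, y), g) for g in (gx, gy)]
hess = [[RegularGridInterpolator((x, y), h) for h in row]
        for row in ((gxx, gxy), (gyx, gyy))]

p = np.array([0.0, 0.0])                 # starting guess
for _ in range(20):                      # Newton-Raphson on grad(rho) = 0
    g = np.array([f([p])[0] for f in grad])
    H = np.array([[f([p])[0] for f in row] for row in hess])
    p = p - np.linalg.solve(H, g)
```

    After a handful of iterations `p` sits at the density maximum to within the grid's finite-difference accuracy; classifying the critical point then amounts to inspecting the signs of the Hessian eigenvalues there.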

  4. Modeling solvation effects in real-space and real-time within density functional approaches

    NASA Astrophysics Data System (ADS)

    Delgado, Alain; Corni, Stefano; Pittalis, Stefano; Rozzi, Carlo Andrea

    2015-10-01

    The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the Octopus code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.
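
    The regularization idea can be made concrete: spreading a point apparent charge into a normalized spherical Gaussian turns the 1/r potential into erf(r/(√2σ))/r, which stays finite at every grid point. A minimal Python sketch (the charge and width values are arbitrary illustrations, not the parameters used in Octopus):

```python
from math import erf, pi, sqrt

def bare_coulomb(q, r):
    """Potential of a point charge: diverges as r -> 0."""
    return q / r

def gaussian_coulomb(q, r, sigma):
    """Potential of the same charge spread into a normalized spherical
    Gaussian of width sigma: finite everywhere, approaching
    q*sqrt(2/pi)/sigma as r -> 0."""
    if r < 1e-12:
        return q * sqrt(2.0 / pi) / sigma
    return q * erf(r / (sqrt(2.0) * sigma)) / r
```

    Far from the cavity surface (r much larger than sigma) the two potentials agree to machine precision, so only the near-singular behavior on the real-space grid is modified.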

  5. Modeling solvation effects in real-space and real-time within density functional approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgado, Alain; Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, Calle 30 # 502, 11300 La Habana; Corni, Stefano

    2015-10-14

The Polarizable Continuum Model (PCM) can be used in conjunction with Density Functional Theory (DFT) and its time-dependent extension (TDDFT) to simulate the electronic and optical properties of molecules and nanoparticles immersed in a dielectric environment, typically liquid solvents. In this contribution, we develop a methodology to account for solvation effects in real-space (and real-time) (TD)DFT calculations. The boundary elements method is used to calculate the solvent reaction potential in terms of the apparent charges that spread over the van der Waals solute surface. In a real-space representation, this potential may exhibit a Coulomb singularity at grid points that are close to the cavity surface. We propose a simple approach to regularize such singularity by using a set of spherical Gaussian functions to distribute the apparent charges. We have implemented the proposed method in the OCTOPUS code and present results for the solvation free energies and solvatochromic shifts for a representative set of organic molecules in water.

  6. Ab initio calculation of resonant Raman intensities of transition metal dichalcogenides

    NASA Astrophysics Data System (ADS)

    Miranda, Henrique; Reichardt, Sven; Molina-Sanchez, Alejandro; Wirtz, Ludger

Raman spectroscopy is used to characterize optical and vibrational properties of materials. Its computational simulation is important for the interpretation of experimental results. Two approaches are the bond polarizability model and density functional perturbation theory. However, both are known to not capture resonance effects. These resonances and quantum interference effects are important to correctly reproduce the intensities as a function of laser energy as, e.g., reported for the case of multi-layer MoTe2 [1]. We present two fully ab initio approaches that overcome this limitation. In the first, we calculate finite-difference derivatives of the dielectric susceptibility with respect to the phonon displacements [2]. In the second, we calculate electron-light and electron-phonon matrix elements from density functional theory and use them to evaluate expressions for the Raman intensity derived from time-dependent perturbation theory. These expressions are implemented in a computer code that performs the calculations as a post-processing step. We compare both methods and study the case of triple-layer MoTe2. Luxembourg National Research Fund (FNR).

  7. The impacts of marijuana dispensary density and neighborhood ecology on marijuana abuse and dependence

    PubMed Central

    Mair, Christina; Freisthler, Bridget; Ponicki, William R.; Gaidus, Andrew

    2015-01-01

    Background As an increasing number of states liberalize cannabis use and develop laws and local policies, it is essential to better understand the impacts of neighborhood ecology and marijuana dispensary density on marijuana use, abuse, and dependence. We investigated associations between marijuana abuse/dependence hospitalizations and community demographic and environmental conditions from 2001–2012 in California, as well as cross-sectional associations between local and adjacent marijuana dispensary densities and marijuana hospitalizations. Methods We analyzed panel population data relating hospitalizations coded for marijuana abuse or dependence and assigned to residential ZIP codes in California from 2001 through 2012 (20,219 space-time units) to ZIP code demographic and ecological characteristics. Bayesian space-time misalignment models were used to account for spatial variations in geographic unit definitions over time, while also accounting for spatial autocorrelation using conditional autoregressive priors. We also analyzed cross-sectional associations between marijuana abuse/dependence and the density of dispensaries in local and spatially adjacent ZIP codes in 2012. Results An additional one dispensary per square mile in a ZIP code was cross-sectionally associated with a 6.8% increase in the number of marijuana hospitalizations (95% credible interval 1.033, 1.105) with a marijuana abuse/dependence code. Other local characteristics, such as the median household income and age and racial/ethnic distributions, were associated with marijuana hospitalizations in cross-sectional and panel analyses. Conclusions Prevention and intervention programs for marijuana abuse and dependence may be particularly essential in areas of concentrated disadvantage. Policy makers may want to consider regulations that limit the density of dispensaries. PMID:26154479

  8. Documentation of a numerical code for the simulation of variable density ground-water flow in three dimensions

    USGS Publications Warehouse

    Kuiper, L.K.

    1985-01-01

    A numerical code is documented for the simulation of variable density time dependent groundwater flow in three dimensions. The groundwater density, although variable with distance, is assumed to be constant in time. The Integrated Finite Difference grid elements in the code follow the geologic strata in the modeled area. If appropriate, the determination of hydraulic head in confining beds can be deleted to decrease computation time. The strongly implicit procedure (SIP), successive over-relaxation (SOR), and eight different preconditioned conjugate gradient (PCG) methods are used to solve the approximating equations. The use of the computer program that performs the calculations in the numerical code is emphasized. Detailed instructions are given for using the computer program, including input data formats. An example simulation and the Fortran listing of the program are included. (USGS)
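
    Of the solvers listed, SOR is the simplest to sketch. The Python fragment below is a generic illustration on a model Poisson problem, not the documented Fortran code: it shows the characteristic over-relaxed Gauss-Seidel sweep, with the relaxation factor, grid, and boundary conditions chosen arbitrarily for the demonstration.

```python
import numpy as np

def sor(rhs, h, omega=1.7, tol=1e-8, max_iter=10000):
    """Successive over-relaxation for the model problem lap(u) = rhs on a
    square grid with spacing h and zero Dirichlet boundary values."""
    u = np.zeros_like(rhs)
    n, m = rhs.shape
    for _ in range(max_iter):
        max_diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1]
                             - h * h * rhs[i, j])        # Gauss-Seidel value
                new = (1.0 - omega) * u[i, j] + omega * gs
                max_diff = max(max_diff, abs(new - u[i, j]))
                u[i, j] = new
        if max_diff < tol:
            break
    return u
```

    Choosing 1 < omega < 2 accelerates plain Gauss-Seidel; the documented code additionally offers SIP and eight PCG variants for the same class of linear systems.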

  9. Analysis of density effects in plasmas and their influence on electron-impact cross sections

    NASA Astrophysics Data System (ADS)

    Belkhiri, M.; Poirier, M.

    2014-12-01

    Density effects in plasmas are analyzed using a Thomas-Fermi approach for free electrons. First, scaling properties are determined for the free-electron potential and density. For hydrogen-like ions, the first two terms of an analytical expansion of this potential as a function of the plasma coupling parameter are obtained. In such ions, from these properties and numerical calculations, a simple analytical fit is proposed for the plasma potential, which holds for any electron density, temperature, and atomic number, at least assuming that Maxwell-Boltzmann statistics is applicable. This allows one to analyze perturbatively the influence of the plasma potential on energies, wave functions, transition rates, and electron-impact collision rates for single-electron ions. Second, plasmas with an arbitrary charge state are considered, using a modified version of the Flexible Atomic Code (FAC) package with a plasma potential based on a Thomas-Fermi approach. Various methods for the collision cross-section calculations are reviewed. The influence of plasma density on these cross sections is analyzed in detail. Moreover, it is demonstrated that, in a given transition, the radiative and collisional-excitation rates are differently affected by the plasma density. Some analytical expressions are proposed for hydrogen-like ions in the limit where the Born or Lotz approximation applies and are compared to the numerical results from the FAC.

  10. Enhancements to the SSME transfer function modeling code

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements that extend the functionality of the transfer function modeling codes are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files, and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method.
In the third approach, the time data is low pass filtered prior to the modeling process in an effort to filter out high frequency characteristics. The fourth method removes the presumed system excitation and its harmonics in order to investigate the effects of the excitation on the modeling process. The fifth method is an attempt to apply constrained RID to obtain better transfer functions through more accurate modeling over certain frequency ranges. Section 4 presents some new C main files which were created to round out the functionality of the existing SSME transfer function modeling code. It is now possible to go from time data to transfer function models using only the C codes; it is not necessary to rely on external software. The new C main files and instructions for their use are included. Section 5 presents current and future enhancements to the XPLOT graphics program which was delivered with the initial software. Several new features which have been added to the program are detailed in the first part of this section. The remainder of Section 5 then lists some possible features which may be added in the future. Section 6 contains the conclusion section of this report. Section 6.1 is an overview of the work including a summary and observations relating to finding transfer functions with the SSME code. Section 6.2 contains information relating to future work on the project.
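
    The report's "ratio method" handles an unmeasured excitation by taking ratios of output spectra; the simpler measured-input case it generalizes can be sketched with standard spectral estimates. The Python example below recovers a frequency response function from Welch auto- and cross-spectral densities; the 50 Hz resonance and every signal parameter are invented for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                                   # sampling rate, Hz

x = rng.standard_normal(200000)               # broadband excitation
b, a = signal.iirpeak(50.0, Q=10.0, fs=fs)    # known resonance near 50 Hz
y = signal.lfilter(b, a, x)                   # measured response

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)    # input auto-spectrum
f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)   # input-output cross-spectrum
H = Pxy / Pxx                                 # H1 estimate of the FRF
peak = f[np.argmax(np.abs(H))]                # recovered resonance, ~50 Hz
```

    Fitting a rational model (poles and zeros) to such an estimated H(f) is then the step the ERA/RID machinery described above performs.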

  11. Biological dose estimation for charged-particle therapy using an improved PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit

    2009-01-01

    Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
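
    The microdosimetric averages that enter such RBE models are plain moments of the lineal-energy probability density. The Python sketch below uses a hypothetical gamma-shaped density purely for illustration; in the paper these densities come from the PHITS transport calculation.

```python
import numpy as np

# hypothetical lineal-energy grid (keV/um) and probability density f(y);
# the shape y*exp(-y/10) is an invented stand-in for a PHITS result
y = np.linspace(0.1, 100.0, 2000)
f = y * np.exp(-y / 10.0)
f /= np.trapz(f, y)                     # normalize to unit probability

y_F = np.trapz(y * f, y)                # frequency-mean lineal energy
y_D = np.trapz(y**2 * f, y) / y_F       # dose-mean lineal energy (the
                                        # quantity used by microdosimetric
                                        # kinetic RBE models)
```

    For this analytic test density the frequency mean tends to 20 keV/um and the dose mean to 30 keV/um, up to the truncation of the grid.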

  12. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function-the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile-is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears as favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.

  13. Room temperature stable single molecule rectifiers with graphite electrodes

    NASA Astrophysics Data System (ADS)

    Rungger, Ivan; Kaliginedi, V.; Droghetti, A.; Ozawa, H.; Kuzume, A.; Haga, M.; Broekmann, P.; Rudnev, A. V.

In this combined theoretical and experimental study we present new molecular electronics device characteristics of unprecedented stability at room temperature, achieved by using electrodes based on highly oriented pyrolytic graphite with covalently attached molecules. To this aim, we explore the effect of the anchoring group chemistry on the charge transport properties of graphite/molecule contacts by means of the scanning tunneling microscopy break-junction technique and ab initio simulations. The theoretical approach to evaluate the conductance is based on density functional theory calculations combined with the non-equilibrium Green's function technique, as implemented in the Smeagol electron transport code. We also demonstrate a strong bias dependence and rectification of the single molecule conductance induced by the anchoring chemistry in combination with the very low density of states of graphite around the Fermi energy. We show that the direction of tunneling current rectification can be tuned by the anchoring group chemistry.

  14. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10⁻⁷ to 10³ amagats.
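
    The role of a transition function can be shown generically: two local polynomial fits are joined by a smooth switching function so that the fitted surface stays continuous across the junction. The Python sketch below uses a logistic switch as a stand-in; the actual Grabau transition functions and fit coefficients are given in the report and are not reproduced here.

```python
import numpy as np

def blended_fit(z, poly_low, poly_high, z0, k):
    """Join two local polynomial fits with a smooth logistic switch at z0,
    a generic stand-in for a Grabau-type transition function."""
    w = 1.0 / (1.0 + np.exp(-k * (z - z0)))   # ~0 below z0, ~1 above
    return (1.0 - w) * np.polyval(poly_low, z) + w * np.polyval(poly_high, z)
```

    Far from the junction the blend reduces to the corresponding local fit, so each polynomial only needs to be accurate within its own subdomain of the thermodynamic surface.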

  15. Nuclear shielding constants by density functional theory with gauge including atomic orbitals

    NASA Astrophysics Data System (ADS)

    Helgaker, Trygve; Wilson, Philip J.; Amos, Roger D.; Handy, Nicholas C.

    2000-08-01

    Recently, we introduced a new density-functional theory (DFT) approach for the calculation of NMR shielding constants. First, a hybrid DFT calculation (using 5% exact exchange) is performed on the molecule to determine Kohn-Sham orbitals and their energies; second, the constants are determined as in nonhybrid DFT theory, that is, the paramagnetic contribution to the constants is calculated from a noniterative, uncoupled sum-over-states expression. The initial results suggested that this semiempirical DFT approach gives shielding constants in good agreement with the best ab initio and experimental data; in this paper, we further validate this procedure, using London orbitals in the theory, having implemented DFT into the ab initio code DALTON. Calculations on a number of small and medium-sized molecules confirm that our approach produces shieldings in excellent agreement with experiment and the best ab initio results available, demonstrating its potential for the study of shielding constants of large systems.

  16. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
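
    The benefit of a robust loss over least squares can be illustrated with a generic straight-line fit. The Python sketch below uses iteratively reweighted least squares with a Huber-type loss and a MAD scale estimate, a milder cousin of the MM-estimator used in the paper (this is not the BEEF fitting code); the data and outliers are synthetic.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """IRLS weights for the Huber loss: 1 for small residuals,
    c/|r| beyond the cutoff."""
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > c] = c / a[a > c]
    return w

def robust_line_fit(x, y, n_iter=50):
    """Straight-line fit by iteratively reweighted least squares with a
    Huber-type loss and a median-absolute-deviation scale estimate."""
    A = np.vstack([x, np.ones_like(x)]).T
    beta = np.linalg.lstsq(A, y, rcond=None)[0]     # ordinary LS start
    for _ in range(n_iter):
        r = y - A @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = np.sqrt(huber_weights(r / s))
        beta = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    return beta
```

    With a tenth of the points grossly corrupted, the robust fit recovers the underlying line while ordinary least squares is pulled far off, which is the sensitivity-to-outliers effect the mBEEF-vdW loss function is designed to suppress.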

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xianglin; Wang, Yang; Eisenbach, Markus

One major purpose of studying the single-site scattering problem is to obtain the scattering matrices and differential equation solutions indispensable to multiple scattering theory (MST) calculations. On the other hand, the single-site scattering itself is also appealing because it reveals the physical environment experienced by electrons around the scattering center. In this study, we demonstrate a new formalism to calculate the relativistic full-potential single-site Green's function. We implement this method to calculate the single-site density of states and electron charge densities. Finally, the code is rigorously tested and, with the help of Krein's theorem, the relativistic and full-potential effects in group V elements and noble metals are thoroughly investigated.

  18. Investigation of structural, electronic, elastic and optical properties of Cd{sub 1-x-y}Zn{sub x}Hg{sub y}Te alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamer, M., E-mail: mehmet.tamer@zirve.edu.tr

    2016-06-15

Structural, optical and electronic properties and elastic constants of Cd{sub 1-x-y}Zn{sub x}Hg{sub y}Te alloys have been studied by employing the commercial code CASTEP based on density functional theory. The generalized gradient approximation and local density approximation were utilized for the exchange correlation. The elastic constants, bulk modulus, band gap and Fermi energy of the compounds have been calculated, and the dielectric constants and refractive index were obtained through the Kramers-Kronig relations. In addition, X-ray measurements were used to determine the elastic constants and to verify Vegard's law. The results obtained from theory and experiment are in agreement.

Ab-initio study of phonon and thermodynamic properties of zinc-blende ZnSe

    NASA Astrophysics Data System (ADS)

    Khatta, Swati; Kaur, Veerpal; Tripathi, S. K.; Prakash, Satya

    2018-04-01

The phonon and thermodynamic properties of ZnSe are investigated using density functional perturbation theory (DFPT) and the quasi-harmonic approximation (QHA) implemented in the Quantum ESPRESSO code. The phonon dispersion curve and phonon density of states of ZnSe are obtained. It is shown that along the high-symmetry Γ→X and Γ→L directions there are four branches of dispersion curves, which split into six branches along the X→W, W→X and X→Γ directions. The LO-TO splitting frequencies (in cm⁻¹) at the zone center (Γ point) are LO=255 and TO=215. The total and partial phonon densities of states are used to compute the entropy and specific heat capacity of ZnSe. The computed values are in reasonable agreement with experimental data and with other available theoretical calculations.
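
    The step from a phonon density of states to thermodynamic functions is a single quadrature in the harmonic approximation. The Python sketch below uses a toy Debye density of states with an invented cutoff frequency, not the calculated ZnSe spectrum; the same integral applied to the DFPT density of states yields the heat capacity quoted above.

```python
import numpy as np

kB = 1.380649e-23        # J/K
hbar = 1.054571817e-34   # J*s

def heat_capacity(omega, g, T):
    """Harmonic-approximation heat capacity C_v = int g(w) c(w,T) dw,
    where c is the Einstein heat capacity of a single mode."""
    x = hbar * omega / (kB * T)
    c_mode = kB * x**2 * np.exp(x) / np.expm1(x)**2
    return np.trapz(g * c_mode, omega)

# toy Debye density of states, normalized to 3 modes (one atom)
omega_D = 3.0e13                            # rad/s (illustrative cutoff)
omega = np.linspace(1e9, omega_D, 20000)
g = 9.0 * omega**2 / omega_D**3
```

    At high temperature the integral approaches the Dulong-Petit value of 3kB per atom, and at low temperature it falls off as T³, the standard checks for any phonon-DOS-based heat capacity.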

  1. Many-body formulation of carriers capture time in quantum dots applicable in device simulation codes

    NASA Astrophysics Data System (ADS)

    Vallone, Marco

    2010-03-01

    We present an application of the Green's function formalism to calculate, in a simplified but rigorous way, electron and hole capture times in quantum dots in closed form as a function of carrier density, level confinement potential, and temperature. Carrier-carrier (Auger) scattering and single-LO-phonon emission are both addressed, accounting for dynamic effects of the potential screening in the single-plasmon-pole approximation of the dielectric function. Regarding the LO-phonon interaction, the formulation evidences the role of dynamic screening from wetting-layer carriers in comparison with its static limit, describes the interplay between screening and Fermi band filling, and offers simple expressions for the capture time, suitable for modeling implementation.

  2. A study of high density bit transition requirements versus the effects on BCH error correcting coding

    NASA Technical Reports Server (NTRS)

    Ingels, F.; Schoggen, W. O.

    1981-01-01

    The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.
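    A common way to raise bit transition density is to XOR the data with a pseudo-noise (PN) sequence from a linear-feedback shift register. The sketch below uses a standard maximal-length degree-7 tap set as an assumption; it is not the specific PN code recommended in the report:

```python
def lfsr_pn(taps, seed, n):
    """Generate n bits from a Fibonacci LFSR; taps are 1-based register positions."""
    state = list(seed)
    out = []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        out.append(state[-1])          # output the last stage
        state = [fb] + state[:-1]      # shift in the feedback bit
    return out

def transition_density(bits):
    """Fraction of adjacent bit pairs that differ."""
    return sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

def scramble(data, pn):
    """XOR data with the PN sequence; applying it twice descrambles."""
    return [d ^ p for d, p in zip(data, pn)]
```

    Because scrambling is a self-inverse XOR, the receiver recovers the data with the same PN sequence, while pathological low-transition inputs (e.g. all zeros) are forced to roughly 50% transition density.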

  3. STELLTRANS: A Transport Analysis Suite for Stellarators

    NASA Astrophysics Data System (ADS)

    Mittelstaedt, Joseph; Lazerson, Samuel; Pablant, Novimir; Weir, Gavin; W7-X Team

    2016-10-01

    The stellarator transport code STELLTRANS allows us to better analyze the power balance in W7-X. Although profiles of temperature and density are measured experimentally, geometrical factors are needed in conjunction with these measurements to properly analyze heat flux densities in stellarators. The STELLTRANS code interfaces with VMEC to find an equilibrium flux surface configuration and with TRAVIS to determine the RF heating and current drive in the plasma. The stationary transport equations, averaged over flux surfaces to reduce the system to an essentially one-dimensional problem, are then solved using a boundary value differential equation solver. We have applied this code to data from W7-X and were able to calculate the heat flux coefficients. We will also present extensions of the code to a predictive capability, which would utilize DKES to find neoclassical transport coefficients to update the temperature and density profiles.

  4. Numerical Studies of Impurities in Fusion Plasmas

    DOE R&D Accomplishments Database

    Hulse, R. A.

    1982-09-01

    The coupled partial differential equations used to describe the behavior of impurity ions in magnetically confined controlled fusion plasmas require numerical solution for cases of practical interest. Computer codes developed for impurity modeling at the Princeton Plasma Physics Laboratory are used as examples of the types of codes employed for this purpose. These codes solve for the impurity ionization state densities and associated radiation rates using atomic physics appropriate for these low-density, high-temperature plasmas. The simpler codes solve local equations in zero spatial dimensions while more complex cases require codes which explicitly include transport of the impurity ions simultaneously with the atomic processes of ionization and recombination. Typical applications are discussed and computational results are presented for selected cases of interest.
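    The zero-dimensional limit described in this record, with transport switched off, reduces in steady state to a detailed balance between ionization and recombination ("coronal equilibrium"): S_Z n_Z = α_{Z+1} n_{Z+1} stage by stage. A minimal sketch with made-up rate coefficients:

```python
def coronal_equilibrium(S, alpha):
    """Fractional ionization-stage abundances in zero-D coronal equilibrium.

    S[z]     : ionization rate coefficient from stage z to z+1
    alpha[z] : recombination rate coefficient from stage z+1 back to z
    Returns len(S)+1 fractions that sum to 1.
    """
    ratios = [1.0]
    for s, a in zip(S, alpha):
        ratios.append(ratios[-1] * s / a)  # n_{z+1}/n_z = S_z / alpha_{z+1}
    total = sum(ratios)
    return [r / total for r in ratios]
```

    The full codes cited in the record evolve the time-dependent version of these equations, with radial transport coupling the stages spatially.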

  5. Correlates of residential wiring code used in studies of health effects of residential electromagnetic fields.

    PubMed

    Bracken, M B; Belanger, K; Hellenbrand, K; Addesso, K; Patel, S; Triche, E; Leaderer, B P

    1998-09-01

    The home wiring code is the most widely used metric for studies of residential electromagnetic field (EMF) exposure and health effects. Despite the fact that wiring code often shows stronger correlations with disease outcome than more direct EMF home assessments, little is known about potential confounders of the wiring code association. In a study carried out in southern Connecticut in 1988-1991, the authors used strict and widely used criteria to assess the wiring codes of 3,259 homes in which respondents lived. They also collected other home characteristics from the tax assessor's office, estimated traffic density around the home from state data, and interviewed each subject (2,967 mothers of reproductive age) for personal characteristics. Women who lived in very high current configuration wiring coded homes were more likely to be in manual jobs and their homes were older (built before 1949, odds ratio (OR) = 73.24, 95% confidence interval (CI) 29.53-181.65) and had lower assessed value and higher traffic densities (highest density quartile, OR = 3.99, 95% CI 1.17-13.62). Because some of these variables have themselves been associated with health outcomes, the possibility of confounding of the wiring code associations must be rigorously evaluated in future EMF research.
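    Odds ratios with Wald confidence intervals, as reported in this record, come from a 2x2 exposure-by-outcome table. A generic sketch; the counts in the test are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table.

    a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls.
    z=1.96 gives an approximate 95% interval.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

    Very wide intervals such as the OR = 73.24 (95% CI 29.53-181.65) quoted above arise when one cell of the table is small relative to the others.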

  6. Numerical simulation of inductive method for determining spatial distribution of critical current density

    NASA Astrophysics Data System (ADS)

    Kamitani, A.; Takayama, T.; Tanaka, A.; Ikuno, S.

    2010-11-01

    The inductive method for measuring the critical current density jC in a high-temperature superconducting (HTS) thin film has been investigated numerically. In order to simulate the method, a non-axisymmetric numerical code has been developed for analyzing the time evolution of the shielding current density. In the code, the governing equation of the shielding current density is spatially discretized with the finite element method and the resulting first-order ordinary differential system is solved by using the 5th-order Runge-Kutta method with an adaptive step-size control algorithm. By using the code, the threshold current IT is evaluated for various positions of a coil. The results of computations show that, near a film edge, the accuracy of the estimating formula for jC is remarkably degraded. Moreover, even the proportional relationship between jC and IT will be lost there. Hence, the critical current density near a film edge cannot be estimated by using the inductive method.
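    The adaptive step-size control mentioned in this record (a 5th-order Runge-Kutta scheme in the actual code) can be illustrated with the simpler step-doubling strategy around classical RK4: take one full step and two half steps, use their difference as the error estimate, and shrink or grow the step accordingly. This is a generic sketch, not the authors' algorithm:

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-8):
    """Step-doubling error control: compare one full step with two half steps."""
    t, y = t0, y0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)
        if err < tol:
            t += h
            y = y_half                     # keep the more accurate result
            if err < tol / 32:
                h *= 2                     # step was too cautious; grow it
        else:
            h /= 2                         # reject and retry with a smaller step
    return y
```

    On the test problem dy/dt = -y, the controller settles on a step size that keeps the per-step error below tol while avoiding needlessly small steps.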

  7. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  8. Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.

    1992-01-01

    The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.

  9. Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations

    NASA Astrophysics Data System (ADS)

    Tritsis, A.; Yorke, H.; Tassis, K.

    2018-05-01

    We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
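    The escape-probability step can be sketched for a two-level system: photon trapping scales the radiative decay rate by β(τ), so level populations approach the Boltzmann (LTE) ratio as the line becomes optically thick. This is the generic textbook form, not PyRaTE's implementation; the rates in the test are made up:

```python
import math

def escape_probability(tau):
    """beta(tau) = (1 - exp(-tau)) / tau, with the tau -> 0 limit handled."""
    if abs(tau) < 1e-8:
        return 1.0 - tau / 2.0
    return (1.0 - math.exp(-tau)) / tau

def two_level_ratio(A_ul, C_ul, g_u, g_l, E_over_kT, tau):
    """Upper-to-lower population ratio with photon trapping.

    Radiative decay A_ul is reduced to beta*A_ul; collisional excitation
    follows from detailed balance: C_lu = C_ul (g_u/g_l) exp(-E/kT).
    """
    beta = escape_probability(tau)
    C_lu = C_ul * (g_u / g_l) * math.exp(-E_over_kT)
    return C_lu / (beta * A_ul + C_ul)
```

    For tau >> 1, beta*A_ul becomes negligible next to the collision rate and the ratio tends to the Boltzmann value, which is the thermalization behavior the escape-probability method is designed to capture.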

  10. The impacts of marijuana dispensary density and neighborhood ecology on marijuana abuse and dependence.

    PubMed

    Mair, Christina; Freisthler, Bridget; Ponicki, William R; Gaidus, Andrew

    2015-09-01

    As an increasing number of states liberalize cannabis use and develop laws and local policies, it is essential to better understand the impacts of neighborhood ecology and marijuana dispensary density on marijuana use, abuse, and dependence. We investigated associations between marijuana abuse/dependence hospitalizations and community demographic and environmental conditions from 2001 to 2012 in California, as well as cross-sectional associations between local and adjacent marijuana dispensary densities and marijuana hospitalizations. We analyzed panel population data relating hospitalizations coded for marijuana abuse or dependence and assigned to residential ZIP codes in California from 2001 through 2012 (20,219 space-time units) to ZIP code demographic and ecological characteristics. Bayesian space-time misalignment models were used to account for spatial variations in geographic unit definitions over time, while also accounting for spatial autocorrelation using conditional autoregressive priors. We also analyzed cross-sectional associations between marijuana abuse/dependence and the density of dispensaries in local and spatially adjacent ZIP codes in 2012. An additional one dispensary per square mile in a ZIP code was cross-sectionally associated with a 6.8% increase in the number of marijuana hospitalizations (95% credible interval 1.033, 1.105) with a marijuana abuse/dependence code. Other local characteristics, such as the median household income and age and racial/ethnic distributions, were associated with marijuana hospitalizations in cross-sectional and panel analyses. Prevention and intervention programs for marijuana abuse and dependence may be particularly essential in areas of concentrated disadvantage. Policy makers may want to consider regulations that limit the density of dispensaries. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. A cross-sectional prevalence study of ethnically targeted and general audience outdoor obesity-related advertising.

    PubMed

    Yancey, Antronette K; Cole, Brian L; Brown, Rochelle; Williams, Jerome D; Hillier, Amy; Kline, Randolph S; Ashe, Marice; Grier, Sonya A; Backman, Desiree; McCarthy, William J

    2009-03-01

    Commercial marketing is a critical but understudied element of the sociocultural environment influencing Americans' food and beverage preferences and purchases. This marketing also likely influences the utilization of goods and services related to physical activity and sedentary behavior. A growing literature documents the targeting of racial/ethnic and income groups in commercial advertisements in magazines, on billboards, and on television that may contribute to sociodemographic disparities in obesity and chronic disease risk and protective behaviors. This article examines whether African Americans, Latinos, and people living in low-income neighborhoods are disproportionately exposed to advertisements for high-calorie, low nutrient-dense foods and beverages and for sedentary entertainment and transportation and are relatively underexposed to advertising for nutritious foods and beverages and goods and services promoting physical activities. Outdoor advertising density and content were compared in zip code areas selected to offer contrasts by area income and ethnicity in four cities: Los Angeles, Austin, New York City, and Philadelphia. Large variations were observed in the amount, type, and value of advertising in the selected zip code areas. Living in an upper-income neighborhood, regardless of its residents' predominant ethnicity, is generally protective against exposure to most types of obesity-promoting outdoor advertising (food, fast food, sugary beverages, sedentary entertainment, and transportation). The density of advertising varied by zip code area race/ethnicity, with African American zip code areas having the highest advertising densities, Latino zip code areas having slightly lower densities, and white zip code areas having the lowest densities. The potential health and economic implications of differential exposure to obesity-related advertising are substantial. 
Although substantive legal questions remain about the government's ability to regulate advertising, the success of limiting tobacco advertising offers lessons for reducing the marketing contribution to the obesogenicity of urban environments.

  12. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Partially received codes and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  13. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.

  14. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
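    The building blocks named above compose as accumulate → repeat → accumulate, where the accumulator 1/(1+D) is just a running XOR. A toy GF(2) sketch (the interleaver between stages is omitted for brevity, so this is illustrative only, not a performant ARA code):

```python
def accumulate(bits):
    """1/(1+D) accumulator over GF(2): running XOR (mod-2 prefix sum)."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def repeat(bits, q):
    """Repeat each bit q times."""
    return [b for b in bits for _ in range(q)]

def ara_encode(info, q=3):
    """Toy rate-1/q Accumulate-Repeat-Accumulate encoder (no interleaver)."""
    return accumulate(repeat(accumulate(info), q))
```

    Every stage is linear over GF(2), so the encoder as a whole is linear, which is what lets the code be described by a parity-check matrix and decoded by belief propagation.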

  15. C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes

    NASA Astrophysics Data System (ADS)

    Rutter, M. J.

    2018-04-01

    The c2x code fills two distinct roles. Its first role is in acting as a converter between the binary format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis, and manipulation includes arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form which would make it easy to extend it to work directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats useable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.
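    The trilinear interpolation of volumetric data mentioned above can be sketched as three nested linear interpolations on the grid cell containing the query point. This is a generic sketch, not c2x's implementation:

```python
def trilinear(grid, x, y, z):
    """Trilinear interpolation in a 3D nested-list grid at fractional indices.

    Assumes 0 <= x < nx-1 (and likewise for y, z) so the enclosing cell exists.
    """
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k

    def g(a, b, c):
        return grid[a][b][c]

    # interpolate along x on the four cell edges...
    c00 = g(i, j, k) * (1 - fx) + g(i + 1, j, k) * fx
    c10 = g(i, j + 1, k) * (1 - fx) + g(i + 1, j + 1, k) * fx
    c01 = g(i, j, k + 1) * (1 - fx) + g(i + 1, j, k + 1) * fx
    c11 = g(i, j + 1, k + 1) * (1 - fx) + g(i + 1, j + 1, k + 1) * fx
    # ...then along y, then along z
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

    Trilinear interpolation reproduces any field that is linear in each coordinate exactly, which makes a convenient sanity check.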

  16. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
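    The probabilistic assignment described above can be sketched as Gaussian weights in log cumulative number density: each candidate galaxy is weighted by how likely its number density is under the (drifting, spreading) progenitor distribution. The drift and spread values used in practice come from simulation fits, so the numbers below are placeholders:

```python
import math

def gaussian(x, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def weighted_property(galaxies, logn0, drift, spread):
    """Probability-weighted mean of a property over candidate progenitors.

    galaxies: list of (log_number_density, property) pairs at the target redshift.
    The progenitor distribution is modeled as a Gaussian in log cumulative
    number density, centered on logn0 + drift with width `spread` (assumptions).
    """
    mu = logn0 + drift
    wsum = psum = 0.0
    for logn, prop in galaxies:
        w = gaussian(logn, mu, spread)
        wsum += w
        psum += w * prop
    return psum / wsum
```

    Setting `spread` to zero-width (a delta function) would recover the fixed-number-density matching that the record argues is less accurate.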

  17. Structural stability and electronic structure of transition metal compound: HfN

    NASA Astrophysics Data System (ADS)

    Sarwan, Madhu; Shukoor, V. Abdul; Singh, Sadhna

    2018-05-01

    The structural stability of the transition metal nitride HfN has been investigated using density functional theory (DFT) with the help of the Quantum ESPRESSO code. Our calculations confirm that hafnium nitride (HfN) is stable in the zinc-blende (B3) and rock-salt (B1) type structures. We have also reported the structural and electronic properties of the HfN compound. These structural properties have been compared with the experimental and theoretical data available for this compound.

  18. Nonadiabatic Dynamics for Electrons at Second-Order: Real-Time TDDFT and OSCF2.

    PubMed

    Nguyen, Triet S; Parkhill, John

    2015-07-14

    We develop a new model to simulate nonradiative relaxation and dephasing by combining real-time Hartree-Fock and density functional theory (DFT) with our recent open-systems theory of electronic dynamics. The approach has some key advantages: it has been systematically derived and properly relaxes noninteracting electrons to a Fermi-Dirac distribution. This paper combines the new dissipation theory with an atomistic, all-electron quantum chemistry code and an atom-centered model of the thermal environment. The environment is represented nonempirically and is dependent on molecular structure in a nonlocal way. A production quality, O(N^3) closed-shell implementation of our theory applicable to realistic molecular systems is presented, including timing information. This scaling implies that the added cost of our nonadiabatic relaxation model, time-dependent open self-consistent field at second order (OSCF2), is computationally inexpensive, relative to adiabatic propagation of real-time time-dependent Hartree-Fock (TDHF) or time-dependent density functional theory (TDDFT). Details of the implementation and numerical algorithm, including factorization and efficiency, are discussed. We demonstrate that OSCF2 approaches the stationary self-consistent field (SCF) ground state when the gap is large relative to k_B T. The code is used to calculate linear-response spectra including the effects of bath dynamics. Finally, we show how our theory of finite-temperature relaxation can be used to correct ground-state DFT calculations.

  19. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with system size; memory requirements scale as N^2, where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra with sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
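    The sparse-matrix storage referred to above can be illustrated with compressed sparse row (CSR) format, which stores only the nonzero entries plus row pointers. A generic sketch, not NRLMOL's actual format:

```python
def to_csr(dense):
    """Convert a dense matrix (list of row lists) to CSR arrays (data, col, rowptr)."""
    data, col, rowptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                col.append(j)
        rowptr.append(len(data))  # rowptr[i+1] marks the end of row i's entries
    return data, col, rowptr

def csr_matvec(data, col, rowptr, x):
    """y = A @ x using only the stored nonzeros."""
    y = []
    for i in range(len(rowptr) - 1):
        s = 0.0
        for k in range(rowptr[i], rowptr[i + 1]):
            s += data[k] * x[col[k]]
        y.append(s)
    return y
```

    For matrices whose number of nonzeros grows only linearly with system size, CSR reduces the N^2 memory footprint to O(nnz), which is the kind of saving that makes calculations with tens of thousands of basis functions fit on a node.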

  20. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3), where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smoothed particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T) and (rho(exp 2))(exp 1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving the current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
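    The rho^2-weighted mean temperature and the coarse rebinning used in the comparison can be sketched in one dimension; this is a generic illustration, not the paper's analysis pipeline:

```python
def rho2_weighted_T(rho, T):
    """Density-squared-weighted mean temperature: sum(rho^2 T) / sum(rho^2)."""
    num = sum(r * r * t for r, t in zip(rho, T))
    den = sum(r * r for r in rho)
    return num / den

def rebin(values, factor):
    """Average consecutive groups of `factor` cells (1D analogue of 16^3 rebinning)."""
    return [sum(values[i:i + factor]) / factor
            for i in range(0, len(values), factor)]
```

    The rho^2 weighting emphasizes the dense, emission-dominated regions, which is why the codes agree best on this statistic even when low-density regions differ.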

  1. LDPC product coding scheme with extrinsic information for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.

  2. Electron density increases due to lightning activity as deduced from the LWPC code and VLF signal perturbations.

    NASA Astrophysics Data System (ADS)

    Samir, Nait Amor; Bouderba, Yasmina

    VLF signal perturbations associated with thunderstorm activity appear as changes in the signal amplitude and phase. Several papers have reported on the characteristics of these perturbations and their connection to lightning stroke amplitude and polarity. In this contribution, we quantify the electron density increases due to lightning activity using the LWPC code and the VLF signal perturbation parameters. The method is similar to that used in studying the effects of solar eruptions. The results showed that the reference height (h') decreased to lower altitudes (between 70 and 80 km). From the LWPC results, the maximum of the electron density was then deduced. A numerical simulation of the time dependence of the atmospheric species was then performed to study the recovery times of the electron density at different heights. The results showed that the recovery lasts for several minutes, which explains the observation of long-recovery Early VLF signal perturbations.
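    Within LWPC-style modeling the D-region electron density is commonly described by the two-parameter Wait profile, so an inferred drop in the reference height h' maps directly to an electron density increase at fixed altitude. The coefficients below are the standard Wait-profile constants; their use here is an assumption about this study's setup, not taken from the record:

```python
import math

def wait_electron_density(h, hprime, beta):
    """Wait two-parameter D-region electron density profile, in m^-3.

    h and hprime (reference height) in km, beta (sharpness) in km^-1.
    """
    return 1.43e13 * math.exp(-0.15 * hprime) * math.exp((beta - 0.15) * (h - hprime))
```

    Lowering h' from 85 km to 75 km at fixed altitude and beta raises the local electron density by a factor exp(0.3 * 10) = exp(3), about 20x, which is the kind of enhancement the record infers from the signal perturbations.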

  3. Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.

    PubMed

    Wilkinson, Karl; Skylaris, Chris-Kriton

    2013-10-30

    We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.

  4. An N-body Integrator for Planetary Rings

    NASA Astrophysics Data System (ADS)

    Hahn, Joseph M.

    2011-04-01

    A planetary ring that is disturbed by a satellite's resonant perturbation can respond in an organized way. When the resonance lies in the ring's interior, the ring responds via an m-armed spiral wave, while a ring whose edge is confined by the resonance exhibits an m-lobed scalloping along the ring-edge. The amplitudes of these disturbances are sensitive to ring surface density and viscosity, so modelling these phenomena can provide estimates of the ring's properties. However, a brute-force attempt to simulate a ring's full azimuthal extent with an N-body code will likely fail because of the large number of particles needed to resolve the ring's behavior. Another impediment is the gravitational stirring that occurs among the simulated particles, which can wash out the ring's organized response. However, it is possible to adapt an N-body integrator so that it can simulate a ring's collective response to resonant perturbations. The code developed here uses a few thousand massless particles to trace streamlines within the ring. Particles are close in a radial sense to these streamlines, which allows streamlines to be treated as straight wires of constant linear density. Consequently, gravity due to these streamlines is a simple function of the particle's radial distance to all streamlines. And because particles are responding to smooth gravitating streamlines, rather than discrete particles, this method eliminates the stirring that ordinarily occurs in brute-force N-body calculations. Note also that ring surface density is now a simple function of streamline separations, so effects due to ring pressure and viscosity are easily accounted for, too. A poster will describe this N-body method in greater detail. Simulations of spiral density waves and scalloped ring-edges typically execute in ten minutes on a desktop PC, and results for Saturn's A and B rings will be presented at conference time.
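    The wire approximation described above has a particularly simple form: an infinite straight wire of linear density lam attracts a particle with acceleration 2*G*lam/dr. A minimal sketch of the resulting streamline acceleration (function name and values are illustrative, not from the poster):

```python
import numpy as np

# Radial acceleration on a particle from ring streamlines treated as
# infinite straight wires of linear density lam: each wire contributes
# 2*G*lam/dr directed toward the wire. Illustrative sketch only.
def streamline_accel(r, r_wires, lam, G=1.0, eps=1e-12):
    dr = r - np.asarray(r_wires, dtype=float)
    dr = dr[np.abs(dr) > eps]                 # exclude the particle's own streamline
    return float(np.sum(-2.0 * G * lam / dr))  # negative dr (outer wire) pulls outward
```

    Note that a particle midway between two streamlines of equal density feels no net radial pull, which is why the smooth streamline field suppresses the particle-particle stirring of a brute-force calculation.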

  5. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.
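    The repeat/interleave/accumulate chain behind RA-family codes is easy to sketch. Below is a minimal repeat-accumulate encoder; the identity permutation stands in for a real interleaver, and the extra accumulator used as a precoder in ARA codes is omitted:

```python
# Minimal repeat-accumulate (RA) encoder sketch: repeat each information bit
# q times, permute, then accumulate (a running XOR). ARA codes add a further
# accumulator as a precoder in front of this chain.
def ra_encode(bits, q=3, perm=None):
    repeated = [b for b in bits for _ in range(q)]
    if perm is None:                  # stand-in for a real interleaver
        perm = range(len(repeated))
    interleaved = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in interleaved:
        acc ^= b                      # accumulator: parity of all bits so far
        out.append(acc)
    return out
```

    Because both the repeater and the accumulator are sparse operations, the whole chain admits a sparse graph representation, which is what allows belief-propagation decoding on the protograph.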

  6. Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

    DOE PAGES

    Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.

    2017-04-26

    Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.
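    As a toy illustration of the kind of property-to-property mapping described here (PROPhet itself fits neural networks; this sketch substitutes plain ridge regression on synthetic data, so all names and values are illustrative):

```python
import numpy as np

# Toy structure-property mapping: fit y = f(descriptors) by ridge regression.
# PROPhet uses neural networks; linear ridge is a stand-in to show the idea
# of learning a mapping from one set of properties to another.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # synthetic "material descriptors"
y = 2.0 * X[:, 0] - 0.5 * X[:, 2]      # synthetic target property

lam = 1e-3                             # small ridge penalty for stability
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
pred = X @ w                           # learned structure-property prediction
```

    The fitted weights recover the underlying mapping; a neural network generalizes this to the non-linear functionals the paper targets.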

  7. Discovering charge density functionals and structure-property relationships with PROPhet: A general framework for coupling machine learning and first-principles methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.

    Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet’s ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.

  8. Theoretical analysis of the performance of code division multiple access communications over multimode optical fiber channels. Part 1: Transmission and detection

    NASA Astrophysics Data System (ADS)

    Walker, Ernest L.

    1994-05-01

    This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha-profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
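    The Gold sequences used for spectral spreading come from XORing a preferred pair of m-sequences. A sketch for the length-31 family (the tap sets [5,2] and [5,4,3,2] are a standard degree-5 preferred pair; the seeds are arbitrary nonzero register fills, not values from the paper):

```python
# Gold sequence sketch: XOR two preferred-pair m-sequences produced by
# 5-stage Fibonacci LFSRs (period 31). Taps [5,2] and [5,4,3,2] are a
# standard degree-5 preferred pair; seeds are arbitrary nonzero fills.
def lfsr(taps, seed, n):
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])         # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]        # XOR the tapped stages
        state = [fb] + state[:-1]     # shift the feedback bit in
    return out

def gold(seed2, n=31):
    s1 = lfsr((5, 2), [1, 0, 0, 0, 0], n)    # x^5 + x^2 + 1
    s2 = lfsr((5, 4, 3, 2), seed2, n)        # x^5 + x^4 + x^3 + x^2 + 1
    return [a ^ b for a, b in zip(s1, s2)]
```

    Stepping seed2 through all relative phases, plus the two m-sequences themselves, yields the 33-member length-31 Gold family with its bounded three-valued cross-correlation.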

  9. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. ... for AWGN channels has long been studied. There are well-known soft-decision codes like the turbo codes and LDPC codes that can approach capacity to ... bits) low-density parity-check (LDPC) code. The coded bits are randomly interleaved so that bits nearby go through different sub-channels, and are

  10. A photoemission moments model using density functional and transfer matrix methods applied to coating layers on surfaces: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L.; Finkenstadt, Daniel; Shabaev, Andrew

    Recent experimental measurements of a bulk material covered with a small number of graphene layers reported by Yamaguchi et al. [NPJ 2D Mater. Appl. 1, 12 (2017)] (on bialkali) and Liu et al. [Appl. Phys. Lett. 110, 041607 (2017)] (on copper) and the needs of emission models in beam optics codes have led to substantial changes in a Moments model of photoemission. The changes account for (i) a barrier profile and density of states factor based on density functional theory (DFT) evaluations, (ii) a Drude-Lorentz model of the optical constants and laser penetration depth, and (iii) a transmission probability evaluated by an Airy Transfer Matrix Approach. Importantly, the DFT results lead to a surface barrier profile of a shape similar to both resonant barriers and reflectionless wells: the associated quantum mechanical transmission probabilities are shown to be comparable to those recently required to enable the Moments (and Three Step) model to match experimental data but for reasons very different than the assumption by conventional wisdom that a barrier is responsible. The substantial modifications of the Moments model components, motivated by computational materials methods, are developed. The results prepare the Moments model for use in treating heterostructures and discrete energy level systems (e.g., quantum dots) proposed for decoupling the opposing metrics of performance that undermine the performance of advanced light sources like the x-ray Free Electron Laser. The consequences of the modified components on quantum yield, emittance, and emission models needed by beam optics codes are discussed. Published by AIP Publishing. https://doi.org/10.1063/1.5008600

  11. A photoemission moments model using density functional and transfer matrix methods applied to coating layers on surfaces: Theory

    DOE PAGES

    Jensen, Kevin L.; Finkenstadt, Daniel; Shabaev, Andrew; ...

    2018-01-28

    Recent experimental measurements of a bulk material covered with a small number of graphene layers reported by Yamaguchi et al. [NPJ 2D Mater. Appl. 1, 12 (2017)] (on bialkali) and Liu et al. [Appl. Phys. Lett. 110, 041607 (2017)] (on copper) and the needs of emission models in beam optics codes have led to substantial changes in a Moments model of photoemission. The changes account for (i) a barrier profile and density of states factor based on density functional theory (DFT) evaluations, (ii) a Drude-Lorentz model of the optical constants and laser penetration depth, and (iii) a transmission probability evaluated by an Airy Transfer Matrix Approach. Importantly, the DFT results lead to a surface barrier profile of a shape similar to both resonant barriers and reflectionless wells: the associated quantum mechanical transmission probabilities are shown to be comparable to those recently required to enable the Moments (and Three Step) model to match experimental data but for reasons very different than the assumption by conventional wisdom that a barrier is responsible. The substantial modifications of the Moments model components, motivated by computational materials methods, are developed. The results prepare the Moments model for use in treating heterostructures and discrete energy level systems (e.g., quantum dots) proposed for decoupling the opposing metrics of performance that undermine the performance of advanced light sources like the x-ray Free Electron Laser. The consequences of the modified components on quantum yield, emittance, and emission models needed by beam optics codes are discussed. Published by AIP Publishing. https://doi.org/10.1063/1.5008600

  12. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  13. Jahn-Teller transition in TiF3 investigated using density-functional theory

    NASA Astrophysics Data System (ADS)

    Perebeinos, Vasili; Vogt, Tom

    2004-03-01

    We use first-principles density-functional theory to calculate the electronic and magnetic properties of TiF3 using the full-potential linearized augmented-plane-wave method. The local density approximation (LDA) predicts a fully saturated ferromagnetic metal and finds degenerate energy minima for high- and low-symmetry structures. The experimentally observed Jahn-Teller phase transition at Tc=370 K cannot be driven by the electron-phonon interaction alone, which is usually described accurately by the LDA. Electron correlations beyond the LDA are essential to lift the degeneracy of the singly occupied Ti t2g orbital. Although the on-site Coulomb correlations are important, the direction of the t2g-level splitting is determined by dipole-dipole interactions. The LDA+U functional predicts an antiferromagnetic insulator with an orbitally ordered ground state. The input parameters U=8.1 eV and J=0.9 eV for the Ti 3d orbital were found by varying the total charge on the TiF6^(2-) ion using the molecular NRLMOL code. We estimate the Heisenberg exchange constant for spin 1/2 on a cubic lattice to be approximately 24 K. The symmetry-lowering energy in LDA+U is about 900 K per TiF3 formula unit.

  14. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  15. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  16. Inferring probabilistic stellar rotation periods using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh

    2018-02-01

    Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
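    The quasi-periodic kernel at the heart of this approach is commonly written k(tau) = A exp(-tau^2/(2 l^2) - Gamma sin^2(pi tau / P)), with the period P as the parameter of interest. A minimal sketch (hyperparameter names and values follow common usage and are illustrative, not necessarily the paper's):

```python
import numpy as np

# Quasi-periodic GP covariance: a squared-exponential decay (finite spot
# lifetimes) multiplying a periodic term in the rotation period P.
# All hyperparameter values here are illustrative.
def qp_kernel(t1, t2, A=1.0, ell=20.0, gamma=1.0, P=3.5):
    tau = t1[:, None] - t2[None, :]
    return A * np.exp(-tau**2 / (2.0 * ell**2)
                      - gamma * np.sin(np.pi * tau / P)**2)

t = np.linspace(0.0, 30.0, 100)
K = qp_kernel(t, t) + 1e-8 * np.eye(t.size)   # jitter keeps K positive definite
```

    In a full analysis the GP log-likelihood built from K is sampled over (A, l, Gamma, P) with MCMC, and the marginal posterior over P gives the probabilistic rotation period.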

  17. Investigation of the electronic structure in La1-xCaxCoO3 (x = 0, 0.5) using full potential calculations

    NASA Astrophysics Data System (ADS)

    Sahnoun, M.; Daul, C.; Haas, O.; Wokaun, A.

    2005-12-01

    The electronic and magnetic properties of both LaCoO3 and La0.5Ca0.5CoO3 have been investigated by means of ab initio full-potential augmented plane wave plus local orbitals (APW+lo) calculations carried out with the WIEN2k code. The functional used is the local-density approximation LDA+U. Doping with Ca2+ introduces holes into the Co-O network. We analyse the densities of states and confirm that the intermediate state (IS) is stabilized by the Ca2+ substitution. This intermediate state turns out to be metallic in our results, and has a large density of states at the Fermi energy. The calculated magnetic moment in La0.5Ca0.5CoO3 is found to be in good agreement with experiment.

  18. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.

    PubMed

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic

    2014-06-01

    In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally, and has not been validated before. Our aim was to compare the Danish dichotomous mammographic density classification system from 1991 to 2001 with the BI-RADS density classifications, in an attempt to validate the Danish classification system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001, which tested false positive, and which were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linear weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The inter-rater agreement assessed by the weighted kappa statistic was substantial (0.75). The dichotomous mammographic density classification system utilized in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.
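    For reference, linear weighted kappa penalizes disagreements in proportion to |i-j|/(k-1). A generic sketch on a square confusion matrix (the study's 2-category versus 4-category cross-classification must first be aligned onto a common scale; the matrices in the test are illustrative, not the study's data):

```python
# Linear weighted Cohen's kappa for a k x k confusion matrix
# (rows: rater 1, columns: rater 2). Weights |i-j|/(k-1) give
# partial credit to near-miss categories.
def weighted_kappa(conf):
    k = len(conf)
    n = float(sum(sum(row) for row in conf))
    row_tot = [sum(row) for row in conf]
    col_tot = [sum(conf[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)                 # linear disagreement weight
            num += w * conf[i][j]                    # observed disagreement
            den += w * row_tot[i] * col_tot[j] / n   # chance-expected disagreement
    return 1.0 - num / den
```

    Perfect agreement gives 1.0 and purely chance-level agreement gives 0.0, with 0.61-0.80 conventionally labelled "substantial", matching the 0.75 reported above.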

  19. Application of quasi-distributions for solving inverse problems of neutron and gamma-ray transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    The considered inverse problems deal with the calculation of unknown parameters of nuclear installations from known (goal) functionals of the neutron/gamma-ray distributions. Examples of such problems are the calculation of the automatic control rod positions as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations, and fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than one based on the classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.

  20. Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian

    2009-07-15

    Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation, referred to as ADF-PCMχ. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCMχ model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCMχ model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations.
    Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCMχ reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape.
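    The presumed-PDF averaging step described above can be sketched numerically: a tabulated quantity is weighted by a lognormal PDF of the stoichiometric scalar dissipation rate. The placeholder omega(chi) below stands in for a real flamelet look-up, and the discretization is illustrative:

```python
import numpy as np

# Presumed-PDF average of a tabulated quantity omega over a lognormal
# distribution of the scalar dissipation rate chi. omega is a placeholder
# for a flamelet-table look-up; grid bounds and sigma are illustrative.
def lognormal_mean(omega, chi_mean, sigma=1.0, n=4000):
    mu = np.log(chi_mean) - 0.5 * sigma**2       # so that E[chi] = chi_mean
    chi = np.linspace(1e-6, 30.0 * chi_mean, n)
    pdf = (np.exp(-(np.log(chi) - mu)**2 / (2.0 * sigma**2))
           / (chi * sigma * np.sqrt(2.0 * np.pi)))
    dchi = chi[1] - chi[0]
    pdf /= pdf.sum() * dchi                      # renormalize the truncated PDF
    return float((omega(chi) * pdf).sum() * dchi)
```

    Replacing the lognormal with a Dirac at chi_mean recovers the basic ADF-PCM closure, which is exactly the difference between the two formulations compared in the paper.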

  1. Design and implementation of a channel decoder with LDPC code

    NASA Astrophysics Data System (ADS)

    Hu, Diqing; Wang, Peng; Wang, Jianzong; Li, Tianquan

    2008-12-01

    Because Toshiba quit the competition, there is only one blue-laser disc standard, Blu-ray Disc (BD), which satisfies the demands of high-density video programs. But almost all of the relevant patents are held by big companies such as Sony and Philips, so much must be paid for these patents when products use BD. As our own high-density optical disc storage system, the Next-Generation Versatile Disc (NVD) proposes a new data format and error correction code with independent intellectual property rights and high cost performance; it offers higher coding efficiency than DVD and a 12 GB capacity that can meet the demands of playing high-density video programs. In this paper, we develop a new channel encoding process for low-density parity-check (LDPC) codes: an application scheme using a Q-matrix based on LDPC encoding, applied in NVD's channel decoder. Combined with the embedded-system portability of the SOPC system, we have implemented all the decoding modules on an FPGA. Tests were performed in the NVD experiment environment. Though there are conflicts between LDPC and the run-length-limited (RLL) modulation codes frequently used in optical storage systems, the system is provided as a suitable solution. At the same time, it overcomes the defects of instability and inextensibility that occurred in the former NVD decoding system, which was implemented purely in hardware.

  2. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes that are combined with high level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  3. Cosmology in one dimension: Vlasov dynamics.

    PubMed

    Manfredi, Giovanni; Rouet, Jean-Louis; Miller, Bruce; Shiozawa, Yui

    2016-04-01

    Numerical simulations of self-gravitating systems are generally based on N-body codes, which solve the equations of motion of a large number of interacting particles. This approach suffers from poor statistical sampling in regions of low density. In contrast, Vlasov codes, by meshing the entire phase space, can reach higher accuracy irrespective of the density. Here, we perform one-dimensional Vlasov simulations of a long-standing cosmological problem, namely, the fractal properties of an expanding Einstein-de Sitter universe in Newtonian gravity. The N-body results are confirmed for high-density regions and extended to regions of low matter density, where the N-body approach usually fails.
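    The phase-space meshing a Vlasov code relies on is usually advanced by operator splitting, alternating advections in position and velocity. A sketch of one periodic x-advection step with linear interpolation follows (this is the generic split-step building block; the paper's actual scheme may differ):

```python
import numpy as np

# One semi-Lagrangian x-advection step for f(x, v): f_new(x) = f_old(x - v*dt),
# evaluated by periodic linear interpolation at the departure points.
def advect_x(f, v, dx, dt):
    nx, _ = f.shape
    out = np.empty_like(f)
    cells = np.arange(nx)
    for j, vj in enumerate(v):
        s = vj * dt / dx                  # displacement in units of cells
        i0 = int(np.floor(s))
        frac = s - i0
        idx = (cells - i0) % nx           # grid point just downstream of x - v*dt
        out[:, j] = (1.0 - frac) * f[idx, j] + frac * f[(idx - 1) % nx, j]
    return out
```

    Because the scheme interpolates the distribution function on a fixed mesh rather than following discrete particles, low-density regions of phase space are resolved as accurately as dense ones, which is the advantage the abstract highlights over N-body sampling.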

  4. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model--Documentation of the SEAWAT-2000 Version with the Variable-Density Flow Process (VDF) and the Integrated MT3DMS Transport Process (IMT)

    USGS Publications Warehouse

    Langevin, Christian D.; Shoemaker, W. Barclay; Guo, Weixing

    2003-01-01

    SEAWAT-2000 is the latest release of the SEAWAT computer program for simulation of three-dimensional, variable-density, transient ground-water flow in porous media. SEAWAT-2000 was designed by combining a modified version of MODFLOW-2000 and MT3DMS into a single computer program. The code was developed using the MODFLOW-2000 concept of a process, which is defined as "part of the code that solves a fundamental equation by a specified numerical method." SEAWAT-2000 contains all of the processes distributed with MODFLOW-2000 and also includes the Variable-Density Flow Process (as an alternative to the constant-density Ground-Water Flow Process) and the Integrated MT3DMS Transport Process. Processes may be active or inactive, depending on simulation objectives; however, not all processes are compatible. For example, the Sensitivity and Parameter Estimation Processes are not compatible with the Variable-Density Flow and Integrated MT3DMS Transport Processes. The SEAWAT-2000 computer code was tested with the common variable-density benchmark problems and also with problems representing evaporation from a salt lake and rotation of immiscible fluids.

  5. Modeling Laboratory Astrophysics Experiments in the High-Energy-Density Regime Using the CRASH Radiation-Hydrodynamics Model

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.

    2012-10-01

    The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  6. A fully-implicit Particle-In-Cell Monte Carlo Collision code for the simulation of inductively coupled plasmas

    NASA Astrophysics Data System (ADS)

    Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.

    2017-12-01

    We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.

  7. Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW

    NASA Astrophysics Data System (ADS)

    Schaars, F.; Bakker, M.

    2012-12-01

    We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
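    The transformation step and the Ghijben-Herzberg relationship described above can be sketched in a few lines. This is an illustrative calculation only: the conductivity value and head are made up, and the function names are ours, not part of MODFLOW or the recipe.

```python
# Sketch of the single-unconfined-aquifer case of the recipe above.
RHO_F = 1000.0   # freshwater density, kg/m^3
RHO_S = 1025.0   # seawater density, kg/m^3 (relative density 1.025)

def transform_conductivity(k):
    """Multiply hydraulic conductivity by rho_f/(rho_s - rho_f) + 1 = 41."""
    alpha = RHO_F / (RHO_S - RHO_F)   # = 40 for relative density 1.025
    return (alpha + 1.0) * k          # = 41 * k

def interface_depth(head):
    """Ghijben-Herzberg: the interface lies alpha * head below sea level."""
    alpha = RHO_F / (RHO_S - RHO_F)
    return alpha * head

k_transformed = transform_conductivity(10.0)  # hypothetical k of 10 m/d
z_interface = interface_depth(0.5)            # hypothetical head of 0.5 m
```

For a relative seawater density of 1.025 the multiplier is exactly 41, matching the abstract; the head itself is then computed by the standard code with the transformed parameters.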

  8. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and offer improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  9. SUGGEL: A Program Suggesting the Orbital Angular Momentum of a Neutron Resonance from the Magnitude of its Neutron Width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.Y.

    2001-02-02

    The SUGGEL computer code has been developed to suggest a value for the orbital angular momentum of a neutron resonance that is consistent with the magnitude of its neutron width. The suggestion is based on the probability that a resonance having a certain value of gΓ_n is an l-wave resonance. The probability is calculated by using Bayes' theorem on the conditional probability. The probability density functions (pdf's) of gΓ_n for up to d-wave (l=2) have been derived from the χ² distribution of Porter and Thomas. The pdf's take two possible channel spins into account. This code is a tool which evaluators will use to construct resonance parameters and help to assign resonance spin. The use of this tool is expected to reduce time and effort in the evaluation procedure, since the number of repeated runs of the fitting code (e.g., SAMMY) may be reduced.
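    The Bayesian step described above can be illustrated with a toy calculation. The average widths and equal priors below are hypothetical placeholders, not the physically motivated parameters SUGGEL derives; only the Porter-Thomas (χ² with one degree of freedom) form and the use of Bayes' theorem follow the abstract.

```python
import math

def porter_thomas_pdf(width, avg_width):
    """chi^2 pdf with one degree of freedom in the scaled width x = G/<G>."""
    x = width / avg_width
    return math.exp(-x / 2.0) / (math.sqrt(2.0 * math.pi * x) * avg_width)

def posterior_l(width, avg_widths, priors):
    """P(l | gGamma_n) by Bayes' theorem over the candidate l values."""
    likes = [p * porter_thomas_pdf(width, w)
             for p, w in zip(priors, avg_widths)]
    total = sum(likes)
    return [v / total for v in likes]

# Hypothetical average widths for s-, p- and d-waves (arbitrary units):
post = posterior_l(width=2.0,
                   avg_widths=[10.0, 1.0, 0.1],
                   priors=[1 / 3, 1 / 3, 1 / 3])
```

With these made-up parameters, an observed width of 2.0 is most consistent with a p-wave assignment; SUGGEL performs the analogous comparison with pdfs that also account for the two possible channel spins.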

  10. DFT applied to the study of carbon-doped zinc-blende (cubic) GaN

    NASA Astrophysics Data System (ADS)

    Espitia R, M. J.; Ortega-López, C.; Rodríguez Martínez, J. A.

    2016-08-01

    Employing first principles within the framework of density functional theory, the structural properties, electronic structure, and magnetism of C-doped zincblende (cubic) GaN were investigated. The calculations were carried out using the pseudopotential method as implemented in the Quantum ESPRESSO code. For the GaC0.0625N0.9375 concentration, a metallic behavior was found. This metallic property comes from the hybridization and polarization of C-2p states and their neighboring N-2p and Ga-4p states.

  11. Investigation of structural stability and elastic properties of CrH and MnH: A first principles study

    NASA Astrophysics Data System (ADS)

    Kanagaprabha, S.; Rajeswarapalanichamy, R.; Sudhapriyanga, G.; Murugan, A.; Santhosh, M.; Iyakutti, K.

    2015-06-01

    The structural and mechanical properties of CrH and MnH are investigated using first-principles calculations based on density functional theory as implemented in the VASP code with the generalized gradient approximation. The calculated ground state properties are in good agreement with previous experimental and other theoretical results. A structural phase transition from the NaCl to the NiAs phase at a pressure of 76 GPa is predicted for both CrH and MnH.

  12. The WFIRST Galaxy Survey Exposure Time Calculator

    NASA Technical Reports Server (NTRS)

    Hirata, Christopher M.; Gehrels, Neil; Kneib, Jean-Paul; Kruk, Jeffrey; Rhodes, Jason; Wang, Yun; Zoubian, Julien

    2013-01-01

    This document describes the exposure time calculator for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey. The calculator works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and SN determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The source code is made available for public use.

  13. Electronic structure, chemical bonding, and geometry of pure and Sr-doped CaCO3.

    PubMed

    Stashans, Arvids; Chamba, Gaston; Pinto, Henry

    2008-02-01

    The electronic structure, chemical bonding, geometry, and effects produced by Sr-doping in CaCO3 have been studied on the basis of density-functional theory using the VASP simulation package and molecular-orbital theory utilizing the CLUSTERD computer code. Two calcium carbonate structures which occur naturally in anhydrous crystalline forms, calcite and aragonite, were considered in the present investigation. The obtained diagrams of density of states show similar patterns for both materials. The spatial structures are computed and analyzed in comparison to the available experimental data. The electronic properties and atomic displacements because of the trace element Sr-incorporation are discussed in a comparative manner for the two crystalline structures. (c) 2007 Wiley Periodicals, Inc.

  14. Phonon and thermodynamical properties of CuSc: A DFT study

    NASA Astrophysics Data System (ADS)

    Jain, Ekta; Pagare, Gitanjali; Dubey, Shubha; Sanyal, S. P.

    2018-05-01

    A detailed systematic theoretical investigation of the phonon and thermodynamical behavior of the CuSc intermetallic compound has been carried out by using first-principles density functional theory in the B2-type (CsCl) crystal structure. The phonon dispersion curve and phonon density of states (PhDOS) are studied, which confirm the stability of the CuSc intermetallic compound in the B2 phase. It is found that the PhDOS at high frequencies is mostly composed of Sc states. We have also presented some temperature-dependent properties such as entropy, free energy, heat capacity, internal energy and thermal displacement, which are computed using the PHONON code. The various features of these quantities are discussed in detail. From these results we demonstrate that this intermetallic has better ductility and larger thermal expansion.

  15. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher-rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
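    Density evolution itself is easy to illustrate in a simpler setting than the protograph analysis used in the paper. The sketch below tracks the erasure probability of a (3,6)-regular LDPC ensemble on the binary erasure channel and bisects for its decoding threshold (known to be near 0.43, against a capacity limit of 0.5 for rate 1/2); it is a generic textbook example, not the ARAA computation.

```python
# One-dimensional density evolution for a (dv, dc)-regular LDPC
# ensemble on the binary erasure channel (BEC).

def bec_converges(eps, dv, dc, iters=2000, tol=1e-10):
    """True if the iterated erasure probability is driven to ~0."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bec_threshold(dv, dc):
    """Bisect for the largest channel erasure rate the ensemble tolerates."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if bec_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

thr = bec_threshold(3, 6)   # (3,6)-regular ensemble, rate 1/2
```

The same fixed-point idea, applied per edge of the ARAA protograph and with Gaussian-approximated message densities on the AWGN channel, yields the 0.21 dB threshold quoted above.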

  16. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheet experiments, and interacting jets experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Kuang; Libisch, Florian; Carter, Emily A., E-mail: eac@princeton.edu

    We report a new implementation of the density functional embedding theory (DFET) in the VASP code, using the projector-augmented-wave (PAW) formalism. Newly developed algorithms allow us to efficiently perform optimized effective potential optimizations within PAW. The new algorithm generates robust and physically correct embedding potentials, as we verified using several test systems including a covalently bound molecule, a metal surface, and bulk semiconductors. We show that with the resulting embedding potential, embedded cluster models can reproduce the electronic structure of point defects in bulk semiconductors, thereby demonstrating the validity of DFET in semiconductors for the first time. Compared to our previous version, the new implementation of DFET within VASP affords use of all features of VASP (e.g., a systematic PAW library, a wide selection of functionals, a more flexible choice of U correction formalisms, and faster computational speed) with DFET. Furthermore, our results are fairly robust with respect to both plane-wave and Gaussian type orbital basis sets in the embedded cluster calculations. This suggests that the density functional embedding method is potentially an accurate and efficient way to study properties of isolated defects in semiconductors.

  18. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20

    DOE PAGES

    Michael, J. Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens. It was shown that the analytical form for normalization coefficients is available primarily for l ≤ 4. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7. In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.
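    As a minimal numerical sketch of the normalization convention involved (assuming the usual pseudoatom convention that a density-normalized function with l > 0 integrates, in absolute value, to 2 over the sphere), the coefficient for the l = 1, m = 0 harmonic y ∝ cos θ can be recovered by quadrature; the exact value in this convention is 1/π.

```python
import math

def density_norm_coefficient(f, n=200000):
    """Find N such that the integral of |N * f(theta)| over the sphere
    equals 2, for an azimuthally symmetric f(theta). The phi integral
    contributes a factor 2*pi; theta is handled by the midpoint rule."""
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        total += abs(f(theta)) * math.sin(theta) * (math.pi / n)
    return 2.0 / (2.0 * math.pi * total)

N = density_norm_coefficient(math.cos)   # dipole (l = 1, m = 0) case
```

The paper's contribution is the analytical and 35-digit numerical evaluation of such coefficients up to l = 20; this sketch only demonstrates what "density-normalized" means for the simplest case.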

  19. Spheromak reactor-design study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Les, J.M.

    1981-06-30

    A general overview of spheromak reactor characteristics, such as MHD stability, start-up, and plasma geometry is presented. In addition, comparisons are made between spheromaks, tokamaks and field-reversed mirrors. The computer code Sphero is also discussed. Sphero is a zero-dimensional, time-independent transport code that uses particle confinement times and profile parameters as input, since they are not known with certainty at the present time. More specifically, Sphero numerically solves a given set of transport equations whose solutions include such variables as fuel ion (deuterium and tritium) density, electron density, alpha particle density, and ion and electron temperatures.

  20. The new semi-analytic code GalICS 2.0 - reproducing the galaxy stellar mass function and the Tully-Fisher relation simultaneously

    NASA Astrophysics Data System (ADS)

    Cattaneo, A.; Blaizot, J.; Devriendt, J. E. G.; Mamon, G. A.; Tollet, E.; Dekel, A.; Guiderdoni, B.; Kucukbas, M.; Thob, A. C. R.

    2017-10-01

    GalICS 2.0 is a new semi-analytic code to model the formation and evolution of galaxies in a cosmological context. N-body simulations based on a Planck cosmology are used to construct halo merger trees, track subhaloes, compute spins and measure concentrations. The accretion of gas on to galaxies and the morphological evolution of galaxies are modelled with prescriptions derived from hydrodynamic simulations. Star formation and stellar feedback are described with phenomenological models (as in other semi-analytic codes). GalICS 2.0 computes rotation speeds from the gravitational potential of the dark matter, the disc and the central bulge. As the rotation speed depends not only on the virial velocity but also on the ratio of baryons to dark matter within a galaxy, our calculation predicts a different Tully-Fisher relation from models in which v_rot ∝ v_vir. This is why GalICS 2.0 is able to reproduce the galaxy stellar mass function and the Tully-Fisher relation simultaneously. Our results are also in agreement with halo masses from weak lensing and satellite kinematics, gas fractions, the relation between star formation rate (SFR) and stellar mass, the evolution of the cosmic SFR density, bulge-to-disc ratios, disc sizes and the Faber-Jackson relation.

  1. cncRNAs: Bi-functional RNAs with protein coding and non-coding functions

    PubMed Central

    Kumari, Pooja; Sampath, Karuna

    2015-01-01

    For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein coding and non-coding functions, which we term 'cncRNAs', have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remain unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036

  2. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  3. Two-dimensional Coupled Petrological-tectonic Modelling of Extensional Basins

    NASA Astrophysics Data System (ADS)

    Kaus, B. J. P.; Podladchikov, Y. Y.; Connolly, J. A. D.

    Most numerical codes that simulate the deformation of a lithosphere assume the density of the lithosphere to be either constant or dependent only on temperature and pressure. It is, however, well known that rocks undergo phase transformations in response to changes in pressure and temperature. Such phase transformations may substantially alter the bulk properties of the rock (i.e., density, thermal conductivity, thermal expansivity and elastic moduli). Several previous studies demonstrated that the density effects due to phase transitions are indeed large enough to have an impact on the lithosphere dynamics. These studies were, however, oversimplified in that they accounted for only one or two schematic discontinuous phase transitions. The current study therefore takes into account all the reactions that occur for a realistic lithospheric composition. Calculation of the phase diagram and bulk physical properties of the stable phase assemblages for the crust and mantle within the continental lithosphere was done accounting for mineral solution behaviour using a free energy minimization program for natural rock compositions. The results of these calculations provide maps of the variations in rock properties as a function of pressure and temperature that are easily incorporated in any dynamic model computations. In this contribution we implemented a density map in the two-dimensional basin code TECMOD2D. We compare the results of the model with metamorphic reactions with a model without reactions and define some effective parameters that allow the use of a simpler model that still mimics most of the density effects of the metamorphic reactions.

  4. A Cross-Sectional Prevalence Study of Ethnically Targeted and General Audience Outdoor Obesity-Related Advertising

    PubMed Central

    Yancey, Antronette K; Cole, Brian L; Brown, Rochelle; Williams, Jerome D; Hillier, Amy; Kline, Randolph S; Ashe, Marice; Grier, Sonya A; Backman, Desiree; McCarthy, William J

    2009-01-01

    Context: Commercial marketing is a critical but understudied element of the sociocultural environment influencing Americans' food and beverage preferences and purchases. This marketing also likely influences the utilization of goods and services related to physical activity and sedentary behavior. A growing literature documents the targeting of racial/ethnic and income groups in commercial advertisements in magazines, on billboards, and on television that may contribute to sociodemographic disparities in obesity and chronic disease risk and protective behaviors. This article examines whether African Americans, Latinos, and people living in low-income neighborhoods are disproportionately exposed to advertisements for high-calorie, low nutrient–dense foods and beverages and for sedentary entertainment and transportation and are relatively underexposed to advertising for nutritious foods and beverages and goods and services promoting physical activities. Methods: Outdoor advertising density and content were compared in zip code areas selected to offer contrasts by area income and ethnicity in four cities: Los Angeles, Austin, New York City, and Philadelphia. Findings: Large variations were observed in the amount, type, and value of advertising in the selected zip code areas. Living in an upper-income neighborhood, regardless of its residents' predominant ethnicity, is generally protective against exposure to most types of obesity-promoting outdoor advertising (food, fast food, sugary beverages, sedentary entertainment, and transportation). The density of advertising varied by zip code area race/ethnicity, with African American zip code areas having the highest advertising densities, Latino zip code areas having slightly lower densities, and white zip code areas having the lowest densities. Conclusions: The potential health and economic implications of differential exposure to obesity-related advertising are substantial. Although substantive legal questions remain about the government's ability to regulate advertising, the success of limiting tobacco advertising offers lessons for reducing the marketing contribution to the obesigenicity of urban environments. PMID:19298419

  5. Density-Functional-Theory-Based Equation-of-State Table of Beryllium for Inertial Confinement Fusion Applications

    NASA Astrophysics Data System (ADS)

    Ding, Y. H.; Hu, S. X.

    2017-10-01

    Beryllium has been considered a superior ablator material for inertial confinement fusion target designs. Based on density-functional-theory calculations, we have established a wide-range beryllium equation-of-state (EOS) table spanning densities ρ = 0.001 to 500 g/cm3 and temperatures T = 2,000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO model and the Purgatorio model. For the principal Hugoniot, our FPEOS prediction shows 10% stiffer behavior than the last two models at maximum compression. Comparisons between FPEOS and SESAME for off-Hugoniot conditions show that both the pressure and internal energy differences are within 20% between the two EOS tables. By implementing the FPEOS table into the 1-D radiation-hydrodynamics code LILAC, we studied the EOS effects on beryllium target-shell implosions. The FPEOS simulation predicts up to a 15% higher neutron yield compared to the simulation using the SESAME 2023 EOS table. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  6. Remapping HELENA to incompressible plasma rotation parallel to the magnetic field

    NASA Astrophysics Data System (ADS)

    Poulipoulis, G.; Throumoulopoulos, G. N.; Konz, C.

    2016-07-01

    Plasma rotation in connection to both zonal and mean (equilibrium) flows can play a role in the transitions to the advanced confinement regimes in tokamaks, such as the L-H transition and the formation of internal transport barriers (ITBs). For incompressible rotation, the equilibrium is governed by a generalised Grad-Shafranov (GGS) equation and a decoupled Bernoulli-type equation for the pressure. For parallel flow, the GGS equation can be transformed to one identical in form with the usual Grad-Shafranov equation. In the present study, on the basis of the latter equation, we have extended HELENA, a fixed-boundary equilibrium solver. The extended code solves the GGS equation for a variety of the two free-surface-function terms involved for arbitrary Alfvén Mach number and density functions. We have constructed diverted-boundary equilibria pertinent to ITER and examined their characteristics, in particular as concerns the impact of rotation on certain equilibrium quantities. It turns out that the rotation and its shear noticeably affect the pressure and toroidal current density, with the impact on the current density being stronger in the parallel direction than in the toroidal one.

  7. Impact of thermal energy storage properties on solar dynamic space power conversion system mass

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.; Coles-Hamilton, Carolyn E.; Lacy, Dovie E.

    1987-01-01

    A 16 parameter solar concentrator/heat receiver mass model is used in conjunction with Stirling and Brayton Power Conversion System (PCS) performance and mass computer codes to determine the effect of thermal energy storage (TES) material property changes on overall PCS mass as a function of steady state electrical power output. Included in the PCS mass model are component masses as a function of thermal power for: concentrator, heat receiver, heat exchangers (source unless integral with heat receiver, heat sink, regenerator), heat engine units with optional parallel redundancy, power conditioning and control (PC and C), PC and C radiator, main radiator, and structure. Critical TES properties are: melting temperature, heat of fusion, density of the liquid phase, and the ratio of solid-to-liquid density. Preliminary results indicate that even though overall system efficiency increases with TES melting temperature up to 1400 K for concentrator surface accuracies of 1 mrad or better, reductions in the overall system mass beyond that achievable with lithium fluoride (LiF) can be accomplished only if the heat of fusion is at least 800 kJ/kg and the liquid density is comparable to that of LiF (1880 kg/cu m).
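    The role of the two critical TES properties named above can be seen in a back-of-envelope sizing. The stored-energy requirement below is hypothetical, the LiF liquid density is taken from the abstract, and the heat of fusion (~1040 kJ/kg) is an approximate literature value, not a number from the paper.

```python
# Back-of-envelope TES sizing: salt mass is set by the heat of fusion,
# containment volume by the (lower) liquid-phase density.

def tes_mass_kg(energy_kj, heat_of_fusion_kj_per_kg):
    """Salt mass needed to store the required thermal energy as latent heat."""
    return energy_kj / heat_of_fusion_kj_per_kg

def tes_volume_m3(mass_kg, liquid_density_kg_m3):
    """Containment volume required for the fully molten salt."""
    return mass_kg / liquid_density_kg_m3

E = 500_000.0                       # hypothetical stored energy, kJ
m_lif = tes_mass_kg(E, 1040.0)      # LiF heat of fusion, ~1040 kJ/kg
v_lif = tes_volume_m3(m_lif, 1880.0)  # LiF liquid density from the abstract
```

A candidate salt with a lower heat of fusion or lower liquid density needs proportionally more mass or volume, which is why the study sets the 800 kJ/kg floor quoted above.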

  9. Polystyrene Foam Products Equation of State as a Function of Porosity and Fill Gas

    NASA Astrophysics Data System (ADS)

    Mulford, R. N.; Swift, D. C.

    2009-12-01

An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary between various samples of foam, according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO2-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O2-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO2 decomposes at high temperatures.

  10. Scalable real space pseudopotential density functional codes for materials in the exascale regime

    NASA Astrophysics Data System (ADS)

    Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack

Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger sizes where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, or mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10^4 atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).
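A flavor of the real-space approach can be conveyed with a minimal sketch: a Kohn-Sham-like eigenproblem discretized with a finite-difference Laplacian on a grid and solved by dense diagonalization. This is a 1D toy with a harmonic test potential (atomic units), not the parallel algorithm of the abstract; production codes use high-order stencils, pseudopotentials, and iterative eigensolvers.

```python
import numpy as np

# Minimal 1D analogue of a real-space discretization (atomic units):
# H = -1/2 d^2/dx^2 + V(x), with a 3-point finite-difference Laplacian
# and hard-wall (Dirichlet) boundaries.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
V = 0.5 * x**2                      # harmonic test potential
H = -0.5 * lap + np.diag(V)

eigvals = np.linalg.eigvalsh(H)
print(eigvals[:3])                  # approaches the exact 0.5, 1.5, 2.5
```

In a real 3D code the Hamiltonian is never stored densely; only its action on a wave function is applied, which is what makes the method amenable to massive parallelism.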

  11. Spatial panel analyses of alcohol outlets and motor vehicle crashes in California: 1999–2008

    PubMed Central

    Ponicki, William R.; Gruenewald, Paul J.; Remer, Lillian G.

    2014-01-01

Although past research has linked alcohol outlet density to higher rates of drinking and many related social problems, there is conflicting evidence of density’s association with traffic crashes. An abundance of local alcohol outlets simultaneously encourages drinking and reduces driving distances required to obtain alcohol, leading to an indeterminate expected impact on alcohol-involved crash risk. This study separately investigates the effects of outlet density on (1) the risk of injury crashes relative to population and (2) the likelihood that any given crash is alcohol-involved, as indicated by police reports and single-vehicle nighttime status of crashes. Alcohol outlet density effects are estimated using Bayesian misalignment Poisson analyses of all California ZIP codes over the years 1999–2008. These misalignment models allow panel analysis of ZIP-code data despite frequent redefinition of postal-code boundaries, while also controlling for overdispersion and the effects of spatial autocorrelation. Because models control for overall retail density, estimated alcohol-outlet associations represent the extra effect of retail establishments selling alcohol. The results indicate a number of statistically well-supported associations between retail density and crash behavior, but the implied effects on crash risks are relatively small. Alcohol-serving restaurants have a greater impact on overall crash risks than on the likelihood that those crashes involve alcohol, whereas bars primarily affect the odds that crashes are alcohol-involved. Off-premise outlet density is negatively associated with risks of both crashes and alcohol involvement, while the presence of a tribal casino in a ZIP code is linked to higher odds of police-reported drinking involvement. Alcohol outlets in a given area are found to influence crash risks both locally and in adjacent ZIP codes, and significant spatial autocorrelation also suggests important relationships across geographical units. These results suggest that each type of alcohol outlet can have differing impacts on risks of crashing as well as the alcohol involvement of those crashes. PMID:23537623
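The statistical core here is Poisson regression of crash counts with population as the exposure. The sketch below is hedged: it is an ordinary maximum-likelihood fit, not the paper's Bayesian misalignment model, and every number and variable name is synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic panel: crash counts ~ Poisson(pop * exp(b0 + b1 * outlets)).
# Coefficients and covariates are illustrative, not the paper's data.
n = 2000
pop = rng.uniform(1e3, 5e4, n)          # ZIP-code population (exposure)
outlets = rng.uniform(0.0, 5.0, n)      # outlets per 1000 residents
b0_true, b1_true = -7.0, 0.12
counts = rng.poisson(pop * np.exp(b0_true + b1_true * outlets))

def negloglik(beta):
    # log-link with log(pop) as a fixed offset; Poisson NLL up to a constant
    eta = np.log(pop) + beta[0] + beta[1] * outlets
    return np.sum(np.exp(eta) - counts * eta)

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
b0_hat, b1_hat = fit.x
print(b0_hat, b1_hat)                   # should recover b0_true, b1_true
```

The paper's model additionally handles spatial autocorrelation, overdispersion, and the misaligned ZIP-code boundaries, none of which this sketch attempts.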

  12. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles density functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  13. Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and public release

    NASA Astrophysics Data System (ADS)

    Natale, Giovanni; Popescu, Cristina C.; Tuffs, Richard J.; Clarke, Adam J.; Debattista, Victor P.; Fischera, Jörg; Pasetto, Stefano; Rushton, Mark; Thirlwall, Jordan J.

    2017-11-01

    We present an extensively updated version of the purely ray-tracing 3D dust radiation transfer code DART-Ray. The new version includes five major upgrades: 1) a series of optimizations for the ray-angular density and the scattered radiation source function; 2) the implementation of several data and task parallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust self-heating; 4) the ability to produce surface brightness maps for observers within the models in HEALPix format; 5) the possibility to set the expected numerical accuracy already at the start of the calculation. We tested the updated code with benchmark models where the dust self-heating is not negligible. Furthermore, we performed a study of the extent of the source influence volumes, using galaxy models, which are critical in determining the efficiency of the DART-Ray algorithm. The new code is publicly available, documented for both users and developers, and accompanied by several programmes to create input grids for different model geometries and to import the results of N-body and SPH simulations. These programmes can be easily adapted to different input geometries, and for different dust models or stellar emission libraries.

  14. Comparative modelling of lower hybrid current drive with two launcher designs in the Tore Supra tokamak

    NASA Astrophysics Data System (ADS)

    Nilsson, E.; Decker, J.; Peysson, Y.; Artaud, J.-F.; Ekedahl, A.; Hillairet, J.; Aniel, T.; Basiuk, V.; Goniche, M.; Imbeaux, F.; Mazon, D.; Sharma, P.

    2013-08-01

Fully non-inductive operation with lower hybrid current drive (LHCD) in the Tore Supra tokamak is achieved using either a fully active multijunction (FAM) launcher or a more recent ITER-relevant passive active multijunction (PAM) launcher, or both launchers simultaneously. While both antennas show comparable experimental efficiencies, the analysis of stability properties in long discharges suggests different current profiles. We present comparative modelling of LHCD with the two different launchers to characterize the effect of the respective antenna spectra on the driven current profile. The interpretative modelling of LHCD is carried out using a chain of codes calculating, respectively, the global discharge evolution (tokamak simulator METIS), the spectrum at the antenna mouth (LH coupling code ALOHA), the LH wave propagation (ray-tracing code C3PO), and the distribution function (3D Fokker-Planck code LUKE). Essential aspects of the fast electron dynamics in time, space and energy are obtained from hard x-ray measurements of fast electron bremsstrahlung emission using a dedicated tomographic system. LHCD simulations are validated by systematic comparisons between these experimental measurements and the reconstructed signal calculated by the code R5X2 from the LUKE electron distribution. An excellent agreement is obtained in the presence of strong Landau damping (found under low density and high-power conditions in Tore Supra) for which the ray-tracing model is valid for modelling the LH wave propagation. Two aspects of the antenna spectra are found to have a significant effect on LHCD. First, the driven current is found to be proportional to the directivity, which depends upon the respective weight of the main positive and main negative lobes and is particularly sensitive to the density in front of the antenna. Second, the position of the main negative lobe in the spectrum is different for the two launchers. As this lobe drives a counter-current, the resulting driven current profile is also different for the FAM and PAM launchers.

  15. Opacity of iron, nickel, and copper plasmas in the x-ray wavelength range: Theoretical interpretation of 2p-3d absorption spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blenski, T.; Loisel, G.; Poirier, M.

    2011-09-15

This paper deals with theoretical studies on the 2p-3d absorption in iron, nickel, and copper plasmas related to LULI2000 (Laboratoire pour l'Utilisation des Lasers Intenses, 2000 J facility) measurements in which target temperatures were of the order of 20 eV and plasma densities were in the range 0.004-0.01 g/cm^3. The radiatively heated targets were close to local thermodynamic equilibrium (LTE). The structure of 2p-3d transitions has been studied with the help of the statistical superconfiguration opacity code sco and with the fine-structure atomic physics codes hullac and fac. A new mixed version of the sco code, allowing one to treat part of the configurations by detailed calculation based on Cowan's code rcg, has also been used in these comparisons. Special attention was paid to comparisons between theory and experiment concerning the term features which cannot be reproduced by sco. The differences in the spin-orbit splitting and the statistical (thermal) broadening of the 2p-3d transitions have been investigated as a function of the atomic number Z. It appears that at the conditions of the experiment the role of the term and configuration broadening was different in the three analyzed elements, this broadening being sensitive to the atomic number. Some effects of the temperature gradients and possible non-LTE effects have been studied with the help of the radiative-collisional code scric. The sensitivity of the 2p-3d structures with respect to temperature and density in medium-Z plasmas may be helpful for diagnostics of LTE plasmas, especially in future experiments on the Δn = 0 absorption in medium-Z plasmas for astrophysical applications.

  16. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
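The elementary step of the Lucy method is sampling packet path lengths from the optical-depth distribution. A minimal, hedged sketch (pure absorption in a uniform slab at normal incidence; no scattering or dust-temperature update, which SPAMCART of course includes): the transmitted packet fraction should approach the analytic exp(-τ).

```python
import numpy as np

rng = np.random.default_rng(42)

# Uniform slab of total optical depth tau along z. Each packet samples an
# interaction depth tau_int = -ln(xi), xi ~ U(0,1), and escapes if
# tau_int exceeds the slab's total optical depth.
tau = 1.5
n_packets = 200_000
tau_interaction = -np.log(rng.random(n_packets))
escaped = np.mean(tau_interaction > tau)

print(escaped, np.exp(-tau))   # Monte Carlo estimate vs analytic e^-tau
```

In the SPH setting the optical depth is accumulated along the trajectory through the particle kernels rather than taken from a constant-density slab.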

  17. Recent Updates to the MELCOR 1.8.2 Code for ITER Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrill, Brad J

This report documents recent changes made to the MELCOR 1.8.2 computer code for application to the International Thermonuclear Experimental Reactor (ITER), as required by ITER Task Agreement ITA 81-18. There are four areas of change documented by this report. The first area is the addition to this code of a model for transporting HTO. The second area is the updating of the material oxidation correlations to match those specified in the ITER Safety Analysis Data List (SADL). The third area replaces a modification to an aerosol transport subroutine that specified the nominal aerosol density internally with one that now allows the user to specify this density through user input. The fourth area corrects an error that existed in an air condensation subroutine of previous versions of this modified MELCOR code. The appendices of this report contain FORTRAN listings of the coding for these modifications.

  18. Simulation of surface processes

    PubMed Central

    Jónsson, Hannes

    2011-01-01

    Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939
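The harmonic transition state theory underlying adaptive kinetic Monte Carlo assigns each escape pathway a rate of the Arrhenius/Vineyard form k = ν₀ exp(-ΔE / k_B T). A hedged sketch with illustrative numbers (the prefactor and barrier are not taken from any particular system):

```python
import math

# Harmonic transition state theory rate: k = nu0 * exp(-dE / (kB * T)).
kB = 8.617333262e-5          # Boltzmann constant in eV/K

def htst_rate(nu0_hz, barrier_ev, temp_k):
    """Escape rate for one pathway in the harmonic approximation."""
    return nu0_hz * math.exp(-barrier_ev / (kB * temp_k))

k300 = htst_rate(1e13, 0.5, 300.0)   # illustrative: 0.5 eV barrier at 300 K
k600 = htst_rate(1e13, 0.5, 600.0)
print(k300, k600)                    # rate rises steeply with temperature
```

Even for this modest 0.5 eV barrier, the mean waiting time 1/k at 300 K exceeds the vibrational period 1/ν₀ by more than eight orders of magnitude, which is exactly the time scale gap that rules out direct dynamics and motivates the kinetic Monte Carlo approach described above.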

  19. Alfven resonance mode conversion in the Phaedrus-T current drive experiments: Modelling and density fluctuations measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vukovic, M.; Harper, M.; Breun, R.

    1995-12-31

Current drive experiments on the Phaedrus-T tokamak performed with a low field side two-strap fast wave antenna at frequencies below ω_cH show loop voltage drops of up to 30% with strap phasing (0, π/2). RF induced density fluctuations in the plasma core have also been observed with a microwave reflectometer. It is believed that they are caused by kinetic Alfven waves generated by mode conversion of fast waves at the Alfven resonance. Correlation of the observed density fluctuations with the magnitude of the ΔV_loop suggests that the ΔV_loop is attributable to current drive/heating due to mode-converted kinetic Alfven waves. The toroidal cold plasma wave code LION is used to model the Alfven resonance mode conversion surfaces in the experiments, while the cylindrical hot plasma kinetic wave code ISMENE is used to model the behavior of kinetic Alfven waves at the Alfven resonance location. Initial results obtained from limited density, magnetic field, antenna phase, and impurity scans show good agreement between the RF induced density fluctuations and the predicted behavior of the kinetic Alfven waves. Detailed comparisons between the density fluctuations and the code predictions are presented.

  20. Solving the Vlasov equation in two spatial dimensions with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Kopp, Michael; Vattis, Kyriakos; Skordis, Constantinos

    2017-12-01

We demonstrate that the Vlasov equation describing collisionless self-gravitating matter may be solved with the so-called Schrödinger method (ScM). With the ScM, one solves the Schrödinger-Poisson system of equations for a complex wave function in d dimensions, rather than the Vlasov equation for a 2d-dimensional phase space density. The ScM also allows calculating the d-dimensional cumulants directly through quasilocal manipulations of the wave function, avoiding the complexity of 2d-dimensional phase space. We perform for the first time a quantitative comparison of the ScM and a conventional Vlasov solver in d = 2 dimensions. Our numerical tests were carried out using two types of cold cosmological initial conditions: the classic collapse of a sine wave and those of a Gaussian random field as commonly used in cosmological cold dark matter N-body simulations. We compare the first three cumulants, that is, the density, velocity and velocity dispersion, to those obtained by solving the Vlasov equation using the publicly available code ColDICE. We find excellent qualitative and quantitative agreement between these codes, demonstrating the feasibility and advantages of the ScM as an alternative to N-body simulations. We discuss the emergence of effective vorticity in the ScM through the winding number around the points where the wave function vanishes. As an application we evaluate the background pressure induced by the non-linearity of large scale structure formation, thereby estimating the magnitude of cosmological backreaction. We find that it is negligibly small and has time dependence and magnitude compatible with expectations from the effective field theory of large scale structure.
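The quasilocal extraction of cumulants from ψ can be sketched in 1D via the Madelung relations: ρ = |ψ|² and v = (ħ/m) ∂(arg ψ)/∂x. The snippet below (units with ħ = m = 1, spectral derivative, a plane-wave test state with uniform density and velocity) is illustrative only, not the production ScM solver:

```python
import numpy as np

# First two cumulants of an ScM wave function in 1D (hbar = m = 1):
# density rho = |psi|^2 and velocity v = Im(psi' / psi) (phase gradient).
n, L = 256, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 3.0                                  # integer mode so psi is periodic
psi = np.exp(1j * k * x) / np.sqrt(L)    # plane-wave test state

rho = np.abs(psi) ** 2
kgrid = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
dpsi = np.fft.ifft(1j * kgrid * np.fft.fft(psi)) # spectral derivative
v = np.imag(dpsi / psi)

print(rho[:3], v[:3])   # uniform density 1/L and v = k everywhere
```

For this test state the recovery is exact; near the zeros of ψ, where the effective vorticity discussed above emerges, the velocity field requires more careful treatment.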

  1. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a Gaussian copula with Pareto marginal probability density functions.
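For reference, a common parameterization of the two-dimensional Gabor function, with explicit size (σ), aspect-ratio (γ), spatial-frequency (1/λ), orientation, and phase parameters; the names and conventions below are the generic ones, not necessarily the paper's notation:

```python
import numpy as np

def gabor_2d(x, y, sigma=2.0, gamma=0.5, wavelength=4.0, theta=0.0, phase=0.0):
    """Standard 2D Gabor: oriented Gaussian envelope times a sinusoidal carrier.

    sigma -- envelope size; gamma -- aspect ratio; wavelength -- carrier
    period (spatial frequency = 1/wavelength); theta -- orientation;
    phase -- carrier phase. Conventions are illustrative.
    """
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into the filter frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

xx, yy = np.meshgrid(np.linspace(-8, 8, 65), np.linspace(-8, 8, 65))
g = gabor_2d(xx, yy)
print(g.shape, g[32, 32])   # even-phase filter peaks at the origin
```

The three parameters the paper finds to dominate learning correspond here to sigma, wavelength, and gamma.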

  2. NR-code: Nonlinear reconstruction code

    NASA Astrophysics Data System (ADS)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  3. Benchmark of 3D halo neutral simulation in TRANSP and FIDASIM and application to projected neutral-beam-heated NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Liu, D.; Medley, S. S.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2014-10-01

A cloud of halo neutrals is created in the vicinity of the beam footprint during neutral beam injection, and the halo neutral density can be comparable with the beam neutral density. Proper modeling of halo neutrals is critical to correctly interpret neutral particle analyzer (NPA) and fast ion D-alpha (FIDA) signals since these signals strongly depend on the local beam and halo neutral density. A 3D halo neutral model has recently been developed and implemented inside the TRANSP code. The 3D halo neutral code uses a "beam-in-a-box" model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produces thermal halo neutrals that are tracked through successive halo neutral generations until an ionization event occurs or a descendant halo exits the box. A benchmark between the 3D halo neutral model in TRANSP and the FIDA/NPA synthetic diagnostic code FIDASIM is carried out. A detailed comparison of halo neutral density profiles from the two codes will be shown. The NPA and FIDA simulations with and without 3D halos are applied to projections of plasma performance for the National Spherical Torus eXperiment-Upgrade (NSTX-U), and the effects of halo neutral density on NPA and FIDA signal amplitude and profile will be presented. Work supported by US DOE.

  4. Analysis of the Effect of Electron Density Perturbations Generated by Gravity Waves on HF Communication Links

    NASA Astrophysics Data System (ADS)

    Fagre, M.; Elias, A. G.; Chum, J.; Cabrera, M. A.

    2017-12-01

In the present work, ray tracing of high frequency (HF) signals under disturbed ionospheric conditions is analyzed, particularly in the presence of electron density perturbations generated by gravity waves (GWs). The analysis uses the three-dimensional numerical ray tracing code of Jones and Stephenson, based on Hamilton's equations, which is commonly used to study radio propagation through the ionosphere. An electron density perturbation model was implemented in this code, based on atmospheric GWs generated at a height of 150 km in the thermosphere and propagating up into the ionosphere. The motion of the neutral gas at these altitudes induces disturbances in the background plasma which affect HF signal propagation. To obtain a realistic model of GWs for analyzing their propagation and dispersion characteristics, a GW ray tracing method with kinematic viscosity and thermal diffusivity was applied. The IRI-2012, HWM14 and NRLMSISE-00 models were incorporated to assess the electron density, wind velocities, neutral temperature and total mass density needed for the ray tracing codes. Preliminary results of gravity wave effects on ground range and reflection height are presented for the low- to mid-latitude ionosphere.
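The Hamiltonian ray equations at the heart of such codes can be sketched in a toy setting: a 2D ray in an isotropic, horizontally stratified medium with dispersion ω(r, k) = c|k|/n(z), integrating dr/dt = ∂ω/∂k and dk/dt = -∂ω/∂r with a Runge-Kutta step. The linear n(z) profile and launch angle below are illustrative; the real code instead uses the magnetoionic dispersion relation with IRI-derived electron densities.

```python
import numpy as np

# Toy 2D Hamiltonian ray trace for omega(r, k) = c |k| / n(z).
c = 1.0

def n_of_z(z):                # refractive index falling with height (illustrative)
    return 1.0 - 0.004 * z

def dn_dz(z):
    return -0.004

def rhs(state):
    x, z, kx, kz = state
    kmag = np.hypot(kx, kz)
    n = n_of_z(z)
    drdt = c * np.array([kx, kz]) / (n * kmag)      # d r/dt  =  d omega/d k
    dkzdt = c * kmag * dn_dz(z) / n**2              # d kz/dt = -d omega/d z
    return np.array([drdt[0], drdt[1], 0.0, dkzdt])  # d kx/dt = 0 (stratified)

# Launch at 70 degrees elevation with |k| = 1 (so omega = 1 at the ground).
state = np.array([0.0, 0.0, np.cos(np.radians(70.0)), np.sin(np.radians(70.0))])
dt = 0.02
path = [state.copy()]
for _ in range(10000):
    # Classical fourth-order Runge-Kutta step.
    k1 = rhs(state); k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2); k4 = rhs(state + dt * k3)
    state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    path.append(state.copy())
path = np.array(path)

apogee = path[:, 1].max()
print(apogee, path[-1, 1])    # the ray climbs, refracts, and turns downward
```

Because the medium is stratified, kx is an exact invariant of the equations (Snell's law), and the ray turns near the height where n(z) has fallen to the launch value of kx/|k|.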

  5. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  6. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
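The adsorption-rate piece of classical collision theory is the standard Hertz-Knudsen wall flux, F = s₀ P / √(2π m k_B T). A hedged sketch (the gas, sticking coefficient, and site density are illustrative, not SurfKin defaults):

```python
import math

# Collision-theory adsorption flux (Hertz-Knudsen): F = s0 * P / sqrt(2 pi m kB T).
kB = 1.380649e-23        # Boltzmann constant, J/K
amu = 1.66053906660e-27  # atomic mass unit, kg

def adsorption_flux(pressure_pa, mass_amu, temp_k, sticking=1.0):
    """Molecules striking (and sticking to) unit surface area per second."""
    m = mass_amu * amu
    return sticking * pressure_pa / math.sqrt(2.0 * math.pi * m * kB * temp_k)

flux = adsorption_flux(1.0e5, 16.0, 500.0)   # illustrative: 1 bar CH4 at 500 K
site_density = 1.5e19                        # surface sites per m^2, illustrative
print(flux, flux / site_density)             # flux (m^-2 s^-1) and per-site hit rate
```

Dividing the flux by the surface site density gives the per-site adsorption attempt frequency that enters a microkinetic mechanism alongside the transition-state-theory rates for surface steps.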

  7. Computational Thermodynamics of Materials Zi-Kui Liu and Yi Wang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devanathan, Ram

This authoritative volume introduces the reader to computational thermodynamics and the use of this approach to the design of material properties by tailoring the chemical composition. The text covers applications of this approach, introduces the relevant computational codes, and offers exercises at the end of each chapter. The book has nine chapters and two appendices that provide background material on computer codes. Chapter 1 covers the first and second laws of thermodynamics, introduces the spinodal as the limit of stability, and presents the Gibbs-Duhem equation. Chapter 2 focuses on the Gibbs energy function. Starting with a homogeneous system with a single phase, the authors proceed to phases with variable compositions, and polymer blends. The discussion includes the contributions of external electric and magnetic fields to the Gibbs energy. Chapter 3 deals with phase equilibria in heterogeneous systems, the Gibbs phase rule, and phase diagrams. Chapter 4 briefly covers experimental measurements of thermodynamic properties used as input for thermodynamic modeling by Calculation of Phase Diagrams (CALPHAD). Chapter 5 discusses the use of density functional theory to obtain thermochemical data and fill gaps where experimental data is missing. The reader is introduced to the Vienna Ab Initio Simulation Package (VASP) for density functional theory and the YPHON code for phonon calculations. Chapter 6 introduces the modeling of Gibbs energy of phases with the CALPHAD method. Chapter 7 deals with chemical reactions and the Ellingham diagram for metal-oxide systems and presents the calculation of the maximum reaction rate from equilibrium thermodynamics. Chapter 8 is devoted to electrochemical reactions and Pourbaix diagrams with application examples. Chapter 9 concludes this volume with the application of a model of multiple microstates to Ce and Fe3Pt. CALPHAD modeling is briefly discussed in the context of genomics of materials. 
The book introduces basic thermodynamic concepts clearly and directs readers to appropriate references for advanced concepts and details of software implementation. The list of references is quite comprehensive. The authors make liberal use of diagrams to illustrate key concepts. The two Appendices at the end discuss software requirements and the file structure, and present templates for special quasi-random structures. There is also a link to download pre-compiled binary files of the YPHON code for Linux or Microsoft Windows systems. The exercises at the end of the chapters assume that the reader has access to VASP, which is not freeware. Readers without access to this code can work on a limited number of exercises. However, results from other first principles codes can be organized in the YPHON format as explained in the Appendix. This book will serve as an excellent reference on computational thermodynamics and the exercises provided at the end of each chapter make it valuable as a graduate level textbook. Reviewer: Ram Devanathan is Acting Director of Earth Systems Science Division, Pacific Northwest National Laboratory, USA.

  8. Comparative study of beam losses and heat loads reduction methods in MITICA beam source

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Agostinetti, P.; Dal Bello, S.; Marcuzzi, D.; Serianni, G.; Sonato, P.; Veltri, P.

    2014-02-01

    In negative ion electrostatic accelerators a considerable fraction of extracted ions is lost by collision processes causing efficiency loss and heat deposition over the components. Stripping is proportional to the local density of gas, which is steadily injected in the plasma source; its pumping from the extraction and acceleration stages is a key functionality for the prototype of the ITER Neutral Beam Injector, and it can be simulated with the 3D code AVOCADO. Different geometric solutions were tested aiming at the reduction of the gas density. The parameter space considered is limited by constraints given by optics, aiming, voltage holding, beam uniformity, and mechanical feasibility. The guidelines of the optimization process are presented together with the proposed solutions and the results of numerical simulations.

  9. Correlation between CT numbers and tissue parameters needed for Monte Carlo simulations of clinical dose distributions

    NASA Astrophysics Data System (ADS)

    Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang

    2000-02-01

We describe a new method to convert CT numbers into mass density and elemental weights of tissues required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media which are composed of only two components in varying proportions, are derived. Compared with conventional data fits, no loss of accuracy is accepted when using the interpolation functions. Assuming plausible values for the deviations of calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm^-3. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 or 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
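For a two-component medium the mass-density relationship within one CT-number section reduces to a linear interpolation between the two bounding tissues. A hedged sketch (the bounding CT numbers and densities below are round illustrative values, not the paper's fitted data):

```python
def density_from_ct(h, tissue1, tissue2):
    """Interpolate mass density within one CT-number section, assuming a
    two-component mixture between the bounding tissues. Each tissue is a
    (CT number in HU, density in g/cm^3) pair; values are illustrative."""
    h1, rho1 = tissue1
    h2, rho2 = tissue2
    t = (h - h1) / (h2 - h1)
    return rho1 + t * (rho2 - rho1)

# Illustrative section: adipose tissue (~ -100 HU, 0.95 g/cm^3) to
# muscle (~ 50 HU, 1.05 g/cm^3).
adipose, muscle = (-100.0, 0.95), (50.0, 1.05)
rho = density_from_ct(0.0, adipose, muscle)
print(rho)   # density assigned to a voxel with H = 0
```

The elemental weights are interpolated section by section in the same spirit, with their own functional relationships.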

  10. Error floor behavior study of LDPC codes for concatenated codes design

    NASA Astrophysics Data System (ADS)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  11. Uncertainty Propagation in OMFIT

    NASA Astrophysics Data System (ADS)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
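    OMFIT relies on the python uncertainties package for this; the underlying idea, linear (delta-method) propagation of the fit-parameter covariance through a fitted profile, can be sketched with the standard library alone. The modified-tanh form and the parameter values below are hypothetical, not a DIII-D fit.

```python
import math

def mtanh(x, p):
    """A simple modified-tanh pedestal profile: height, position, width."""
    h, x0, w = p
    return 0.5 * h * (1.0 - math.tanh((x - x0) / w))

def propagate(x, p, cov, f=mtanh, eps=1e-6):
    """Delta-method propagation of the parameter covariance `cov` to the
    profile value f(x; p): var = g^T cov g, with g a finite-difference
    gradient with respect to the fit parameters."""
    g = []
    for i in range(len(p)):
        q = list(p); q[i] += eps
        g.append((f(x, q) - f(x, p)) / eps)
    var = sum(g[i] * cov[i][j] * g[j]
              for i in range(len(p)) for j in range(len(p)))
    return f(x, p), math.sqrt(max(var, 0.0))

# Hypothetical fit result with a small height-position covariance
p   = [1.0, 0.95, 0.05]
cov = [[1e-4, 2e-6, 0.0],
       [2e-6, 1e-6, 0.0],
       [0.0,  0.0,  1e-8]]
val, err = propagate(0.95, p, cov)   # value and 1-sigma at the pedestal top
```

The off-diagonal covariance terms are exactly what independent per-parameter error bars would miss, which is why the abstract stresses covariant uncertainties.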

  12. UCLA-LANL Reanalysis Project

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Chen, Y.; Friedel, R.; Kondrashov, D.; Ni, B.; Subbotin, D.; Reeves, G.; Ghil, M.

    2009-04-01

    We present first results of the UCLA-LANL Reanalysis Project. Radiation belt relativistic electron phase space density is obtained using the data-assimilative VERB code combined with observations from GEO, CRRES, and Akebono. The reanalysis shows pronounced peaks in the phase space density and pronounced dropouts of fluxes during the main phase of a storm. The results of the reanalysis are discussed and compared to simulations with the recently developed VERB 3D code.

  13. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven, laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive mesh refinement (AMR) hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used on many applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  14. Sensitivity of PBX-9502 after ratchet growth

    NASA Astrophysics Data System (ADS)

    Mulford, Roberta N.; Swift, Damian

    2012-03-01

    Ratchet growth, or irreversible thermal expansion of the TATB-based plastic-bonded explosive PBX-9502, leads to increased sensitivity, as a result of increased porosity. The observed increase of between 3.1 and 3.5 volume percent should increase sensitivity according to the published Pop-plots for PBX-9502 [1]. Because of the variable size, shape, and location of the increased porosity, the observed sensitivity of the ratchet-grown sample is less than the sensitivity of a sample pressed to the same density. Modeling of the composite, using a quasi-harmonic EOS for unreacted components [2] and a robust porosity model for variations in density [3], allowed comparison of the initiation observed in experiment with behavior modeled as a function of density. An Arrhenius model was used to describe reaction, and the EOS for products was generated using the CHEETAH code [4]. A 1-D Lagrangian hydrocode was used to model in-material gauge records and the measured turnover to detonation, predicting greater sensitivity to density than observed for ratchet-grown material. This observation is consistent with gauge records indicating intermittent growth of the reactive wave, possibly due to inhomogeneities in density, as observed in SEM images of the material [5].

  15. X-ray clusters from a high-resolution hydrodynamic PPM simulation of the cold dark matter universe

    NASA Technical Reports Server (NTRS)

    Bryan, Greg L.; Cen, Renyue; Norman, Michael L.; Ostriker, Jeremiah P.; Stone, James M.

    1994-01-01

    A new three-dimensional hydrodynamic code based on the piecewise parabolic method (PPM) is utilized to compute the distribution of hot gas in the standard Cosmic Background Explorer (COBE)-normalized cold dark matter (CDM) universe. Utilizing periodic boundary conditions, a box with size 85 h(exp-1) Mpc, having cell size 0.31 h(exp-1) Mpc, is followed in a simulation with 270(exp 3)=10(exp 7.3) cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, Sigma(sub 8)=1.05, Omega(sub b)=0.06, we find the X-ray-emitting clusters, compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. The results, which are compared with those obtained in the preceding paper (Kang et al. 1994a), may be used in conjunction with ROSAT and other observational data sets. Overall, the results of the two computations are qualitatively very similar with regard to the trends of cluster properties, i.e., how the number density, radius, and temperature depend on luminosity and redshift. The total luminosity from clusters is approximately a factor of 2 higher using the PPM code (as compared to the 'total variation diminishing' (TVD) code used in the previous paper) with the number of bright clusters higher by a similar factor. The primary conclusions of the prior paper, with regard to the power spectrum of the primeval density perturbations, are strengthened: the standard CDM model, normalized to the COBE microwave detection, predicts too many bright X-ray emitting clusters, by a factor probably in excess of 5. The comparison between observations and theoretical predictions for the evolution of cluster properties, luminosity functions, and size and temperature distributions should provide an important discriminator among competing scenarios for the development of structure in the universe.

  16. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.

  17. Calculation of thermodynamic functions of aluminum plasma for high-energy-density systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaev, V. V., E-mail: shumaev@student.bmstu.ru

    The results of calculating the degree of ionization, the pressure, and the specific internal energy of aluminum plasma in a wide temperature range are presented. The TERMAG computational code based on the Thomas–Fermi model was used at temperatures T > 10^5 K, and the ionization equilibrium model (Saha model) was applied at lower temperatures. Quantitatively similar results were obtained in the temperature range where both models are applicable. This suggests that the obtained data may be joined to produce a wide-range equation of state.
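    The Saha model used at the lower temperatures balances adjacent ionization stages. A minimal sketch of the right-hand side of the Saha equation for a single stage is given below; the statistical-weight ratio is a placeholder, and 5.99 eV (the first ionization potential of aluminum) is used only as an example input.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837015e-31  # electron mass, kg
H   = 6.62607015e-34    # Planck constant, J s
EV  = 1.602176634e-19   # 1 eV in J

def saha_rhs(T, chi_eV, g_ratio=1.0):
    """Right-hand side of the Saha equation,
    n_{i+1} n_e / n_i = 2 g_ratio (2 pi m_e k T / h^2)^{3/2} exp(-chi/kT),
    in m^-3, for one ionization stage with ionization energy chi."""
    return (2.0 * g_ratio
            * (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5
            * math.exp(-chi_eV * EV / (K_B * T)))

r_low  = saha_rhs(1.0e4, 5.99)   # aluminum first ionization, 10^4 K
r_high = saha_rhs(2.0e4, 5.99)   # hotter plasma, more ionized
```

Solving this relation simultaneously for all stages, together with charge and particle conservation, yields the ionization-state populations that feed the pressure and internal-energy sums.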

  18. First-Principles Approach to Model Electrochemical Reactions: Understanding the Fundamental Mechanisms behind Mg Corrosion

    NASA Astrophysics Data System (ADS)

    Surendralal, Sudarsan; Todorova, Mira; Finnis, Michael W.; Neugebauer, Jörg

    2018-06-01

    Combining concepts of semiconductor physics and corrosion science, we develop a novel approach that allows us to perform ab initio calculations under controlled potentiostat conditions for electrochemical systems. The proposed approach can be straightforwardly applied in standard density functional theory codes. To demonstrate the performance and the opportunities opened by this approach, we study the chemical reactions that take place during initial corrosion at the water-Mg interface under anodic polarization. Based on this insight, we derive an atomistic model that explains the origin of the anodic hydrogen evolution.

  19. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation; the framework is general and applies to any modulation, and it can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
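    The second-stage circulant lifting can be sketched as follows: each 1 in the (intermediate) protograph matrix becomes a Z x Z circulant permutation matrix, each 0 a zero block. The base matrix and shift values below are arbitrary illustrations, not a designed code from the patent.

```python
def circulant_lift(base, shifts, Z):
    """Lift a binary protograph matrix: each 1 entry becomes a Z x Z
    circulant permutation matrix (the identity cyclically shifted by
    shifts[r][c]), each 0 entry a Z x Z zero block. Returns the lifted
    parity-check matrix as a list of rows."""
    R, C = len(base), len(base[0])
    H = [[0] * (C * Z) for _ in range(R * Z)]
    for r in range(R):
        for c in range(C):
            if base[r][c]:
                s = shifts[r][c] % Z
                for k in range(Z):
                    H[r * Z + k][c * Z + (k + s) % Z] = 1
    return H

base   = [[1, 1, 0],       # hypothetical 2x3 intermediate protograph
          [0, 1, 1]]
shifts = [[1, 2, 0],       # arbitrary circulant shift values
          [0, 0, 3]]
H = circulant_lift(base, shifts, 4)   # lift factor Z = 4 -> 8x12 matrix
```

Because each block is a permutation, the lifted code inherits the protograph's node degrees, which is what makes protograph ensembles convenient to analyze and decode.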

  20. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. This project has a primary mission of developing the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: to improve the code and/or add new modules that provide capabilities needed for accelerator design; to produce a User's Guide that documents the use of the code for all users; to release the code and the User's Guide to accelerator laboratories for their own use, and to obtain feedback from them; to build an interactive user interface for setting up ARGUS calculations; and to explore the use of ARGUS on high-power workstation platforms.

  1. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
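    For context, the first-order error that dual-frequency receivers do eliminate is removed by the standard ionosphere-free pseudorange combination; the second- and third-order terms studied here survive it. A minimal sketch with synthetic numbers (first-order 1/f^2 delay only; the range and ionospheric term are hypothetical):

```python
def iono_free(P1, P2, f1=1575.42e6, f2=1227.60e6):
    """Dual-frequency ionosphere-free pseudorange combination,
    P_IF = (f1^2 P1 - f2^2 P2) / (f1^2 - f2^2),
    which cancels the first-order (1/f^2) ionospheric delay.
    Defaults are the GPS L1/L2 carrier frequencies in Hz."""
    return (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)

# Synthetic measurements: true range plus a first-order delay I/f^2
rho = 20.0e6                       # hypothetical geometric range, m
I   = 1.0e17                       # hypothetical ionospheric term, m Hz^2
P1  = rho + I / 1575.42e6**2       # ~4 cm delay on L1
P2  = rho + I / 1227.60e6**2       # ~7 cm delay on L2
P_if = iono_free(P1, P2)           # first-order delay removed
```

The centimeter-level residuals quoted in the abstract are what remains after this cancellation, because the higher-order terms scale as 1/f^3 and 1/f^4 and depend on the magnetic field along the path.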

  2. GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT

    NASA Astrophysics Data System (ADS)

    Strubbe, David A.

    GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.

  3. molgw 1: Many-body perturbation theory software for atoms, molecules, and clusters

    DOE PAGES

    Bruneval, Fabien; Rangel, Tonatiuh; Hamed, Samia M.; ...

    2016-07-12

    Here, we summarize the MOLGW code that implements density-functional theory and many-body perturbation theory in a Gaussian basis set. The code is dedicated to the calculation of the many-body self-energy within the GW approximation and the solution of the Bethe–Salpeter equation. These two types of calculations allow the user to evaluate physical quantities that can be compared to spectroscopic experiments. Quasiparticle energies, obtained through the calculation of the GW self-energy, can be compared to photoemission or transport experiments, and neutral excitation energies and oscillator strengths, obtained via solution of the Bethe–Salpeter equation, are measurable by optical absorption. The implementation choices outlined here have aimed at the accuracy and robustness of calculated quantities with respect to measurements. Furthermore, the algorithms implemented in MOLGW allow users to consider molecules or clusters containing up to 100 atoms with rather accurate basis sets, and to choose whether or not to apply the resolution-of-the-identity approximation. Finally, we demonstrate the parallelization efficacy of the MOLGW code over several hundreds of processors.

  4. CLUMPY: A code for γ-ray signals from dark matter structures

    NASA Astrophysics Data System (ADS)

    Charbonnier, Aldée; Combet, Céline; Maurin, David

    2012-03-01

    We present the first public code for semi-analytical calculation of the γ-ray flux astrophysical J-factor from dark matter annihilation/decay in the Galaxy, including dark matter substructures. The core of the code is the calculation of the line of sight integral of the dark matter density squared (for annihilations) or density (for decaying dark matter). The code can be used in three modes: i) to draw skymaps from the Galactic smooth component and/or the substructure contributions, ii) to calculate the flux from a specific halo (that is not the Galactic halo, e.g. dwarf spheroidal galaxies) or iii) to perform simple statistical operations from a list of allowed DM profiles for a given object. Extragalactic contributions and other tracers of DM annihilation (e.g. positrons, anti-protons) will be included in a second release.
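    The core line-of-sight integral can be sketched as follows for the annihilation case (integrand rho^2). The NFW profile parameters, distances, and units below are arbitrary illustrations, not CLUMPY defaults.

```python
import math

def rho_nfw(r, rho_s=1.0, r_s=20.0):
    """NFW dark-matter density profile (arbitrary units)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(psi, d_sun=8.0, l_max=100.0, n=5000):
    """Line-of-sight integral of rho^2 (annihilation case) at angle
    psi (radians) from the Galactic centre, midpoint rule. For decaying
    dark matter the integrand would be rho instead of rho^2."""
    dl = l_max / n
    total = 0.0
    for i in range(n):
        l = (i + 0.5) * dl
        # galactocentric radius of the point at distance l along the l.o.s.
        r = math.sqrt(d_sun**2 + l**2 - 2.0 * d_sun * l * math.cos(psi))
        total += rho_nfw(r) ** 2 * dl
    return total

j_small = j_factor(0.1)   # pointing near the Galactic centre
j_large = j_factor(1.0)   # pointing further away
```

Mapping this integral over a grid of pointing directions gives the skymaps of mode (i); adding a substructure boost term inside the integrand gives the clumpy contribution.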

  5. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  6. Performance optimization of Qbox and WEST on Intel Knights Landing

    NASA Astrophysics Data System (ADS)

    Zheng, Huihuo; Knight, Christopher; Galli, Giulia; Govoni, Marco; Gygi, Francois

    We present the optimization of the electronic structure codes Qbox and WEST targeting the Intel® Xeon Phi™ processor, codenamed Knights Landing (KNL). Qbox is an ab-initio molecular dynamics code based on plane wave density functional theory (DFT) and WEST is a post-DFT code for excited state calculations within many-body perturbation theory. Both Qbox and WEST employ highly scalable algorithms which enable accurate large-scale electronic structure calculations on leadership class supercomputer platforms beyond 100,000 cores, such as Mira and Theta at the Argonne Leadership Computing Facility. In this work, features of the KNL architecture (e.g. hierarchical memory) are explored to achieve higher performance in key algorithms of the Qbox and WEST codes and to develop a road-map for further development targeting next-generation computing architectures. In particular, the optimizations of the Qbox and WEST codes on the KNL platform will target efficient large-scale electronic structure calculations of nanostructured materials exhibiting complex structures and prediction of their electronic and thermal properties for use in solar and thermal energy conversion devices. This work was supported by MICCoM, as part of Comp. Mats. Sci. Program funded by the U.S. DOE, Office of Sci., BES, MSE Division. This research used resources of the ALCF, which is a DOE Office of Sci. User Facility under Contract DE-AC02-06CH11357.

  7. PYFLOW 2.0. A new open-source software for quantifying the impact and depositional properties of dilute pyroclastic density currents

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2017-04-01

    Dilute pyroclastic density currents (DPDCs) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous for human lives and infrastructures both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014), which has already been used extensively for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In this new version the code structure, the computation times and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit characteristics (e.g. componentry, grainsize) to flow characteristics (e.g. flow average density and shear velocity), the latter either calculated by the code itself or given as input by the user. The deposition rate is calculated by summing the contributions of each grainsize class of all components constituting the deposit (e.g. juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e. starting from what is observed in the deposit (e.g. the weight-fraction ratios between the different grainsize classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probability of the different grainsize classes constituting the DPDC deposit can differ and need not equal unity. Calculations of the deposition rates of large-scale experiments, previously computed with different methods, have been performed as experimental validation and are presented. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Comput. Geosci., 66, 200-210, doi:10.1016/j.cageo.2014.01.013. Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
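    The summation structure of the depositional model described above can be sketched as follows. The per-class factors and values are hypothetical illustrations of that structure, not PYFLOW output or its exact parameterization.

```python
def deposition_rate(classes):
    """Total deposition rate as the sum over grainsize classes of
    (particle concentration) x (terminal velocity) x (deposition
    probability), mirroring the summation over components and
    grainsize classes described in the abstract."""
    return sum(c["conc"] * c["w_t"] * c["p_dep"] for c in classes)

# Hypothetical deposit componentry: concentration (kg/m^3), terminal
# velocity (m/s), and deposition probability per class
classes = [
    {"name": "coarse ash", "conc": 2.0, "w_t": 1.5, "p_dep": 1.0},
    {"name": "fine ash",   "conc": 0.8, "w_t": 0.3, "p_dep": 0.6},
    {"name": "crystals",   "conc": 0.2, "w_t": 2.0, "p_dep": 1.0},
]
rate = deposition_rate(classes)   # kg m^-2 s^-1 in these units
```

Setting p_dep below unity for the fine classes is exactly the point made in the abstract: fine ash may bypass the deposit, so weighting every class equally would overestimate the rate.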

  8. Structure and properties of fullerene molecular crystals with linear-scaling van der Waals density functional theory

    NASA Astrophysics Data System (ADS)

    Mostofi, Arash; Andrinopoulos, Lampros; Hine, Nicholas

    2014-03-01

    Fullerene molecular crystals are of technological promise for their use in heterojunction photovoltaic cells. An improved theoretical understanding of their structure and properties would be a step towards the rational design of new devices. Simulations based on density-functional theory (DFT) are invaluable for developing such insight, but standard semi-local functionals do not capture the important inter-molecular van der Waals (vdW) interactions in fullerene crystals. Furthermore, the computational cost associated with the large unit cells needed is at the limit of, or beyond, the capabilities of traditional DFT methods. In this work we overcome these limitations by using our implementation of a number of vdW-DFs in the ONETEP linear-scaling DFT code to study the structural properties of C60 molecular crystals. Powder neutron diffraction shows that the low-temperature Pa-3 phase is orientationally ordered with individual C60 units rotated around the [111] direction. We fully explore the energy landscape associated with the rotation angle and find two stable structures that are energetically very close, one of which corresponds to the experimentally observed structure. We further consider the effect of orientational disorder in very large supercells of thousands of atoms.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  10. High-density functional-RNA arrays as a versatile platform for studying RNA-based interactions.

    PubMed

    Phillips, Jack O; Butt, Louise E; Henderson, Charlotte A; Devonshire, Martin; Healy, Jess; Conway, Stuart J; Locker, Nicolas; Pickford, Andrew R; Vincent, Helen A; Callaghan, Anastasia J

    2018-05-28

    We are just beginning to unravel the myriad of interactions in which non-coding RNAs participate. The intricate RNA interactome is the foundation of many biological processes, including bacterial virulence and human disease, and represents an unexploited resource for the development of potential therapeutic interventions. However, identifying specific associations of a given RNA from the multitude of possible binding partners within the cell requires robust high-throughput systems for their rapid screening. Here, we present the first demonstration of functional-RNA arrays as a novel platform technology designed for the study of such interactions using immobilized, active RNAs. We have generated high-density RNA arrays by an innovative method involving surface-capture of in vitro transcribed RNAs. This approach has significant advantages over existing technologies, particularly in its versatility with regard to binding partner character. Indeed, proof-of-principle application of RNA arrays to both RNA-small molecule and RNA-RNA pairings is demonstrated, highlighting their potential as a platform technology for mapping RNA-based networks and for pharmaceutical screening. Furthermore, the simplicity of the method supports greater user-accessibility than currently available technologies. We anticipate that functional-RNA arrays will find broad utility in the expanding field of RNA characterization.

  11. The effect of density fluctuations on electron cyclotron beam broadening and implications for ITER

    NASA Astrophysics Data System (ADS)

    Snicker, A.; Poli, E.; Maj, O.; Guidi, L.; Köhn, A.; Weber, H.; Conway, G.; Henderson, M.; Saibene, G.

    2018-01-01

    We present state-of-the-art computations of propagation and absorption of electron cyclotron waves, retaining the effects of scattering due to electron density fluctuations. In ITER, injected microwaves are foreseen to suppress neoclassical tearing modes (NTMs) by driving current at the q=2 and q=3/2 resonant surfaces. Scattering of the beam can spoil the good localization of the absorption and thus impair NTM control capabilities. A novel tool, the WKBeam code, has been employed here in order to investigate this issue. The code is a Monte Carlo solver for the wave kinetic equation and retains diffraction, full axisymmetric tokamak geometry, determination of the absorption profile and an integral form of the scattering operator which describes the effects of turbulent density fluctuations within the limits of the Born scattering approximation. The approach has been benchmarked against the paraxial WKB code TORBEAM and the full-wave code IPF-FDMC. In particular, the Born approximation is found to be valid for ITER parameters. In this paper, we show that the radiative transport of EC beams due to wave scattering in ITER is diffusive unlike in present experiments, thus causing up to a factor of 2-4 broadening in the absorption profile. However, the broadening depends strongly on the turbulence model assumed for the density fluctuations, which still has large uncertainties.

  12. Carrington 2? Estimated response of the magnetosphere to a major outburst

    NASA Astrophysics Data System (ADS)

    Bala, R.; Reiff, P. H.; Russell, C. T.

    2013-12-01

    On July 23, 2012, a major CME outburst on the far side of the Sun was observed by STEREO A [Russell et al., 2013]. Because of its intensity, and because it included a significant flux of SEPs, it has been hailed as "Carrington 2" by some, warning that, had that CME been heading towards the Earth, it might have caused a major space weather event. We then used our neural network algorithm with the solar wind and IMF parameters measured in situ by STEREO A to infer what the geoeffectiveness of that storm might have been. We presently show three of our neural network models on our realtime prediction site: http://mms.rice.edu/realtime/forecast.html. The three models use different base functions, trained on a solar cycle's worth of solar wind input and geomagnetic response data. One model uses the "Boyle Index" (BI) as the base transfer function (which includes Bz and velocity but not density). The "Ram" function includes the Boyle Index plus a pressure term. The "Newell" function uses the Newell formula, which does include density. Statistically, each of them is good for either a one-hour or three-hour prediction to better than one unit in Kp. (Another talk will show the relative success of each as a realtime predictor.) STEREO density data were not available for this event, so we chose as a density proxy the density from a similar event in April 2001. Running this "C2" event through our neural network predictors showed that, in fact, this would have been an exceptional (but perhaps not devastating) event. The BI prediction resulted in a Kp of 8+, a Dst of less than -300 nT, but an AE index of only 1000 nT. Using the "Ram" code, the Kp prediction increased to almost 9+, with Dst again below -300 nT and AE of 1200 nT. Results of a range of possible assumptions about the density structure will be shown.
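    The Boyle Index base function mentioned above has a published empirical form, Phi = 1e-4 v^2 + 11.7 B sin^3(theta/2) kV, with v the solar wind speed in km/s, B the IMF magnitude in nT, and theta the IMF clock angle. A sketch assuming that form (the input values are hypothetical, not the July 2012 measurements):

```python
import math

def boyle_index(v_kms, b_nT, theta_deg):
    """Boyle Index (empirical polar-cap potential, kV):
    Phi = 1e-4 v^2 + 11.7 B sin^3(theta/2).
    Note the velocity and Bz dependence but no density term, as the
    abstract points out."""
    s = math.sin(math.radians(theta_deg) / 2.0)
    return 1e-4 * v_kms ** 2 + 11.7 * b_nT * s ** 3

bi_quiet = boyle_index(400.0, 5.0, 0.0)     # purely northward IMF
bi_storm = boyle_index(800.0, 30.0, 180.0)  # strong southward IMF
```

For northward IMF the sin^3 term vanishes and only the velocity term survives, which is why a density-blind index can still saturate during fast-wind events.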

  13. LDPC coded OFDM over the atmospheric turbulence channel.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-sideband unclipped-OFDM system with 64 subcarriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).

  14. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge to apply both MIMO and ECC into wireless systems is on ... illustrates the performance of coded LR aided detectors.

  15. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
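
    The geometric picture described here — received words and codewords as vectors in R^n — suggests a simple acceptance test: keep the decoder's output only if its angle to the received vector is within a bound. A hedged sketch of that idea; the BPSK mapping, the threshold value, and the helper names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def angle_between(r, c):
    """Angle (radians) between a received soft vector r and a signal vector c in R^n."""
    cos = np.dot(r, c) / (np.linalg.norm(r) * np.linalg.norm(c))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def bounded_angle_accept(r, codeword_bits, theta_max):
    """Accept a decoded codeword only if it lies within theta_max of the received word."""
    c = 1.0 - 2.0 * np.asarray(codeword_bits, float)   # bit 0 -> +1, bit 1 -> -1
    return angle_between(r, c) <= theta_max

r = np.array([0.9, -1.1, 0.8, 1.2])   # noisy BPSK observation of bits 0,1,0,0
print(bounded_angle_accept(r, [0, 1, 0, 0], np.deg2rad(25)))   # close in angle: accepted
print(bounded_angle_accept(r, [1, 0, 1, 1], np.deg2rad(25)))   # nearly antipodal: rejected
```

    Rejecting far-away codewords is what suppresses undetected errors: a wrong convergence typically lands at a large angle from the received word.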

  16. Thermally activated decomposition of (Ga,Mn)As thin layer at medium temperature post growth annealing

    NASA Astrophysics Data System (ADS)

    Melikhov, Y.; Konstantynov, P.; Domagala, J.; Sadowski, J.; Chernyshova, M.; Wojciechowski, T.; Syryanyy, Y.; Demchenko, I. N.

    2016-05-01

    The redistribution of Mn atoms in a Ga1-xMnxAs layer during medium-temperature annealing (250-450 °C) was studied by Mn K-edge X-ray absorption fine structure (XAFS) recorded at the ALBA facility. For this purpose a Ga1-xMnxAs thin layer with x=0.01 was grown by molecular beam epitaxy (MBE) on an AlAs buffer layer deposited on a GaAs(100) substrate, and subsequently annealed. The examined layer was detached from the substrate using a "lift-off" procedure in order to eliminate elastic scattering in the XAFS spectra. Fourier transform analysis of the experimentally obtained EXAFS spectra allowed us to propose a model which describes the redistribution/diffusion of Mn atoms in the host matrix. Theoretical XANES spectra, simulated using the multiple scattering formalism (FEFF code) with the support of density functional theory (WIEN2k code), qualitatively describe the features observed in the experimental fine structure.

  17. Probabilistic Analysis of Large-Scale Composite Structures Using the IPACS Code

    NASA Technical Reports Server (NTRS)

    Lemonds, Jeffrey; Kumar, Virendra

    1995-01-01

    An investigation was performed to ascertain the feasibility of using IPACS (Integrated Probabilistic Assessment of Composite Structures) for probabilistic analysis of a composite fan blade, the development of which is being pursued by various industries for the next generation of aircraft engines. A model representative of the class of fan blades used in the GE90 engine has been chosen as the structural component to be analyzed with IPACS. In this study, typical uncertainties are assumed in the level, and structural responses for ply stresses and frequencies are evaluated in the form of cumulative probability density functions. Because of the geometric complexity of the blade, the number of plies varies from several hundred at the root to about a hundred at the tip. This represents an extremely complex composites application for the IPACS code. A sensitivity study with respect to various random variables is also performed.

  18. Comparing contribution of flexural and planar modes to thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Mann, Sarita; Rani, Pooja; Jindal, V. K.

    2017-05-01

    Graphene, the most studied and explored 2D structure, has unusual thermal properties such as negative thermal expansion and high thermal conductivity. We have already studied the thermal expansion behavior and various thermodynamic properties of pure graphene, such as heat capacity, entropy and free energy. The results for thermal expansion and the various thermodynamic properties match well with available theoretical studies. For a deeper understanding of these properties, we analyzed the contribution of each phonon branch towards the total value of the individual property. To compute these properties, the dynamical matrix was calculated using the VASP code, where density functional perturbation theory (DFPT) is employed under the quasi-harmonic approximation in interface with the phonopy code. It is noticed that the transverse mode has the major contribution to the negative thermal expansion, and all branches have almost the same contribution towards the various thermodynamic properties, with the contribution of the ZA mode being the highest.
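
    The branch-by-branch bookkeeping described above rests on the standard harmonic result that each phonon mode contributes C_v = k_B x² eˣ/(eˣ−1)², x = ħω/k_BT, to the heat capacity. A minimal sketch, assuming a few hypothetical branch frequencies (not the paper's phonopy data):

```python
import numpy as np

KB = 1.380649e-23       # Boltzmann constant, J/K
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def mode_cv(omega_rad_s, T):
    """Harmonic heat capacity (J/K) of one phonon mode of frequency omega at temperature T."""
    x = HBAR * omega_rad_s / (KB * T)
    ex = np.exp(x)
    return KB * x**2 * ex / (ex - 1.0)**2

# Hypothetical branch frequencies (rad/s) sampled at a few q-points, for illustration only
branches = {"ZA": [2e12, 5e12], "TA": [8e12, 1.2e13], "LA": [1.0e13, 1.6e13]}
T = 300.0
total = sum(mode_cv(np.array(w), T).sum() for w in branches.values())
for name, w in branches.items():
    print(f"{name}: {mode_cv(np.array(w), T).sum() / total:.2f} of total C_v")
```

    Low-frequency (flexural) modes saturate toward the classical limit k_B per mode at lower temperature, which is why the ZA branch dominates the low-T heat capacity.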

  19. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  20. Quantum Monte Carlo Simulations of the Quartz to Stishovite Transition in SiO2

    NASA Astrophysics Data System (ADS)

    Cohen, R. E.; Towler, Mike; Lopez Rios, Pablo; Drummond, Neil; Needs, Richard

    2007-03-01

    The quartz-stishovite transition has been a long-standing problem for density functional theory (DFT). Although conventional DFT computations within the local density approximation (LDA) give reasonably good properties of silica phases individually, they do not give the energy difference between quartz and stishovite accurately. The LDA gives stishovite as a lower energy structure than quartz at zero pressure, which is incorrect. The generalized gradient approximation (GGA) has been shown to give the correct energy difference between quartz and stishovite (about 0.5 eV/formula unit) (Hamann, PRL 76, 660, 1996; Zupan et al., PRB 58, 11266, 1998), and it was generally thought that the GGA was simply a better approximation than the LDA. However, closer inspection shows that other properties are not better for the GGA than the LDA, so there is room for improvement. A new density functional that is an improvement for most materials unfortunately does not improve the quartz-stishovite transition (Wu and Cohen, PRB 73, 235116, 2006). We are performing QMC computations using the CASINO code to obtain an accurate energy difference between quartz and stishovite, more accurate high-pressure properties, and a better understanding of the errors of DFT and how DFT can be improved.

  1. Simulating the dust content of galaxies: successes and failures

    NASA Astrophysics Data System (ADS)

    McKinnon, Ryan; Torrey, Paul; Vogelsberger, Mark; Hayward, Christopher C.; Marinacci, Federico

    2017-06-01

    We present full-volume cosmological simulations, using the moving-mesh code arepo to study the coevolution of dust and galaxies. We extend the dust model in arepo to include thermal sputtering of grains and investigate the evolution of the dust mass function, the cosmic distribution of dust beyond the interstellar medium and the dependence of dust-to-stellar mass ratio on galactic properties. The simulated dust mass function is well described by a Schechter fit and lies closest to observations at z = 0. The radial scaling of projected dust surface density out to distances of 10 Mpc around galaxies with magnitudes 17 < I < 21 is similar to that seen in Sloan Digital Sky Survey data, albeit with a lower normalization. At z = 0, the predicted dust density of Ωdust ≈ 1.3 × 10-6 lies in the range of Ωdust values seen in low-redshift observations. We find that the dust-to-stellar mass ratio anticorrelates with stellar mass for galaxies living along the star formation main sequence. Moreover, we estimate the 850 μm number density functions for simulated galaxies and analyse the relation between dust-to-stellar flux and mass ratios at z = 0. At high redshift, our model fails to produce enough dust-rich galaxies, and this tension is not alleviated by adopting a top-heavy initial mass function. We do not capture a decline in Ωdust from z = 2 to 0, which suggests that dust production mechanisms more strongly dependent on star formation may help to produce the observed number of dusty galaxies near the peak of cosmic star formation.
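
    The Schechter fit mentioned for the simulated dust mass function has the standard form φ(M) = (φ*/M*) (M/M*)^α exp(−M/M*). A minimal sketch of evaluating such a fit; the parameter values below are purely illustrative, not the paper's best-fit numbers:

```python
import numpy as np

def schechter(M, phi_star, M_star, alpha):
    """Schechter function: phi(M) = (phi*/M*) (M/M*)^alpha exp(-M/M*)."""
    x = M / M_star
    return (phi_star / M_star) * x**alpha * np.exp(-x)

# Hypothetical fit parameters, for illustration only (not values from the paper)
M = np.logspace(4, 9, 6)              # dust masses, Msun
phi = schechter(M, 5e-3, 1e7, -1.2)   # number density per unit mass, arbitrary norm
print(phi[0] > phi[-1])               # True: the exponential cutoff dominates above M*
```

    The power law with slope α controls the low-mass end, while the exponential term truncates the function above the characteristic mass M*.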

  2. A Novel Loss-of-Sclerostin Function Mutation in a First Egyptian Family with Sclerosteosis

    PubMed Central

    Fayez, Alaaeldin; Aglan, Mona; Esmaiel, Nora; El Zanaty, Taher; Abdel Kader, Mohamed; El Ruby, Mona

    2015-01-01

    Sclerosteosis is a rare autosomal recessive condition characterized by increased bone density. Mutations in SOST gene coding for sclerostin are linked to sclerosteosis. Two Egyptian brothers with sclerosteosis and their apparently normal consanguineous parents were included in this study. Clinical evaluation and genomic sequencing of the SOST gene were performed followed by in silico analysis of the resulting variation. A novel homozygous frameshift mutation in the SOST gene, characterized as one nucleotide cytosine insertion that led to premature stop codon and loss of functional sclerostin, was identified in the two affected brothers. Their parents were heterozygous for the same mutation. To our knowledge this is the first Egyptian study of sclerosteosis and SOST gene causing mutation. PMID:25984533

  3. Vanadium impurity effects on optical properties of Ti3N2 mono-layer: An ab-initio study

    NASA Astrophysics Data System (ADS)

    Babaeipour, Manuchehr; Eslam, Farzaneh Ghafari; Boochani, Arash; Nezafat, Negin Beryani

    2018-06-01

    The present work investigates the effect of vanadium impurity on the electronic and optical properties of a Ti3N2 monolayer using density functional theory (DFT) as implemented in the Wien2k code. In order to study the optical properties for two photon polarization directions, namely E||x and E||z, the dielectric function, absorption coefficient, optical conductivity, refraction index, extinction index, reflectivity, and energy loss function of the Ti3N2 and Ti3N2-V monolayers have been evaluated within the GGA (PBE) approximation. Although the Ti3N2 monolayer is a good infrared reflector and can be used as an infrared mirror, introducing a V atom decreases the optical conductivity in the infrared region, because the optical conductivity of the pure form of a material is higher than that of its doped form.
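
    Several of the quantities listed (refraction index n, extinction index k, normal-incidence reflectivity R) follow directly from the complex dielectric function ε = ε₁ + iε₂ through standard optics relations. A minimal sketch of those relations, not tied to the paper's computed spectra:

```python
import numpy as np

def optical_constants(eps1, eps2):
    """n, k and normal-incidence reflectivity R from the complex dielectric function."""
    mod = np.hypot(eps1, eps2)             # |eps|
    n = np.sqrt((mod + eps1) / 2.0)        # refraction index
    k = np.sqrt((mod - eps1) / 2.0)        # extinction index
    R = ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)
    return n, k, R

# Transparent medium with eps = 4 + 0i: n = 2, k = 0, R = 1/9
n, k, R = optical_constants(4.0, 0.0)
print(n, k, R)
```

    A large negative ε₁ with small ε₂, typical of metals in the infrared, drives R toward 1, which is the regime behind the "infrared mirror" behavior noted in the abstract.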

  4. High-density digital recording

    NASA Technical Reports Server (NTRS)

    Kalil, F. (Editor); Buschman, A. (Editor)

    1985-01-01

    The problems associated with high-density digital recording (HDDR) are discussed. Five independent users of HDDR systems present their problems, solutions, and insights as guidance for other users of HDDR systems. Various pulse code modulation coding techniques are reviewed. Introductions to error detection and correction, head optimization theory, and perpendicular recording are provided. Competitive tape recorder manufacturers apply all of the above theories and techniques and present their offerings. The methodology used by the HDDR Users Subcommittee of THIC to evaluate parallel HDDR systems is presented.

  5. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    PubMed

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that the two basic gates needed for EA quantum error correction, namely, the controlled-NOT (CNOT) and Hadamard gates, can be implemented based on the Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.
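
    The two gates named here are fixed unitaries whose defining algebra is easy to check numerically; a minimal sketch of the matrices themselves (the optical Mach-Zehnder realization is not modeled):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT, control = first qubit

# Both gates are self-inverse unitaries
print(np.allclose(H @ H, np.eye(2)))           # True
print(np.allclose(CNOT @ CNOT, np.eye(4)))     # True
# CNOT flips the target only when the control is |1>: |10> -> |11>
print(CNOT @ np.array([0, 0, 1, 0]))           # [0 0 0 1]
```

    Together with single-qubit phase gates, these two generate the Clifford operations used in stabilizer-based error correction, which is why the abstract singles them out.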

  6. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    ERIC Educational Resources Information Center

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  7. Self-Configuration and Localization in Ad Hoc Wireless Sensor Networks

    DTIC Science & Technology

    2010-08-31

    Goddard I. SUMMARY OF CONTRIBUTIONS We explored the error mechanisms of iterative decoding of low-density parity-check (LDPC) codes. This work has resulted ... important problems in the area of channel coding, as their unpredictable behavior has impeded the deployment of LDPC codes in many real-world applications. We ... tree-based decoders of LDPC codes, including the extrinsic tree decoder, and an investigation into their performance and bounding capabilities [5], [6

  8. Matrix-Product-State Algorithm for Finite Fractional Quantum Hall Systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Bhatt, R. N.

    2015-09-01

    Exact diagonalization is a powerful tool to study fractional quantum Hall (FQH) systems. However, its capability is limited by the exponentially increasing computational cost. In order to overcome this difficulty, density-matrix renormalization group (DMRG) algorithms were developed for much larger system sizes. Very recently, it was realized that some model FQH states have an exact matrix-product-state (MPS) representation. Motivated by this, here we report an MPS code, which is closely related to, but different from, the traditional DMRG language, for finite FQH systems on the cylinder geometry. By representing the many-body Hamiltonian as a matrix-product operator (MPO) and using single-site updates and density matrix correction, we show that our code can efficiently search for the ground state of various FQH systems. We also compare the performance of our code with traditional DMRG. The possible generalization of our code to infinite FQH systems and other physical systems is also discussed.
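
    The basic MPS bookkeeping behind such a code — a chain of three-index tensors contracted site by site through an "environment" — can be illustrated with a norm computation. A minimal NumPy sketch of the generic mechanics, not the paper's FQH-specific code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mps(n_sites, phys_dim=2, bond_dim=4):
    """Open-boundary MPS: one (left, phys, right) tensor per site."""
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[i], phys_dim, dims[i + 1])) for i in range(n_sites)]

def mps_norm_sq(mps):
    """<psi|psi> by sweeping a left environment through the chain, site by site."""
    env = np.ones((1, 1))
    for A in mps:
        # env[l,l'] * A[l,p,r] * conj(A)[l',p,r'] -> new env[r,r']
        env = np.einsum("ab,apc,bpd->cd", env, A, A.conj())
    return env.item()

mps = random_mps(6)
print(mps_norm_sq(mps) > 0.0)   # True: the squared norm is positive
```

    The cost of each sweep step is polynomial in the bond dimension, which is the whole point of MPS/MPO methods compared with the exponential cost of exact diagonalization mentioned in the abstract.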

  9. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.

    2014-10-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubbles, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.

  10. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il

    2014-08-15

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259-273], who asked whether homological codes with such parameters could exist at all.
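
    For CSS-type quantum codes, of which homological codes are an example, the number of logical qubits follows from GF(2) ranks of the two classical check matrices: k = n − rank(H_X) − rank(H_Z). A minimal sketch of that rate computation on the small Steane code (this is generic CSS arithmetic, not the paper's Euler-characteristic argument):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # bring the pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column elsewhere
        rank += 1
    return rank

def css_logical_qubits(Hx, Hz):
    """k = n - rank(Hx) - rank(Hz) for a CSS code with commuting checks."""
    n = np.shape(Hx)[1]
    return n - gf2_rank(Hx) - gf2_rank(Hz)

# Steane [[7,1,3]] code: both check matrices are the Hamming(7,4) parity checks
Hham = [[1, 0, 1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1]]
print(css_logical_qubits(Hham, Hham))   # 1 logical qubit
```

    "Linear rate" in the abstract means k grows proportionally to n, in contrast to surface codes, where k stays constant as n grows.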

  11. Polystyrene foam products equation of state as a function of porosity and fill gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulford, Roberta N; Swift, Damian C

    2009-01-01

    An accurate EOS for polystyrene foam is necessary for analysis of numerous experiments in shock compression, inertial confinement fusion, and astrophysics. Plastic-to-gas ratios vary between various samples of foam, according to the density and cell size of the foam. A matrix of compositions has been investigated, allowing prediction of foam response as a function of the plastic-to-air ratio. The EOS code CHEETAH allows participation of the air in the decomposition reaction of the foam. Differences between air-filled, Ar-blown, and CO2-blown foams are investigated, to estimate the importance of allowing air to react with products of polystyrene decomposition. O2-blown foams are included in some comparisons, to amplify any consequences of reaction with oxygen in air. He-blown foams are included in some comparisons, to provide an extremum of density. Product pressures are slightly higher for oxygen-containing fill gases than for non-oxygen-containing fill gases. Examination of product species indicates that CO2 decomposes at high temperatures.

  12. Understanding the inelastic electron-tunneling spectra of alkanedithiols on gold.

    PubMed

    Solomon, Gemma C; Gagliardi, Alessio; Pecchia, Alessandro; Frauenheim, Thomas; Di Carlo, Aldo; Reimers, Jeffrey R; Hush, Noel S

    2006-03-07

    We present simulated inelastic electron-tunneling spectra (IETS) from calculations using the "gDFTB" code. The geometric and electronic structure is obtained from calculations using a local-basis density-functional scheme, and a nonequilibrium Green's function formalism is employed to deal with the transport aspects of the problem. The calculated spectrum of octanedithiol on gold(111) shows good agreement with experimental results and suggests further details in the assignment of such spectra. We show that some low-energy peaks, unassigned in the experimental spectrum, occur in a region where a number of molecular modes are predicted to be active, suggesting that these modes are the cause of the peaks rather than a matrix signal, as previously postulated. The simulations also reveal the qualitative nature of the processes dominating IETS: it is highly sensitive only to the vibrational motions that occur in the regions of the molecule where there is electron density in the low-voltage conduction channel. This result is illustrated with an examination of the predicted variation of IETS with binding site and alkane chain length.

  13. Calculation of photodetachment cross sections and photoelectron angular distributions of negative ions using density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuan; Ning, Chuangang, E-mail: ningcg@tsinghua.edu.cn; Collaborative Innovation Center of Quantum Matter, Beijing

    2015-10-14

    Recently, the development of photoelectron velocity map imaging has made it much easier to obtain photoelectron angular distributions (PADs) experimentally. However, explanations of PADs are only qualitative in most cases, and very few works have been reported on how to calculate the PAD of anions. In the present work, we report a method using density-functional-theory Kohn-Sham orbitals to calculate the photodetachment cross sections and the anisotropy parameter β. The spherical average over all random molecular orientations is calculated analytically. A program which can handle both Gaussian type orbitals and Slater type orbitals has been coded. The testing calculations on Li−, C−, O−, F−, CH−, OH−, NH2−, O2−, and S2− show that our method is an efficient way to calculate the photodetachment cross section and anisotropy parameter β for anions, and is thus promising for large systems.
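
    For one-photon detachment with linearly polarized light, the anisotropy parameter β enters the angular distribution through the standard Cooper-Zare form dσ/dΩ = (σ/4π)[1 + β P₂(cos θ)], with β between −1 and 2. A minimal sketch using illustrative β values, not results from this method:

```python
import numpy as np

def pad(theta, sigma_total, beta):
    """Differential cross section: dsigma/dOmega = (sigma/4pi) [1 + beta P2(cos theta)]."""
    p2 = 0.5 * (3.0 * np.cos(theta)**2 - 1.0)   # second Legendre polynomial
    return sigma_total / (4.0 * np.pi) * (1.0 + beta * p2)

# At the "magic angle" P2 vanishes, so the intensity is independent of beta
theta_magic = np.arccos(1.0 / np.sqrt(3.0))
print(np.isclose(pad(theta_magic, 1.0, -1.0), pad(theta_magic, 1.0, 2.0)))   # True
# Along the polarization axis (theta = 0) the intensity scales as 1 + beta
print(pad(0.0, 1.0, 2.0))   # 3/(4*pi)
```

    Measuring at the magic angle (about 54.7°) is the usual trick for extracting total cross sections without knowing β.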

  14. Calculation of ion distribution functions and neoclassical transport in the edge of single-null divertor tokamaks

    NASA Astrophysics Data System (ADS)

    Rognlien, T. D.; Cohen, R. H.; Xu, X. Q.

    2007-11-01

    The ion distribution function in the H-mode pedestal region and outward across the magnetic separatrix is expected to have a substantial non-Maxwellian character owing to the large banana orbits and steep gradients in temperature and density. The 4D (2r,2v) version of the TEMPEST continuum gyrokinetic code is used with a Coulomb collision model to calculate the ion distribution in a single-null tokamak geometry throughout the pedestal/scrape-off-layer regions. The mean density, parallel velocity, and energy radial profiles are shown at various poloidal locations. The collisions cause neoclassical energy transport through the pedestal that is then lost to the divertor plates along the open field lines outside the separatrix. The resulting heat flux profiles at the inner and outer divertor plates are presented and discussed, including asymmetries that depend on the B-field direction. Of particular focus is the effect on ion profiles and fluxes of a radial electric field exhibiting a deep well just inside the separatrix, which reduces the width of the banana orbits by the well-known squeezing effect.

  15. Challenges in Wireless System Integration as Enablers for Indoor Context Aware Environments

    PubMed Central

    Aguirre, Erik

    2017-01-01

    The advent of fully interactive environments within Smart Cities and Smart Regions requires the use of multiple wireless systems. In the case of user-device interaction, which finds multiple applications such as Ambient Assisted Living, Intelligent Transportation Systems or Smart Grids, among others, large numbers of transceivers are employed in order to achieve anytime, anyplace and any-device connectivity. The resulting combination of heterogeneous wireless networks exhibits fundamental limitations derived from coverage/capacity relations, as a function of required Quality of Service parameters, required bit rate, energy restrictions and adaptive modulation and coding schemes. In this context, the inherent transceiver density poses challenges in overall system operation, given that multiple-node operation increases overall interference levels. In this work, a deterministic analysis applied to variable-density wireless sensor network operation within complex indoor scenarios is presented, as a function of topological node distribution. The extensive analysis derives interference characterizations, both for conventional transceivers as well as wearables, which provide relevant information in terms of individual node configuration as well as complete network layout. PMID:28704963

  16. First principles calculation of thermo-mechanical properties of thoria using Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Malakkal, Linu; Szpunar, Barbara; Zuniga, Juan Carlos; Siripurapu, Ravi Kiran; Szpunar, Jerzy A.

    2016-05-01

    In this work, we have used Quantum ESPRESSO (QE), an open source first principles code, based on density-functional theory, plane waves, and pseudopotentials, along with quasi-harmonic approximation (QHA) to calculate the thermo-mechanical properties of thorium dioxide (ThO2). Using Python programming language, our group developed qe-nipy-advanced, an interface to QE, which can evaluate the structural and thermo-mechanical properties of materials. We predicted the phonon contribution to thermal conductivity (kL) using the Slack model. We performed the calculations within local density approximation (LDA) and generalized gradient approximation (GGA) with the recently proposed version for solids (PBEsol). We employed a Monkhorst-Pack 5 × 5 × 5 k-point mesh in reciprocal space with a plane wave cut-off energy of 150 Ry to obtain the convergence of the structure. We calculated the dynamical matrices of the lattice on a 4 × 4 × 4 mesh. We have predicted the heat capacity, thermal expansion and the phonon contribution to thermal conductivity, as a function of temperature up to 1400 K, and compared them with the previous work and known experimental results.
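
    The Monkhorst-Pack mesh used here places k-points at fractional coordinates u_r = (2r − q − 1)/(2q) along each reciprocal axis. A minimal sketch generating the 5 × 5 × 5 grid quoted in the abstract (the grid generation only, not the QE calculation):

```python
import itertools
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional k-points of a q1 x q2 x q3 Monkhorst-Pack mesh: u_r = (2r - q - 1)/(2q)."""
    axes = [[(2.0 * r - q - 1.0) / (2.0 * q) for r in range(1, q + 1)]
            for q in (q1, q2, q3)]
    return np.array(list(itertools.product(*axes)))

kpts = monkhorst_pack(5, 5, 5)
print(kpts.shape)                           # (125, 3)
print(np.allclose(kpts.sum(axis=0), 0.0))   # True: the mesh is symmetric about Gamma
```

    For odd subdivisions like 5, the mesh contains the Gamma point itself; for even subdivisions it straddles Gamma, which is why odd and even grids can converge differently.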

  17. First-principles study of the covalently functionalized graphene

    NASA Astrophysics Data System (ADS)

    Jha, Sanjiv Kumar

    Theoretical investigations of nanoscale systems, such as functionalized graphene, present major challenges to conventional computational methods employed in quantum chemistry and solid state physics. The properties of graphene can be affected by chemical functionalization. The surface functionalization of graphene offers a promising way to increase the solubility and reactivity of graphene for use in nanocomposites and chemical sensors. Covalent functionalization is an efficient way to open a band gap in graphene for applications in nanoelectronics. We apply ab initio computational methods based on density functional theory to study the covalent functionalization of graphene with benzyne (C6H4), tetracyanoethylene oxide (TCNEO), and carboxyl (COOH) groups. Our calculations are carried out using the SIESTA and Quantum-ESPRESSO electronic structure codes combined with the generalized gradient (GGA) and local density approximations (LDA) for the exchange correlation functionals and norm-conserving Troullier-Martins pseudopotentials. Calculated binding energies, densities of states (DOS), band structures, and vibrational spectra of functionalized graphene are analyzed in comparison with the available experimental data. Our calculations show that the reactions of [2 + 2] and [2 + 4] cycloaddition of C6H4 to the surface of pristine graphene are exothermic, with binding energies of -0.73 eV and -0.58 eV, respectively. Calculated band structures indicate that the [2 + 2] and [2 + 4] attachments of benzyne result in the opening of a small band gap in graphene. The study of graphene-TCNEO interactions suggests that the reaction of cycloaddition of TCNEO to the surface of pristine graphene is endothermic. On the other hand, the reaction of cycloaddition of TCNEO is found to be exothermic for the edge of an H-terminated graphene sheet. Simulated Raman and infrared spectra of graphene functionalized with TCNEO are consistent with experimental results.
    The Raman (non-resonant) and infrared (IR) spectra of graphene functionalized with carboxyl (COOH) groups are studied in graphene with no surface defects, di-vacancies (DV), and Stone-Wales (SW) defects. Simulated Raman and IR spectra of carboxylated graphene are consistent with available experimental results. Computed vibrational spectra of carboxylated graphene show that the presence of point defects near the functionalization site affects the Raman and IR spectroscopic signatures of the functionalized graphene.

  18. Directional power absorption in helicon plasma sources excited by a half-helix antenna

    NASA Astrophysics Data System (ADS)

    Afsharmanesh, Mohsen; Habibi, Morteza

    2017-10-01

    This paper deals with the investigation of the power absorption in helicon plasmas excited through a half-helix antenna driven at 13.56 MHz. The simulations were carried out by means of the HELIC code, taking into account different inhomogeneous radial density profiles and a wide range of plasma densities, from 10^11 cm^-3 to 10^13 cm^-3. The magnetic field was 200, 400, 600 and 1000 G. A three-parameter function was used for generating various density profiles with different volume gradients, edge gradients and density widths. The density profile had a large effect on the efficient Trivelpiece-Gould (TG) and helicon mode excitation and on the antenna coupling to the plasma. The fraction of power deposition via the TG mode was extremely dependent on the plasma density near the plasma boundary. Interestingly, the obtained efficient parallel helicon wavelength was close to the anticipated value for a Gaussian radial density profile. Power deposition was considerably asymmetric when the n/B_0 ratio was more than a specific value for a given density width. The longitudinal power absorption was symmetric at approximately n_0 = 10^11 cm^-3, irrespective of the magnetic field assumed. The asymmetry became more pronounced when the plasma density was 10^12 cm^-3. The ratio of density width to magnetic field was an important parameter in the power coupling. At high magnetic fields, the maximum of the power absorption was reached at higher plasma density widths. There was at least one combination of plasma density, magnetic field and density width for which the RF power deposition at both sides of the tube reached its maximum value.
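
    A common three-parameter radial profile for such scans, and the form I assume here, is n(r) = n₀ [1 − (r/w)^s]^t, where w sets the density width and s, t control the volume and edge gradients. A hedged sketch generating two such profiles; the parameterization and all values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def density_profile(r, n0, w, s, t):
    """Assumed three-parameter radial profile n(r) = n0 [1 - (r/w)^s]^t."""
    x = np.clip(r / w, 0.0, 1.0)
    return n0 * (1.0 - x**s)**t

r = np.linspace(0.0, 0.05, 6)                       # radius (m), hypothetical 5 cm column
flat = density_profile(r, 1e18, 0.05, 6.0, 1.0)     # flat center, steep edge gradient
peaked = density_profile(r, 1e18, 0.05, 2.0, 4.0)   # strongly peaked on axis
print(flat[0] == 1e18, peaked[0] == 1e18)           # both normalized to n0 on axis
print(flat[2] > peaked[2])                          # flat profile stays higher mid-radius
```

    Varying s and t this way is what lets a single functional form sweep from near-uniform to strongly edge-localized densities, the quantity the abstract says controls TG-mode power deposition.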

  19. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.

  20. Ab initio calculation of transport properties between PbSe quantum dots facets with iodide ligands

    NASA Astrophysics Data System (ADS)

    Wang, B.; Patterson, R.; Chen, W.; Zhang, Z.; Yang, J.; Huang, S.; Shrestha, S.; Conibeer, G.

    2018-01-01

    The transport properties between lead selenide (PbSe) quantum dots decorated with iodide ligands have been studied using density functional theory (DFT). Quantum conductance at selected energy levels has been calculated along with the total density of states and projected density of states. The DFT calculations are carried out using the grid-based projector augmented wave (GPAW) code with the linear combination of atomic orbitals (LCAO) mode and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional. Three low-index facets with attached iodide ligands, (001), (011), and (111), are investigated in this work. The p-orbital of the iodide ligand contributes most of the density of states (DOS) near the top of the valence band, resulting in significant quantum conductance, whereas the DOS of the Pb p-orbital shows only a minor influence. The different values of quantum conductance observed along different planes possibly arise from a combined effect of the electric field over the topmost surface and the total distance between adjacent facets. Ligands attached to the (001) and (011) planes possess similar bond lengths, whereas the bond is significantly shortened on the (111) plane; transport between (011) facets has an overall low value due to the newly formed electric field. On the other hand, the (111) plane, with a net surface dipole perpendicular to the surface layers leading to stronger electron coupling, shows an apparent increase in transport probability. Finally, the maximum transport energy levels are located several eV (1-2 eV) from the top edge of the valence band.

  1. Quantum image pseudocolor coding based on the density-stratified method

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It maps grayscale images to color images to make the images more visually appealing or to highlight some parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes the density values from gray to color in parallel according to the colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help us to describe the scheme further. Finally, future work is discussed.
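
    The density-stratified idea has a straightforward classical analog: partition the gray range into strata and assign each stratum one colormap entry. A minimal sketch follows; the four-entry colormap is invented for illustration, and the paper's quantum circuits are not reproduced here.

```python
import numpy as np

# Hypothetical 4-entry colormap: gray-level strata -> RGB colors.
COLORMAP = {
    (0, 63):    (0, 0, 255),     # darkest stratum -> blue
    (64, 127):  (0, 255, 0),     # -> green
    (128, 191): (255, 255, 0),   # -> yellow
    (192, 255): (255, 0, 0),     # brightest stratum -> red
}

def pseudocolor(gray):
    """Map every gray value to the RGB color of its density stratum."""
    gray = np.asarray(gray, dtype=np.uint8)
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
    for (lo, hi), color in COLORMAP.items():
        mask = (gray >= lo) & (gray <= hi)
        rgb[mask] = color
    return rgb
```

The quantum scheme performs the same stratum lookup, but on a superposition of all pixel positions at once, which is the source of the claimed speedup.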

  2. Varying impacts of alcohol outlet densities on violent assaults: explaining differences across neighborhoods.

    PubMed

    Mair, Christina; Gruenewald, Paul J; Ponicki, William R; Remer, Lillian

    2013-01-01

    Groups of potentially violent drinkers may frequent areas of communities with large numbers of alcohol outlets, especially bars, leading to greater rates of alcohol-related assaults. This study assessed direct and moderating effects of bar densities on assaults across neighborhoods. We analyzed longitudinal population data relating alcohol outlet densities (total outlet density, proportion bars/pubs, proportion off-premise outlets) to hospitalizations for assault injuries in California across residential ZIP code areas from 1995 through 2008 (23,213 space-time units). Because few ZIP codes were consistently defined over 14 years and these units are not independent, corrections for unit misalignment and spatial autocorrelation were implemented using Bayesian space-time conditional autoregressive models. Assaults were related to outlet densities in local and surrounding areas, the mix of outlet types, and neighborhood characteristics. The addition of one outlet per square mile was related to a small 0.23% increase in assaults. A 10% greater proportion of bars in a ZIP code was related to 7.5% greater assaults, whereas a 10% greater proportion of bars in surrounding areas was related to 6.2% greater assaults. The impacts of bars were much greater in areas with low incomes and dense populations. The effect of bar density on assault injuries was well supported and positive, and the magnitude of the effect varied by neighborhood characteristics. Posterior distributions from these models enabled the identification of locations most vulnerable to problems related to alcohol outlets.

  3. Arbitrariness is not enough: towards a functional approach to the genetic code.

    PubMed

    Lacková, Ľudmila; Matlach, Vladimír; Faltýnek, Dan

    2017-12-01

    Arbitrariness in the genetic code is one of the main reasons for a linguistic approach to molecular biology: the genetic code is usually understood as an arbitrary relation between amino acids and nucleobases. However, from a semiotic point of view, arbitrariness should not be the only condition for the definition of a code; consequently, it is not completely correct to talk about a "code" in this case. Yet we suppose that there exists a code in the process of protein synthesis, but on a higher level than the chains of nucleic bases. Semiotically, a code should always be associated with a function, and we propose to define the genetic code not only relationally (on the basis of the relation between nucleobases and amino acids) but also in terms of function (the function of a protein as the meaning of the code). Even though the functional definition of meaning in the genetic code has been discussed in the field of biosemiotics, its further implications have not been considered. In fact, if the function of a protein represents the meaning of the genetic code (the sign's object), then it is crucial to reconsider the notion of its expression (the sign) as well. In our contribution, we will show that the actual model of the genetic code is not the only one possible, and we will propose a more appropriate model from a semiotic point of view.

  4. Variable Coded Modulation software simulation

    NASA Astrophysics Data System (ADS)

    Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise

    This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
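
    The block-to-block mode selection described above can be sketched classically as picking the highest-throughput code/modulation pair whose SNR threshold is met. The table of modes and thresholds below is hypothetical, not the CCSDS performance values.

```python
# Hypothetical (name, code rate, bits/symbol, required Es/N0 in dB) table;
# a real VCM system takes these thresholds from code performance curves.
MODCODS = [
    ("turbo-1/2 BPSK",   0.50, 1,  0.5),
    ("LDPC-1/2 QPSK",    0.50, 2,  3.0),
    ("LDPC-2/3 8-PSK",   0.67, 3,  8.0),
    ("LDPC-4/5 16-APSK", 0.80, 4, 12.0),
]

def pick_modcod(snr_db):
    """Choose, block to block, the highest-throughput mode whose
    threshold is met; fall back to the most robust mode otherwise."""
    feasible = [m for m in MODCODS if m[3] <= snr_db]
    if not feasible:
        return MODCODS[0][0]
    # throughput proxy: code rate x bits per symbol
    return max(feasible, key=lambda m: m[1] * m[2])[0]
```

Run once per block as the predicted link SNR evolves, this greedy rule is what lets a VCM downlink track a varying channel instead of designing for the worst case.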

  5. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  6. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  7. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    PubMed

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  8. Effect on magnetic properties of germanium encapsulated C60 fullerene

    NASA Astrophysics Data System (ADS)

    Umran, Nibras Mossa; Kumar, Ranjan

    2013-02-01

    Structural and electronic properties of Gen (n = 1-4) doped C60 fullerene are investigated with ab initio density functional theory calculations using an efficient computer code known as SIESTA. The pseudopotentials are constructed using the Troullier-Martins scheme to describe the interaction of valence electrons with the atomic cores. For endohedral doping, we find that the complexes remain stable as more germanium atoms are embedded, until the cage eventually breaks down. We also find that the binding energy and electron affinity increase, and that the magnetic moment shows oscillating behavior, as the number of semiconductor atoms in the C60 fullerene increases.

  9. DFTB+ and lanthanides

    NASA Astrophysics Data System (ADS)

    Hourahine, B.; Aradi, B.; Frauenheim, T.

    2010-07-01

    DFTB+ is a recent general purpose implementation of density-functional based tight binding. One of the early motivators to develop this code was to investigate lanthanide impurities in nitride semiconductors, leading to a series of successful studies into structure and electrical properties of these systems. Here we describe our general framework to treat the physical effects needed for these problematic impurities within a tight-binding formalism, additionally discussing forces and stresses in DFTB. We also present an approach to evaluate the general case of Slater-Koster transforms and all of their derivatives in Cartesian coordinates. These developments are illustrated by simulating isolated Gd impurities in GaN.

  10. Dissociation of methane on the surface of charged defective carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Yan, X. H.; Xiao, Y.

    2010-03-01

    Based on the framework of density functional theory (the CASTEP and DMOL3 codes), we simulate the dissociation of a methane (CH4) molecule on the surface of charged defective carbon nanotubes (CNTs). The results show that a charged CNT with carbon (C) and molybdenum (Mo) dopants can effectively dissociate the CH4 molecule, and that the adsorption strength of H and CH3 can be controlled by the injected negative charges. Moreover, the barrier between the transition state (TS) and the reactant is 0.1014 eV, and a single imaginary frequency of -0.3 cm^-1 is found for the transition state structure.

  11. A High-Resolution InDel (Insertion–Deletion) Markers-Anchored Consensus Genetic Map Identifies Major QTLs Governing Pod Number and Seed Yield in Chickpea

    PubMed Central

    Srivastava, Rishi; Singh, Mohar; Bajaj, Deepak; Parida, Swarup K.

    2016-01-01

    Development and large-scale genotyping of user-friendly informative genome/gene-derived InDel markers in natural and mapping populations is vital for accelerating genomics-assisted breeding applications of chickpea with minimal resource expenses. The present investigation employed a high-throughput whole genome next-generation resequencing strategy in low and high pod number parental accessions and homozygous individuals constituting the bulks from each of two inter-specific mapping populations [(Pusa 1103 × ILWC 46) and (Pusa 256 × ILWC 46)] to develop non-erroneous InDel markers at a genome-wide scale. Comparing these high-quality genomic sequences, 82,360 InDel markers with reference to kabuli genome and 13,891 InDel markers exhibiting differentiation between low and high pod number parental accessions and bulks of aforementioned mapping populations were developed. These informative markers were structurally and functionally annotated in diverse coding and non-coding sequence components of genome/genes of kabuli chickpea. The functional significance of regulatory and coding (frameshift and large-effect mutations) InDel markers for establishing marker-trait linkages through association/genetic mapping was apparent. The markers detected a greater amplification (97%) and intra-specific polymorphic potential (58–87%) among a diverse panel of cultivated desi, kabuli, and wild accessions even by using a simpler cost-efficient agarose gel-based assay implicating their utility in large-scale genetic analysis especially in domesticated chickpea with narrow genetic base. Two high-density inter-specific genetic linkage maps generated using aforesaid mapping populations were integrated to construct a consensus 1479 InDel markers-anchored high-resolution (inter-marker distance: 0.66 cM) genetic map for efficient molecular mapping of major QTLs governing pod number and seed yield per plant in chickpea. 
Utilizing these high-density genetic maps as anchors, three major genomic regions, each harboring robust QTLs for pod number and seed yield (explaining 15-28% of the phenotypic variation), were identified on chromosomes 2, 4, and 6. The integration of the genetic and physical maps at these QTLs scaled down the long major QTL intervals into short, high-resolution physical intervals (0.89-2.94 Mb) for pod number and seed yield, which were validated in multiple genetic backgrounds of the two chickpea mapping populations. The genome-wide InDel markers, including the natural allelic variants and genomic loci/genes delineated at the six major robust pod number and seed yield QTLs mapped on the high-density consensus genetic map, especially at one colocalized novel congruent QTL, were found most promising in chickpea. These functionally relevant molecular tags can drive marker-assisted genetic enhancement to develop high-yielding cultivars with increased seed/pod number and yield in chickpea. PMID:27695461

  12. A Simple and Accurate Network for Hydrogen and Carbon Chemistry in the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Gong, Munan; Ostriker, Eve C.; Wolfire, Mark G.

    2017-07-01

    Chemistry plays an important role in the interstellar medium (ISM), regulating the heating and cooling of the gas and determining abundances of molecular species that trace gas properties in observations. Although solving the time-dependent equations is necessary for accurate abundances and temperature in the dynamic ISM, a full chemical network is too computationally expensive to incorporate into numerical simulations. In this paper, we propose a new simplified chemical network for hydrogen and carbon chemistry in the atomic and molecular ISM. We compare results from our chemical network in detail with results from a full photodissociation region (PDR) code, and also with the Nelson & Langer (NL99) network previously adopted in the simulation literature. We show that our chemical network gives similar results to the PDR code in the equilibrium abundances of all species over a wide range of densities, temperature, and metallicities, whereas the NL99 network shows significant disagreement. Applying our network to 1D models, we find that the CO-dominated regime delimits the coldest gas and that the corresponding temperature tracks the cosmic-ray ionization rate in molecular clouds. We provide a simple fit for the locus of CO-dominated regions as a function of gas density and column. We also compare with observations of diffuse and translucent clouds. We find that the CO, CHx, and OHx abundances are consistent with equilibrium predictions for densities n = 100-1000 cm^-3, but the predicted equilibrium C abundance is higher than that seen in observations, signaling the potential importance of non-equilibrium/dynamical effects.
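
    To illustrate the kind of equilibrium-abundance calculation such a network performs, the sketch below balances a single toy formation/destruction pair for H2: grain-surface formation against photodissociation. The rate coefficients are illustrative placeholders, not values from the paper's network or the PDR code.

```python
def h2_equilibrium_fraction(n, R=3e-17, D=5e-11):
    """Toy steady state between grain-surface H2 formation (rate R*n*n_HI)
    and photodissociation (rate D*n_H2); returns the molecular fraction
    f = 2*n_H2/n. R and D are illustrative rate coefficients (cm^3/s, 1/s),
    not fits to any published network.

    Balance: R*n*(n - 2*n_H2) = D*n_H2  =>  n_H2 = R*n^2 / (D + 2*R*n)
    """
    n_h2 = R * n * n / (D + 2.0 * R * n)
    return 2.0 * n_h2 / n
```

Even this one-reaction balance reproduces the qualitative atomic-to-molecular transition: the fraction rises monotonically with density and approaches unity when formation dominates, which is the behavior a full network resolves species by species.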

  13. Exome sequencing identifies rare LDLR and APOA5 alleles conferring risk for myocardial infarction.

    PubMed

    Do, Ron; Stitziel, Nathan O; Won, Hong-Hee; Jørgensen, Anders Berg; Duga, Stefano; Angelica Merlini, Pier; Kiezun, Adam; Farrall, Martin; Goel, Anuj; Zuk, Or; Guella, Illaria; Asselta, Rosanna; Lange, Leslie A; Peloso, Gina M; Auer, Paul L; Girelli, Domenico; Martinelli, Nicola; Farlow, Deborah N; DePristo, Mark A; Roberts, Robert; Stewart, Alexander F R; Saleheen, Danish; Danesh, John; Epstein, Stephen E; Sivapalaratnam, Suthesh; Hovingh, G Kees; Kastelein, John J; Samani, Nilesh J; Schunkert, Heribert; Erdmann, Jeanette; Shah, Svati H; Kraus, William E; Davies, Robert; Nikpay, Majid; Johansen, Christopher T; Wang, Jian; Hegele, Robert A; Hechter, Eliana; Marz, Winfried; Kleber, Marcus E; Huang, Jie; Johnson, Andrew D; Li, Mingyao; Burke, Greg L; Gross, Myron; Liu, Yongmei; Assimes, Themistocles L; Heiss, Gerardo; Lange, Ethan M; Folsom, Aaron R; Taylor, Herman A; Olivieri, Oliviero; Hamsten, Anders; Clarke, Robert; Reilly, Dermot F; Yin, Wu; Rivas, Manuel A; Donnelly, Peter; Rossouw, Jacques E; Psaty, Bruce M; Herrington, David M; Wilson, James G; Rich, Stephen S; Bamshad, Michael J; Tracy, Russell P; Cupples, L Adrienne; Rader, Daniel J; Reilly, Muredach P; Spertus, John A; Cresci, Sharon; Hartiala, Jaana; Tang, W H Wilson; Hazen, Stanley L; Allayee, Hooman; Reiner, Alex P; Carlson, Christopher S; Kooperberg, Charles; Jackson, Rebecca D; Boerwinkle, Eric; Lander, Eric S; Schwartz, Stephen M; Siscovick, David S; McPherson, Ruth; Tybjaerg-Hansen, Anne; Abecasis, Goncalo R; Watkins, Hugh; Nickerson, Deborah A; Ardissino, Diego; Sunyaev, Shamil R; O'Donnell, Christopher J; Altshuler, David; Gabriel, Stacey; Kathiresan, Sekar

    2015-02-05

    Myocardial infarction (MI), a leading cause of death around the world, displays a complex pattern of inheritance. When MI occurs early in life, genetic inheritance is a major component of risk. Previously, rare mutations in low-density lipoprotein (LDL) genes have been shown to contribute to MI risk in individual families, whereas common variants at more than 45 loci have been associated with MI risk in the population. Here we evaluate how rare mutations contribute to early-onset MI risk in the population. We sequenced the protein-coding regions of 9,793 genomes from patients with MI at an early age (≤50 years in males and ≤60 years in females) along with MI-free controls. We identified two genes in which rare coding-sequence mutations were more frequent in MI cases versus controls at exome-wide significance. At low-density lipoprotein receptor (LDLR), carriers of rare non-synonymous mutations were at 4.2-fold increased risk for MI; carriers of null alleles at LDLR were at even higher risk (13-fold difference). Approximately 2% of early MI cases harbour a rare, damaging mutation in LDLR; this estimate is similar to one made more than 40 years ago using an analysis of total cholesterol. Among controls, about 1 in 217 carried an LDLR coding-sequence mutation and had plasma LDL cholesterol > 190 mg dl^-1. At apolipoprotein A-V (APOA5), carriers of rare non-synonymous mutations were at 2.2-fold increased risk for MI. When compared with non-carriers, LDLR mutation carriers had higher plasma LDL cholesterol, whereas APOA5 mutation carriers had higher plasma triglycerides. Recent evidence has connected MI risk with coding-sequence mutations at two genes functionally related to APOA5, namely lipoprotein lipase and apolipoprotein C-III (refs 18, 19). Combined, these observations suggest that, as well as LDL cholesterol, disordered metabolism of triglyceride-rich lipoproteins contributes to MI risk.

  14. Residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs

    NASA Astrophysics Data System (ADS)

    Couvreur, A.

    2009-05-01

    The theory of algebraic-geometric codes was developed in the beginning of the 1980s after a paper of V. D. Goppa. Given a smooth projective algebraic curve X over a finite field, there are two different constructions of error-correcting codes. The first one, called "functional", uses rational functions on X, and the second one, called "differential", involves rational 1-forms on this curve. Hundreds of papers are devoted to the study of such codes. In addition, a generalization of the functional construction to algebraic varieties of arbitrary dimension was given by Y. Manin in an article of 1984. A few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. In this thesis, we propose a differential construction of codes on algebraic surfaces. Afterwards, we study the properties of these codes and particularly their relations with functional codes. A rather surprising fact is that a major difference with the case of curves appears: while in the case of curves a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. This last observation motivates the study of codes which are the orthogonal of some functional code on a surface. We prove that, under some condition on the surface, these codes can be realized as sums of differential codes. Moreover, we show that some answers to open problems "à la Bertini" could give very interesting information on the parameters of these codes.
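
    For reference, the two constructions on a curve mentioned above are the standard Goppa codes: given a divisor D = P_1 + ... + P_n of distinct rational points and a divisor G with support disjoint from D,

```latex
\[
C_L(D,G) = \bigl\{\, (f(P_1), \dots, f(P_n)) : f \in L(G) \,\bigr\},
\qquad
C_\Omega(D,G) = \bigl\{\, (\operatorname{res}_{P_1}\omega, \dots, \operatorname{res}_{P_n}\omega) : \omega \in \Omega(G-D) \,\bigr\},
\]
```

    and on a curve these are dual to each other, C_Omega(D,G) = C_L(D,G)^perp; it is precisely the failure of this duality on surfaces that the thesis investigates.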

  15. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested, and no further choices regarding the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
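
    As a toy version of the linear-fit step, one can fix the nonlinear Morse parameters and solve for the linear expansion coefficients by least squares; in the full scheme the nonlinear parameters would then be refined by conjugate gradient. The basis form and parameter values below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def morse_design_matrix(r, a, r_e, order):
    """Columns phi_k(r) = (1 - exp(-a*(r - r_e)))**k, k = 1..order:
    a Morse-type fit basis with fixed nonlinear parameters (a, r_e)."""
    y = 1.0 - np.exp(-a * (r - r_e))
    return np.column_stack([y**k for k in range(1, order + 1)])

def fit_linear_coeffs(r, v, a, r_e, order=4):
    """Linear least-squares fit of potential values v on the Morse basis;
    the nonlinear parameters (a, r_e) are held fixed here, as they would
    be within one step of a nonlinear conjugate-gradient outer loop."""
    A = morse_design_matrix(r, a, r_e, order)
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coeffs
```

When the sampled potential really is a Morse curve, an order-2 fit with matching nonlinear parameters recovers it exactly, which is the sense in which a fit basis "with the same inherent shape as the potential cuts" shortens the expansion.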

  16. Rationale for switching to nonlocal functionals in density functional theory

    NASA Astrophysics Data System (ADS)

    Lazić, P.; Atodiresei, N.; Caciuc, V.; Brako, R.; Gumhalter, B.; Blügel, S.

    2012-10-01

    Density functional theory (DFT) has been steadily improving over the past few decades, becoming the standard tool for electronic structure calculations. The early local functionals (LDA) were eventually replaced by more accurate semilocal functionals (GGA) which are in use today. A major persisting drawback is the lack of the nonlocal correlation which is at the core of dispersive (van der Waals) forces, so that a large and important class of systems remains outside the scope of DFT. The vdW-DF correlation functional of Langreth and Lundqvist, published in 2004, was the first nonlocal functional which could be easily implemented. Beyond expectations, the nonlocal functional has brought significant improvement to systems that were believed not to be sensitive to nonlocal correlations. In this paper, we use the example of graphene nanodomes growing on the Ir(111) surface, where with an increase of the size of the graphene islands the character of the bonding changes from strong chemisorption towards almost pure physisorption. We demonstrate how the seamless character of the vdW-DF functionals makes it possible to treat all regimes self-consistently, proving to be a systematic and consistent improvement of DFT regardless of the nature of bonding. We also discuss the typical surface science example of CO adsorption on (111) surfaces of metals, which shows that the nonlocal correlation may also be crucial for strongly chemisorbed systems. We briefly discuss open questions, in particular the choice of the most appropriate exchange part of the functional. As the vdW-DF begins to appear implemented self-consistently in a number of popular DFT codes, with numerical costs close to the GGA calculations, we draw the attention of the DFT community to the advantages and benefits of the adoption of this new class of functionals.

  17. Rationale for switching to nonlocal functionals in density functional theory.

    PubMed

    Lazić, P; Atodiresei, N; Caciuc, V; Brako, R; Gumhalter, B; Blügel, S

    2012-10-24

    Density functional theory (DFT) has been steadily improving over the past few decades, becoming the standard tool for electronic structure calculations. The early local functionals (LDA) were eventually replaced by more accurate semilocal functionals (GGA) which are in use today. A major persisting drawback is the lack of the nonlocal correlation which is at the core of dispersive (van der Waals) forces, so that a large and important class of systems remains outside the scope of DFT. The vdW-DF correlation functional of Langreth and Lundqvist, published in 2004, was the first nonlocal functional which could be easily implemented. Beyond expectations, the nonlocal functional has brought significant improvement to systems that were believed not to be sensitive to nonlocal correlations. In this paper, we use the example of graphene nanodomes growing on the Ir(111) surface, where with an increase of the size of the graphene islands the character of the bonding changes from strong chemisorption towards almost pure physisorption. We demonstrate how the seamless character of the vdW-DF functionals makes it possible to treat all regimes self-consistently, proving to be a systematic and consistent improvement of DFT regardless of the nature of bonding. We also discuss the typical surface science example of CO adsorption on (111) surfaces of metals, which shows that the nonlocal correlation may also be crucial for strongly chemisorbed systems. We briefly discuss open questions, in particular the choice of the most appropriate exchange part of the functional. As the vdW-DF begins to appear implemented self-consistently in a number of popular DFT codes, with numerical costs close to the GGA calculations, we draw the attention of the DFT community to the advantages and benefits of the adoption of this new class of functionals.

  18. A code for optically thick and hot photoionized media

    NASA Astrophysics Data System (ADS)

    Dumont, A.-M.; Abrassart, A.; Collin, S.

    2000-05-01

    We describe a code designed for hot media (T >= a few 10^4 K) that are optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or both sides by a given spectrum. Contrary to other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10^22 cm^-2), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for a Thomson thickness larger than a few tenths. The transfer code is coupled with a Monte Carlo code which allows Compton and inverse Compton scattering to be taken into account and the spectrum emitted up to MeV energies to be computed, in any geometry. Comparisons with CLOUDY show that it gives similar results for small column densities. Several applications are mentioned.

  19. Effects of convection electric field on upwelling and escape of ionospheric O(+)

    NASA Technical Reports Server (NTRS)

    Cladis, J. B.; Chiu, Yam T.; Peterson, William K.

    1992-01-01

    A Monte Carlo code is used to explore the full effects of the convection electric field on distributions of upflowing O(+) ions from the cusp/cleft ionosphere. Trajectories of individual ions/neutrals are computed as they undergo multiple charge-exchange collisions. In the ion state, the trajectories are computed in realistic models of the magnetic field and the convection, corotation, and ambipolar electric fields. The effects of ion-ion collisions are included, and the trajectories are computed with and without simultaneous stochastic heating perpendicular to the magnetic field by a realistic model of broadband, low frequency waves. In the neutral state, ballistic trajectories in the gravitational field are computed. The initial conditions of the ions, in addition to the ambipolar electric field and the number densities and temperatures of O(+), H(+), and electrons as a function of height in the cusp/cleft region, were obtained from the results of Gombosi and Killeen (1987), who used a hydrodynamic code to simulate the time-dependent frictional-heating effects in a magnetic tube during its motion through the convection throat. The distributions of the ion fluxes as a function of height are constructed from the case histories.

  20. Gravity data inversion to determine 3D topographical density contrast of Banten area, Indonesia, based on fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Windhari, Ayuty; Handayani, Gunawan

    2015-04-01

The 3D inversion of the gravity anomaly to estimate topographical density was computed from gridded data with a MATLAB source code implementing the Parker-Oldenburg algorithm based on the fast Fourier transform. We extended and improved the source code 3DINVERT.M of Gomez Ortiz and Agarwal (2005), using the relationship between the Fourier transform of the gravity anomaly and the sum of the Fourier transforms of powers of the topography. We specified the density contrast between the two media to apply the inversion. An FFT routine was used to construct the amplitude spectrum for the given mean depth. The results are presented as new graphics of the inverted topography, the gravity anomaly due to the inverted topography, and the difference between the input gravity data and the computed data. The inversion terminates when the RMS error falls below a pre-assigned value used as the convergence criterion, or when the maximum number of iterations is reached. As an example, we applied the MATLAB program to gravity data of the Banten region, Indonesia.
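The iterative scheme described above can be sketched compactly. The following is a simplified 1-D Python illustration of the Parker-Oldenburg FFT inversion, not the authors' 3DINVERT.M code; the function name, default parameters, and the omission of the low-pass filtering normally applied at high wavenumbers are our own assumptions:

```python
import math

import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def parker_oldenburg_invert(dg, dx, rho_contrast, z0,
                            n_terms=4, tol=1e-3, max_iter=20):
    """Invert a 1-D gridded gravity anomaly dg (m/s^2) for interface
    topography h (m) about a mean depth z0, following the iterative
    Parker-Oldenburg FFT scheme. No low-pass filter is applied, so
    large k*z0 products will amplify high-wavenumber noise."""
    n = dg.size
    k = 2.0 * np.pi * np.abs(np.fft.fftfreq(n, d=dx))  # angular wavenumber
    # First-order estimate: downward-continue and scale the anomaly.
    base = -np.fft.fft(dg) * np.exp(k * z0) / (2.0 * np.pi * G * rho_contrast)
    h = np.real(np.fft.ifft(base))
    for _ in range(max_iter):
        H = base.copy()
        for m in range(2, n_terms + 1):  # higher-order Parker terms
            H -= k ** (m - 1) / math.factorial(m) * np.fft.fft(h ** m)
        h_new = np.real(np.fft.ifft(H))
        rms = np.sqrt(np.mean((h_new - h) ** 2))  # RMS convergence criterion
        h = h_new
        if rms < tol:
            break
    return h
```

The loop mirrors the termination rule stated in the abstract: iterate until the RMS change drops below a pre-assigned value or a maximum iteration count is reached.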

  1. ARES: automated response function code. Users manual. [HPGAM and LSQVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maung, T.; Reynolds, G.M.

This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step-by-step instructions for using the complete code package on an HP-1000 system. The code is designed to calculate response functions of NaI gamma-ray detectors with cylindrical or rectangular geometries.

  2. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

A method of constructing certain low-density parity-check (LDPC) codes from relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The codes can also be characterized as hybrid turbo-like codes that have projected graph, or protograph, representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second.
By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through examples that, as the block size goes to infinity, low iterative decoding thresholds close to channel-capacity limits can be achieved for codes of the type in question having low maximum variable-node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel-capacity thresholds.
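Density evolution, mentioned above, can be illustrated with a minimal sketch for the binary erasure channel and a (dv, dc)-regular ensemble; the protograph codes of the article require a more elaborate multi-edge analysis, so this is only the textbook special case, with function names and tolerances of our choosing:

```python
def bec_density_evolution(eps, dv, dc, iters=2000, tol=1e-12):
    """Iterate the message erasure probability x of a (dv, dc)-regular
    LDPC ensemble on a binary erasure channel with erasure rate eps,
    under belief-propagation decoding."""
    x = eps
    for _ in range(iters):
        x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        converged = abs(x_new - x) < tol
        x = x_new
        if converged:
            break
    return x

def bec_threshold(dv, dc, steps=40):
    """Bisect for the largest eps that density evolution still drives
    to a vanishing erasure rate: the iterative decoding threshold."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if bec_density_evolution(mid, dv, dc) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6)-regular ensemble this recursion reproduces the well-known BEC threshold of roughly 0.429, to be compared with the capacity limit of 0.5 for a rate-1/2 code.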

  3. A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong

    2013-01-01

Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC)(3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC(6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC(3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code with the BCH(127,120) code, for an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.

  4. Measurement Techniques for Clock Jitter

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin; Schlesinger, Adam

    2012-01-01

    NASA is in the process of modernizing its communications infrastructure to accompany the development of a Crew Exploration Vehicle (CEV) to replace the shuttle. With this effort comes the opportunity to infuse more advanced coded modulation techniques, including low-density parity-check (LDPC) codes that offer greater coding gains than the current capability. However, in order to take full advantage of these codes, the ground segment receiver synchronization loops must be able to operate at a lower signal-to-noise ratio (SNR) than supported by equipment currently in use.

  5. Circulating microRNAs and long non-coding RNAs in gastric cancer diagnosis: An update and review

    PubMed Central

    Huang, Ya-Kai; Yu, Jian-Chun

    2015-01-01

Gastric cancer (GC) is the fourth most common cancer and the third leading cause of cancer mortality worldwide. MicroRNAs (miRNAs) and long non-coding RNAs (lncRNAs) are the most widely studied non-coding RNAs in cancer research. To date, the roles of miRNAs and lncRNAs have been extensively studied in GC, suggesting that they represent a vital component of tumor biology. Furthermore, circulating miRNAs and lncRNAs are found to be dysregulated in patients with GC compared with healthy individuals, and may therefore function as promising biomarkers to improve the early detection of GC. Multiple routes of miRNA secretion have been elucidated, including active secretion by microvesicles, exosomes, apoptotic bodies, high-density lipoproteins, and protein complexes, as well as passive leakage from cells. However, the mechanism underlying lncRNA secretion and the functions of circulating miRNAs and lncRNAs have not been fully elucidated. Concurrently, to standardize the results of global investigations of circulating miRNA and lncRNA biomarker studies, several recommendations for pre-analytic considerations have been put forward. In this review, we summarize the known circulating miRNAs and lncRNAs for GC diagnosis. The possible mechanisms of miRNA and lncRNA secretion, as well as methodologies for the identification of circulating miRNAs and lncRNAs, are also discussed. The topics covered here highlight new insights into GC diagnosis and screening. PMID:26379393

  6. Density, Velocity and Ionization Structure in Accretion-Disc Winds

    NASA Technical Reports Server (NTRS)

    Sonneborn, George (Technical Monitor); Long, Knox

    2004-01-01

This was a project to exploit the unique capabilities of FUSE to monitor variations in the wind-formed spectral lines of a luminous, low-inclination cataclysmic variable (CV), RW Sex. (The original proposal contained two additional objects, but these were not approved.) These observations were intended to allow us to determine the relative roles of density and ionization-state changes in the outflow and to search for spectroscopic signatures of stochastic small-scale structure and shocked gas. By monitoring the temporal behavior of blueward-extended absorption lines with a wide range of ionization potentials and excitation energies, we proposed to track the changing physical conditions in the outflow. We planned to use a new Monte Carlo code to calculate the ionization structure of, and radiative transfer through, the CV wind. The analysis was therefore intended to establish the wind geometry, kinematics, and ionization state, both in a time-averaged sense and as a function of time.

  7. Reload of an industrial cylindrical cobalt source rack

    NASA Astrophysics Data System (ADS)

    Gharbi, F.; Kadri, O.; Trabelsi, A.

    2006-10-01

This work presents a Monte Carlo study of the cylindrical cobalt source rack geometry of the Tunisian gamma irradiation facility, using the GEANT code developed at CERN. The study investigates the question of reloading the source rack. The studied configurations consist of housing four new cobalt pencils, two in the upper and two in the lower cylinder of the source rack. The global dose-rate uniformity inside a "dummy" product, for routine and nonroutine irradiation and as a function of the product bulk density, was calculated for eight hypothetical configurations. The same calculation was also performed for both the original and the ideal (but not practical) configuration. It was shown that the hypothetical cases produced dose-uniformity variations, according to product density, that were statistically no different from those of the original and ideal configurations, and that the reload procedure cannot improve the irradiation quality inside facilities using cylindrical cobalt source racks.

  8. Investigation of electronic structure and chemical bonding of intermetallic Pd2HfIn: An ab-initio study

    NASA Astrophysics Data System (ADS)

    Bano, Amreen; Gaur, N. K.

    2018-05-01

Ab-initio calculations are carried out to study the electronic and chemical-bonding properties of the intermetallic full-Heusler compound Pd2HfIn, which crystallizes in the F-43m structure. All calculations are performed using the density functional theory (DFT) based code Quantum ESPRESSO. The generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) has been adopted for the exchange-correlation potential. The calculated electronic band structure reveals the metallic character of the compound. From the partial density of states (PDoS), we find relatively high-intensity electronic states of the Pd 4d orbitals at the Fermi level. We also find a pseudo-gap just above the Fermi level, and N(E) at the Fermi level is observed to be 0.8 states/eV; these findings indicate the existence of superconducting character in Pd2HfIn.

  9. Ab-initio atomic level stress and role of d-orbitals in CuZr, CuZn and CuY

    NASA Astrophysics Data System (ADS)

    Ojha, Madhusudan; Nicholson, Don M.; Egami, Takeshi

    2015-03-01

    Atomic level stress offers a new tool to characterize materials within the local approximation to density functional theory (DFT). Ab-initio atomic level stresses in B2 structures of CuZr, CuZn and CuY are calculated and results are explained on the basis of d-orbital contributions to Density of States (DOS). The overlap of d-orbital DOS plays an important role in the relative magnitude of atomic level stresses in these structures. The trends in atomic level stresses that we observed in these simple B2 structures are also seen in complex structures such as liquids, glasses and solid solutions. The stresses are however modified by the different coordination and relaxed separation distances in these complex structures. We used the Locally Self-Consistent Multiple Scattering (LSMS) code and Vienna Ab-initio Simulation Package (VASP) for ab-initio calculations.

  10. Brown Mycelial Mat as an Essential Morphological Structure of the Shiitake Medicinal Mushroom Lentinus edodes (Agaricomycetes).

    PubMed

    Vetchinkina, Elena; Gorshkov, Vladimir; Ageeva, Marina; Gogolev, Yuri; Nikitina, Valentina E

    2017-01-01

    We show here, to our knowledge for the first time, that the brown mycelial mat of the xylotrophic shiitake medicinal mushroom, Lentinus edodes, not only performs a protective function owing to significant changes in the ultrastructure (thickening of the cell wall, increased density, and pigmentation of the fungal hyphae) but also is a metabolically active stage in the development of the mushroom. The cells of this morphological structure exhibit repeated activation of expression of the genes lcc4, tir, exp1, chi, and exg1, coding for laccase, tyrosinase, a specific transcription factor, chitinase, and glucanase, which are required for fungal growth and morphogenesis. This study revealed the maximum activity of functionally important proteins with phenol oxidase and lectin activities, and the emergence of additional laccases, tyrosinases, and lectins, which are typical of only this stage of morphogenesis and have a regulatory function in the development and formation of fruiting bodies.

  11. Computational Investigation of Graphene-Carbon Nanotube-Polymer Composite

    NASA Astrophysics Data System (ADS)

    Jha, Sanjiv; Roth, Michael; Todde, Guido; Subramanian, Gopinath; Shukla, Manoj; University of Southern Mississippi Collaboration; US Army Engineer Research and Development Center, 3909 Halls Ferry Road, Vicksburg, MS 39180, USA Collaboration

Graphene is a single-atom-thick two-dimensional carbon sheet in which sp2-hybridized carbon atoms are arranged in a honeycomb structure. The functionalization of graphene and carbon nanotubes (CNTs) with polymer is a route for developing high-performance nanocomposite materials. We study the interfacial interactions among graphene, CNTs, and Nylon 6 polymer using computational methods based on density functional theory (DFT) and an empirical force field. Our DFT calculations are carried out using the Quantum-ESPRESSO electronic structure code with the van der Waals functional vdW-DF2, whereas the empirical calculations are performed using LAMMPS with the COMPASS force field. Our results demonstrate that the interactions between the (8,8) CNT and graphene, and between CNT/graphene and Nylon 6, are mostly of the van der Waals type. The computed Young's moduli indicate that the mechanical properties of the carbon nanostructures are enhanced by their interactions with the polymer. The presence of Stone-Wales (SW) defects lowers the Young's moduli of the carbon nanostructures.

  12. Code Compression for DSP

    DTIC Science & Technology

    1998-12-01

[Liao95] S. Liao, S. Devadas, K. Keutzer, "Code Density Optimization for Embedded DSP Processors Using Data Compression"

  13. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.

Proton double-differential cross sections from the 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. The cross sections have been compared against calculations with the EMPIRE reaction code, and different input level-density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level-density model best reproduces the experimental data. Level densities and spin-cutoff parameters for 62Ni and 60Co above the excitation-energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation-energy dependencies were found to be inconsistent with the Fermi-gas model.
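The Gilbert-Cameron prescription referenced above matches a constant-temperature form at low energy to a Fermi-gas form at high excitation energy. A minimal sketch of the Fermi-gas piece and the associated spin distribution, with illustrative (not evaluated) parameter values, is:

```python
import math

def fermi_gas_level_density(E, a, delta=0.0, sigma=1.0):
    """Total level density (levels/MeV) in the Fermi-gas form used by
    the Gilbert-Cameron prescription above the matching energy.
    E: excitation energy (MeV); a: level-density parameter (1/MeV);
    delta: pairing back-shift (MeV); sigma: spin-cutoff parameter."""
    U = E - delta  # effective excitation energy
    if U <= 0.0:
        raise ValueError("excitation energy must exceed the back-shift")
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

def spin_distribution(J, sigma):
    """Fraction of levels carrying spin J for spin cutoff sigma."""
    return ((2.0 * J + 1.0) / (2.0 * sigma ** 2)
            * math.exp(-(J + 0.5) ** 2 / (2.0 * sigma ** 2)))
```

The spin-cutoff parameter sigma extracted in the paper controls the width of `spin_distribution`, which sums to approximately one over all spins.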

  14. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications, consisting of multiple local codes, is proposed. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error-correction strength depending on the optical channel conditions.
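As a concrete illustration of a local component code, here is a sketch of single-error syndrome decoding for the (7,4) Hamming code. Note this is plain hard-decision syndrome decoding, not the soft MAP (Ashikhmin-Lytsin) decoding used in the paper:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i (1-based)
# is the binary representation of i, so the syndrome, read as a
# binary number, directly addresses the flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def hamming_decode(r):
    """Correct at most one bit error in the received 7-bit word r by
    syndrome decoding; returns the corrected codeword."""
    s = H.dot(r) % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])  # 0 means "no error detected"
    c = r.copy()
    if pos:
        c[pos - 1] ^= 1
    return c
```

In a GLDPC code, each check node imposes such a local code constraint on the variable nodes attached to it, rather than a single parity check.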

  15. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input; therefore, the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  16. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input; therefore, the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  17. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.

We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input; therefore, the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  18. Deformation potentials for band-to-band tunneling in silicon and germanium from first principles

    NASA Astrophysics Data System (ADS)

    Vandenberghe, William G.; Fischetti, Massimo V.

    2015-01-01

The deformation potentials for phonon-assisted band-to-band tunneling (BTBT) in silicon and germanium are calculated using a plane-wave density functional theory code. Using hybrid functionals, we obtain DTA = 4.1 × 10^8 eV/cm, DTO = 1.2 × 10^9 eV/cm, and DLO = 2.2 × 10^9 eV/cm for BTBT in silicon, and DTA = 7.8 × 10^8 eV/cm and DLO = 1.3 × 10^9 eV/cm for BTBT in germanium. These values agree with experimentally measured values, and we explain why, in diodes, TA/TO phonon-assisted BTBT dominates over LO phonon-assisted BTBT despite the larger deformation potential of the latter. We also explain why LO phonon-assisted BTBT can nevertheless dominate in many practical applications.

  19. Fluorescent protein tagging of endogenous protein in brain neurons using CRISPR/Cas9-mediated knock-in and in utero electroporation techniques

    PubMed Central

    Uemura, Takeshi; Mori, Takuma; Kurihara, Taiga; Kawase, Shiori; Koike, Rie; Satoga, Michiru; Cao, Xueshan; Li, Xue; Yanagawa, Toru; Sakurai, Takayuki; Shindo, Takayuki; Tabuchi, Katsuhiko

    2016-01-01

Genome editing is a powerful technique for studying gene functions. CRISPR/Cas9-mediated gene knock-in has recently been applied to various cells and organisms. Here, we successfully knocked in an EGFP coding sequence at the site immediately after the first ATG codon of the β-actin gene in neurons in the brain by the combined use of the CRISPR/Cas9 system and the in utero electroporation technique, resulting in the expression of the EGFP-tagged β-actin protein in cortical layer 2/3 pyramidal neurons. We detected EGFP fluorescence signals in the soma and neurites of EGFP knock-in neurons. These signals were particularly abundant in the heads of dendritic spines, corresponding to the localization of the endogenous β-actin protein. EGFP knock-in neurons showed no detectable changes in spine density or basic electrophysiological properties. In contrast, exogenously overexpressed EGFP-β-actin increased spine density and EPSC frequency and changed the resting membrane potential. Thus, our technique provides a potential tool for elucidating the localization of various endogenous proteins in neurons by epitope tagging without altering neuronal and synaptic functions. This technique can also be useful for introducing specific mutations into genes to study the function of proteins and genomic elements in brain neurons. PMID:27782168

  20. Fluorescent protein tagging of endogenous protein in brain neurons using CRISPR/Cas9-mediated knock-in and in utero electroporation techniques.

    PubMed

    Uemura, Takeshi; Mori, Takuma; Kurihara, Taiga; Kawase, Shiori; Koike, Rie; Satoga, Michiru; Cao, Xueshan; Li, Xue; Yanagawa, Toru; Sakurai, Takayuki; Shindo, Takayuki; Tabuchi, Katsuhiko

    2016-10-26

Genome editing is a powerful technique for studying gene functions. CRISPR/Cas9-mediated gene knock-in has recently been applied to various cells and organisms. Here, we successfully knocked in an EGFP coding sequence at the site immediately after the first ATG codon of the β-actin gene in neurons in the brain by the combined use of the CRISPR/Cas9 system and the in utero electroporation technique, resulting in the expression of the EGFP-tagged β-actin protein in cortical layer 2/3 pyramidal neurons. We detected EGFP fluorescence signals in the soma and neurites of EGFP knock-in neurons. These signals were particularly abundant in the heads of dendritic spines, corresponding to the localization of the endogenous β-actin protein. EGFP knock-in neurons showed no detectable changes in spine density or basic electrophysiological properties. In contrast, exogenously overexpressed EGFP-β-actin increased spine density and EPSC frequency and changed the resting membrane potential. Thus, our technique provides a potential tool for elucidating the localization of various endogenous proteins in neurons by epitope tagging without altering neuronal and synaptic functions. This technique can also be useful for introducing specific mutations into genes to study the function of proteins and genomic elements in brain neurons.

  1. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function is required. This numerical table can be generated a priori from the distribution function. The method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well accounted for, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial needs to be generated for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the time-dominating component of the simulation.
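The tabulation scheme described above can be sketched generically: build a numerical CDF of the target bending-angle density once, a priori, then invert it so that each trial is effectively drawn from the exact distribution rather than uniformly. The harmonic bending parameters and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def build_inverse_cdf(density, lo, hi, n=2048):
    """Tabulate the (normalized) CDF of an unnormalized density on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    cdf = np.cumsum(density(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    return x, cdf

def sample_trials(x, cdf, rng, size=1):
    """Generate bending-angle trials by inverting the tabulated CDF."""
    return np.interp(rng.random(size), cdf, x)

# Illustrative target density: harmonic bending potential around theta0,
# weighted by sin(theta) (the spherical-coordinate Jacobian).
theta0, k_over_kT = np.deg2rad(114.0), 50.0
density = lambda t: np.exp(-0.5 * k_over_kT * (t - theta0) ** 2) * np.sin(t)

rng = np.random.default_rng(0)
x, cdf = build_inverse_cdf(density, 0.0, np.pi)
trials = sample_trials(x, cdf, rng, size=10_000)
print(np.rad2deg(trials.mean()))  # clusters near theta0
```

Because every trial already follows the target distribution, essentially all trials would be accepted, which is the point of the tabulated approach.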

  2. Cross sections of proton-induced nuclear reactions on bismuth and lead up to 100 MeV

    NASA Astrophysics Data System (ADS)

    Mokhtari Oranj, L.; Jung, N. S.; Bakhtiari, M.; Lee, A.; Lee, H. S.

    2017-04-01

    Production cross sections of the 209Bi(p,xn)207,206,205,204,203Po, 209Bi(p,pxn)207,206,205,204,203,202Bi, and natPb(p,xn)206,205,204,203,202,201Bi reactions were measured to fill the gap in the excitation functions up to 100 MeV, as well as to assess the effects of different nuclear properties on proton-induced reactions on heavy nuclei. The targets were arranged in two different stacks consisting of Bi, Pb, Al, and Au foils and Pb plates. The proton beam intensity was determined by the activation analysis method using the 27Al(p,3pn)24Na, 197Au(p,pn)196Au, and 197Au(p,p3n)194Au monitor reactions in parallel, as well as by the Gafchromic film dosimetry method. The activities of the produced radionuclides in the foils were measured with an HPGe spectroscopy system. Over 40 new cross sections were measured in the investigated energy range. Satisfactory agreement was observed between the present experimental data and previously published data. Excitation functions of the above reactions were calculated using theoretical models based on the latest version of the TALYS code and compared with the new data as well as with other data in the literature. Additionally, the effects of various combinations of nuclear input parameters (different level density models, optical model potentials, and γ-ray strength functions) were considered. It was concluded that, with certain level density models, the calculated cross sections are comparable to the measured data. Furthermore, the effects of the optical model potential and γ-ray strength functions were considerably smaller than that of the nuclear level densities.

  3. Heat transfer in rocket engine combustion chambers and regeneratively cooled nozzles

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A conjugate heat transfer computational fluid dynamics (CFD) model was developed to describe regenerative cooling in the main combustion chamber, nozzle, and injector faceplate region of a launch-vehicle-class liquid rocket engine. An injector model for sprays, which treats the fluid as a variable-density, single-phase medium, was formulated, incorporated into a version of the FDNS code, and used to simulate injector flow typical of that in the Space Shuttle Main Engine (SSME). Various chamber-related heat transfer analyses were made to verify the predictive capability of the conjugate heat transfer analysis provided by the FDNS code. The density-based version of the FDNS code, with the real-fluid property models developed, was successful in predicting the streamtube combustion of individual injector elements.

  4. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates over 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.

  5. Single-Shot Scalar-Triplet Measurements in High-Pressure Swirl-Stabilized Flames for Combustion Code Validation

    NASA Technical Reports Server (NTRS)

    Kojima, Jun; Nguyen, Quang-Viet

    2007-01-01

    In support of NASA ARMD's code validation project, we have made significant progress by providing the first quantitative single-shot multi-scalar data from a turbulent, elevated-pressure (5 atm), swirl-stabilized, lean-direct-injection (LDI) type research burner operating on CH4-air, using a spatially resolved pulsed-laser spontaneous Raman diagnostic technique. The Raman diagnostics apparatus and data analysis that we present here were developed over the past 6 years at Glenn Research Center. From the Raman scattering data, we produce spatially mapped probability density functions (PDFs) of the instantaneous temperature, determined using a newly developed low-resolution effective rotational bandwidth (ERB) technique. The measured 3-scalar (triplet) correlations between temperature, CH4, and O2 concentrations, as well as their PDFs, also provide a high level of detail on the nature and extent of the turbulent mixing process and its impact on chemical reactions in a realistic gas turbine injector flame at elevated pressures. The multi-scalar triplet data set presented here provides a good validation case for CFD combustion codes, supplying both average and statistical values for the three measured scalars.
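A measured-scalar PDF of the kind described above is, at bottom, a density-normalized histogram of many single-shot samples at one probe location. The sketch below uses synthetic Gaussian temperature samples as a stand-in for real Raman data; all values are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-in for single-shot temperature measurements at one location
rng = np.random.default_rng(1)
temps = rng.normal(1800.0, 150.0, size=5000)  # kelvin, illustrative only

# The PDF is a histogram normalized so that it integrates to one
pdf, edges = np.histogram(temps, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

print(np.sum(pdf * np.diff(edges)))  # integral of the PDF: 1.0
print(centers[np.argmax(pdf)])       # mode, near the mean temperature
```

Repeating this per measurement location yields the spatially mapped PDFs; joint (triplet) statistics follow the same idea with multi-dimensional histograms.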

  6. Code Mixing and Modernization across Cultures.

    ERIC Educational Resources Information Center

    Kamwangamalu, Nkonko M.

    A review of recent studies addressed the functional uses of code mixing across cultures. Expressions of code mixing (CM) are not random; in fact, a number of functions of code mixing can easily be delineated, for example, the concept of "modernization." "Modernization" is viewed with respect to how bilingual code mixers perceive…

  7. Electron transport model of dielectric charging

    NASA Technical Reports Server (NTRS)

    Beers, B. L.; Hwang, H. C.; Lin, D. L.; Pine, V. W.

    1979-01-01

    A computer code (SCCPOEM) was assembled to describe the charging of dielectrics due to irradiation by electrons. The primary purpose for developing the code was to make available a convenient tool for studying the internal fields and charge densities in electron-irradiated dielectrics. The code, which is based on the primary electron transport code POEM, is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by a series of semianalytical solutions. Calculations to date suggest that the front face electric field is insufficient to cause breakdown, but that bulk breakdown fields can easily be exceeded.

  8. The application of LDPC code in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication, as it can overcome the frequency-selective fading of the wireless channel, increase system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is an error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.

  9. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  10. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of punctured RA codes, where an accumulator is chosen as the precoder. These codes not only are very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes under maximum likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the punctured RA code.
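A plain (unpunctured) RA encoder of the kind the paper builds on can be sketched in a few lines: repeat each information bit q times, pass the result through a fixed interleaver, then accumulate (a running XOR). This is a generic sketch under stated assumptions, not the authors' ARA construction with precoding and puncturing.

```python
import numpy as np

def ra_encode(bits, perm, q):
    """Repeat q times, interleave with a fixed permutation, then accumulate."""
    repeated = np.repeat(bits, q)                  # rate-1/q repetition code
    interleaved = repeated[perm]                   # pseudo-random interleaver
    return np.bitwise_xor.accumulate(interleaved)  # 1/(1+D) accumulator

rng = np.random.default_rng(2)
k, q = 8, 3
perm = rng.permutation(k * q)      # the interleaver is part of the code design
info = rng.integers(0, 2, size=k)
codeword = ra_encode(info, perm, q)
print(codeword)
```

Decoding inverts the accumulator by XORing consecutive codeword bits, which is why the structure stays so simple.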

  11. Numerical calculations of non-inductive current driven by microwaves in JET

    NASA Astrophysics Data System (ADS)

    Kirov, K. K.; Baranov, Yu; Mailloux, J.; Nave, M. F. F.; Contributors, JET

    2016-12-01

    Recent studies at JET focus on analysis of lower hybrid (LH) wave power absorption and current drive (CD) calculations by means of a new ray tracing (RT)/Fokker-Planck (FP) package. The RT code works in real 2D geometry, accounting for the plasma boundary and the launcher shape. LH waves with different parallel refractive index (N∥) spectra in the poloidal direction can be launched, thus simulating an authentic antenna spectrum with rows fed by different combinations of klystrons. Various FP solvers were tested, the most advanced of which is a relativistic bounce-averaged FP code. LH wave power deposition profiles from the new RT/FP code were compared to experimental results from electron cyclotron emission (ECE) analysis of pulses at 3.4 T at low and high density. This kind of direct comparison between power deposition profiles from experimental ECE data and a numerical model was carried out for the first time for waves in the LH range of frequencies. The results were in reasonable agreement with experimental data at the lower density, with line-averaged values of n_e ≈ 2.4 × 10^19 m^-3. At the higher density, n_e ≈ 3 × 10^19 m^-3, the code predicted larger on-axis LH power deposition, which is inconsistent with the experimental observations. Neither calculation was able to produce LH wave absorption at the plasma periphery, which contradicts the analysis of the ECE data; possible sources of these discrepancies are briefly discussed in the paper. The code was also used to calculate the LH power deposition and CD profiles for the low-density preheat phase of JET's advanced tokamak (AT) scenario. It was found that as the density evolves from hollow to flat and then to a more peaked profile, the LH power and driven current move inward, i.e. towards the plasma axis. A total driven current of about 70 kA for 1 MW of launched LH power was predicted under these conditions.

  12. Thermophysical Properties of Liquid Te: Density, Electrical Conductivity, and Viscosity

    NASA Technical Reports Server (NTRS)

    Li, C.; Su, C.; Lehoczky, S. L.; Scripa, R. N.; Ban, H.; Lin, B.

    2004-01-01

    The thermophysical properties of liquid Te, namely, density, electrical conductivity, and viscosity, were determined using the pycnometric and transient torque methods from the melting point of Te (723 K) to approximately 1150 K. A maximum was observed in the density of liquid Te as the temperature was increased. The electrical conductivity of liquid Te increased to a constant value of 2.89 × 10^5 Ω^-1 m^-1 as the temperature was raised above 1000 K. The viscosity decreased rapidly upon heating the liquid to elevated temperatures. The anomalous behaviors of the measured properties are explained as being caused by structural transitions in the liquid and are discussed in terms of Eyring's and Bachinskii's predicted behaviors for homogeneous liquids. The properties were also measured as a function of time after the liquid was cooled from approximately 1173 or 1123 K to 823 K. No relaxation phenomena were observed in the properties after the temperature of liquid Te was decreased to 823 K, in contrast to the relaxation behavior observed for some Te compounds.
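An Eyring-type analysis of the kind invoked above models the viscosity of a homogeneous liquid as η = A·exp(E/RT), so ln η is linear in 1/T and a least-squares line recovers the activation energy. The sketch below uses made-up A and E values, not the measured Te data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
A, E = 2.0e-4, 18000.0  # Pa·s and J/mol — assumed, not measured, values

# Synthetic viscosity data following eta = A * exp(E / (R * T))
T = np.linspace(750.0, 1150.0, 9)  # K
eta = A * np.exp(E / (R * T))

# ln(eta) is linear in 1/T, so a least-squares line recovers E and A
slope, intercept = np.polyfit(1.0 / T, np.log(eta), 1)
print(slope * R, np.exp(intercept))  # recovers E and A
```

A real liquid that deviates from this straight line in ln η vs 1/T, as liquid Te does, signals a structural change rather than simple homogeneous behavior.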

  13. Modeling the binary circumstellar medium of Type IIb/L/n supernova progenitors

    NASA Astrophysics Data System (ADS)

    Kolb, Christopher; Blondin, John; Borkowski, Kazik; Reynolds, Stephen

    2018-01-01

    Circumstellar interaction in close binary systems can produce a highly asymmetric environment, particularly for systems with a mass outflow velocity comparable to the binary orbital speed. This asymmetric circumstellar medium (CSM) becomes visible after a supernova explosion, when SN radiation illuminates the gas and when SN ejecta collide with the CSM. We aim to better understand the development of this asymmetric CSM, particularly for binary systems containing a red supergiant progenitor, and to study its impact on supernova morphology. To achieve this, we model the asymmetric wind and subsequent supernova explosion in full 3D hydrodynamics using the shock-capturing hydro code VH-1 on a spherical yin-yang grid. Wind interaction is computed in a frame co-rotating with the binary system, and gas is accelerated using a radiation pressure-driven wind model where optical depth of the radiative force is dependent on azimuthally-averaged gas density. We present characterization of our asymmetric wind density distribution model by fitting a polar-to-equatorial density contrast function to free parameters such as binary separation distance, primary mass loss rate, and binary mass ratio.

  14. A Graphical-User Interface for the U. S. Geological Survey's SUTRA Code using Argus ONE (for simulation of variable-density saturated-unsaturated ground-water flow with solute or energy transport)

    USGS Publications Warehouse

    Voss, Clifford I.; Boldt, David; Shapiro, Allen M.

    1997-01-01

    This report describes a Graphical-User Interface (GUI) for SUTRA, the U.S. Geological Survey (USGS) model for saturated-unsaturated, variable-fluid-density ground-water flow with solute or energy transport, which combines a USGS-developed code that interfaces SUTRA with Argus ONE, a commercial software product developed by Argus Interware. This product, known as Argus Open Numerical Environments (Argus ONE), is a programmable system with geographic-information-system-like (GIS-like) functionality that includes automated gridding and meshing capabilities for linking geospatial information with finite-difference and finite-element numerical model discretizations. The GUI for SUTRA is based on a public-domain Plug-In Extension (PIE) to Argus ONE that automates the use of Argus ONE to: automatically create the appropriate geospatial information coverages (information layers) for SUTRA, provide menus and dialogs for inputting geospatial information and simulation control parameters for SUTRA, and allow visualization of SUTRA simulation results. Following simulation control data and geospatial data input by the user through the GUI, Argus ONE creates text files in the format required for normal input to SUTRA, and SUTRA can be executed within the Argus ONE environment. Then, hydraulic head, pressure, solute concentration, temperature, saturation, and velocity results from the SUTRA simulation may be visualized. Although the GUI for SUTRA discussed in this report provides all of the graphical pre- and post-processor functions required for running SUTRA, it is also possible for advanced users to apply programmable features within Argus ONE to modify the GUI to meet the unique demands of particular ground-water modeling projects.

  15. Retrotransposons Are the Major Contributors to the Expansion of the Drosophila ananassae Muller F Element

    PubMed Central

    Shaffer, Christopher D.; Chen, Elizabeth J.; Quisenberry, Thomas J.; Ko, Kevin; Braverman, John M.; Giarla, Thomas C.; Mortimer, Nathan T.; Reed, Laura K.; Smith, Sheryl T.; Robic, Srebrenka; McCartha, Shannon R.; Perry, Danielle R.; Prescod, Lindsay M.; Sheppard, Zenyth A.; Saville, Ken J.; McClish, Allison; Morlock, Emily A.; Sochor, Victoria R.; Stanton, Brittney; Veysey-White, Isaac C.; Revie, Dennis; Jimenez, Luis A.; Palomino, Jennifer J.; Patao, Melissa D.; Patao, Shane M.; Himelblau, Edward T.; Campbell, Jaclyn D.; Hertz, Alexandra L.; McEvilly, Maddison F.; Wagner, Allison R.; Youngblom, James; Bedi, Baljit; Bettincourt, Jeffery; Duso, Erin; Her, Maiye; Hilton, William; House, Samantha; Karimi, Masud; Kumimoto, Kevin; Lee, Rebekah; Lopez, Darryl; Odisho, George; Prasad, Ricky; Robbins, Holly Lyn; Sandhu, Tanveer; Selfridge, Tracy; Tsukashima, Kara; Yosif, Hani; Kokan, Nighat P.; Britt, Latia; Zoellner, Alycia; Spana, Eric P.; Chlebina, Ben T.; Chong, Insun; Friedman, Harrison; Mammo, Danny A.; Ng, Chun L.; Nikam, Vinayak S.; Schwartz, Nicholas U.; Xu, Thomas Q.; Burg, Martin G.; Batten, Spencer M.; Corbeill, Lindsay M.; Enoch, Erica; Ensign, Jesse J.; Franks, Mary E.; Haiker, Breanna; Ingles, Judith A.; Kirkland, Lyndsay D.; Lorenz-Guertin, Joshua M.; Matthews, Jordan; Mittig, Cody M.; Monsma, Nicholaus; Olson, Katherine J.; Perez-Aragon, Guillermo; Ramic, Alen; Ramirez, Jordan R.; Scheiber, Christopher; Schneider, Patrick A.; Schultz, Devon E.; Simon, Matthew; Spencer, Eric; Wernette, Adam C.; Wykle, Maxine E.; Zavala-Arellano, Elizabeth; McDonald, Mitchell J.; Ostby, Kristine; Wendland, Peter; DiAngelo, Justin R.; Ceasrine, Alexis M.; Cox, Amanda H.; Docherty, James E.B.; Gingras, Robert M.; Grieb, Stephanie M.; Pavia, Michael J.; Personius, Casey L.; Polak, Grzegorz L.; Beach, Dale L.; Cerritos, Heaven L.; Horansky, Edward A.; Sharif, Karim A.; Moran, Ryan; Parrish, Susan; Bickford, Kirsten; Bland, Jennifer; Broussard, Juliana; Campbell, Kerry; 
Deibel, Katelynn E.; Forka, Richard; Lemke, Monika C.; Nelson, Marlee B.; O'Keeffe, Catherine; Ramey, S. Mariel; Schmidt, Luke; Villegas, Paola; Jones, Christopher J.; Christ, Stephanie L.; Mamari, Sami; Rinaldi, Adam S.; Stity, Ghazal; Hark, Amy T.; Scheuerman, Mark; Silver Key, S. Catherine; McRae, Briana D.; Haberman, Adam S.; Asinof, Sam; Carrington, Harriette; Drumm, Kelly; Embry, Terrance; McGuire, Richard; Miller-Foreman, Drew; Rosen, Stella; Safa, Nadia; Schultz, Darrin; Segal, Matt; Shevin, Yakov; Svoronos, Petros; Vuong, Tam; Skuse, Gary; Paetkau, Don W.; Bridgman, Rachael K.; Brown, Charlotte M.; Carroll, Alicia R.; Gifford, Francesca M.; Gillespie, Julie Beth; Herman, Susan E.; Holtcamp, Krystal L.; Host, Misha A.; Hussey, Gabrielle; Kramer, Danielle M.; Lawrence, Joan Q.; Martin, Madeline M.; Niemiec, Ellen N.; O'Reilly, Ashleigh P.; Pahl, Olivia A.; Quintana, Guadalupe; Rettie, Elizabeth A.S.; Richardson, Torie L.; Rodriguez, Arianne E.; Rodriguez, Mona O.; Schiraldi, Laura; Smith, Joanna J.; Sugrue, Kelsey F.; Suriano, Lindsey J.; Takach, Kaitlyn E.; Vasquez, Arielle M.; Velez, Ximena; Villafuerte, Elizabeth J.; Vives, Laura T.; Zellmer, Victoria R.; Hauke, Jeanette; Hauser, Charles R.; Barker, Karolyn; Cannon, Laurie; Parsamian, Perouza; Parsons, Samantha; Wichman, Zachariah; Bazinet, Christopher W.; Johnson, Diana E.; Bangura, Abubakarr; Black, Jordan A.; Chevee, Victoria; Einsteen, Sarah A.; Hilton, Sarah K.; Kollmer, Max; Nadendla, Rahul; Stamm, Joyce; Fafara-Thompson, Antoinette E.; Gygi, Amber M.; Ogawa, Emmy E.; Van Camp, Matt; Kocsisova, Zuzana; Leatherman, Judith L.; Modahl, Cassie M.; Rubin, Michael R.; Apiz-Saab, Susana S.; Arias-Mejias, Suzette M.; Carrion-Ortiz, Carlos F.; Claudio-Vazquez, Patricia N.; Espada-Green, Debbie M.; Feliciano-Camacho, Marium; Gonzalez-Bonilla, Karina M.; Taboas-Arroyo, Mariela; Vargas-Franco, Dorianmarie; Montañez-Gonzalez, Raquel; Perez-Otero, Joseph; Rivera-Burgos, Myrielis; Rivera-Rosario, Francisco J.; 
Eisler, Heather L.; Alexander, Jackie; Begley, Samatha K.; Gabbard, Deana; Allen, Robert J.; Aung, Wint Yan; Barshop, William D.; Boozalis, Amanda; Chu, Vanessa P.; Davis, Jeremy S.; Duggal, Ryan N.; Franklin, Robert; Gavinski, Katherine; Gebreyesus, Heran; Gong, Henry Z.; Greenstein, Rachel A.; Guo, Averill D.; Hanson, Casey; Homa, Kaitlin E.; Hsu, Simon C.; Huang, Yi; Huo, Lucy; Jacobs, Sarah; Jia, Sasha; Jung, Kyle L.; Wai-Chee Kong, Sarah; Kroll, Matthew R.; Lee, Brandon M.; Lee, Paul F.; Levine, Kevin M.; Li, Amy S.; Liu, Chengyu; Liu, Max Mian; Lousararian, Adam P.; Lowery, Peter B.; Mallya, Allyson P.; Marcus, Joseph E.; Ng, Patrick C.; Nguyen, Hien P.; Patel, Ruchik; Precht, Hashini; Rastogi, Suchita; Sarezky, Jonathan M.; Schefkind, Adam; Schultz, Michael B.; Shen, Delia; Skorupa, Tara; Spies, Nicholas C.; Stancu, Gabriel; Vivian Tsang, Hiu Man; Turski, Alice L.; Venkat, Rohit; Waldman, Leah E.; Wang, Kaidi; Wang, Tracy; Wei, Jeffrey W.; Wu, Dennis Y.; Xiong, David D.; Yu, Jack; Zhou, Karen; McNeil, Gerard P.; Fernandez, Robert W.; Menzies, Patrick Gomez; Gu, Tingting; Buhler, Jeremy; Mardis, Elaine R.; Elgin, Sarah C.R.

    2017-01-01

    The discordance between genome size and the complexity of eukaryotes can partly be attributed to differences in repeat density. The Muller F element (∼5.2 Mb) is the smallest chromosome in Drosophila melanogaster, but it is substantially larger (>18.7 Mb) in D. ananassae. To identify the major contributors to the expansion of the F element and to assess their impact, we improved the genome sequence and annotated the genes in a 1.4-Mb region of the D. ananassae F element, and a 1.7-Mb region from the D element for comparison. We find that transposons (particularly LTR and LINE retrotransposons) are major contributors to this expansion (78.6%), while Wolbachia sequences integrated into the D. ananassae genome are minor contributors (0.02%). Both D. melanogaster and D. ananassae F-element genes exhibit distinct characteristics compared to D-element genes (e.g., larger coding spans, larger introns, more coding exons, and lower codon bias), but these differences are exaggerated in D. ananassae. Compared to D. melanogaster, the codon bias observed in D. ananassae F-element genes can primarily be attributed to mutational biases instead of selection. The 5′ ends of F-element genes in both species are enriched in dimethylation of lysine 4 on histone 3 (H3K4me2), while the coding spans are enriched in H3K9me2. Despite differences in repeat density and gene characteristics, D. ananassae F-element genes show a similar range of expression levels compared to genes in euchromatic domains. This study improves our understanding of how transposons can affect genome size and how genes can function within highly repetitive domains. PMID:28667019

  16. Need for and Access to Supportive Services in the Child Welfare System

    PubMed Central

    Freisthler, Bridget

    2011-01-01

    Objective The purpose of this paper is to examine how geographical availability of social services is related to foster care entry rates and referrals for child maltreatment investigations. The primary concerns are to (1) determine locations across Los Angeles County where the availability of social services is low but display a high need for those services and (2) begin to examine how the geographic distribution of social services is related to rates of referrals and foster care entries in child maltreatment. Methods Archival data for all 288 zip codes within Los Angeles County were collected on rates of referrals, foster care entries, location and types of social service agencies, and zip code demographics. Data were analyzed using point process models and spatial regressions. Results Higher densities of child welfare services in local areas (for referrals) and lagged areas (for referrals and foster care entries) were related to lower rates of child maltreatment. The density of housing and housing-related services was negatively related to referrals in local areas and foster care entry rates in lagged areas. Areas with higher densities of substance abuse and domestic violence service agencies had significantly higher rates of both Child Protective Services (CPS) referrals and entries into foster care in local areas. Conclusions While the total density of child welfare services within and surrounding zip code areas is related to lower rates of referrals and foster care entries, the findings are less clear about what those specific services are. Living in and around “resource rich” zip codes may reduce rates of child maltreatment. PMID:23788827
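The "lagged area" variables used in spatial regressions like these are typically computed with a row-normalized spatial weights matrix, so each area's lag is the average of its neighbors' values. The four-area adjacency and service densities below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

# Hypothetical adjacency among four zip-code areas (1 = shares a border)
W = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
W /= W.sum(axis=1, keepdims=True)  # row-normalize: each row averages neighbors

# Hypothetical service densities (agencies per square mile) for the four areas
service_density = np.array([2.0, 8.0, 4.0, 1.0])

# The spatial lag: average service density in the surrounding (adjacent) areas
lagged_density = W @ service_density
print(lagged_density)
```

Both the local value and its spatial lag can then enter a regression, which is how "local" and "lagged" effects in the findings above are separated.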

  17. First-principles equation-of-state table of beryllium based on density-functional theory calculations

    DOE PAGES

    Ding, Y. H.; Hu, S. X.

    2017-06-06

    Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities of ρ = 0.001 to 500 g/cm^3 and temperatures of T = 2000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the last two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied the EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.
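Using such a tabulated EOS inside a hydrodynamics code requires interpolating between grid points, commonly bilinearly in (log ρ, log T). A minimal sketch of that lookup, using a toy ideal-gas-like table rather than the beryllium FPEOS data:

```python
import numpy as np

def eos_lookup(rho_grid, T_grid, logP_table, rho, T):
    """Bilinear interpolation of log10(P) on a (log rho, log T) grid."""
    lx, ly = np.log10(rho_grid), np.log10(T_grid)
    x, y = np.log10(rho), np.log10(T)
    i = min(max(np.searchsorted(lx, x) - 1, 0), len(lx) - 2)
    j = min(max(np.searchsorted(ly, y) - 1, 0), len(ly) - 2)
    tx = (x - lx[i]) / (lx[i + 1] - lx[i])  # fractional position in the cell
    ty = (y - ly[j]) / (ly[j + 1] - ly[j])
    logP = ((1 - tx) * (1 - ty) * logP_table[i, j]
            + tx * (1 - ty) * logP_table[i + 1, j]
            + (1 - tx) * ty * logP_table[i, j + 1]
            + tx * ty * logP_table[i + 1, j + 1])
    return 10.0 ** logP

# Toy table obeying P = rho * T (illustrative; not the beryllium FPEOS data)
rho_grid = np.logspace(-3, 2, 6)  # g/cm^3
T_grid = np.logspace(3, 8, 6)     # K
logP_table = np.log10(rho_grid)[:, None] + np.log10(T_grid)[None, :]

print(eos_lookup(rho_grid, T_grid, logP_table, 0.5, 1.0e5))  # ~5.0e4
```

Interpolating in log space keeps the lookup well behaved over the many decades of density and temperature such tables span.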

  19. A toolbox of lectins for translating the sugar code: the galectin network in phylogenesis and tumors.

    PubMed

    Kaltner, H; Gabius, H-J

    2012-04-01

    Lectin histochemistry has revealed cell-type-selective glycosylation, which is under dynamic and spatially controlled regulation. Since their chemical properties allow carbohydrates to reach unsurpassed structural diversity in oligomers, they are ideal for high-density information coding. Consequently, the concept of the sugar code assigns a functional dimension to the glycans of cellular glycoconjugates. Indeed, multifarious cell processes depend on specific recognition of glycans by their receptors (lectins), which translate the sugar-encoded information into effects. Duplication of ancestral genes and the subsequent divergence of sequences account for the evolutionary dynamics in lectin families. Differences in gene number can appear even among closely related species. The adhesion/growth-regulatory galectins are selected as an instructive example to trace phylogenetic diversification in several animals, most of them popular models in developmental and tumor biology. Chicken galectins are identified as a low-complexity set and are thus singled out for further detailed analysis. The various operative means of establishing protein diversity among the chicken galectins are delineated, and individual characteristics in expression profiles discerned. Applying this galectin-fingerprinting approach in histopathology has potential for refining differential diagnosis and for obtaining prognostic assessments. On the basis of in vitro work with tumor cells, a strategically orchestrated co-regulation of galectin expression with the presentation of cognate glycans is detected. This coordination epitomizes the far-reaching physiological significance of sugar coding.

  20. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.

  1. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
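
    The girth-4 condition discussed above has a compact matrix test: a parity-check matrix contains a length-4 cycle exactly when two of its rows share ones in two or more columns. A minimal sketch on a generic binary H (not the paper's QC-LDPC base/exponent-matrix construction):

```python
import numpy as np

def has_girth4(H):
    """True if parity-check matrix H contains a length-4 cycle, i.e.
    two rows sharing ones in at least two columns of the Tanner graph."""
    H = np.asarray(H, dtype=int)
    overlap = H @ H.T              # pairwise counts of shared 1-columns
    np.fill_diagonal(overlap, 0)   # ignore each row's overlap with itself
    return bool((overlap >= 2).any())

H_bad = [[1, 1, 0, 0],
         [1, 1, 0, 1]]            # rows share columns 0 and 1 -> 4-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0]]           # any two rows share at most one column
print(has_girth4(H_bad), has_girth4(H_good))  # True False
```

    For QC-LDPC codes the same check can be run on the equivalent expanded parity-check matrix, which is the object in which the paper cancels both types of girth-4 cycle.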

  3. MTpy: A Python toolbox for magnetotellurics

    NASA Astrophysics Data System (ADS)

    Krieger, Lars; Peacock, Jared R.

    2014-11-01

    We present the software package MTpy that allows handling, processing, and imaging of magnetotelluric (MT) data sets. Written in Python, the code is open source, containing sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides the independent definition of classes and functions, MTpy provides wrappers and convenience scripts to call standard external data processing and modelling software. In its current state, modules and functions of MTpy work on raw and pre-processed MT data. However, rather than providing a static compilation of software, we introduce MTpy as a flexible software toolbox, whose contents can be combined and utilised according to the respective needs of the user. Just as the overall functionality of a mechanical toolbox can be extended by adding new tools, MTpy is a flexible framework, which will be dynamically extended in the future. Furthermore, it can help to unify and extend existing codes and algorithms within the (academic) MT community. In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday work-flow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.

  4. Influence of the plasma environment on atomic structure using an ion-sphere model

    NASA Astrophysics Data System (ADS)

    Belkhiri, Madeny; Fontes, Christopher J.; Poirier, Michel

    2015-09-01

    Plasma environment effects on atomic structure are analyzed using various atomic structure codes. To monitor the effect of high free-electron density or low temperatures, Fermi-Dirac and Maxwell-Boltzmann statistics are compared. After a discussion of the implementation of the Fermi-Dirac approach within the ion-sphere model, several applications are considered. In order to check the consistency of the modifications brought here to extant codes, calculations have been performed using the Los Alamos Cowan Atomic Structure (cats) code in its Hartree-Fock or Hartree-Fock-Slater form and the parametric potential Flexible Atomic Code (fac). The ground-state energy shifts due to plasma effects for the six most ionized aluminum ions have been calculated using the fac and cats codes and agree fairly well. For the intercombination resonance line in Fe22 +, the plasma effect within the uniform electron gas model results in a positive shift that agrees with the multiconfiguration Dirac-Fock value of B. Saha and S. Fritzsche [J. Phys. B 40, 259 (2007), 10.1088/0953-4075/40/2/002]. Lastly, the present model is compared to experimental data in titanium measured on the terawatt Astra facility and provides values for electron temperature and density in agreement with the maria code.

  5. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms against that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  6. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the atmospheric event origin at local, regional, and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time, and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
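
    The "integrals taken exactly" idea can be illustrated on a toy version of the problem: with Gaussian picking-time likelihoods and a fixed celerity, the nuisance origin time integrates out in closed form, so the location grid search needs no numerical integration. All numbers below (station layout, celerity, pick noise) are invented for the illustration and are not from the BISL code.

```python
import numpy as np

stations = np.array([[0.0, 50.0], [40.0, -10.0], [-30.0, -40.0]])  # km
src = np.array([5.0, 5.0])        # true source (to be recovered)
c = 0.30                          # km/s, assumed fixed celerity
sigma = 2.0                       # s, pick-time standard deviation
picks = np.linalg.norm(stations - src, axis=1) / c + 100.0  # t0 = 100 s

def log_marginal(x, y):
    """log of Int Prod_i N(t_i; t0 + d_i/c, sigma^2) dt0, in closed form:
    a Gaussian product in t0 integrates to a variance-of-residuals term."""
    d = np.linalg.norm(stations - np.array([x, y]), axis=1)
    r = picks - d / c             # origin times implied by each station
    n = len(r)
    return (-(n - 1) / 2 * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.log(n)
            - ((r - r.mean())**2).sum() / (2 * sigma**2))

# Noise-free picks: the true source maximizes the marginal on a coarse grid.
xs = np.linspace(-20, 20, 41)
grid = [(x, y) for x in xs for y in xs]
best = max(grid, key=lambda p: log_marginal(*p))
print(best[0], best[1])  # 5.0 5.0
```

    The full BISL posterior also marginalizes over celerity with a prior; the point here is only that analytic marginalization of a nuisance parameter replaces one numerical quadrature per grid node with a standard-function evaluation.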

  7. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-09

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
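
    The outer Newton iteration can be sketched on a drastically reduced stand-in: a 1D unmodified Poisson-Boltzmann boundary-value problem, -u'' + sinh(u) = 0, discretized by finite differences rather than the paper's function-space formulation on DFT integration grids. Only the structure of the Newton loop (linearize, solve, update) carries over; everything else is an assumption of this sketch.

```python
import numpy as np

# 1D toy Poisson-Boltzmann: -u'' + sinh(u) = 0, u(0) = 1, u(1) = 0,
# solved by Newton's method on a uniform finite-difference grid.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = 1.0 - x                        # initial guess matching the BCs

for it in range(30):
    F = np.zeros(n)                # nonlinear residual (interior nodes)
    F[1:-1] = -(u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + np.sinh(u[1:-1])
    J = np.zeros((n, n))           # Jacobian of F
    J[0, 0] = J[-1, -1] = 1.0      # Dirichlet rows: keep boundaries fixed
    for i in range(1, n - 1):
        J[i, i - 1] = J[i, i + 1] = -1.0 / h**2
        J[i, i] = 2.0 / h**2 + np.cosh(u[i])
    du = np.linalg.solve(J, -F)    # Newton step
    u += du
    if np.abs(du).max() < 1e-12:
        break

res = np.abs(-(u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + np.sinh(u[1:-1])).max()
print("Newton steps:", it, "max residual:", float(res))
```

    The quadratic convergence of the outer loop is what makes the method fast; in the actual solver the linear solve is replaced by the Green's-function-preconditioned iterative solver described in the abstract.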

  8. Warthog: Coupling Status Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Reardon, Bradley T.

    The Warthog code was developed to couple codes that are developed in both the Multi-Physics Object-Oriented Simulation Environment (MOOSE) from Idaho National Laboratory (INL) and SHARP from Argonne National Laboratory (ANL). The initial phase of this work focused on coupling the neutronics code PROTEUS with the fuel performance code BISON. The main technical challenge involves mapping the power density solution determined by PROTEUS to the fuel in BISON. This presents a challenge since PROTEUS uses the MOAB mesh format, but BISON, like all other MOOSE codes, uses the libMesh format. When coupling the different codes, one must consider that Warthog is a light-weight MOOSE-based program that uses the Data Transfer Kit (DTK) to transfer data between the various mesh types. Users set up inputs for the codes they want to run, and then Warthog transfers the data between them. Currently Warthog supports XSProc from SCALE or the Sub-Group Application Programming Interface (SGAPI) in PROTEUS for generating cross sections. It supports arbitrary geometries using PROTEUS and BISON. DTK will transfer power densities and temperatures between the codes where the domains overlap. In the past fiscal year (FY), much work has gone into demonstrating two-way coupling for simple pin cells of various materials. XSProc was used to calculate the cross sections, which were then passed to PROTEUS in an external file. PROTEUS calculates the fission/power density, and Warthog uses DTK to pass this information to BISON, where it is used as the heat source. BISON then calculates the temperature profile of the pin cell and sends it back to XSProc to obtain the temperature-corrected cross sections. This process is repeated until the convergence criteria (tolerance on the BISON solve, or number of time steps) are reached. Models have been constructed and run for both uranium oxide and uranium silicide fuels. These models demonstrate a clear difference in power shape that is not accounted for in a stand-alone BISON run. Future work involves improving the user interface (UI), likely through integration with the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Workbench. Furthermore, automating the input creation would ease the user experience. The next priority is to continue coupling the work with other codes in the SHARP package. Efforts on other projects include work to couple the Nek5000 thermal-hydraulics code to MOOSE, but this is in the preliminary stages.
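
    The iterate-until-convergence scheme described above is a fixed-point (Picard) coupling loop. A hedged toy version, with a one-line "power solver" and "temperature solver" standing in for PROTEUS and BISON (all feedback coefficients are invented):

```python
# Toy two-physics coupling loop: a power-density solver and a temperature
# solver exchange scalar fields until the fixed point converges.
def power_density(temperature):
    # Doppler-like feedback: power drops as temperature rises (toy model).
    return 100.0 / (1.0 + 0.002 * (temperature - 300.0))

def fuel_temperature(power):
    # Linear conduction surrogate: temperature rises with power.
    return 300.0 + 5.0 * power

T, tol = 300.0, 1e-9
for step in range(100):
    q = power_density(T)           # "neutronics" solve
    T_new = fuel_temperature(q)    # "fuel performance" solve
    if abs(T_new - T) < tol:       # convergence criterion on the coupling
        break
    T = T_new
print(round(q, 3), round(T, 3))    # 61.803 609.017
```

    Because each solver's output damps the other's input (the toy map is contractive), the loop converges; real couplings need the same property, or relaxation, to converge.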

  9. Uncoupling cis-Acting RNA Elements from Coding Sequences Revealed a Requirement of the N-Terminal Region of Dengue Virus Capsid Protein in Virus Particle Formation

    PubMed Central

    Samsa, Marcelo M.; Mondotte, Juan A.; Caramelo, Julio J.

    2012-01-01

    Little is known about the mechanism of flavivirus genome encapsidation. Here, functional elements of the dengue virus (DENV) capsid (C) protein were investigated. Study of the N-terminal region of DENV C has been limited by the presence of overlapping cis-acting RNA elements within the protein-coding region. To dissociate these two functions, we used a recombinant DENV RNA with a duplication of essential RNA structures outside the C coding sequence. By the use of this system, the highly conserved amino acids FNML, which are encoded in the RNA cyclization sequence 5′CS, were found to be dispensable for C function. In contrast, deletion of the N-terminal 18 amino acids of C impaired DENV particle formation. Two clusters of basic residues (R5-K6-K7-R9 and K17-R18-R20-R22) were identified as important. A systematic mutational analysis indicated that a high density of positive charges, rather than particular residues at specific positions, was necessary. Furthermore, a differential requirement of N-terminal sequences of C for viral particle assembly was observed in mosquito and human cells. While no viral particles were observed in human cells with a virus lacking the first 18 residues of C, DENV propagation was detected in mosquito cells, although to a level about 50-fold less than that observed for a wild-type (WT) virus. We conclude that basic residues at the N terminus of C are necessary for efficient particle formation in mosquito cells but that they are crucial for propagation in human cells. This is the first report demonstrating that the N terminus of C plays a role in DENV particle formation. In addition, our results suggest that this function of C is differentially modulated in different host cells. PMID:22072762

  10. Stallion sperm transcriptome comprises functionally coherent coding and regulatory RNAs as revealed by microarray analysis and RNA-seq.

    PubMed

    Das, Pranab J; McCarthy, Fiona; Vishnoi, Monika; Paria, Nandina; Gresham, Cathy; Li, Gang; Kachroo, Priyanka; Sudderth, A Kendrick; Teague, Sheila; Love, Charles C; Varner, Dickson D; Chowdhary, Bhanu P; Raudsepp, Terje

    2013-01-01

    Mature mammalian sperm contain a complex population of RNAs some of which might regulate spermatogenesis while others probably play a role in fertilization and early development. Due to this limited knowledge, the biological functions of sperm RNAs remain enigmatic. Here we report the first characterization of the global transcriptome of the sperm of fertile stallions. The findings improved understanding of the biological significance of sperm RNAs which in turn will allow the discovery of sperm-based biomarkers for stallion fertility. The stallion sperm transcriptome was interrogated by analyzing sperm and testes RNA on a 21,000-element equine whole-genome oligoarray and by RNA-seq. Microarray analysis revealed 6,761 transcripts in the sperm, of which 165 were sperm-enriched, and 155 were differentially expressed between the sperm and testes. Next, 70 million raw reads were generated by RNA-seq of which 50% could be aligned with the horse reference genome. A total of 19,257 sequence tags were mapped to all horse chromosomes and the mitochondrial genome. The highest density of mapped transcripts was in gene-rich ECA11, 12 and 13, and the lowest in gene-poor ECA9 and X; 7 gene transcripts originated from ECAY. Structural annotation aligned sperm transcripts with 4,504 known horse and/or human genes, rRNAs and 82 miRNAs, whereas 13,354 sequence tags remained anonymous. The data were aligned with selected equine gene models to identify additional exons and splice variants. Gene Ontology annotations showed that sperm transcripts were associated with molecular processes (chemoattractant-activated signal transduction, ion transport) and cellular components (membranes and vesicles) related to known sperm functions at fertilization, while some messenger and micro RNAs might be critical for early development. 
The findings suggest that the rich repertoire of coding and non-coding RNAs in stallion sperm is not a random remnant from spermatogenesis in testes but a selectively retained and functionally coherent collection of RNAs.

  11. Stallion Sperm Transcriptome Comprises Functionally Coherent Coding and Regulatory RNAs as Revealed by Microarray Analysis and RNA-seq

    PubMed Central

    Das, Pranab J.; McCarthy, Fiona; Vishnoi, Monika; Paria, Nandina; Gresham, Cathy; Li, Gang; Kachroo, Priyanka; Sudderth, A. Kendrick; Teague, Sheila; Love, Charles C.; Varner, Dickson D.; Chowdhary, Bhanu P.; Raudsepp, Terje

    2013-01-01

    Mature mammalian sperm contain a complex population of RNAs some of which might regulate spermatogenesis while others probably play a role in fertilization and early development. Due to this limited knowledge, the biological functions of sperm RNAs remain enigmatic. Here we report the first characterization of the global transcriptome of the sperm of fertile stallions. The findings improved understanding of the biological significance of sperm RNAs which in turn will allow the discovery of sperm-based biomarkers for stallion fertility. The stallion sperm transcriptome was interrogated by analyzing sperm and testes RNA on a 21,000-element equine whole-genome oligoarray and by RNA-seq. Microarray analysis revealed 6,761 transcripts in the sperm, of which 165 were sperm-enriched, and 155 were differentially expressed between the sperm and testes. Next, 70 million raw reads were generated by RNA-seq of which 50% could be aligned with the horse reference genome. A total of 19,257 sequence tags were mapped to all horse chromosomes and the mitochondrial genome. The highest density of mapped transcripts was in gene-rich ECA11, 12 and 13, and the lowest in gene-poor ECA9 and X; 7 gene transcripts originated from ECAY. Structural annotation aligned sperm transcripts with 4,504 known horse and/or human genes, rRNAs and 82 miRNAs, whereas 13,354 sequence tags remained anonymous. The data were aligned with selected equine gene models to identify additional exons and splice variants. Gene Ontology annotations showed that sperm transcripts were associated with molecular processes (chemoattractant-activated signal transduction, ion transport) and cellular components (membranes and vesicles) related to known sperm functions at fertilization, while some messenger and micro RNAs might be critical for early development. 
The findings suggest that the rich repertoire of coding and non-coding RNAs in stallion sperm is not a random remnant from spermatogenesis in testes but a selectively retained and functionally coherent collection of RNAs. PMID:23409192

  12. Analysis of filament statistics in fast camera data on MAST

    NASA Astrophysics Data System (ADS)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape off layer (SOL) density profiles and thus control first wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.
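
    The probe-style statistical analysis mentioned above typically reduces to moments of the fluctuating signal. A hedged sketch on a synthetic bursty series (the burst model and every parameter below are invented; the paper uses real MAST camera intensities): intermittent positive bursts give positive skewness and a flatness above the Gaussian value of 3.

```python
import numpy as np

# Synthetic field-line intensity: Gaussian background plus decaying
# positive bursts standing in for filament transits.
rng = np.random.default_rng(0)
t = np.arange(20000)
signal = 1.0 + 0.05 * rng.standard_normal(t.size)
for t0 in rng.choice(t.size - 100, 50, replace=False):
    signal[t0:t0 + 100] += 0.5 * np.exp(-np.arange(100) / 20.0)

def fluct_stats(sig):
    """RMS, skewness, and flatness of the fluctuation sig - <sig>."""
    f = sig - sig.mean()
    rms = np.sqrt((f**2).mean())
    skew = (f**3).mean() / rms**3
    flat = (f**4).mean() / rms**4
    return rms, skew, flat

rms, skew, flat = fluct_stats(signal)
print(skew > 0, flat > 3)  # True True: intermittent, positively skewed
```

    In Langmuir-probe studies these moments (plus conditional averaging) are the standard summary of filament-induced intermittency; here they are applied to camera-derived intensity instead of ion saturation current.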

  13. SPHYNX: an accurate density-based SPH method for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Cabezón, R. M.; García-Senz, D.; Figueira, J.

    2017-10-01

    Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamics code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below.
    For identical particle settings and initial conditions, the results were similar to (or in some particular cases better than) those obtained with other SPH schemes such as GADGET-2, PSPH, or the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
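
    The sinc-kernel family admits a compact numerical sketch. Assuming the common form W_n(q) ∝ (sin(πq/2)/(πq/2))^n on compact support q = r/h in [0, 2) (the exponent range and support here are illustrative assumptions, not SPHYNX's exact choices), the 3D normalization constant follows from a 1D quadrature:

```python
import numpy as np

def sinc_kernel(q, n, Bn=1.0):
    """Sinc-family SPH kernel W_n(q) = Bn * (sin(pi q/2)/(pi q/2))**n,
    compactly supported on 0 <= q < 2 (q = r/h)."""
    q = np.asarray(q, dtype=float)
    w = np.zeros_like(q)
    m = (q > 0) & (q < 2)
    s = np.pi * q[m] / 2
    w[m] = Bn * (np.sin(s) / s) ** n
    w[q == 0] = Bn                 # limit of sinc at q = 0 is 1
    return w

def normalization(n, nq=4001):
    """Bn fixed by the 3D condition 4*pi * Int_0^2 W(q) q^2 dq = 1."""
    q = np.linspace(0.0, 2.0, nq)
    f = sinc_kernel(q, n) * q**2
    integral = 4 * np.pi * ((f[1:] + f[:-1]) / 2 * np.diff(q)).sum()
    return 1.0 / integral

# Larger n gives a more sharply peaked kernel (the pairing-instability
# knob); its normalization constant grows accordingly.
for n in (3, 5, 7):
    print(n, round(normalization(n), 4))
```

    Varying the exponent n at runtime is what makes the family "flexible": the kernel shape can be adapted to the local particle distribution without changing the support radius.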

  14. Genome-scale deletion screening of human long non-coding RNAs using a paired-guide RNA CRISPR library

    PubMed Central

    Zhu, Shiyou; Li, Wei; Liu, Jingze; Chen, Chen-Hao; Liao, Qi; Xu, Ping; Xu, Han; Xiao, Tengfei; Cao, Zhongzheng; Peng, Jingyu; Yuan, Pengfei; Brown, Myles; Liu, Xiaole Shirley; Wei, Wensheng

    2017-01-01

    CRISPR/Cas9 screens have been widely adopted to analyse coding gene functions, but high throughput screening of non-coding elements using this method is more challenging, because indels caused by a single cut in non-coding regions are unlikely to produce a functional knockout. A high-throughput method to produce deletions of non-coding DNA is needed. Herein, we report a high throughput genomic deletion strategy to screen for functional long non-coding RNAs (lncRNAs) that is based on a lentiviral paired-guide RNA (pgRNA) library. Applying our screening method, we identified 51 lncRNAs that can positively or negatively regulate human cancer cell growth. We individually validated 9 lncRNAs using CRISPR/Cas9-mediated genomic deletion and functional rescue, CRISPR activation or inhibition, and gene expression profiling. Our high-throughput pgRNA genome deletion method should enable rapid identification of functional mammalian non-coding elements. PMID:27798563

  15. Gaseous hydrogen/oxygen injector performance characterization

    NASA Technical Reports Server (NTRS)

    Degroot, W. A.; Tsuei, H. H.

    1994-01-01

    Results are presented of spontaneous Raman scattering measurements in the combustion chamber of a 110 N thrust class gaseous hydrogen/oxygen rocket. Temperature, oxygen number density, and water number density profiles at the injector exit plane are presented. These measurements are used as input profiles to a full Navier-Stokes computational fluid dynamics (CFD) code. Predictions of this code while using the measured profiles are compared with predictions while using assumed uniform injector profiles. Axial and radial velocity profiles derived from both sets of predictions are compared with Rayleigh scattering measurements in the exit plane of a 33:1 area ratio nozzle. Temperature and number density Raman scattering measurements at the exit plane of a test rocket with a 1:1.36 area ratio nozzle are also compared with results from both sets of predictions.

  16. Review of the 9th NLTE code comparison workshop

    DOE PAGES

    Piron, Robin; Gilleron, Franck; Aglitskiy, Yefim; ...

    2017-02-24

    Here, we review the 9th NLTE code comparison workshop, which was held in the Jussieu campus, Paris, from November 30th to December 4th, 2015. This time, the workshop was mainly focused on a systematic investigation of iron NLTE steady-state kinetics and emissivity, over a broad range of temperature and density. Through these comparisons, topics such as modeling of the dielectronic processes, density effects or the effect of an external radiation field were addressed. The K-shell spectroscopy of iron plasmas was also addressed, notably through the interpretation of tokamak and laser experimental spectra.

  17. Performance of Low-Density Parity-Check Coded Modulation

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
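
    One generic form such an LLR approximation takes (the textbook max-log form, shown here as an illustration and not necessarily the paper's general equation): the exact bit LLR is a log-sum-exp over the two bit-partitions of the constellation, and max-log keeps only the nearest symbol in each partition. The Gray labeling below is an assumption of this sketch.

```python
import numpy as np

def psk_constellation(m):
    """Gray-labeled m-PSK points (labeling choice is an assumption)."""
    k = int(np.log2(m))
    labels = np.arange(m) ^ (np.arange(m) >> 1)   # binary-reflected Gray
    points = np.exp(2j * np.pi * np.arange(m) / m)
    return points, labels, k

def llrs(r, m, noise_var, exact=False):
    """Bit LLRs for received sample r; positive values favor bit 0."""
    points, labels, k = psk_constellation(m)
    d2 = np.abs(r - points) ** 2 / noise_var      # scaled squared distances
    out = []
    for i in range(k):
        one = ((labels >> i) & 1).astype(bool)    # symbols whose bit i is 1
        if exact:
            val = (np.log(np.exp(-d2[~one]).sum())
                   - np.log(np.exp(-d2[one]).sum()))
        else:
            val = d2[one].min() - d2[~one].min()  # max-log approximation
        out.append(val)
    return np.array(out)

r = 0.9 + 0.1j                                    # noisy received sample
print(np.round(llrs(r, 8, 0.5), 2))               # max-log LLRs
print(np.round(llrs(r, 8, 0.5, exact=True), 2))   # exact LLRs
```

    At moderate-to-high SNR the two agree closely, which is why max-log style approximations feed LDPC decoders with little loss.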

  18. Review of the 9th NLTE code comparison workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piron, Robin; Gilleron, Franck; Aglitskiy, Yefim

    Here, we review the 9th NLTE code comparison workshop, which was held in the Jussieu campus, Paris, from November 30th to December 4th, 2015. This time, the workshop was mainly focused on a systematic investigation of iron NLTE steady-state kinetics and emissivity, over a broad range of temperature and density. Through these comparisons, topics such as modeling of the dielectronic processes, density effects or the effect of an external radiation field were addressed. The K-shell spectroscopy of iron plasmas was also addressed, notably through the interpretation of tokamak and laser experimental spectra.

  19. Review of the 9th NLTE code comparison workshop

    NASA Astrophysics Data System (ADS)

    Piron, R.; Gilleron, F.; Aglitskiy, Y.; Chung, H.-K.; Fontes, C. J.; Hansen, S. B.; Marchuk, O.; Scott, H. A.; Stambulchik, E.; Ralchenko, Yu.

    2017-06-01

    We review the 9th NLTE code comparison workshop, which was held in the Jussieu campus, Paris, from November 30th to December 4th, 2015. This time, the workshop was mainly focused on a systematic investigation of iron NLTE steady-state kinetics and emissivity, over a broad range of temperature and density. Through these comparisons, topics such as modeling of the dielectronic processes, density effects or the effect of an external radiation field were addressed. The K-shell spectroscopy of iron plasmas was also addressed, notably through the interpretation of tokamak and laser experimental spectra.

  20. Multi-code analysis of scrape-off layer filament dynamics in MAST

    NASA Astrophysics Data System (ADS)

    Militello, F.; Walkden, N. R.; Farley, T.; Gracias, W. A.; Olsen, J.; Riva, F.; Easy, L.; Fedorczak, N.; Lupelli, I.; Madsen, J.; Nielsen, A. H.; Ricci, P.; Tamain, P.; Young, J.

    2016-11-01

    Four numerical codes are employed to investigate the dynamics of scrape-off layer filaments in tokamak relevant conditions. Experimental measurements were taken in the MAST device using visual camera imaging, which allows the evaluation of the perpendicular size and velocity of the filaments, as well as the combination of density and temperature associated with the perturbation. A new algorithm based on the light emission integrated along the field lines associated with the position of the filament is developed to ensure that it is properly detected and tracked. The filaments are found to have velocities of the order of 1 km s⁻¹, a perpendicular diameter of around 2-3 cm and a density amplitude 2-3.5 times the background plasma. 3D and 2D numerical codes (the STORM module of BOUT++, GBS, HESEL and TOKAM3X) are used to reproduce the motion of the observed filaments with the purpose of validating the codes and of better understanding the experimental data. Good agreement is found between the 3D codes. The seeded filament simulations are also able to reproduce the dynamics observed in experiments with accuracy up to the level of the experimental error bars. In addition, the numerical results showed that filaments characterised by similar size and light emission intensity can have quite different dynamics if the pressure perturbation is distributed differently between density and temperature components. As an additional benefit, several observations on the dynamics of the filaments in the presence of evolving temperature fields were made and led to a better understanding of the behaviour of these coherent structures.

  1. Local variations in the timing of RSV epidemics.

    PubMed

    Noveroske, Douglas B; Warren, Joshua L; Pitzer, Virginia E; Weinberger, Daniel M

    2016-11-11

    Respiratory syncytial virus (RSV) is a primary cause of hospitalizations in children worldwide. The timing of seasonal RSV epidemics needs to be known in order to administer prophylaxis to high-risk infants at the appropriate time. We used data from the Connecticut State Inpatient Database to identify RSV hospitalizations based on ICD-9 diagnostic codes. Harmonic regression analyses were used to evaluate RSV epidemic timing at the county and ZIP code levels. Linear regression was used to investigate associations between the socioeconomic status of a locality and RSV epidemic timing. 9,740 hospitalizations coded as RSV occurred among children less than 2 years old between July 1, 1997 and June 30, 2013. The earliest ZIP code had a seasonal RSV epidemic that peaked, on average, 4.64 weeks earlier than the latest ZIP code. Earlier epidemic timing was significantly associated with demographic characteristics (higher population density and a larger fraction of the population that was black). Seasonal RSV epidemics in Connecticut occurred earlier in areas that were more urban (higher population density and a larger fraction of the population that was black). These findings could be used to better time the administration of prophylaxis to high-risk infants.
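
    Harmonic regression for epidemic timing can be sketched in a few lines. This assumes a minimal one-harmonic model (intercept plus an annual sine/cosine pair); the study's actual model may include more terms. The peak week is read off the fitted phase.

```python
import numpy as np

def peak_week(counts, period=52.0):
    """Peak week of a weekly series from a one-harmonic regression."""
    t = np.arange(len(counts), dtype=float)
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    b0, bc, bs = np.linalg.lstsq(X, counts, rcond=None)[0]
    phase = np.arctan2(bs, bc)   # fit is A*cos(w t - phase); peak at w t = phase
    return (phase * period / (2 * np.pi)) % period

# Synthetic 5-year weekly series peaking at week 10 of each season.
weeks = np.arange(5 * 52)
counts = 100 + 80 * np.cos(2 * np.pi * (weeks - 10) / 52)
print(round(float(peak_week(counts)), 1))  # 10.0
```

    Fitting the same model per county or ZIP code and comparing the recovered phases is what yields the relative-timing estimates reported in the abstract.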

  2. Special features of isomeric ratios in nuclear reactions induced by various projectile particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danagulyan, A. S.; Hovhannisyan, G. H., E-mail: hov-gohar@ysu.am; Bakhshiyan, T. M.

    2016-05-15

    Calculations for (p, n) and (α, p3n) reactions were performed with the aid of the TALYS-1.4 code. Reactions in which the mass numbers of target and product nuclei were identical were examined in the range of A = 44–124. Excitation functions were obtained for product nuclei in ground and isomeric states, and isomeric ratios were calculated. The calculated data reflect well the dependence of the isomeric ratios on the projectile type. A comparison of the calculated and experimental data reveals that, for some nuclei in a high-spin state, the calculated data fall greatly short of their experimental counterparts. These discrepancies may be due to the presence of high-spin yrast states and rotational bands in these nuclei. Calculations involving various level-density models included in the TALYS-1.4 code, with allowance for the enhancement of collective effects, do not remove the discrepancies in the majority of cases.

  3. Ab-initio Calculation of the XANES of Lithium Phosphates and LiFePO4

    NASA Astrophysics Data System (ADS)

    Yiu, Y. M.; Yang, Songlan; Wang, Dongniu; Sun, Xueliang; Sham, T. K.

    2013-04-01

    Lithium iron phosphate has been regarded as a promising cathode material for next-generation lithium-ion batteries due to its high specific capacity and superior thermal and cyclic stability [1]. In this study, the XANES (X-ray Absorption Near Edge Structure) spectra of lithium iron phosphate and lithium phosphates of various compositions at the Li K, P L3,2, Fe M3,2 and O K-edges have been simulated self-consistently using ab-initio calculations based on multiple scattering theory (the FEFF9 code) and DFT (Density Functional Theory, the Wien2k code). The lithium phosphates under investigation include LiFePO4, γ-Li3PO4, Li4P2O7 and LiPO3. The calculated spectra are compared to the experimental XANES recorded in total electron yield (TEY) and fluorescence yield (FLY). This work was carried out to assess the XANES of possible phases present in LiFePO4-based Li-ion battery applications [2].

  4. A MATLAB-based finite-element visualization of quantum reactive scattering. I. Collinear atom-diatom reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warehime, Mick; Alexander, Millard H., E-mail: mha@umd.edu

    We restate the application of the finite element method to collinear triatomic reactive scattering dynamics with a novel treatment of the scattering boundary conditions. The method provides directly the reactive scattering wave function and, subsequently, the probability current density field. Visualizing these quantities provides additional insight into the quantum dynamics of simple chemical reactions beyond simplistic one-dimensional models. Application is made here to a symmetric reaction (H+H{sub 2}), a heavy-light-light reaction (F+H{sub 2}), and a heavy-light-heavy reaction (F+HCl). To accompany this article, we have written a MATLAB code which is fast, simple enough to be accessible to a wide audience, as well as generally applicable to any problem that can be mapped onto a collinear atom-diatom reaction. The code and user's manual are available for download from http://www2.chem.umd.edu/groups/alexander/FEM.

  5. 500  Gb/s free-space optical transmission over strong atmospheric turbulence channels.

    PubMed

    Qu, Zhen; Djordjevic, Ivan B

    2016-07-15

    We experimentally demonstrate a high-spectral-efficiency, large-capacity free-space optical (FSO) transmission system using low-density parity-check (LDPC) coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, inexpensive wavefront-sensorless adaptive optics (AO) is used. To deal with remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channel is demonstrated.

  6. Measuring Time-of-Flight in an Ultrasonic LPS System Using Generalized Cross-Correlation

    PubMed Central

    Villladangos, José Manuel; Ureña, Jesús; García, Juan Jesús; Mazo, Manuel; Hernández, Álvaro; Jiménez, Ana; Ruíz, Daniel; De Marziani, Carlos

    2011-01-01

    In this article, a time-of-flight detection technique in the frequency domain is described for an ultrasonic Local Positioning System (LPS) based on encoded beacons. Beacon transmissions have been synchronized and become simultaneous by means of the DS-CDMA (Direct-Sequence Code Division Multiple Access) technique. Every beacon has been associated to a 255-bit Kasami code. The detection of signal arrival instant at the receiver, from which the distance to each beacon can be obtained, is based on the application of the Generalized Cross-Correlation (GCC), by using the cross-spectral density between the received signal and the sequence to be detected. Prior filtering to enhance the frequency components around the carrier frequency (40 kHz) has improved estimations when obtaining the correlation function maximum, which implies an improvement in distance measurement precision. Positioning has been achieved by using hyperbolic trilateration, based on the Time Differences of Arrival (TDOA) between a reference beacon and the others. PMID:22346645
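    The GCC detection step described here — form the cross-spectral density between the received signal and the reference sequence, inverse-transform it to get the correlation function, and take the lag of its maximum as the arrival instant — can be sketched as follows. Function names are hypothetical, and the optional PHAT weighting stands in for the prior filtering around the 40 kHz carrier.

```python
import numpy as np

def gcc_delay(received, reference, phat=False):
    """Estimate the arrival delay (in samples) of `reference` inside
    `received` via the cross-spectral density (generalized cross-correlation)."""
    n = len(received) + len(reference)  # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(received, n)
    S = np.fft.rfft(reference, n)
    csd = X * np.conj(S)            # cross-spectral density
    if phat:                        # optional PHAT weighting
        csd /= np.abs(csd) + 1e-12
    r = np.fft.irfft(csd, n)        # correlation function
    return int(np.argmax(r))        # lag of the correlation maximum
```

    In the usage below a random ±1 sequence stands in for the 255-bit Kasami code; the lag of the correlation maximum recovers the sample delay, from which distance follows by multiplying by the speed of sound over the sampling rate.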

  7. Measuring time-of-flight in an ultrasonic LPS system using generalized cross-correlation.

    PubMed

    Villladangos, José Manuel; Ureña, Jesús; García, Juan Jesús; Mazo, Manuel; Hernández, Alvaro; Jiménez, Ana; Ruíz, Daniel; De Marziani, Carlos

    2011-01-01

    In this article, a time-of-flight detection technique in the frequency domain is described for an ultrasonic local positioning system (LPS) based on encoded beacons. Beacon transmissions have been synchronized and become simultaneous by means of the DS-CDMA (direct-sequence code division multiple access) technique. Every beacon has been associated to a 255-bit Kasami code. The detection of signal arrival instant at the receiver, from which the distance to each beacon can be obtained, is based on the application of the generalized cross-correlation (GCC), by using the cross-spectral density between the received signal and the sequence to be detected. Prior filtering to enhance the frequency components around the carrier frequency (40 kHz) has improved estimations when obtaining the correlation function maximum, which implies an improvement in distance measurement precision. Positioning has been achieved by using hyperbolic trilateration, based on the time differences of arrival (TDOA) between a reference beacon and the others.

  8. A Monte Carlo Simulation of Prompt Gamma Emission from Fission Fragments

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Litaize, O.; Serot, O.

    2013-03-01

    The prompt fission gamma spectra and multiplicities are investigated with the Monte Carlo code FIFRELIN, which is developed at the Cadarache CEA research center. Knowing the fully accelerated fragment properties, their de-excitation is simulated through a cascade of neutron, gamma and/or electron emissions. This paper presents the recent developments in the FIFRELIN code and the results obtained on the spontaneous fission of 252Cf. Concerning the decay cascade simulation, a full Hauser-Feshbach model is compared with a previous one using a Weisskopf spectrum for neutron emission. Particular attention is paid to the treatment of the neutron/gamma competition. Calculations performed using different level-density and gamma-strength-function models show significant discrepancies in the slope of the gamma spectra at high energy. The underestimation of the prompt gamma spectra, obtained regardless of our de-excitation cascade modelling choice, is discussed. This discrepancy is probably linked to an underestimation of the post-neutron fragment spins in our calculation.

  9. The present state and future directions of PDF methods

    NASA Technical Reports Server (NTRS)

    Pope, S. B.

    1992-01-01

    The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and the future direction of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to assure that combustion is treated as an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the development of the turbulence model, the chemical kinetics, and the numerical method. Some turbulent combustion models typically used in industry are the k-epsilon turbulence model, equilibrium/mixing-limited combustion, and finite-volume codes.

  10. streamgap-pepper: Effects of peppering streams with many small impacts

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Erkal, Denis; Sanders, Jason

    2017-02-01

    streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.

  11. Changes in the reflectivity of a lithium niobate crystal decorated with a graphene layer

    NASA Astrophysics Data System (ADS)

    Salas, O.; Garcés, E.; Castillo, F. L.; Magaña, L. F.

    2017-01-01

    Density functional theory and molecular dynamics were used to study the interaction of a graphene layer with the surface of lithium niobate. The simulations were performed at atmospheric pressure and 300 K. We found that the graphene layer is physisorbed, with an adsorption energy of -0.8205 eV/C-atom. Subsequently, the optical absorption of the graphene-(lithium niobate) system was calculated and compared with that of graphene alone and of lithium niobate alone. The calculations were performed using the Quantum Espresso code with the GGA approximation and the vdW-DF2 functional (which includes long-range correlation effects such as van der Waals interactions).

  12. Theoretical study of Ag doping-induced vacancies defects in armchair graphene

    NASA Astrophysics Data System (ADS)

    Benchallal, L.; Haffad, S.; Lamiri, L.; Boubenider, F.; Zitoune, H.; Kahouadji, B.; Samah, M.

    2018-06-01

    We have performed a density functional theory (DFT) study of the adsorption of silver atoms (Ag, Ag2 and Ag3) on graphene using the SIESTA code, in the generalized gradient approximation (GGA). The adsorption energy, geometry, magnetic moments and charge transfer of the Ag clusters-graphene system are calculated. The minimum-energy configuration demonstrates that all structures remain planar and the silver atoms fit into this plane. The charge transfer between the silver clusters and the carbon atoms constituting the graphene surface is indicative of a strong bond. The structure doped with a single silver atom has a magnetic moment, and the two others are nonmagnetic.

  13. ICF-CY code set for infants with early delay and disabilities (EDD Code Set) for interdisciplinary assessment: a global experts survey.

    PubMed

    Pan, Yi-Ling; Hwang, Ai-Wen; Simeonsson, Rune J; Lu, Lu; Liao, Hua-Fang

    2015-01-01

    Comprehensive description of functioning is important in providing early intervention services for infants with developmental delay/disabilities (DD). A code set of the International Classification of Functioning, Disability and Health: Children and Youth Version (ICF-CY) could facilitate the practical use of the ICF-CY in team evaluation. The purpose of this study was to derive an ICF-CY code set for infants under three years of age with early delay and disabilities (EDD Code Set) for initial team evaluation. The EDD Code Set based on the ICF-CY was developed on the basis of a Delphi survey of international professionals experienced in implementing the ICF-CY and professionals in early intervention service system in Taiwan. Twenty-five professionals completed the Delphi survey. A total of 82 ICF-CY second-level categories were identified for the EDD Code Set, including 28 categories from the domain Activities and Participation, 29 from body functions, 10 from body structures and 15 from environmental factors. The EDD Code Set of 82 ICF-CY categories could be useful in multidisciplinary team evaluations to describe functioning of infants younger than three years of age with DD, in a holistic manner. Future validation of the EDD Code Set and examination of its clinical utility are needed. The EDD Code Set with 82 essential ICF-CY categories could be useful in the initial team evaluation as a common language to describe functioning of infants less than three years of age with developmental delay/disabilities, with a more holistic view. The EDD Code Set including essential categories in activities and participation, body functions, body structures and environmental factors could be used to create a functional profile for each infant with special needs and to clarify the interaction of child and environment accounting for the child's functioning.

  14. Hypersonic CFD applications at NASA Langley using CFL3D and CFL3DE

    NASA Technical Reports Server (NTRS)

    Richardson, Pamela F.

    1989-01-01

    The CFL3D/CFL3DE CFD codes and the status of their industrial use are outlined. Comparisons of grid density, pressure, heat transfer, and aerodynamic coefficients are presented. Future plans related to the National Aerospace Plane Program are briefly outlined.

  15. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
    New version program summary
    Program title: PROFESS
    Catalogue identifier: AEBN_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 68 721
    No. of bytes in distributed program, including test data, etc.: 1 708 547
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Intel with ifort; AMD Opteron with pathf90
    Operating system: Linux
    Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain decomposition using MPI.
    RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions.
    Classification: 7.3
    External routines: FFTW 2.1.5 (http://www.fftw.org)
    Catalogue identifier of previous version: AEBN_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839
    Does the new version supersede the previous version?: Yes
    Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground-state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ~10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ~50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ~40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary.
    Solution method: Computes energies as described in the text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors.
    Reasons for new version: To allow much larger systems to be simulated using PROFESS.
    Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website (http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors.
    Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large-scale problems are given in the PROFESS paper and Ref. [1].
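    As a toy illustration of the orbital-free idea — the kinetic energy is evaluated directly from the density, with no orbitals — the simplest local piece of the kinetic-energy functional, the Thomas-Fermi term, can be computed on a real-space density grid. PROFESS itself uses far more sophisticated nonlocal functionals; the names below are hypothetical and the sketch is illustrative only.

```python
import numpy as np

# Thomas-Fermi constant C_F = (3/10)(3*pi^2)^(2/3), atomic units
CF = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)

def tf_kinetic_energy(rho, dv):
    """Thomas-Fermi kinetic energy T = C_F * integral of rho^(5/3),
    evaluated on a uniform real-space grid with voxel volume dv."""
    return CF * np.sum(rho ** (5.0 / 3.0)) * dv
```

    For a uniform density this reduces to C_F ρ^(5/3) V, which is a convenient sanity check when wiring up a density-only energy evaluation.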

  16. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    NASA Astrophysics Data System (ADS)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  17. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycles. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
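    The quasi-cyclic construction mentioned above replaces each entry of a small base matrix with a z×z circulant (a cyclically shifted identity) or an all-zero block, extending a short code to a long one. A minimal sketch of that expansion step, with hypothetical names — the PEG construction of the base matrix itself is not shown:

```python
import numpy as np

def expand_qc(base, z):
    """Expand a quasi-cyclic base matrix of shift exponents (-1 = all-zero
    block) into a binary parity-check matrix built from z x z circulants."""
    m, n = base.shape
    H = np.zeros((m * z, n * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i, j]
            if s >= 0:
                # circulant: identity with columns cyclically shifted by s
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(I, s, axis=1)
    return H
```

    Because every block is either zero or a permutation, the expanded matrix keeps the column/row weights of the base matrix, which is why girth properties engineered by PEG at the base level carry over to the long code.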

  18. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  19. Sharp Interface Algorithm for Large Density Ratio Incompressible Multiphase Magnetohydrodynamic Flows

    DTIC Science & Technology

    2013-01-01

    The FronTier-MHD code has been validated through comparison with experiments on liquid metal jets, and has been used for simulations of liquid mercury targets for the proposed muon collider. Validation studies of the FronTier-MHD code have been performed using experimental and theoretical studies of liquid mercury jets in magnetic fields.

  20. Evaluation of the efficiency and fault density of software generated by code generators

    NASA Technical Reports Server (NTRS)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code; the engineers who generate the code make mistakes, and producing a large body of code with high reliability takes considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs given through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's reliability requirements, comparison against in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed; in-house verification of these claims is warranted.

  1. Electronic structure and charge transfer excited states of endohedral fullerene-containing electron donor-acceptor complexes utilized in organic photovoltaics

    NASA Astrophysics Data System (ADS)

    Amerikheirabadi, Fatemeh

    Organic donor-acceptor complexes form the main component of organic photovoltaic devices (OPVs). The open-circuit voltage of OPVs is directly related to the charge transfer excited state energies of these complexes. Currently a large number of different molecular complexes are being tested for their efficiency in photovoltaic devices. In this work, density functional theory as implemented in the NRLMOL code is used to investigate the electronic structure and related properties of these donor-acceptor complexes. The charge transfer excitation energies are calculated using the perturbative delta self-consistent field method recently developed in our group, as the standard time-dependent density functional approaches fail to provide them accurately. The model photovoltaic systems analyzed are as follows: Sc3N@C80-ZnTPP, Y3N@C80-ZnTPP and Sc3N@C80-ZnPc. In addition, a thorough analysis of the isolated donor and acceptor molecules is also provided. The studied acceptors are chosen from a class of fullerenes named trimetallic nitride endohedral fullerenes. These molecules have been shown to possess advantages as acceptors, such as long lifetimes of the charge-separated states.

  2. Photo-induced reactions from efficient molecular dynamics with electronic transitions using the FIREBALL local-orbital density functional theory formalism.

    PubMed

    Zobač, Vladimír; Lewis, James P; Abad, Enrique; Mendieta-Moreno, Jesús I; Hapala, Prokop; Jelínek, Pavel; Ortega, José

    2015-05-08

    The computational simulation of photo-induced processes in large molecular systems is a very challenging problem. Firstly, to properly simulate photo-induced reactions the potential energy surfaces corresponding to excited states must be appropriately accessed; secondly, understanding the mechanisms of these processes requires the exploration of complex configurational spaces and the localization of conical intersections; finally, photo-induced reactions are probabilistic events that require the simulation of hundreds of trajectories to obtain the statistical information needed for the analysis of the reaction profiles. Here, we present a detailed description of our implementation of a molecular dynamics with electronic transitions algorithm within the local-orbital density functional theory code FIREBALL, suitable for the computational study of these problems. As an example of the application of this approach, we also report results on the [2 + 2] cycloaddition of ethylene with maleic anhydride and on the [2 + 2] photo-induced polymerization reaction of two C60 molecules. We identify different deactivation channels of the initial electron excitation, depending on the time of the electronic transition from LUMO to HOMO, and the character of the HOMO after the transition.

  3. Projector Augmented-Wave formulation of response to strain and electric field perturbation within the density-functional perturbation theory

    NASA Astrophysics Data System (ADS)

    Martin, Alexandre; Torrent, Marc; Caracas, Razvan

    2015-03-01

    A formulation of the response of a system to strain and electric field perturbations in the pseudopotential-based density functional perturbation theory (DFPT) has been proposed by D. R. Hamann and co-workers. It uses an elegant formalism based on the expression of the DFT total energy in reduced coordinates, the key quantity being the metric tensor and its first and second derivatives. We propose to extend this formulation to the Projector Augmented-Wave approach (PAW). In this context, we express the full elastic tensor including the clamped-atom tensor, the atomic-relaxation contributions (internal stresses) and the response to electric field change (piezoelectric tensor and effective charges). With this we are able to compute the elastic tensor for all materials (metals and insulators) within a fully analytical formulation. The comparison with finite-difference calculations on simple systems shows an excellent agreement. This formalism has been implemented in the plane-wave-based DFT code ABINIT. We apply it to the computation of elastic properties and seismic-wave velocities of iron with impurity elements. By analogy with the materials contained in meteorites, the tested impurities are light elements (H, O, C, S, Si).
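    The finite-difference check mentioned above amounts to differentiating the total energy twice with respect to strain. A minimal sketch of that idea (hypothetical names; a real comparison against DFPT uses the full strain tensor and stress components, not a single scalar strain):

```python
def elastic_constant(energy, v0, eps=1e-3):
    """Estimate an elastic constant C = (1/V0) d2E/d(strain)2 by central
    finite differences of a total-energy function energy(strain)."""
    d2 = (energy(eps) - 2.0 * energy(0.0) + energy(-eps)) / eps ** 2
    return d2 / v0
```

    For a harmonic energy E(ε) = E0 + ½ C V0 ε², the central difference is exact up to round-off, which makes this a convenient cross-check for an analytical DFPT elastic tensor.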

  4. Density functional theory in the solid state

    PubMed Central

    Hasnip, Philip J.; Refson, Keith; Probert, Matt I. J.; Yates, Jonathan R.; Clark, Stewart J.; Pickard, Chris J.

    2014-01-01

    Density functional theory (DFT) has been used in many fields of the physical sciences, but none so successfully as in the solid state. From its origins in condensed matter physics, it has expanded into materials science, high-pressure physics and mineralogy, solid-state chemistry and more, powering entire computational subdisciplines. Modern DFT simulation codes can calculate a vast range of structural, chemical, optical, spectroscopic, elastic, vibrational and thermodynamic phenomena. The ability to predict structure–property relationships has revolutionized experimental fields, such as vibrational and solid-state NMR spectroscopy, where it is the primary method to analyse and interpret experimental spectra. In semiconductor physics, great progress has been made in the electronic structure of bulk and defect states despite the severe challenges presented by the description of excited states. Studies are no longer restricted to known crystallographic structures. DFT is increasingly used as an exploratory tool for materials discovery and computational experiments, culminating in ex nihilo crystal structure prediction, which addresses the long-standing difficult problem of how to predict crystal structure polymorphs from nothing but a specified chemical composition. We present an overview of the capabilities of solid-state DFT simulations in all of these topics, illustrated with recent examples using the CASTEP computer program. PMID:24516184

  5. Density- and wavefunction-normalized Cartesian spherical harmonics for l ≤ 20.

    PubMed

    Michael, J Robert; Volkov, Anatoliy

    2015-03-01

    The widely used pseudoatom formalism [Stewart (1976). Acta Cryst. A32, 565-574; Hansen & Coppens (1978). Acta Cryst. A34, 909-921] in experimental X-ray charge-density studies makes use of real spherical harmonics when describing the angular component of aspherical deformations of the atomic electron density in molecules and crystals. The analytical form of the density-normalized Cartesian spherical harmonic functions for up to l ≤ 7 and the corresponding normalization coefficients were reported previously by Paturle & Coppens [Acta Cryst. (1988), A44, 6-7]. It was shown that the analytical form for normalization coefficients is available primarily for l ≤ 4 [Hansen & Coppens, 1978; Paturle & Coppens, 1988; Coppens (1992). International Tables for Crystallography, Vol. B, Reciprocal space, 1st ed., edited by U. Shmueli, ch. 1.2. Dordrecht: Kluwer Academic Publishers; Coppens (1997). X-ray Charge Densities and Chemical Bonding. New York: Oxford University Press]. Only in very special cases is it possible to derive an analytical representation of the normalization coefficients for 4 < l ≤ 7 (Paturle & Coppens, 1988). In most cases for l > 4 the density normalization coefficients were calculated numerically to within seven significant figures. In this study we review the literature on the density-normalized spherical harmonics, clarify the existing notations, use the Paturle-Coppens (Paturle & Coppens, 1988) method in the Wolfram Mathematica software to derive the Cartesian spherical harmonics for l ≤ 20 and determine the density normalization coefficients to 35 significant figures, and computer-generate a Fortran90 code. The article primarily targets researchers who work in the field of experimental X-ray electron density, but may be of some use to all who are interested in Cartesian spherical harmonics.
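    The two normalization conventions discussed in this record differ in which integral they fix on the sphere: wavefunction-normalized harmonics satisfy ∫ y² dΩ = 1, while density-normalized ones satisfy ∫ |d| dΩ = 2 for l > 0 (one electron transferred between positive and negative lobes). A quick numerical illustration for the l = 1 (dipole) harmonic cos θ — a sketch with hypothetical names, not the Paturle-Coppens derivation:

```python
import numpy as np

def sphere_integral(f, n=400):
    """Midpoint quadrature of f(theta, phi) over the unit sphere."""
    theta = (np.arange(n) + 0.5) * np.pi / n        # 0 .. pi
    phi = (np.arange(2 * n) + 0.5) * np.pi / n      # 0 .. 2*pi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dOmega = (np.pi / n) ** 2 * np.sin(T)           # sin(theta) dtheta dphi
    return float(np.sum(f(T, P) * dOmega))

# l = 1 (dipole) harmonic: z/r = cos(theta)
wf_norm = sphere_integral(lambda t, p: np.cos(t) ** 2) ** -0.5     # ∫ y^2 dΩ = 1
dens_norm = 2.0 / sphere_integral(lambda t, p: np.abs(np.cos(t)))  # ∫ |d| dΩ = 2
```

    For cos θ these quadratures recover the analytic values sqrt(3/4π) ≈ 0.4886 (wavefunction) and 1/π ≈ 0.3183 (density), illustrating why the two coefficient tables differ.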

  6. Three-dimensional modelling and inversion in gravity and electrical prospecting

    NASA Astrophysics Data System (ADS)

    Boulanger, Olivier

    The aim of this thesis is the application of gravity and resistivity methods for mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data. The advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. The best combination of constraints tested for multiple bodies appears to be flatness together with minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential. The modelling generates a huge matrix of Green's functions, which is stored using a pyramidal compression method. The third method consists of interpreting electrical potential measurements with a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
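    As a toy version of the constrained least-squares machinery described above (the thesis code, with its Lagrangian constraints and multi-grid scheme, is far richer), the sketch below inverts synthetic surface gravity data for block densities using a point-mass forward operator and a simple Tikhonov (minimum-norm) regularizer. The geometry, station layout and density values are all invented for the illustration:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_operator(x_stations, x_cells, depth, volume):
    """Vertical gravity at surface stations from unit-density point
    masses (one per cell) buried at a common depth."""
    dx = x_stations[:, None] - x_cells[None, :]
    r2 = dx**2 + depth**2
    return G * volume * depth / r2**1.5

def invert(A, d, lam_rel=1e-8):
    """Tikhonov-regularized least squares:
    m = (A^T A + lam I)^-1 A^T d, with lam scaled to the problem."""
    AtA = A.T @ A
    lam = lam_rel * np.trace(AtA) / AtA.shape[0]
    return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ d)

x_st = np.linspace(-200.0, 200.0, 21)   # station positions, m
x_c = np.linspace(-150.0, 150.0, 5)     # cell centre positions, m
A = forward_operator(x_st, x_c, depth=40.0, volume=40.0**3)

rho_true = np.zeros(5)
rho_true[2] = 500.0                     # one dense central block, kg/m^3
d_obs = A @ rho_true                    # noise-free synthetic data
rho_est = invert(A, d_obs)
```

    With noise-free data and well-separated cells the dense block is recovered almost exactly; the interesting behaviour the thesis studies appears when noise, depth ambiguity and competing constraints are added.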

  7. Analytic derivation of the next-to-leading order proton structure function F2p(x, Q2) based on the Laplace transformation

    NASA Astrophysics Data System (ADS)

    Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar

    2017-03-01

    An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, F2p(x, Q2), in Laplace s space. We present the results for the separate parton distributions of all parton species, including valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008)], 10.1140/epjc/s10052-007-0462-9 and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013)], 10.1088/0954-3899/40/4/045002 parametrization models as well as the x-space results using the QCDNUM code. Our calculations show very good agreement with the available theoretical models as well as the deep inelastic scattering (DIS) experimental data over both small and large values of x. The use of our analytical solution to extract the parton densities and the proton structure function is discussed in detail to justify the analysis method, considering the accuracy and speed of calculations. Overall, the accuracy we obtain from the analytical solution using the inverse Laplace transform technique is found to be better than 1 part in 10^4 to 10^5. We also present a detailed QCD analysis of nonsinglet structure functions using all available DIS data to perform global QCD fits. In this regard we employ the Jacobi polynomial approach to convert the results from Laplace s space to Bjorken x space. The extracted valence quark densities are also presented and compared to the JR14, MMHT14, NNPDF, and CJ15 PDF sets. We evaluate the numerical effects of target mass corrections (TMCs) and higher twist (HT) terms on various structure functions, and compare fits to data with and without these corrections.

  8. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    DOE PAGES

    Waltz, Ronald E.; Bass, Eric M.; Heidbrink, William W.; ...

    2015-10-30

    Recent experiments with the DIII-D tilted neutral beam injection (NBI) varying the beam energetic particle (EP) source profiles have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing-down beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code, used to validate the model, combines the low-n stiff EP critical density gradient AE mid-core transport with the energy-independent high-n ITG/TEM density transport model controlling the central core EP density profile. For the on-axis NBI heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing-down density. These results are in good agreement with experimental fast ion pressure profiles inferred from MSE-constrained EFIT equilibria.
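    The critical-gradient picture above can be caricatured in a few lines: below the AE threshold the EP flux is carried by a background diffusivity, and above it a stiff channel clamps the gradient near the critical value. This is an illustrative reduction, not the ALPHA code; all symbols and numbers are invented for the sketch:

```python
def steady_gradient(flux, d0, d_stiff, g_crit):
    """Invert the stiff flux-gradient relation
        flux = d0 * g + d_stiff * max(0, g - g_crit)
    for the density gradient g = -dn/dr carried at steady state."""
    g_background = flux / d0
    if g_background <= g_crit:
        return g_background        # sub-critical: background transport only
    # Super-critical: the stiff AE channel carries the excess flux.
    return (flux + d_stiff * g_crit) / (d0 + d_stiff)

# Illustrative numbers: with a very stiff AE channel the gradient is
# pinned just above the critical value however large the flux gets.
g_low = steady_gradient(flux=0.5, d0=1.0, d_stiff=100.0, g_crit=1.0)
g_high = steady_gradient(flux=50.0, d0=1.0, d_stiff=100.0, g_crit=1.0)
```

    In the limit of infinite stiffness the super-critical branch reduces to g = g_crit exactly, which is the "stiff transport at a critical gradient" behaviour reported in the abstract.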

  9. Characterization of atomic oxygen from an ECR plasma source

    NASA Astrophysics Data System (ADS)

    Naddaf, M.; Bhoraskar, V. N.; Mandale, A. B.; Sainkar, S. R.; Bhoraskar, S. V.

    2002-11-01

    A low-power microwave-assisted electron cyclotron resonance (ECR) plasma system is shown to be a powerful and effective source of atomic oxygen (AO) useful in material processing. Microwave power at 2.45 GHz, with a maximum of 600 W, was launched into the cavity to generate the ECR plasma. A catalytic nickel probe was used to determine the density of AO, which was studied as a function of pressure and of the axial position of the probe in the plasma chamber. It was found to vary from ~1×10^20 to ~10×10^20 atom m^-3 as the plasma pressure was varied from 0.8 to 10 mTorr. The effect of AO on the oxidation of silver was investigated by gravimetric analysis. The stoichiometric properties of the oxide were studied using x-ray photoelectron spectroscopy as well as energy-dispersive x-ray analysis. The degradation of the silver surface due to sputtering was examined by scanning electron microscopy. The sputtering yield of oxygen ions in the plasma was calculated using the TRIM code. The effects of plasma pressure and of the distance from the ECR zone on the AO density were also investigated. The density of AO measured by oxidation of silver is in good agreement with the results obtained from the catalytic nickel probe.

  10. Numerical studies on alpha production from high energy proton beam interaction with Boron

    NASA Astrophysics Data System (ADS)

    Moustaizis, S. D.; Lalousis, P.; Hora, H.; Korn, G.

    2017-05-01

    Numerical investigations of high energy proton beam interaction with high density Boron plasma allow us to simulate the conditions of alpha production in recent experimental measurements. The experiments measure the alpha production due to p-11B nuclear fusion reactions when a laser-driven high energy proton beam interacts with Boron plasma produced by laser beam interaction with solid Boron. The alpha production, and consequently the efficiency of the process, depends on the initial proton beam energy, the proton beam density, the Boron plasma density and temperature, and their temporal evolution. The main advantage of the p-11B nuclear fusion reaction is the production of three alphas with a total energy of 8.9 MeV, which could enhance the alpha heating effect and improve the alpha production. This particular effect is termed the alpha avalanche effect in the international literature. Numerical results using a multi-fluid, global particle and energy balance code show the alpha production efficiency as a function of the initial energy of the proton beam, the Boron plasma density, the initial Boron plasma temperature and the temporal evolution of the plasma parameters. The simulations enable us to determine the interaction conditions (proton beam-Boron plasma) for which the alpha heating effect becomes important.
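    A zero-dimensional caricature of the particle balance in such a code fits in a few lines: each p-11B reaction consumes one proton and one boron nucleus and releases three alphas. The sketch below is illustrative only; the reactivity value and densities are order-of-magnitude guesses of ours, not numbers from the study, and the real multi-fluid code evolves temperatures and energy balance as well:

```python
def alpha_balance(n_p, n_b, sigma_v, dt, steps):
    """0-D particle balance for p + 11B -> 3 alpha: each reaction
    consumes one proton and one boron nucleus and releases three
    alphas.  Explicit Euler stepping of the rate equations."""
    n_alpha = 0.0
    for _ in range(steps):
        rate = sigma_v * n_p * n_b      # reactions per m^3 per s
        n_p -= rate * dt
        n_b -= rate * dt
        n_alpha += 3.0 * rate * dt
    return n_p, n_b, n_alpha

# Illustrative values (not from the paper): reactivity ~1e-21 m^3/s near
# the ~600 keV resonance, dense proton beam in solid-density boron plasma.
n_p0, n_b0 = 1e26, 1e28
n_p, n_b, n_alpha = alpha_balance(n_p0, n_b0, sigma_v=1e-21,
                                  dt=1e-12, steps=2000)
```

    By construction the scheme conserves particles: the alpha yield is exactly three times the number of protons consumed, which is a useful sanity check on any such balance code.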

  11. CRAX. Cassandra Exoskeleton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.G.; Eubanks, L.

    1998-03-01

    This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems as a result of variations in manufacturing processes, material properties, system geometry or operating environment. The software is composed of a graphical user interface that provides the user with easy access to the Cassandra uncertainty analysis routines. Together this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables and their associated statistical characteristics: density function, mean and coefficient of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.
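    The workflow described — a user-supplied performance function plus per-variable mean and coefficient of variation — can be imitated with plain Monte Carlo sampling. The sketch below is a hypothetical illustration of that workflow, not Cassandra's actual API; the function names and the normal-distribution assumption are ours:

```python
import numpy as np

def propagate(perf, means, covs, n_samples=200_000, seed=1):
    """Sample independent normally distributed inputs specified by mean
    and coefficient of variation, push them through the performance
    function, and summarize the output distribution (negative values
    of the performance function are counted as failures)."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    stds = means * np.asarray(covs, dtype=float)
    x = rng.normal(means, stds, size=(n_samples, len(means)))
    y = perf(x)
    return {"mean": y.mean(), "std": y.std(ddof=1),
            "p_fail": float(np.mean(y < 0.0))}

# Toy performance function: capacity minus demand, g = X1 - X2.
stats = propagate(lambda x: x[:, 0] - x[:, 1],
                  means=[10.0, 6.0], covs=[0.1, 0.2])
```

    For this linear example the exact answers are mean 4.0, standard deviation sqrt(1.0^2 + 1.2^2) ≈ 1.56, and a failure probability of about 0.5%; the sampled estimates should land close to these.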

  12. Cassandra Exoskeleton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, David G.

    1999-02-20

    This software assists the engineering designer in characterizing the statistical uncertainty in the performance of complex systems as a result of variations in manufacturing processes, material properties, system geometry or operating environment. The software is composed of a graphical user interface that provides the user with easy access to the Cassandra uncertainty analysis routines. Together this interface and the Cassandra routines are referred to as CRAX (CassandRA eXoskeleton). The software is flexible enough that, with minor modification, it can interface with large modeling and analysis codes such as heat transfer or finite element analysis software. The current version permits the user to manually input a performance function, the number of random variables and their associated statistical characteristics: density function, mean and coefficient of variation. Additional uncertainty analysis modules are continuously being added to the Cassandra core.

  13. Experimentally constrained (p,γ)89Y and (n,γ)89Y reaction rates relevant to p-process nucleosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, A. C.; Guttormsen, M.; Schwengner, R.

    The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations support the interpretation that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows satisfying agreement.

  14. Application of the exact exchange potential method for half metallic intermediate band alloy semiconductor.

    PubMed

    Fernández, J J; Tablero, C; Wahnón, P

    2004-06-08

    In this paper we analyse the convergence of band structure properties, in particular how the use of the exact exchange formalism modifies the bandgap and bandwidth values in half-metallic compounds. This formalism for general solids has been implemented using a localized basis set of numerical functions to represent the exchange density. The implementation has been carried out in a code that uses a linear combination of confined numerical pseudoatomic functions to represent the Kohn-Sham orbitals. The application of this exact exchange scheme to a half-metallic semiconductor compound, in particular to Ga(4)P(3)Ti, a promising material in the field of high efficiency solar cells, confirms the existence of the isolated intermediate band in this compound.

  15. Experimentally constrained (p,γ)89Y and (n,γ)89Y reaction rates relevant to p-process nucleosynthesis

    DOE PAGES

    Larsen, A. C.; Guttormsen, M.; Schwengner, R.; ...

    2016-04-21

    The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations support the interpretation that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows satisfying agreement.

  16. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life.

    PubMed

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring, and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 on both occasions. The code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00, and the corresponding outfit MNSQ a mean of 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 code scores (range: 25-125) after the repeat scoring. The corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on the measures obtained on the two occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. And, first and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions, and, finally, codes for more complex functions. Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set is functioning in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents and caregivers, and the whole community supporting children with disabilities on a daily and perpetual basis.
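    The Cronbach α reported above is a standard internal-consistency statistic. For readers unfamiliar with it, a minimal computation on synthetic scores (not the study's data) looks like this:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly consistent items (every item identical across subjects)
# give alpha = 1; real questionnaire data, like the 0.96-0.97 reported
# above, falls just short of that.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha_perfect = cronbach_alpha(np.tile(base, (4, 1)).T)
```

    Values above roughly 0.9, as in this study, indicate that the 26 code qualifiers behave as a coherent scale.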

  17. Parents’ Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life

    PubMed Central

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    AIM To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring, and to assess the validity and reliability of the data sets obtained. METHOD Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. RESULTS The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 on both occasions. The code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00, and the corresponding outfit MNSQ a mean of 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 code scores (range: 25-125) after the repeat scoring. The corresponding measures were −1.10 (range: −5.31 to 5.25) and −1.11 (range: −5.42 to 5.36), respectively. Based on the measures obtained on the two occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. And, first and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions, and, finally, codes for more complex functions. CONCLUSIONS Parents can assess their own children in a valid and reliable way, and if the WHO ICF-CY second-level code data set is functioning in a clinically sound way, it can be employed as a tool for identifying the severity of disabilities and for monitoring changes in those disabilities over time. The ICF-CY codes selected in this study might be one cornerstone in forming a national or even international generic set of ICF-CY codes for the benefit of children with disabilities, their parents and caregivers, and the whole community supporting children with disabilities on a daily and perpetual basis. PMID:28680270

  18. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
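    The first-order ("sandwich") propagation underlying such an analysis is compact: with a sensitivity vector S and a parameter covariance matrix Σ, the output variance is S^T Σ S. The sketch below is a generic illustration, not tied to MCNP6 or SCALE; it shows how correlation between parameters, such as two isotope densities constrained to a fixed total, can cancel or add uncertainty:

```python
import numpy as np

def first_order_variance(sens, stds, corr):
    """First-order uncertainty propagation: var = S^T Sigma S, where
    Sigma is assembled from standard deviations and a correlation
    matrix."""
    sens = np.asarray(sens, dtype=float)
    stds = np.asarray(stds, dtype=float)
    sigma = np.outer(stds, stds) * np.asarray(corr, dtype=float)
    return float(sens @ sigma @ sens)

# Two densities constrained to a fixed total behave as fully
# anticorrelated parameters: equal sensitivities then cancel exactly.
var_anti = first_order_variance([1.0, 1.0], [0.1, 0.1],
                                [[1.0, -1.0], [-1.0, 1.0]])
# The same sensitivities with independent parameters simply add.
var_indep = first_order_variance([1.0, 1.0], [0.1, 0.1],
                                 [[1.0, 0.0], [0.0, 1.0]])
```

    This cancellation is exactly why constrained sensitivities matter: treating constrained parameters as independent can badly overstate the output uncertainty.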

  19. No cataclysmic variables missing: higher merger rate brings into agreement observed and predicted space densities

    NASA Astrophysics Data System (ADS)

    Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe

    2018-06-01

    The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has been recently measured to be ρ ≲ 2 × 10^-5 pc^-3. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings the predicted and observed space densities of CVs and period bouncers into agreement. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.

  20. Synthetic Microwave Imaging Reflectometry diagnostic using 3D FDTD Simulations

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Jenkins, Thomas; Smithe, David; King, Jacob; NIMROD Team

    2017-10-01

    Microwave Imaging Reflectometry (MIR) has become a standard diagnostic for understanding tokamak edge perturbations, including the edge harmonic oscillations in QH-mode operation. These long-wavelength perturbations are larger than the normal turbulent fluctuation levels, and thus the usual analysis of synthetic signals becomes more difficult. To investigate, we construct a synthetic MIR diagnostic for exploring density fluctuation amplitudes in the tokamak plasma edge using the three-dimensional, full-wave FDTD code Vorpal. The source microwave beam for the diagnostic is generated and reflected at the cutoff surface, which is distorted by 2D density fluctuations in the edge plasma. Synthetic imaging optics at the detector can be used to understand the fluctuation and background density profiles. We apply the diagnostic to understand the fluctuations in edge plasma density during QH-mode activity in the DIII-D tokamak, as modeled by the NIMROD code. This work was funded under DOE Grant Number DE-FC02-08ER54972.
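    A full 3D Vorpal run is obviously beyond an abstract, but the core FDTD idea — leapfrogged E and H updates with a reflecting layer standing in for the plasma cutoff — fits in a toy 1D Yee scheme. Everything below (grid size, pulse shape, perfect-conductor walls in place of the cutoff) is illustrative, not the diagnostic's actual setup:

```python
import numpy as np

def fdtd_1d(nx=200, steps=200):
    """1D Yee FDTD in normalized units (c = dx = dt = 1, the 'magic'
    Courant step).  Perfect-conductor walls at both ends play the role
    of a reflecting cutoff layer: pulses return sign-inverted, the
    basic physics a reflectometer exploits."""
    ez = np.exp(-((np.arange(nx) - nx // 2) / 10.0) ** 2)  # initial pulse
    hy = np.zeros(nx)
    for _ in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]        # update H from the curl of E
        ez[1:-1] += hy[1:-1] - hy[:-2]     # update E from the curl of H
        # ez[0] and ez[-1] are never updated: they stay ~0 (PEC walls).
    return ez

ez_final = fdtd_1d()
```

    After 200 steps the initial pulse has split, reflected off both walls with a sign flip, and recombined near the centre as an inverted pulse. A real reflectometry simulation replaces the hard walls with a smoothly varying plasma permittivity whose cutoff surface moves with the density fluctuations.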

Top