Sample records for energy codes double

  1. DOUBLE code simulations of emissivities of fast neutrals for different plasma observation view-lines of neutral particle analyzers on the COMPASS tokamak

    NASA Astrophysics Data System (ADS)

    Mitosinkova, K.; Tomes, M.; Stockel, J.; Varju, J.; Stano, M.

    2018-03-01

    Neutral particle analyzers (NPA) measure line-integrated energy spectra of fast neutral atoms escaping the tokamak plasma, which are a product of charge-exchange (CX) collisions of plasma ions with background neutrals. They can observe variations in the ion temperature T_i as well as non-thermal fast ions created by additional plasma heating. However, the plasma column which a fast atom has to pass through must be sufficiently short in comparison with the fast atom's mean-free-path. The COMPASS tokamak is currently equipped with one NPA installed at a tangential mid-plane port. This orientation is optimal for observing non-thermal fast ions. However, in this configuration the signal at energies useful for T_i derivation is lost in noise because the fast atoms' trajectories through the plasma are too long. Thus, a second NPA is planned for the purpose of measuring T_i. We analyzed different possible view-lines (perpendicular mid-plane, tangential mid-plane, and top view) for the second NPA using the DOUBLE Monte-Carlo code and compared the results with the performance of the present NPA with tangential orientation. The DOUBLE code provides fast-atom emissivity functions along the NPA view-line. The position of the median of these emissivity functions indicates the location from which the measured signal originates. Further, we compared the difference between the real central T_i used as a DOUBLE code input and the T_i,CX derived from the exponential decay of the simulated energy spectra. The advantages and disadvantages of each NPA location are discussed.
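    A minimal illustration (not part of the DOUBLE code) of how an effective ion temperature can be extracted from the exponential decay of a charge-exchange energy spectrum: fit a straight line to ln(flux) versus energy over the thermal tail, so that T_i,CX = -1/slope. The channel energies and fluxes below are made-up placeholder values.

        import numpy as np

        # Hypothetical NPA channel energies (keV) and corrected fluxes (arbitrary units)
        E = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
        flux = 1e6 * np.exp(-E / 0.8)             # synthetic spectrum with T_i = 0.8 keV

        # Straight-line fit of ln(flux) vs E over the thermal tail
        slope, intercept = np.polyfit(E, np.log(flux), 1)
        Ti_cx = -1.0 / slope                      # keV
        print(f"T_i derived from the spectrum slope: {Ti_cx:.2f} keV")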

  2. Calculated differential and double differential cross section of DT neutron induced reactions on natural chromium (Cr)

    NASA Astrophysics Data System (ADS)

    Rajput, Mayank; Vala, Sudhirsinh; Srinivasan, R.; Abhangi, M.; Subhash, P. V.; Pandey, B.; Rao, C. V. S.; Bora, D.

    2018-01-01

    Chromium is an important alloying element of stainless steel (SS), and SS is the main constituent of the structural materials proposed for fusion reactors. Energy and double differential cross section data are required to estimate nuclear responses in the materials used in fusion reactors. No experimental energy and double differential cross section data are available for neutron induced reactions on natural chromium at 14 MeV neutron energy. In this study, energy and double differential cross section data of (n,p) and (n,α) reactions for all the stable isotopes of chromium have been estimated using appropriate nuclear models in the TALYS code. The cross section data of the stable isotopes are then converted into energy and double differential cross section data for natural Cr using the isotopic abundances. The contributions from compound, pre-equilibrium and direct nuclear reactions to the total reaction have also been calculated for 52,50Cr(n,p) and 52Cr(n,α). The calculated energy differential cross sections show that most of the emitted protons and alpha particles have energies of about 3 and 8 MeV, respectively. The calculated data are compared with data from the EXFOR library and are found to be in good agreement.
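    The conversion from isotopic to elemental (natural) cross sections described above is an abundance-weighted sum. A small sketch of that step follows; the abundances are approximate natural values, while the cross-section numbers are placeholders rather than TALYS output.

        import numpy as np

        # Approximate natural abundances of chromium isotopes (atom fractions)
        abundance = {"Cr50": 0.04345, "Cr52": 0.83789, "Cr53": 0.09501, "Cr54": 0.02365}

        # Hypothetical isotopic (n,p) cross sections on a common energy grid (mb)
        sigma = {
            "Cr50": np.array([80.0, 60.0, 40.0]),
            "Cr52": np.array([70.0, 55.0, 35.0]),
            "Cr53": np.array([40.0, 30.0, 20.0]),
            "Cr54": np.array([20.0, 15.0, 10.0]),
        }

        # Elemental cross section = abundance-weighted sum over the stable isotopes
        sigma_nat = sum(abundance[iso] * sigma[iso] for iso in abundance)
        print(sigma_nat)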

  3. Flyer Target Acceleration and Energy Transfer at its Collision with Massive Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borodziuk, S.; Kasperczuk, A.; Pisarczyk, T.

    2006-01-15

    Numerical modelling was aimed at simulating the successive events resulting from the interaction of a laser beam with single and double targets. It was performed by means of the 2D Lagrangian hydrodynamics code ATLANT-HE. This code is based on a one-fluid, two-temperature plasma model that accounts for electron and ion heat conductivity, and it has an advanced treatment of laser light propagation and absorption. The numerical modelling corresponds to an experiment carried out at the PALS facility. Two types of planar solid targets were used: single massive Al slabs and double targets consisting of a 6 μm thick Al foil and an Al slab. The targets were irradiated by iodine laser pulses at two wavelengths, 1.315 and 0.438 μm, with a pulse duration of 0.4 ns, a focal spot diameter of 250 μm, and a laser energy of 130 J. The numerical modelling allowed us to obtain a more detailed description of shock wave propagation and crater formation.

  4. Optical planar waveguides in photo-thermal-refractive glasses fabricated by single- or double-energy carbon ion implantation

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Shen, Xiao-Liang; Zheng, Rui-Lin; Guo, Hai-Tao; Lv, Peng; Liu, Chun-Xiao

    2018-01-01

    Ion implantation has been demonstrated to be an efficient and reliable technique for the fabrication of optical waveguides in a variety of transparent materials. Photo-thermal-refractive (PTR) glass is considered to be a durable and stable holographic recording medium. Optical planar waveguide structures in PTR glasses were formed, for the first time to our knowledge, by C3+-ion implantation at a single energy (6.0 MeV) and at double energies (5.5 + 6.0 MeV), respectively. The carbon ion implantation process was simulated with the Stopping and Range of Ions in Matter (SRIM) code. The morphologies of the waveguides were recorded by a microscope operating in transmission mode. The guided beam distributions of the waveguides were measured by the end-face coupling technique. Compared with the single-energy implantation, the double-energy implantation improves the light confinement for the dark-mode spectrum. The guiding properties suggest that the carbon-implanted PTR glass waveguides have potential for the manufacture of photonic devices.

  5. Measurement of DT and DD components in neutron spectrum with a double-crystal time-of-flight spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, K.; Okamoto, A.; Kitajima, S.

    To investigate the deuteron-to-triton density ratio in core plasmas, a new methodology is proposed in which the ratio of deuterium-tritium (DT) to deuterium-deuterium (DD) neutron count rates is measured with a double-crystal time-of-flight (TOF) spectrometer. Multi-discriminator electronic circuits for the first and second detectors are used in addition to the TOF technique. The optimum arrangement of the detectors and the discrimination window were examined by considering the relation between the geometrical arrangement and the deposited energy using the Monte Carlo code PHITS (Particle and Heavy Ion Transport Code System). An experiment to verify the calculations was performed using DD neutrons from an accelerator.
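    For context, the time-of-flight relation that underlies such a spectrometer can be evaluated in a few lines: a classical E = m(d/t)^2/2 estimate of the flight time of DD (2.45 MeV) and DT (14.1 MeV) neutrons over an assumed flight path. The 1 m path length is an arbitrary choice for illustration, and relativistic corrections are neglected.

        # Classical (non-relativistic) relation E = 0.5 * m_n * (d/t)^2, adequate for
        # DD (2.45 MeV) and approximately for DT (14.1 MeV) neutrons.
        M_N_MEV = 939.565          # neutron rest energy (MeV)
        C = 2.998e8                # speed of light (m/s)

        def tof_to_energy(d_m, t_s):
            beta = d_m / t_s / C
            return 0.5 * M_N_MEV * beta**2       # MeV

        def energy_to_tof(e_mev, d_m):
            beta = (2.0 * e_mev / M_N_MEV) ** 0.5
            return d_m / (beta * C)              # s

        d = 1.0   # assumed flight path between the two crystals (m)
        for e in (2.45, 14.1):
            print(f"{e} MeV neutron: TOF over {d} m = {energy_to_tof(e, d)*1e9:.1f} ns")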

  6. New double-byte error-correcting codes for memory systems

    NASA Technical Reports Server (NTRS)

    Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.

    1996-01-01

    Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.

  7. Mixed Single/Double Precision in OpenIFS: A Detailed Study of Energy Savings, Scaling Effects, Architectural Effects, and Compilation Effects

    NASA Astrophysics Data System (ADS)

    Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy

    2017-04-01

    It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality of model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model. The mixed precision variation of OpenIFS is analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
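    The RAPL energy counters mentioned above can be read on Linux through the powercap sysfs interface; the sketch below shows one plausible way to take before/after readings around a workload. It is not the instrumentation used in the study; the sysfs path, the single-package assumption, and the toy workload are assumptions, and reading the counter may require elevated permissions.

        import time

        # Linux powercap interface exposing an Intel RAPL package energy counter;
        # the exact path and number of domains vary by system (an assumption here).
        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

        def read_energy_uj():
            with open(RAPL) as f:
                return int(f.read())

        def measure(workload):
            e0, t0 = read_energy_uj(), time.time()
            workload()
            e1, t1 = read_energy_uj(), time.time()
            # Counter wrap-around (max_energy_range_uj) is ignored in this sketch.
            return (e1 - e0) * 1e-6, t1 - t0     # joules, seconds

        joules, seconds = measure(lambda: sum(i * i for i in range(10_000_000)))
        print(f"{joules:.2f} J over {seconds:.2f} s -> {joules/seconds:.1f} W average")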

  8. Dynamics, Stability, and Evolutionary Patterns of Mesoscale Intrathermocline Vortices

    DTIC Science & Technology

    2016-12-01

    physical oceanography, namely, the link between the basin-scale forcing of the ocean by air-sea fluxes and the dissipation of energy and thermal variance...at the microscale. Subject terms: Meddy, intrathermocline, double diffusion, energy cascade, eddy, MITgcm, numerical simulation, interleaving...lateral intrusions, lateral diffusivity, heat flux.

  9. Double Compton and Cyclo-Synchrotron in Super-Eddington Discs, Magnetized Coronae, and Jets

    NASA Astrophysics Data System (ADS)

    McKinney, Jonathan C.; Chluba, Jens; Wielgus, Maciek; Narayan, Ramesh; Sadowski, Aleksander

    2017-05-01

    Black hole accretion discs accreting near the Eddington rate are dominated by bremsstrahlung cooling, but above the Eddington rate, the double Compton process can dominate in radiation-dominated regions, while cyclo-synchrotron can dominate in strongly magnetized regions like a corona or a jet. We present an extension to the general relativistic radiation magnetohydrodynamic code harmrad to account for emission and absorption by thermal cyclo-synchrotron, double Compton, bremsstrahlung, and low-temperature OPAL opacities, as well as Thomson and Compton scattering. The harmrad code and associated analysis and visualization codes have been made open-source and are publicly available at the github repository website. We approximate the radiation field as a Bose-Einstein distribution and evolve it using the radiation number-energy-momentum conservation equations in order to track photon hardening. We perform various simulations to study how these extensions affect the radiative properties of magnetically arrested discs accreting at Eddington to super-Eddington rates. We find that double Compton dominates bremsstrahlung in the disc within a radius of r ˜ 15rg (gravitational radii) at a hundred times the Eddington accretion rate, and within smaller radii at lower accretion rates. Double Compton and cyclo-synchrotron regulate radiation and gas temperatures in the corona, while cyclo-synchrotron regulates temperatures in the jet. Interestingly, as the accretion rate drops to Eddington, an optically thin corona develops whose gas temperature of T ˜ 10^9 K is ˜100 times higher than the disc's blackbody temperature. Our results show the importance of double Compton and synchrotron in super-Eddington discs, magnetized coronae and jets.

  10. Attempt to Measure (n, xn) Double-Differential Cross Sections for Incident Neutron Energies above 100 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Kunieda, S.; Shigyo, N.

    The development of an experimental technique for measuring (n, xn) double differential cross sections at incident neutron energies above 100 MeV has been attempted using continuous-energy neutrons up to 400 MeV. Neutrons were produced in the spallation reaction by the 800 MeV proton beam, which was incident on a thick, heavily shielded tungsten target at the WNR facility at Los Alamos National Laboratory. The energies of incident neutrons were determined by the time-of-flight method. Emitted neutrons were detected by the recoil proton method. A phoswich detector consisting of NaI(Tl) and NE102A plastic scintillators was used for detecting recoil protons. We compared the preliminary experimental cross section data with calculations by the PHITS and QMD codes.

  11. Track structure in radiation biology: theory and applications.

    PubMed

    Nikjoo, H; Uehara, S; Wilson, W E; Hoshi, M; Goodhead, D T

    1998-04-01

    A brief review is presented of the basic concepts in track structure, and the relative merits of various theoretical approaches adopted in Monte-Carlo track-structure codes are examined. In the second part of the paper, a formal cluster analysis is introduced to calculate cluster-distance distributions. Total experimental ionization cross-sections were least-squares fitted and compared with calculations by various theoretical methods. The Monte-Carlo track-structure code Kurbuc was used to examine and compare the spectra of the secondary electrons generated by using functions given by the Born-Bethe, Jain-Khare, Gryzinsky, Kim-Rudd, Mott and Vriens theories. The cluster analysis in track structure was carried out using the k-means method and the Hartigan algorithm. Data are presented on experimental and calculated total ionization cross-sections: inverse mean free path (IMFP) as a function of electron energy used in Monte-Carlo track-structure codes; the spectrum of secondary electrons generated by different functions for 500 eV primary electrons; cluster analysis for 4 MeV and 20 MeV alpha-particles in terms of the frequency of total cluster energy to the root-mean-square (rms) radius of the cluster and differential distance distributions for a pair of clusters; and finally relative frequency distributions for energy deposited in DNA, single-strand breaks and double-strand breaks for 10 MeV/u protons, alpha-particles and carbon ions. There are a number of Monte-Carlo track-structure codes that have been developed independently, and the bench-marking presented in this paper allows a better choice of the theoretical method adopted in a track-structure code to be made. A systematic bench-marking of cross-sections and spectra of the secondary electrons shows differences between the codes at the atomic level, but such differences are not significant in biophysical modelling at the macromolecular level. Clustered-damage evaluation shows that a substantial proportion of dose (≈30%) is deposited by low-energy electrons; the majority of DNA damage lesions are of simple type; the complexity of damage increases with increased LET, while the total yield of strand breaks remains constant; and at high LET values nearly 70% of all double-strand breaks are of complex type.
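    The cluster-analysis step lends itself to a short illustration. The sketch below is a generic Lloyd-style k-means on synthetic 3D ionization coordinates; it is not the Hartigan algorithm or KURBUC output used in the paper, and the point cloud, number of clusters and iteration count are arbitrary choices.

        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                # Assign each energy-deposition point to its nearest cluster center
                d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                centers = np.array([points[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            return labels, centers

        # Synthetic 3D ionization coordinates (nm); real input would come from a
        # Monte-Carlo track-structure code.
        rng = np.random.default_rng(1)
        pts = np.vstack([rng.normal(c, 2.0, (50, 3))
                         for c in ([0, 0, 0], [20, 0, 0], [0, 25, 5])])
        labels, centers = kmeans(pts, k=3)
        print(centers)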

  12. Double differential neutron spectra generated by the interaction of a 12 MeV/nucleon 36S beam on a thick natCu target

    NASA Astrophysics Data System (ADS)

    Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Ledoux, X.; Laurent, B.; Thomas, J.-C.; Clerc, T.; Desmezières, V.; Dupuis, M.; Madeline, A.; Dessay, E.; Grinyer, G. F.; Grinyer, J.; Menard, N.; Porée, F.; Achouri, L.; Delaunay, F.; Parlog, M.

    2018-07-01

    Double differential neutron spectra (energy, angle) originating from a thick natCu target bombarded by a 12 MeV/nucleon 36S16+ beam were measured by the activation method and the time-of-flight technique at the Grand Accélérateur National d'Ions Lourds (GANIL). A neutron spectrum unfolding algorithm combining the SAND-II iterative method and Monte-Carlo techniques was developed for the analysis of the activation results, which cover a wide range of neutron energies. It was implemented in a graphical user interface program called GanUnfold. The experimental neutron spectra are compared to Monte-Carlo simulations performed using the PHITS and FLUKA codes.

  13. Simulation of the Formation of DNA Double Strand Breaks and Chromosome Aberrations in Irradiated Cells

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Ponomarev, Artem L.; Wu, Honglu; Blattnig, Steve; George, Kerry

    2014-01-01

    The formation of DNA double-strand breaks (DSBs) and chromosome aberrations is an important consequence of ionizing radiation. To simulate DNA double-strand breaks and the formation of chromosome aberrations, we have recently merged the codes RITRACKS (Relativistic Ion Tracks) and NASARTI (NASA Radiation Track Image). The program RITRACKS is a stochastic code developed to simulate detailed event-by-event radiation track structure. This code is used to calculate the dose in voxels of 20 nm in a volume containing simulated chromosomes. The number of tracks in the volume is calculated for each simulation by sampling a Poisson distribution, with the distribution parameter obtained from the irradiation dose, ion type and energy. The program NASARTI generates the chromosomes present in a cell nucleus by random walks of 20 nm, corresponding to the size of the dose voxels. The generated chromosomes are located within domains which may intertwine, and each segment of the random walks corresponds to approximately 2,000 DNA base pairs. NASARTI uses the pre-calculated dose at each voxel to calculate the probability of DNA damage at each random-walk segment. Using the locations of double-strand breaks, possible rejoining between damaged segments is evaluated. This yields various types of chromosome aberrations, including deletions, inversions, and exchanges. By performing the calculations for various types of radiation, it will be possible to obtain relative biological effectiveness (RBE) values for several types of chromosome aberrations.
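    As a rough illustration of the track-count sampling described above: for unit-density matter the fluence corresponding to a dose can be estimated from D[Gy] ≈ 1.602e-9 × LET[keV/µm] × F[cm⁻²], and the number of tracks crossing a target area is then drawn from a Poisson distribution. This uses a generic textbook relation, not code from RITRACKS/NASARTI, and the dose, LET and area values are placeholders.

        import numpy as np

        def mean_track_count(dose_gy, let_kev_um, area_cm2):
            # Fluence (cm^-2) from dose for unit-density matter:
            #   D[Gy] = 1.602e-9 * LET[keV/um] * F[cm^-2]
            fluence = dose_gy / (1.602e-9 * let_kev_um)
            return fluence * area_cm2

        rng = np.random.default_rng(0)
        lam = mean_track_count(dose_gy=1.0, let_kev_um=10.0, area_cm2=1e-6)
        n_tracks = rng.poisson(lam, size=5)   # one draw per simulated nucleus/volume
        print(lam, n_tracks)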

  14. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit, Eb/N0, where Eb is the energy per bit and N0/2 is the double-sided noise density, in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
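    The capacity bound mentioned above can be made concrete with a few lines: for a spectral efficiency of η bit/s/Hz the Shannon limit requires Eb/N0 ≥ (2^η − 1)/η, which the sketch below evaluates for a few efficiencies. This is a worked illustration of the bound, not code from the report.

        import math

        def min_ebn0_db(eta_bits_per_hz):
            # Shannon bound: eta = log2(1 + (Eb/N0) * eta)  =>  Eb/N0 >= (2**eta - 1)/eta
            ratio = (2.0 ** eta_bits_per_hz - 1.0) / eta_bits_per_hz
            return 10.0 * math.log10(ratio)

        for eta in (0.5, 1, 2, 4):
            print(f"eta = {eta} bit/s/Hz  ->  minimum Eb/N0 = {min_ebn0_db(eta):.2f} dB")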

  15. Experimental approach to measure thick target neutron yields induced by heavy ions for shielding

    NASA Astrophysics Data System (ADS)

    Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Brouillard, C.; Clerc, T.; Damoy, S.; Desmezières, V.; Dessay, E.; Dupuis, M.; Grinyer, G. F.; Grinyer, J.; Jacquot, B.; Ledoux, X.; Madeline, A.; Menard, N.; Michel, M.; Morel, V.; Porée, F.; Rannou, B.; Savalle, A.

    2017-09-01

    Double differential (angular and energy) neutron distributions were measured using an activation foil technique. Reactions were induced by impinging two low-energy heavy-ion beams accelerated with the GANIL CSS1 cyclotron, 36S (12 MeV/u) and 208Pb (6.25 MeV/u), onto thick natCu targets. Results have been compared to Monte-Carlo calculations from two codes (PHITS and FLUKA) for the purpose of benchmarking radiation protection and shielding requirements. This comparison suggests a disagreement between calculations and experiment, particularly for high-energy neutrons.

  16. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    NASA Astrophysics Data System (ADS)

    Vincenti, H.; Lobet, M.; Lehe, R.; Sasanka, R.; Vay, J.-L.

    2017-01-01

    In current computer architectures, data movement (from die to network) is by far the most energy consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit wide data registers). Results show a factor of 2 to 2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
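    For readers unfamiliar with the deposition step itself, the sketch below shows a plain (non-SIMD-tuned) order-1 cloud-in-cell charge deposition in 1D using NumPy; np.add.at handles the colliding grid indices that the scatter-free data layout in the paper is designed to avoid. It illustrates the routine being vectorized, not the paper's algorithm; the grid size, particle count and charges are arbitrary.

        import numpy as np

        def deposit_cic_1d(x, q, nx, dx):
            """Order-1 (cloud-in-cell) charge deposition on a periodic 1D grid."""
            rho = np.zeros(nx)
            xi = x / dx
            i0 = np.floor(xi).astype(int)          # left grid index for each particle
            w1 = xi - i0                           # weight to the right node
            w0 = 1.0 - w1                          # weight to the left node
            # np.add.at accumulates correctly even when particles share grid cells
            np.add.at(rho, i0 % nx, q * w0)
            np.add.at(rho, (i0 + 1) % nx, q * w1)
            return rho / dx

        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, 100_000)
        rho = deposit_cic_1d(x, q=np.full(x.size, 1e-3), nx=64, dx=1.0 / 64)
        print(rho.sum() * (1.0 / 64))              # total deposited charge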

  17. Transport modeling of convection dominated helicon discharges in Proto-MPEX with the B2.5-Eirene code

    NASA Astrophysics Data System (ADS)

    Owen, L. W.; Rapp, J.; Canik, J.; Lore, J. D.

    2017-11-01

    Data-constrained interpretative analyses of plasma transport in convection dominated helicon discharges in the Proto-MPEX linear device, and predictive calculations with additional Electron Cyclotron Heating/Electron Bernstein Wave (ECH/EBW) heating, are reported. The B2.5-Eirene code, in which the multi-fluid plasma code B2.5 is coupled to the kinetic Monte Carlo neutrals code Eirene, is used to fit double Langmuir probe measurements and fast camera data in front of a stainless-steel target. The absorbed helicon and ECH power (11 kW) and spatially constant anomalous transport coefficients that are deduced from fitting of the probe and optical data are additionally used for predictive simulations of complete axial distributions of the densities, temperatures, plasma flow velocities, particle and energy fluxes, and possible effects of alternate fueling and pumping scenarios. The somewhat hollow electron density and temperature radial profiles from the probe data suggest that Trivelpiece-Gould wave absorption is the dominant helicon electron heating source in the discharges analyzed here. There is no external ion heating, but the corresponding calculated ion temperature radial profile is not hollow. Rather it reflects ion heating by the electron-ion equilibration terms in the energy balance equations and ion radial transport resulting from the hollow density profile. With the absorbed power and the transport model deduced from fitting the sheath limited discharge data, calculated conduction limited higher recycling conditions were produced by reducing the pumping and increasing the gas fueling rate, resulting in an approximate doubling of the target ion flux and reduction of the target heat flux.

  18. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices, containing 218×126×60 voxels, was used, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.

  19. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    ERIC Educational Resources Information Center

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)

  20. A low-noise wide-dynamic-range event-driven detector using SOI pixel technology for high-energy particle imaging

    NASA Astrophysics Data System (ADS)

    Shrestha, Sumeet; Kamehama, Hiroki; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Takeda, Ayaki; Tsuru, Takeshi Go; Arai, Yasuo

    2015-08-01

    This paper presents a low-noise wide-dynamic-range pixel design for a high-energy particle detector in astronomical applications. A silicon-on-insulator (SOI) based detector is used for the detection of a wide energy range of high-energy particles (mainly X-rays). The sensor has a thin layer of SOI CMOS readout circuitry and a thick layer of high-resistivity detector vertically stacked in a single chip. The pixel circuits are divided into two parts: a signal sensing circuit and an event detection circuit. The event detection circuit, consisting of a comparator and logic circuits that detect the incidence of a high-energy particle, categorizes the incident photon into two energy groups using an appropriate energy threshold and generates a two-bit code for the event and energy level. The code for the energy level is then used to select the gain of the in-pixel amplifier for the detected signal, providing a function of high-dynamic-range signal measurement. The two-bit code for the event and energy level is scanned in the event scanning block, and the signals from the hit pixels only are read out. The variable-gain in-pixel amplifier uses a continuous integrator and integration-time control for the variable gain. The proposed design allows small-signal detection and a wide dynamic range due to the adaptive gain technique and the capability of correlated double sampling (CDS) for kTC-noise canceling of the charge detector.
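    A behavioral sketch of the two-bit event/energy coding and gain selection described above (a pure software illustration of the logic; the keV thresholds and gain values are invented placeholders, not the chip's actual design parameters):

        EVENT_THRESHOLD_KEV = 5.0       # minimum deposit recognized as an event (assumed)
        ENERGY_THRESHOLD_KEV = 10.0     # boundary between the two energy groups (assumed)
        GAIN_LOW_ENERGY = 8.0           # high amplifier gain for small signals
        GAIN_HIGH_ENERGY = 1.0          # low gain keeps large signals in range

        def encode_event(deposit_kev):
            """Return (event_bit, energy_bit, selected_gain) for one pixel hit."""
            if deposit_kev < EVENT_THRESHOLD_KEV:
                return 0, 0, None                       # no event: pixel is not read out
            if deposit_kev < ENERGY_THRESHOLD_KEV:
                return 1, 0, GAIN_LOW_ENERGY            # low-energy group
            return 1, 1, GAIN_HIGH_ENERGY               # high-energy group

        for e in (2.0, 7.5, 40.0):
            print(f"{e:5.1f} keV -> {encode_event(e)}")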

  1. Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine

    NASA Astrophysics Data System (ADS)

    Berg, D. E.

    Double streamtube codes model the curved-blade (Darrieus-type) vertical axis wind turbine (VAWT) as a double actuator-disk arrangement (one disk for each half of the rotor) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double-multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, the nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double-multiple streamtube code and a vortex filament code. The effects of neglecting dynamic stall and the horizontal wind velocity distribution are also illustrated.

  2. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to the other two methods proposed in the literature, i.e., Fresnel-domain information authentication based on the classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
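    For reference, the classical Fourier-domain DRPE that the simplified scheme builds on can be written in a few lines with FFTs: encrypt with two random phase masks and recover the image with their conjugates. This is the textbook 4f DRPE, not the lensless, compressed variant of the paper; the image is a random stand-in for the input-plus-QR-code content.

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                          # stand-in for input image + QR code

        phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # random phase mask 1 (input plane)
        phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # random phase mask 2 (Fourier plane)

        # Encryption: classical 4f double random phase encoding
        encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

        # Decryption with the correct keys recovers the image amplitude
        decrypted = np.abs(np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1))
        print(np.allclose(decrypted, img))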

  3. Design of a portable dose rate detector based on a double Geiger-Mueller counter

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Tang, Xiao-Bin; Gong, Pin; Huang, Xi; Wen, Liang-Sheng; Han, Zhen-Yang; He, Jian-Ping

    2018-01-01

    A portable dose rate detector was designed to monitor radioactive pollution and radioactive environments. The detector can measure from background radiation levels (0.1 μSv/h) up to nuclear-accident radiation levels (>10 Sv/h). Both automatic switching between the two tubes of a double Geiger-Mueller counter and time-to-count technology were adopted to broaden the measurement range of the instrument. A global positioning system and the 3G telecommunication protocol were incorporated so that measurements can be made remotely, limiting radiation exposure of personnel. In addition, the Monte Carlo N-Particle code was used to design the thin metal layer for energy compensation, which was used to flatten the energy response. The portable dose rate detector has been calibrated by the standard radiation field method, and it can be used alone or in combination with additional radiation detectors.
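    A toy illustration of the range-switching idea behind the double Geiger-Mueller design: read the sensitive tube until its count rate approaches saturation, then switch to the less sensitive tube. The calibration factors and switch-over rate below are invented placeholders, and real instruments also apply dead-time (time-to-count) corrections omitted here.

        # Hypothetical calibration factors and switch-over point for a two-tube design
        LOW_RANGE_FACTOR = 0.01     # uSv/h per cps for the sensitive tube (assumed)
        HIGH_RANGE_FACTOR = 10.0    # uSv/h per cps for the small, shielded tube (assumed)
        SWITCH_CPS = 5_000          # count rate at which the sensitive tube saturates

        def dose_rate(low_tube_cps, high_tube_cps):
            """Select the appropriate tube and convert its count rate to a dose rate."""
            if low_tube_cps < SWITCH_CPS:
                return low_tube_cps * LOW_RANGE_FACTOR
            return high_tube_cps * HIGH_RANGE_FACTOR

        print(dose_rate(12, 0))         # background-level field
        print(dose_rate(50_000, 900))   # accident-level field, high-range tube used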

  4. Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2005-10-01

    We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.

  5. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
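    Double delta coding amounts to differencing twice: once along each scan line and once between adjacent lines, which leaves small residuals for smooth imagery and is exactly invertible by cumulative sums. A minimal sketch under that interpretation (the synthetic 8×8 ramp image is only for demonstration; the paper's exact variant and its background-skipping extension are not reproduced):

        import numpy as np

        def adjacent_delta(v):
            """Keep the first sample, replace the rest by differences (invertible)."""
            out = v.copy()
            out[1:] = v[1:] - v[:-1]
            return out

        def double_delta(image):
            """Delta along each scan line, then delta between adjacent lines."""
            d1 = np.apply_along_axis(adjacent_delta, 1, image)
            return np.apply_along_axis(adjacent_delta, 0, d1)

        def inverse_double_delta(dd):
            return np.cumsum(np.cumsum(dd, axis=0), axis=1)

        img = (10 * np.add.outer(np.arange(8), np.arange(8))).astype(np.int64)
        dd = double_delta(img)
        print(np.array_equal(inverse_double_delta(dd), img))   # lossless round trip
        print(np.abs(dd).max(), "vs", np.abs(img).max())       # much smaller residuals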

  6. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF) orbitals, such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with a few dominant configurations than the latter, even with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.

  7. Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, Luiz C; Ivanov, E.

    2015-01-01

    The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data, capture, elastic, inelastic, and double differential elastic cross sections. The resonance analysis was performed with the code SAMMY that fits R-matrix resonance parameters using the generalized least-squares technique (Bayes’ theory). The evaluation yielded a set of resonance parameters that reproduced the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the evaluation performance in benchmark calculations.

  8. Measurement of pion induced neutron-production double-differential cross sections on Fe and Pb at 870 MeV and 2.1 GeV

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Shigyo, N.; Satoh, D.; Kunieda, S.; Watanabe, T.; Ishimoto, S.; Tenzou, H.; Maehata, K.; Ishibashi, K.; Nakamoto, T.; Numajiri, M.; Meigo, S.; Takada, H.

    2004-08-01

    Neutron-production double-differential cross sections for 870 MeV π+ and π- and 2.1 GeV π+ mesons incident on iron and lead targets were measured with NE213 liquid scintillators by the time-of-flight technique. NE213 liquid scintillators 12.7 cm in diameter and 12.7 cm thick were placed at 15, 30, 60, 90, 120, and 150°. The typical flight path length was 1.5 m. Neutron detection efficiencies were evaluated from calculations with the SCINFUL and CECIL codes. The experimental results were compared with the JAERI quantum molecular dynamics code. For the meson-incident reactions, adoption of NN in-medium effects was slightly useful for reproducing 870 MeV π+-incident neutron yields at neutron energies of 10-30 MeV, as was the case for proton-incident reactions. The π- incident reaction generates more neutrons than π+ incidence as the number of nucleons in the target decreases.

  9. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  10. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  11. Systematic measurement of double-differential neutron production cross sections for deuteron-induced reactions at an incident energy of 102 MeV

    NASA Astrophysics Data System (ADS)

    Araki, Shouhei; Watanabe, Yukinobu; Kitajima, Mizuki; Sadamatsu, Hiroki; Nakano, Keita; Kin, Tadahiro; Iwamoto, Yosuke; Satoh, Daiki; Hagiwara, Masayuki; Yashima, Hiroshi; Shima, Tatsushi

    2017-01-01

    Double-differential neutron production cross sections (DDXs) for deuteron-induced reactions on Li, Be, C, Al, Cu, and Nb at 102 MeV were measured at forward angles ≤25° by means of a time-of-flight (TOF) method with NE213 liquid organic scintillators at the Research Center for Nuclear Physics (RCNP), Osaka University. The experimental DDXs and energy-integrated cross sections were compared with TENDL-2015 data and Particle and Heavy Ion Transport code System (PHITS) calculations using a combination of the KUROTAMA model, the Liege Intra-Nuclear Cascade model, and the generalized evaporation model. The PHITS calculation showed better agreement with the experimental results than TENDL-2015 for all target nuclei, although the shape of the broad peak around 50 MeV was not satisfactorily reproduced by the PHITS calculation.

  12. Pseudo-point transport technique: a new method for solving the Boltzmann transport equation in media with highly fluctuating cross sections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhai, B.

    A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.

  13. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation energy dependencies were found to be inconsistent with the Fermi-gas model.

  14. Chapter 3: Commercial and Industrial Lighting Controls Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Carlson, Stephen

    This Commercial and Industrial Lighting Controls Evaluation Protocol (the protocol) describes methods to account for energy savings resulting from programmatic installation of lighting control equipment in large populations of commercial, industrial, government, institutional, and other nonresidential facilities. This protocol does not address savings resulting from changes in codes and standards, or from education and training activities. When lighting controls are installed in conjunction with a lighting retrofit project, the lighting control savings must be calculated parametrically with the lighting retrofit project so savings are not double counted.

  15. The use of cosmic-ray muons in the energy calibration of the Beta-decay Paul Trap silicon-detector array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsh, T. Y.; Perez Galvan, A.; Burkey, M.

    This article presents an approach to calibrate the energy response of double-sided silicon strip detectors (DSSDs) for low-energy nuclear-science experiments by utilizing cosmic-ray muons. For the 1-mm-thick detectors used with the Beta-decay Paul Trap, the minimum-ionizing peak from these muons provides a stable and time-independent in situ calibration point at around 300 keV, which supplements the calibration data obtained above 3 MeV from α sources. The muon-data calibration is achieved by comparing experimental spectra with detailed Monte Carlo simulations performed using GEANT4 and CRY codes. This additional information constrains the calibration at lower energies, resulting in improvements in quality and accuracy.

  16. The use of cosmic-ray muons in the energy calibration of the Beta-decay Paul Trap silicon-detector array

    NASA Astrophysics Data System (ADS)

    Hirsh, T. Y.; Pérez Gálvan, A.; Burkey, M. T.; Aprahamian, A.; Buchinger, F.; Caldwell, S.; Clark, J. A.; Gallant, A. T.; Heckmaier, E.; Levand, A. F.; Marley, S. T.; Morgan, G. E.; Nystrom, A.; Orford, R.; Savard, G.; Scielzo, N. D.; Segel, R.; Sharma, K. S.; Siegl, K.; Wang, B. S.

    2018-04-01

    This article presents an approach to calibrate the energy response of double-sided silicon strip detectors (DSSDs) for low-energy nuclear-science experiments by utilizing cosmic-ray muons. For the 1-mm-thick detectors used with the Beta-decay Paul Trap, the minimum-ionizing peak from these muons provides a stable and time-independent in situ calibration point at around 300 keV, which supplements the calibration data obtained above 3 MeV from α sources. The muon-data calibration is achieved by comparing experimental spectra with detailed Monte Carlo simulations performed using GEANT4 and CRY codes. This additional information constrains the calibration at lower energies, resulting in improvements in quality and accuracy.
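    The calibration itself is a simple linear fit through the available reference points, e.g. the ~300 keV muon minimum-ionizing peak plus alpha lines above 3 MeV. A sketch with invented ADC channel positions (the line energies are typical alpha-source values used purely for illustration, not the experiment's actual calibration data):

        import numpy as np

        # Hypothetical (ADC channel, energy) reference points: the cosmic-ray muon
        # minimum-ionizing peak near 300 keV plus two alpha lines above 3 MeV.
        channels = np.array([417.0, 7160.0, 7622.0])
        energies_kev = np.array([300.0, 5157.0, 5486.0])

        gain, offset = np.polyfit(channels, energies_kev, 1)
        print(f"E(keV) = {gain:.4f} * channel + {offset:.2f}")
        print("muon point check:", gain * 417.0 + offset)   # should be close to 300 keV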

  17. Error control for reliable digital data transmission and storage systems

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, R. H.

    1985-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
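    To make the "directly from the syndrome" idea concrete, the sketch below decodes a single-byte error in a toy narrow-sense RS-style code over GF(2^8) with two syndromes S0 = Σ r_i and S1 = Σ α^i r_i: the error value is S0 and its location is log_α(S1/S0). This is a simplified stand-in for the extended SBEC-DBED/DBEC-TBED constructions in the paper; the primitive polynomial and code length are arbitrary choices, and multi-byte errors are out of scope here.

        PRIM = 0x11d                     # a common primitive polynomial for GF(2^8)
        EXP, LOG = [0] * 512, [0] * 256
        x = 1
        for i in range(255):             # build antilog/log tables for GF(2^8)
            EXP[i] = x
            LOG[x] = i
            x <<= 1
            if x & 0x100:
                x ^= PRIM
        for i in range(255, 512):        # duplicate so exponent sums need no modulo
            EXP[i] = EXP[i - 255]

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def syndromes(received):
            s0 = s1 = 0
            for i, r in enumerate(received):
                s0 ^= r                          # S0 = sum of received bytes
                s1 ^= gf_mul(EXP[i], r)          # S1 = sum of alpha^i * byte_i
            return s0, s1

        def correct_single_byte(received):
            s0, s1 = syndromes(received)
            if s0 == 0 and s1 == 0:
                return received                  # no error detected
            loc = (LOG[s1] - LOG[s0]) % 255      # error position from S1/S0 = alpha^loc
            fixed = list(received)
            fixed[loc] ^= s0                     # error value equals S0
            return fixed

        codeword = [0] * 16                      # the all-zero word is a valid codeword
        received = list(codeword)
        received[5] ^= 0x3c                      # inject a single-byte error
        print(correct_single_byte(received) == codeword)   # True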

  18. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potentialities of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.

  19. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
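    The core doubling step, without the thermal-source extension derived in the paper, can be sketched for a scalar (single-stream, azimuthally averaged) layer: combine two identical layers with the adding equations and double the optical thickness repeatedly. The single-scattering initialization, optical depth and single-scattering albedo below are arbitrary illustrative choices, not values from the ADA model.

        def double_layer(r, t):
            """Combine two identical layers (scalar adding equations, no thermal source)."""
            denom = 1.0 - r * r             # accounts for multiple reflections between layers
            return r + t * r * t / denom, t * t / denom

        # Crude single-scattering initialization of a very thin layer
        # (two-stream, isotropic scattering; an assumption of this sketch).
        omega, tau, n_doublings = 0.9, 1.0, 20
        dtau = tau / 2**n_doublings
        r, t = 0.5 * omega * dtau, 1.0 - dtau + 0.5 * omega * dtau

        for _ in range(n_doublings):
            r, t = double_layer(r, t)       # layer thickness doubles each step: dtau -> tau

        print(f"tau={tau}, omega={omega}: R={r:.3f}, T={t:.3f}, absorbed={1-r-t:.3f}")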

  20. Nonlinear Modeling of Radial Stellar Pulsations

    NASA Astrophysics Data System (ADS)

    Smolec, R.

    2009-09-01

    In this thesis, I present the results of my work concerning the nonlinear modeling of radial stellar pulsations. I will focus on classical Cepheids, particularly on the double-mode phenomenon. The history of nonlinear modeling of radial stellar pulsations begins in the 1960s. At the beginning, convection was disregarded in the model equations. Qualitatively, almost all features of the radial pulsators were successfully modeled with purely radiative hydrocodes. Among the problems that remained, the most disturbing was the modeling of the double-mode phenomenon. This long-standing problem seemed to be finally solved with the inclusion of turbulent convection into the model equations (Kollath et al. 1998, Feuchtinger 1998). Although dynamical aspects of the double-mode behaviour were extensively studied, its origin, particularly the specific role played by convection, remained obscure. To study this and other problems of radial stellar pulsations, I implemented convection into pulsation hydrocodes. The codes adopt the Kuhfuss (1986) convection model. In other codes, particularly in the Florida-Budapest hydrocode (e.g. Kollath et al. 2002), used in the computation of most of the published double-mode models, different approximations concerning e.g. eddy-viscous terms or the treatment of convectively stable regions are adopted. In particular, the neglect of negative buoyancy effects in the Florida-Budapest code and its consequences were never discussed in the literature. These consequences are severe. Concerning single-mode pulsators, neglect of negative buoyancy leads to smaller pulsation amplitudes in comparison to amplitudes computed with a code including these effects. In particular, neglect of negative buoyancy reduces the amplitude of the fundamental mode very strongly. This property of the Florida-Budapest models is crucial in bringing up the stable non-resonant double-mode Cepheid pulsation involving the fundamental and first overtone modes (F/1O). Such pulsation is not observed in models computed including negative buoyancy. Since the neglect of negative buoyancy is physically not correct, neither are the double-mode Cepheid models computed with the Florida-Budapest hydrocode. An extensive search for F/1O double-mode Cepheid pulsation with the codes including negative buoyancy effects yielded a null result. Some resonant double-mode F/1O Cepheid models were found, but their occurrence was restricted to a very narrow domain in the Hertzsprung-Russell diagram. Model computations intended to model the double-overtone (1O/2O) Cepheids in the Large Magellanic Cloud also revealed some stable double-mode pulsations, however restricted to a narrow period range. Resonances are most likely conducive to bringing about the double-mode behaviour observed in these models. However, the majority of the double-overtone LMC Cepheids cannot be reproduced with our codes. Hence, the modeling of double-overtone Cepheids with convective hydrocodes is not satisfactory, either. Double-mode pulsation still lacks a satisfactory explanation, and the problem of its modeling remains open.

  1. Building Energy Efficiency in India: Compliance Evaluation of Energy Conservation Building Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Evans, Meredydd; Delgado, Alison

    India is experiencing an unprecedented construction boom. The country doubled its floorspace between 2001 and 2005 and is expected to add 35 billion m2 of new buildings by 2050. Buildings account for 35% of total final energy consumption in India today, and building energy use is growing at 8% annually. Studies have shown that carbon policies will have little effect on reducing building energy demand. Chaturvedi et al. predicted that, if there are no specific sectoral policies to curb building energy use, the final energy demand of the Indian building sector will grow over five times by the end of this century, driven by rapid income and population growth. The growing energy demand in buildings is accompanied by a transition from traditional biomass to commercial fuels, particularly an increase in electricity use. This also leads to a rapid increase in carbon emissions and aggravates power shortages in India. Growth in building energy use poses challenges to the Indian government. To curb energy consumption in buildings, the Indian government issued the Energy Conservation Building Code (ECBC) in 2007, which applies to commercial buildings with a connected load of 100 kW or 120 kVA. It is predicted that the implementation of ECBC can help save 25-40% of energy, compared to reference buildings without energy-efficiency measures. However, the impact of ECBC depends on the effectiveness of its enforcement and compliance. Currently, the majority of buildings in India are not ECBC-compliant. The United Nations Development Programme projected that code compliance in India would reach 35% by 2015 and 64% by 2017. Whether the projected targets can be achieved depends on how the code enforcement system is designed and implemented. Although the development of ECBC lies in the hands of the national government (the Bureau of Energy Efficiency under the Ministry of Power), the adoption and implementation of ECBC largely rely on state and local governments. Six years after ECBC's enactment, only two states and one territory out of 35 Indian states and union territories had formally adopted ECBC, and six additional states are in the legislative process of approving ECBC. There are several barriers that slow down the process. First, stakeholders, such as architects, developers, and state and local governments, lack awareness of building energy efficiency, and do not have enough capacity and resources to implement ECBC. Second, the institutions for implementing ECBC are not set up yet; ECBC is not included in local building by-laws or incorporated into the building permit process. Third, there is not a systematic approach to measuring and verifying compliance and energy savings, and thus the market does not have enough confidence in ECBC. Energy codes achieve energy savings only when projects comply with them, yet only a few countries measure compliance consistently, and periodic checks often indicate poor compliance in many jurisdictions. China and the U.S. appear to be the two countries with comprehensive systems for code enforcement and compliance. The United States recently developed methodologies for measuring compliance with building energy codes at the state level. China has an annual survey investigating code compliance rates at the design and construction stages in major cities. Like many developing countries, India has only recently begun implementing an energy code and would benefit from international experience on code compliance. In this paper, we examine lessons learned from the U.S. and China on compliance assessment and how India can apply these lessons to develop its own compliance evaluation approach. This paper also provides policy suggestions to national, state, and local governments to improve compliance and speed up ECBC implementation.

  2. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
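
    As a concrete illustration of the double-difference idea described above, the sketch below forms a cross-delta between two data sets and then an adjacent-delta along the sample index, together with the corresponding inverse post-decoding step. It is a minimal interpretation of the abstract, not the patented implementation; the sign conventions, the use of the first cross-delta value as side information, and the NumPy formulation are assumptions.

```python
import numpy as np

def double_difference(band_a, band_b):
    """Form a double-difference set: a cross-delta between two data sets
    followed by an adjacent-delta along the sample index (the reverse
    order gives the same values)."""
    band_a = np.asarray(band_a, dtype=np.int64)
    band_b = np.asarray(band_b, dtype=np.int64)
    cross_delta = band_b - band_a          # delta between the two sources
    return np.diff(cross_delta)            # adjacent delta of the cross delta

def reconstruct(band_a, dd, first_cross_delta):
    """Inverse post-decoding: recover band_b from band_a, the double
    differences, and the first cross-delta value (assumed side information)."""
    cross_delta = np.concatenate(([first_cross_delta],
                                  first_cross_delta + np.cumsum(dd)))
    return np.asarray(band_a, dtype=np.int64) + cross_delta

a = [10, 12, 15, 15, 14]
b = [11, 14, 18, 19, 18]
dd = double_difference(a, b)
print(dd)                              # small values -> cheaper to entropy code
print(reconstruct(a, dd, b[0] - a[0])) # recovers band_b exactly
```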

  3. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  4. Broadband and wide-angle RCS reduction using a 2-bit coding ultrathin metasurface at terahertz frequencies

    PubMed Central

    Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan

    2016-01-01

    A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broad THz frequency range. RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energy into many directions over a broad frequency range. An RCS reduction below −10 dB is achieved over a bandwidth from 0.7 THz to 1.3 THz in both experiments and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at incident angles below 60° for TE and TM polarizations for both flat and curved coding metasurfaces. These results open a new approach to flexibly controlling THz waves and may offer widespread applications for novel THz devices. PMID:27982089
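
    The diffusion mechanism described above can be illustrated with a simple array-factor estimate: each 2-bit element contributes a reflection phase of 0°, 90°, 180°, or 270°, and the coded phase pattern spreads the specular reflection into many directions. The sketch below is a standard phased-array approximation under assumed element spacing and a single E-plane cut, not the full-wave model or the optimized sequences of the paper.

```python
import numpy as np

def array_factor(phase_codes, d_over_lambda=0.5, n_angles=181):
    """Far-field array factor of a coded reflective surface.

    phase_codes: 2-D integer array of 2-bit states {0,1,2,3}, mapped to
    reflection phases {0, 90, 180, 270} degrees.  Element spacing and the
    single E-plane cut are illustrative assumptions.
    """
    phases = phase_codes * np.pi / 2.0               # 2-bit state -> radians
    M, N = phases.shape
    k_d = 2.0 * np.pi * d_over_lambda
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    m = np.arange(M)[:, None, None]                  # row index geometric phase
    af = np.abs(np.sum(np.exp(1j * (phases[:, :, None]
                                    + k_d * m * np.sin(theta))), axis=(0, 1)))
    return theta, af

rng = np.random.default_rng(0)
theta, af_coded = array_factor(rng.integers(0, 4, size=(16, 16)))  # random 2-bit coding
_, af_metal = array_factor(np.zeros((16, 16), dtype=int))          # uniform reference
print(20 * np.log10(af_coded.max() / af_metal.max()))              # specular peak drop [dB]
```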

  5. Broadband and wide-angle RCS reduction using a 2-bit coding ultrathin metasurface at terahertz frequencies.

    PubMed

    Liang, Lanju; Wei, Minggui; Yan, Xin; Wei, Dequan; Liang, Dachuan; Han, Jiaguang; Ding, Xin; Zhang, GaoYa; Yao, Jianquan

    2016-12-16

    A novel broadband and wide-angle 2-bit coding metasurface for radar cross section (RCS) reduction is proposed and characterized at terahertz (THz) frequencies. The ultrathin metasurface is composed of four digital elements based on a metallic double cross line structure. The reflection phase difference of neighboring elements is approximately 90° over a broad THz frequency range. RCS reduction is achieved by optimizing the coding element sequences, which redirects the electromagnetic energy into many directions over a broad frequency range. An RCS reduction below -10 dB is achieved over a bandwidth from 0.7 THz to 1.3 THz in both experiments and numerical simulations. The simulation results also show that broadband RCS reduction can be achieved at incident angles below 60° for TE and TM polarizations for both flat and curved coding metasurfaces. These results open a new approach to flexibly controlling THz waves and may offer widespread applications for novel THz devices.

  6. Measurement of 100- and 290-MeV/A Carbon Incident Neutron Production Cross Sections for Carbon, Nitrogen and Oxygen

    NASA Astrophysics Data System (ADS)

    Shigyo, N.; Uozumi, U.; Uehara, H.; Nishizawa, T.; Mizuno, T.; Takamiya, M.; Hashiguchi, T.; Satoh, D.; Sanami, T.; Koba, Y.; Takada, M.; Matsufuji, N.

    2014-05-01

    Neutron double-differential cross sections from carbon ions incident on carbon, nitrogen, and oxygen targets have been measured for neutron energies down to 0.6 MeV over a wide angular range from 15° to 90°, with 100- and 290-MeV/A incident energies at the Heavy Ion Medical Accelerator in Chiba (HIMAC), National Institute of Radiological Sciences. Two sizes of NE213 scintillators were used as neutron detectors in order to cover neutron energies from below 1 MeV to several hundred MeV. The neutron energy was measured by the time-of-flight technique between the beam pickup detector and an NE213 scintillator. The experimental data were used to examine the validity of calculation results from the PHITS code.
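
    For context, the time-of-flight technique mentioned above converts a measured flight time over a known path length into a relativistic kinetic energy. The sketch below applies the standard TOF kinematics; the flight path and timing values are illustrative placeholders, not the HIMAC experimental geometry.

```python
import numpy as np

M_N_MEV = 939.565          # neutron rest mass [MeV]
C_M_PER_NS = 0.299792458   # speed of light [m/ns]

def neutron_energy_from_tof(flight_path_m, tof_ns):
    """Relativistic neutron kinetic energy from a time-of-flight measurement:
    E_k = m_n c^2 (1/sqrt(1 - beta^2) - 1), with beta = L / (c t)."""
    beta = flight_path_m / (C_M_PER_NS * tof_ns)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return M_N_MEV * (gamma - 1.0)

# Example: a hypothetical 5 m flight path and a 30 ns flight time
print(neutron_energy_from_tof(5.0, 30.0))   # ~190 MeV
```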

  7. New approach to description of (d,xn) spectra at energies below 50 MeV in Monte Carlo simulation by intra-nuclear cascade code with Distorted Wave Born Approximation

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.

    2014-08-01

    A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra.

  8. TiS2 and ZrS2 single- and double-wall nanotubes: first-principles study.

    PubMed

    Bandura, Andrei V; Evarestov, Robert A

    2014-02-15

    Hybrid density functional theory has been applied to investigate the electronic and atomic structure of bulk phases, nanolayers, and nanotubes based on titanium and zirconium disulfides. Calculations have been performed on the basis of localized atomic functions by means of the CRYSTAL-2009 computer code. The full optimization of all atomic positions in the considered systems has been made to study the atomic relaxation and to determine the most favorable structures. The different layered and isotropic bulk phases have been considered as possible precursors of the nanotubes. Calculations on single-walled TiS2 and ZrS2 nanotubes confirmed that the nanotubes obtained by rolling up the hexagonal crystalline layers with octahedral 1T morphology are the most stable. The strain energy of TiS2 and ZrS2 nanotubes is small, does not depend on the tube chirality, and approximately obeys the D⁻² law (where D is the nanotube diameter) of classical elasticity theory. It is greater than the strain energy of the similar TiO2 and ZrO2 nanotubes; however, the formation energy of the disulfide nanotubes is considerably less than the formation energy of the dioxide nanotubes. The distance and interaction energy between the single-wall components of the double-wall nanotubes are found to be close to the distance and interaction energy between layers in the layered crystals. Analysis of the relaxed nanotube shape using the radial coordinates of the metal atoms demonstrates a small but noticeable deviation from a completely cylindrical cross-section of the external walls in the armchair-like double-wall nanotubes. Copyright © 2013 Wiley Periodicals, Inc.

  9. Measurement of Continuous-Energy Neutron-Incident Neutron-Production Cross Section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shigyo, Nobuhiro; Kunieda, Satoshi; Watanabe, Takehito

    Continuous-energy neutron-incident neutron-production double-differential cross sections were measured at the Weapons Neutron Research (WNR) facility of the Los Alamos Neutron Science Center. The energy of emitted neutrons was derived from the energy deposition in a detector. The incident-neutron energy was obtained by the time-of-flight method between the spallation target of WNR and the emitted-neutron detector. Two types of detectors were adopted to cover the wide energy range of emitted neutrons. Liquid organic scintillators covered energies up to 100 MeV. Recoil-proton detectors, consisting of a recoil-proton radiator and phoswich-type NaI(Tl) scintillators, were used for neutrons above several tens of MeV. Iron and lead were used as sample materials. The experimental data were compared with evaluated nuclear data and with the results of the GNASH, JQMD, and PHITS codes.

  10. Logical qubit fusion

    NASA Astrophysics Data System (ADS)

    Moussa, Jonathan; Ryan-Anderson, Ciaran

    The canonical modern plan for universal quantum computation is a Clifford+T gate set implemented in a topological error-correcting code. This plan has the basic disparity that logical Clifford gates are natural for codes in two spatial dimensions while logical T gates are natural in three. Recent progress has reduced this disparity by proposing logical T gates in two dimensions with doubled, stacked, or gauge color codes, but these proposals lack an error threshold. An alternative universal gate set is Clifford+F, where a fusion (F) gate converts two logical qubits into a logical qudit. We show that logical F gates can be constructed by identifying compatible pairs of qubit and qudit codes that stabilize the same logical subspace, much like the original Bravyi-Kitaev construction of magic state distillation. The simplest example of high-distance compatible codes results in a proposal that is very similar to the stacked color code with the key improvement of retaining an error threshold. Sandia National Labs is a multi-program laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first-collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function, for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances, is also presented. These importance functions may be used to obtain skyshine dose-equivalent estimates for any known source energy-angle distribution.
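
    Written out, the double integral described above takes the following generic form, where S(E, μ) is the source strength per unit energy and per unit polar-angle cosine, I(E, μ, x) is the importance function for source-to-field-point distance x, and μ runs over the upper hemisphere only; the symbols are placeholders chosen here, not notation from the report:

```latex
H(x) \;=\; \int_{0}^{E_{\max}} \int_{0}^{1} S(E,\mu)\, I(E,\mu,x)\, \mathrm{d}\mu\, \mathrm{d}E
```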

  12. ARES Simulations of a Double Shell Surrogate Target

    NASA Astrophysics Data System (ADS)

    Sacks, Ryan; Tipton, Robert; Graziani, Frank

    2015-11-01

    Double shell targets provide an alternative path to ignition that allows for a less robust laser profile and non-cryogenic initial temperatures. The target designs call for a high-Z material to abut the gas/liquid DT fuel which is cause for concern due to possible mix of the inner shell with the fuel. This research concentrates on developing a surrogate target for a double shell capsule that can be fielded in a current NIF two-shock hohlraum. Through pressure-density scaling the hydrodynamic behavior of the high-Z pusher of a double shell can be approximated allowing for studies of performance and mix. Use of the ARES code allows for investigation of mix in one and two dimensions and analysis of instabilities in two dimensions. Development of a shell material that will allow for experiments similar to CD Mix is also discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. Information Management release number LLNL-ABS-675098.

  13. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate (ARA) codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
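
    A minimal sketch of this double serial concatenation is given below, assuming illustrative repetition and puncturing parameters and omitting the interleavers that a practical turbo-like construction would place between stages.

```python
import numpy as np

def accumulate(bits):
    # Rate-1 accumulator: running mod-2 sum (the 1/(1+D) convolutional code).
    return np.cumsum(bits) % 2

def ara_encode(info_bits, repeat=3, puncture_period=2):
    """Toy Accumulate-Repeat-Accumulate encoder.

    Follows the structure named in the abstract: outer rate-1 accumulator,
    middle repetition code, inner punctured accumulator.  The parameter
    values and the absence of interleavers are illustrative assumptions,
    not the authors' design.
    """
    outer = accumulate(np.asarray(info_bits))   # outer accumulator
    repeated = np.repeat(outer, repeat)         # middle repetition
    inner = accumulate(repeated)                # inner accumulator
    return inner[::puncture_period]             # puncturing: keep every p-th bit

u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(ara_encode(u))
```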

  14. Physical basis of radiation protection in space travel

    NASA Astrophysics Data System (ADS)

    Durante, Marco; Cucinotta, Francis A.

    2011-10-01

    The health risks of space radiation are arguably the most serious challenge to space exploration, possibly preventing these missions due to safety concerns or increasing their costs to amounts beyond what would be acceptable. Radiation in space is substantially different from Earth: high-energy (E) and charge (Z) particles (HZE) provide the main contribution to the equivalent dose in deep space, whereas γ rays and low-energy α particles are major contributors on Earth. This difference causes a high uncertainty on the estimated radiation health risk (including cancer and noncancer effects), and makes protection extremely difficult. In fact, shielding is very difficult in space: the very high energy of the cosmic rays and the severe mass constraints in spaceflight represent a serious hindrance to effective shielding. Here the physical basis of space radiation protection is described, including the most recent achievements in space radiation transport codes and shielding approaches. Although deterministic and Monte Carlo transport codes can now describe well the interaction of cosmic rays with matter, more accurate double-differential nuclear cross sections are needed to improve the codes. Energy deposition in biological molecules and related effects should also be developed to achieve accurate risk models for long-term exploratory missions. Passive shielding can be effective for solar particle events; however, it is limited for galactic cosmic rays (GCR). Active shielding would have to overcome challenging technical hurdles to protect against GCR. Thus, improved risk assessment and genetic and biomedical approaches are a more likely solution to GCR radiation protection issues.

  15. A Binary-Encounter-Bethe Approach to Simulate DNA Damage by the Direct Effect

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    DNA damage is of crucial importance in understanding the effects of ionizing radiation. The main mechanisms of DNA damage are the direct effect of radiation (e.g. direct ionization) and the indirect effect (e.g. damage by •OH radicals created by the radiolysis of water). Despite years of research in this area, many questions on the formation of DNA damage remain. To refine existing DNA damage models, an approach based on the Binary-Encounter-Bethe (BEB) model was developed[1]. This model calculates differential cross sections for ionization of the molecular orbitals of the DNA bases, sugars, and phosphates using the electron binding energy, the mean kinetic energy, and the occupancy number of each orbital. This cross section has an analytic form which is quite convenient to use and allows sampling of the energy loss occurring during an ionization event. To simulate the radiation track structure, the code RITRACKS developed at the NASA Johnson Space Center is used[2]. This code calculates all the energy deposition events and the formation of the radiolytic species by the ion and the secondary electrons as well. We have also developed a technique to use the integrated BEB cross sections for the bases, sugars, and phosphates in the radiation transport code RITRACKS. These techniques should allow the simulation of DNA damage by ionizing radiation, and understanding of the formation of double-strand breaks caused by clustered damage in different conditions.
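
    For reference, the analytic BEB cross section referred to above has the standard Kim-Rudd form, which depends only on the orbital binding energy B, mean kinetic energy U, and occupancy N. The sketch below evaluates that textbook formula for a hypothetical orbital; the numerical parameters are placeholders, not the DNA-constituent values used in the cited work.

```python
import numpy as np

A0 = 0.5291772e-8      # Bohr radius [cm]
RYD = 13.6057          # Rydberg energy [eV]

def beb_cross_section(T, B, U, N):
    """Binary-Encounter-Bethe total ionization cross section for one orbital
    (Kim-Rudd form), for incident electron energy T [eV], orbital binding
    energy B [eV], mean orbital kinetic energy U [eV], and occupancy N.
    Returns sigma in cm^2."""
    if T <= B:
        return 0.0
    t, u = T / B, U / B
    S = 4.0 * np.pi * A0**2 * N * (RYD / B)**2
    return (S / (t + u + 1.0)) * (
        0.5 * np.log(t) * (1.0 - 1.0 / t**2)
        + 1.0 - 1.0 / t
        - np.log(t) / (t + 1.0)
    )

# Example: a hypothetical valence orbital (B = 10 eV, U = 20 eV, N = 2)
print(beb_cross_section(T=100.0, B=10.0, U=20.0, N=2))
```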

  16. Simulation studies of muon-produced background events deep underground and consequences for double beta decay experiments

    NASA Astrophysics Data System (ADS)

    Massarczyk, Ralph; Majorana Collaboration

    2015-10-01

    Cosmic radiation creates a significant background for low-count-rate experiments. The Majorana Demonstrator experiment is located at the Sanford Underground Research Facility at a depth of 4850 ft below the surface, but it can still be penetrated by cosmic muons with initial energies above the TeV range. The interaction of muons with the rock, the shielding material in the lab, and the detector itself can produce showers of secondary particles, like fast neutrons, which are able to travel through shielding material and can produce high-energy γ-rays via capture or inelastic scattering. The energy deposition of these γ rays in the detector can overlap with the energy region of interest for neutrinoless double beta decay. Recent studies of cosmic muons penetrating the Majorana Demonstrator have been made with the Geant4 code. The results of these simulations will be presented in this talk, and an overview of the interaction of the shower particles with the detector, shielding, and veto system will be given. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, the Particle Astrophysics Program of the National Science Foundation, and the Sanford Underground Research Facility. Supported by U.S. Department of Energy through the LANL/LDRD Program.

  17. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
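
    To illustrate what decoding "directly from the syndrome" means in the simplest case, the sketch below corrects a single symbol error in GF(2^8): with syndromes S1 = e·α^p and S2 = e·α^(2p), the error location follows from S2/S1 and the error value from S1²/S2, with no locator polynomial. This is a toy single-error example under assumed syndrome conventions, not the extended-RS technique of the paper.

```python
# Minimal GF(2^8) arithmetic (primitive polynomial x^8 + x^4 + x^3 + x^2 + 1, 0x11d)
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 255]

def correct_single_error(received):
    """Locate and fix one symbol error directly from two syndromes.

    Assumes syndromes S_j = sum_i r_i * alpha^(i*j), j = 1, 2; a single error
    e at position p then gives alpha^p = S2/S1 and e = S1^2/S2."""
    S = []
    for j in (1, 2):
        s = 0
        for i, r in enumerate(received):
            s ^= gf_mul(r, EXP[(i * j) % 255])
        S.append(s)
    if S[0] == 0 and S[1] == 0:
        return received                        # syndromes clear: assume no error
    if S[0] == 0 or S[1] == 0:
        raise ValueError("not correctable as a single symbol error")
    p = LOG[gf_div(S[1], S[0])]                # error position from alpha^p
    e = gf_div(gf_mul(S[0], S[0]), S[1])       # error value
    corrected = list(received)
    corrected[p] ^= e
    return corrected

received = [0] * 10                            # all-zero codeword ...
received[4] ^= 0x57                            # ... with one injected symbol error
print(correct_single_error(received))          # recovers the all-zero codeword
```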

  18. MONTE CARLO POPULATION SYNTHESIS OF POST-COMMON-ENVELOPE WHITE DWARF BINARIES AND TYPE Ia SUPERNOVA RATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong

    Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter (λ_g: considering gravitational energy only; λ_b: considering both gravitational energy and internal energy; and λ_e: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that a larger number of post-CE WD/MS binaries in tight orbits are formed when the binding energy parameters are set by λ_e than in those cases where other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital periods and mass distributions of post-CE WD/MS binaries. As they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of two-WD mergers and compare it to the SNe Ia rate. The calculated Galactic SNe Ia rate with λ = λ_e is comparable to the observed SNe Ia rate, ranging from ∼8.2 × 10⁻⁵ yr⁻¹ to ∼4 × 10⁻³ yr⁻¹ depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SN Ia. On the other hand, a violent merger scenario, which requires the combined mass of two CO WDs ≥ 1.6 M⊙ and a mass ratio >0.8, results in a much lower SNe Ia rate than is observed.

  19. Neutrino-induced reactions on nuclei

    NASA Astrophysics Data System (ADS)

    Gallmeister, K.; Mosel, U.; Weil, J.

    2016-09-01

    Background: Long-baseline experiments such as the planned Deep Underground Neutrino Experiment (DUNE) require theoretical descriptions of the complete event in a neutrino-nucleus reaction. Since nuclear targets are used, this requires a good understanding of neutrino-nucleus interactions. Purpose: Develop a consistent theory and code framework for the description of lepton-nucleus interactions that can be used to describe not only inclusive cross sections, but also the complete final state of the reaction. Methods: The Giessen-Boltzmann-Uehling-Uhlenbeck (GiBUU) implementation of quantum-kinetic transport theory is used, with improvements in its treatment of the nuclear ground state and of 2p2h interactions. For the latter, an empirical structure function from electron scattering data is used as a basis. Results: Results for electron-induced inclusive cross sections are given as a necessary check for the overall quality of this approach. The calculated neutrino-induced inclusive double-differential cross sections show good agreement with data from neutrino and antineutrino reactions for different neutrino flavors at MiniBooNE and T2K. Inclusive double-differential cross sections for MicroBooNE, NOvA, MINERvA, and LBNF/DUNE are given. Conclusions: Based on the GiBUU model of lepton-nucleus interactions, a good theoretical description of inclusive electron-, neutrino-, and antineutrino-nucleus data over a wide range of energies, different neutrino flavors, and different experiments is now possible. Since no tuning is involved, this theory and code should be reliable also for new energy regimes and target masses.

  20. Design of a Double Anode Magnetron Injection Gun for Q-band Gyro-TWT Using Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Li, Zhiliang; Feng, Jinjun; Liu, Bentian

    2018-04-01

    This paper presents a novel design code for double anode magnetron injection guns (MIGs) in gyro-devices based on the boundary element method (BEM). The physical and mathematical models were constructed, and a code using the BEM for MIG calculations was developed. Using the code, a double anode MIG for a Q-band gyrotron traveling-wave tube (gyro-TWT) amplifier operating in the circular TE01 mode at the fundamental cyclotron harmonic was designed. In order to verify the reliability of the code, the velocity spread and guiding-center radius of the MIG simulated by the BEM code were compared with those from the commonly used EGUN code, showing reasonable agreement. A Q-band gyro-TWT was then fabricated and tested. The testing results show that the device has achieved an average power of 5 kW and a peak power ≥ 150 kW at a 3% duty cycle within a bandwidth of 2 GHz, and a maximum output peak power of 220 kW, with a corresponding saturated gain of 50.9 dB and efficiency of 39.8%. This paper demonstrates that the BEM code can be used as an effective approach for the analysis of electron optics systems in gyro-devices.

  1. GUI to Facilitate Research on Biological Damage from Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Frances A.; Ponomarev, Artem Lvovich

    2010-01-01

    A graphical-user-interface (GUI) computer program has been developed to facilitate research on the damage caused by highly energetic particles and photons impinging on living organisms. The program brings together, into one computational workspace, computer codes that have been developed over the years, plus codes that will be developed during the foreseeable future, to address diverse aspects of radiation damage. These include codes that implement radiation-track models, codes for biophysical models of breakage of deoxyribonucleic acid (DNA) by radiation, pattern-recognition programs for extracting quantitative information from biological assays, and image-processing programs that aid visualization of DNA breaks. The radiation-track models are based on transport models of interactions of radiation with matter and solution of the Boltzmann transport equation by use of both theoretical and numerical models. The biophysical models of breakage of DNA by radiation include biopolymer coarse-grained and atomistic models of DNA, stochastic-process models of deposition of energy, and Markov-based probabilistic models of placement of double-strand breaks in DNA. The program is designed for use in the NT, 95, 98, 2000, ME, and XP variants of the Windows operating system.

  2. Moving Towards a State of the Art Charge-Exchange Reaction Code

    NASA Astrophysics Data System (ADS)

    Poxon-Pearson, Terri; Nunes, Filomena; Potel, Gregory

    2017-09-01

    Charge-exchange reactions have a wide range of applications, including late stellar evolution, constraining the matrix elements for neutrinoless double β-decay, and exploring symmetry energy and other aspects of exotic nuclear matter. Still, much of the reaction theory needed to describe these transitions is underdeveloped and relies on assumptions and simplifications that are often extended outside their region of validity. In this work, we have begun to move towards a state-of-the-art charge-exchange reaction code. As a first step, we focus on Fermi transitions using a Lane potential in a few-body, Distorted Wave Born Approximation (DWBA) framework. We have focused on maintaining a modular structure for the code so we can later incorporate complications such as nonlocality, breakup, and microscopic inputs. Results from this new charge-exchange code will be shown and compared to a previous analysis of the 48Ca(p,n)48Sc case. This work was supported in part by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through the U.S. DOE Cooperative Agreement No. DE-FG52-08NA2855.

  3. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  4. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 deg flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 deg and aft-cone angle of 55 deg. Both sets of experiments involve 30 deg compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  5. Optical noise-free image encryption based on quick response code and high dimension chaotic system in gyrator transform domain

    NASA Astrophysics Data System (ADS)

    Sui, Liansheng; Xu, Minjie; Tian, Ailing

    2017-04-01

    A novel optical image encryption scheme is proposed based on a quick response code and a high-dimension chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to a ciphertext with noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, parameters such as the rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient for storing and transmitting. Meanwhile, the security of the proposed scheme is greatly enhanced due to the high sensitivity to the initial values of the Chen system. Extensive cryptanalysis and simulations have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
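
    As context for the key-stream generation step, the Chen system mentioned above is the three-dimensional chaotic flow dx/dt = a(y − x), dy/dt = (c − a)x − xz + cy, dz/dt = xy − bz, typically with a = 35, b = 3, c = 28. The sketch below integrates it with a simple RK4 scheme and derives two illustrative angles from the trajectory; the mapping from the trajectory to gyrator rotation angles and phase masks is an assumption, not the scheme's actual construction.

```python
import numpy as np

def chen_trajectory(x0, y0, z0, n_steps, dt=0.001, a=35.0, b=3.0, c=28.0):
    """Integrate the Chen chaotic system with a fixed-step RK4 scheme."""
    def f(s):
        x, y, z = s
        return np.array([a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z])

    s = np.array([x0, y0, z0], dtype=float)
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out

# Illustrative use: derive two rotation angles from the tail of the trajectory
# (a hypothetical mapping to [0, pi/2), chosen here only for demonstration).
traj = chen_trajectory(0.1, 0.2, 0.3, n_steps=5000)
angles = (traj[-2:, 0] % 1.0) * np.pi / 2.0
print(angles)
```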

  6. MO-FG-CAMPUS-TeP3-05: Limitations of the Dose Weighted LET Concept for Intensity Modulated Proton Therapy in the Distal Falloff Region and Beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moskvin, V; Pirlepesov, F; Farr, J

    2016-06-15

    Purpose: Dose-weighted linear energy transfer (dLET) has been shown to be useful for the analysis of late effects in proton therapy. This study presents the results of testing the dLET concept for intensity modulated proton therapy (IMPT) with a discrete spot scanning beam system without the use of an aperture or compensator (AC). Methods: IMPT (no AC) and broad beams (BB) with AC were simulated in the TOPAS and FLUKA code systems. Information from the independently tested Monte Carlo Damage Simulation (MCDS) was integrated into the FLUKA code system to account for spatial variations in the RBE for protons and other light ions, using an endpoint of DNA double strand break (DSB) induction. Results: The proton spectra for IMPT beams at depths beyond the distal edge contain a tail of high-energy protons up to 100 MeV. The integral from the tail is comparable to the number of 5–8 MeV protons at the tip of the Bragg peak (BP). The dose-averaged energy (dEav) decreases to 7 MeV at the tip of the BP and then increases to about 15 MeV beyond the distal edge. Neutrons produced in the nozzle are two orders of magnitude higher for BB with AC than for IMPT in the low-energy part of the spectra. The dLET values beyond the distal edge of the BP are 5 times larger for IMPT than for BB with the AC. In contrast, negligible differences are seen in the RBE estimates for IMPT and BB with AC beyond the distal edge of the BP. Conclusion: The analysis of late effects in IMPT with spot scanning, and in double scattering or scanning techniques with AC, may require both dLET and RBE as quantitative parameters to characterize effects beyond the distal edge of the BP.
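
    For reference, the dose-weighted (dose-averaged) LET discussed above is conventionally defined as below, where d_i is the dose deposited by the i-th energy-deposition event (or voxel contribution) and L_i its LET; this is the standard definition, not a formula quoted from the abstract:

```latex
\mathrm{LET}_{d} \;=\; \frac{\sum_i d_i\, L_i}{\sum_i d_i}
\;\approx\; \frac{\int L\, D(L)\, \mathrm{d}L}{\int D(L)\, \mathrm{d}L}
```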

  7. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.

  8. The effects of nuclear data library processing on Geant4 and MCNP simulations of the thermal neutron scattering law

    NASA Astrophysics Data System (ADS)

    Hartling, K.; Ciungu, B.; Li, G.; Bentoumi, G.; Sur, B.

    2018-05-01

    Monte Carlo codes such as MCNP and Geant4 rely on a combination of physics models and evaluated nuclear data files (ENDF) to simulate the transport of neutrons through various materials and geometries. The grid representation used to represent the final-state scattering energies and angles associated with neutron scattering interactions can significantly affect the predictions of these codes. In particular, the default thermal scattering libraries used by MCNP6.1 and Geant4.10.3 do not accurately reproduce the ENDF/B-VII.1 model in simulations of the double-differential cross section for thermal neutrons interacting with hydrogen nuclei in a thin layer of water. However, agreement between model and simulation can be achieved within the statistical error by re-processing the ENDF/B-VII.1 thermal scattering libraries with the NJOY code. The structure of the thermal scattering libraries and the sampling algorithms in MCNP and Geant4 are also reviewed.

  9. Hypersonic Shock Interactions About a 25 deg/65 deg Sharp Double Cone

    NASA Technical Reports Server (NTRS)

    Moss, James N.; LeBeau, Gerald J.; Glass, Christopher E.

    2002-01-01

    This paper presents the results of a numerical study of shock interactions resulting from Mach 10 air flow about a sharp double cone. Computations are made with the direct simulation Monte Carlo (DSMC) method by using two different codes: the G2 code of Bird and the DAC (DSMC Analysis Code) code of LeBeau. The flow conditions are the pretest nominal free-stream conditions specified for the ONERA R5Ch low-density wind tunnel. The focus is on the sensitivity of the interactions to grid resolution while providing information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.

  10. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  11. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE PAGES

    Vincenti, H.; Lobet, M.; Lehe, R.; ...

    2016-09-19

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include the AVX-512 instruction set with 512-bit register lengths (8 doubles/16 singles). Program summary: Program Title: vec_deposition. Program Files doi:http://dx.doi.org/10.17632/nh77fv9k8c.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran 90. External routines/libraries: OpenMP > 4.0. Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the two following requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g. Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).

  12. An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Lobet, M.; Lehe, R.

    In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1–3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include the AVX-512 instruction set with 512-bit register lengths (8 doubles/16 singles). Program summary: Program Title: vec_deposition. Program Files doi:http://dx.doi.org/10.17632/nh77fv9k8c.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran 90. External routines/libraries: OpenMP > 4.0. Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the two following requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile. (2) You should compile your code with a Fortran 90 compiler (e.g. Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).

  13. An Approach to Assess Delamination Propagation Simulation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2008-01-01

    An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front, as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment on a structural level is required.

  14. Design of 9.271-pressure-ratio 5-stage core compressor and overall performance for first 3 stages

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1986-01-01

    Overall aerodynamic design information is given for all five stages of an axial-flow core compressor (74A) having a 9.271 pressure ratio and 29.710 kg/sec flow. For the inlet stage group (first three stages), detailed blade element design information and experimental overall performance are given. At the rotor 1 inlet, the tip speed was 430.291 m/sec and the hub-to-tip radius ratio was 0.488. A low number of blades per row was achieved by the use of low-aspect-ratio blading of moderate solidity. The high-reaction stages have about equal energy addition. The radial energy addition was varied to give constant total pressure at the rotor exit. The blade element profile and shock losses and the incidence and deviation angles were based on relevant experimental data. Blade shapes are mostly double circular arc. Analysis by a three-dimensional Euler code verified the experimentally measured high flow at design speed and IGV-stator setting angles. An optimization code gave an optimal IGV-stator reset schedule for higher measured efficiency at all speeds.

  15. Prediction of the Reactor Antineutrino Flux for the Double Chooz Experiment

    NASA Astrophysics Data System (ADS)

    Jones, Chirstopher LaDon

    This thesis benchmarks the deterministic lattice code DRAGON against data, and then applies this code to make a prediction for the antineutrino flux from the Chooz B1 and B2 reactors. Data from the destructive assay of rods from the Takahama-3 reactor and from the SONGS antineutrino detector are used for comparisons. The resulting prediction from the tuned DRAGON code is then compared to the first antineutrino event spectra from Double Chooz. Use of this simulation in nuclear nonproliferation studies is discussed. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  16. Simulation of a 20-ton LiBr/H₂O absorption cooling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardono, B.; Nelson, R.M.

    The possibility of using solar energy as the main heat input for cooling systems has led to several studies of available cooling technologies that use solar energy. The results show that double-effect absorption cooling systems give relatively high performance. To further study absorption cooling systems, a computer code was developed for a double-effect lithium bromide/water (LiBr/H₂O) absorption system. To evaluate the performance, two objective functions were developed: the coefficient of performance (COP) and the system cost. Based on the system cost, an optimization to find the minimum cost was performed to determine the nominal heat transfer areas of each heat exchanger. The nominal values of other system variables, such as the mass flow rates and inlet temperatures of the hot water, cooling water, and chilled water, are specified as commonly used values for commercial machines. The results of the optimization show that there are optimum heat transfer areas. In this study, hot water is used as the main energy input. Using a constant load of 20 tons of cooling capacity, the effects of various variables, including the heat transfer areas, mass flow rates, and inlet temperatures of hot water, cooling water, and chilled water, are presented.
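
    For context, the first objective function named above, the coefficient of performance, is conventionally the cooling delivered at the evaporator per unit of driving heat supplied to the generator (a standard definition, with symbols chosen here rather than taken from the paper):

```latex
\mathrm{COP} \;=\; \frac{\dot{Q}_{\mathrm{evap}}}{\dot{Q}_{\mathrm{gen}} + \dot{W}_{\mathrm{pump}}}
\;\approx\; \frac{\dot{Q}_{\mathrm{evap}}}{\dot{Q}_{\mathrm{gen}}}
```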

  17. Coupled Hydrodynamic Instability Growth on Oblique Interfaces with a Reflected Rarefaction

    NASA Astrophysics Data System (ADS)

    Rasmus, A. M.; Flippo, K. A.; di Stefano, C. A.; Doss, F. W.; Hager, J. D.; Merritt, E. C.; Cardenas, T.; Schmidt, D. W.; Kline, J. L.; Kuranz, C. C.

    2017-10-01

    Hydrodynamic instabilities play an important role in the evolution of inertial confinement fusion and astrophysical phenomena. Three of the Omega-EP long pulse beams (10 ns square pulse, 14 kJ total energy, 1.1 mm spot size) drive a supported shock across a heavy-to-light, oblique, interface. Single- and double-mode initial conditions seed coupled Richtmyer-Meshkov (RM), Rayleigh-Taylor (RT), and Kelvin-Helmholtz (KH) growth. At early times, growth is dominated by RM and KH, whereas at late times a rarefaction from laser turn-off reaches the interface, leading to decompression and RT growth. The addition of a thirty degree tilt does not alter mix width to within experimental error bars, even while significantly altering spike and bubble morphology. The results of single and double-mode experiments along with simulations using the multi-physics hydro-code RAGE will be presented. This work performed under the auspices of the U.S. Department of Energy by LANL under contract DE-AC52-06NA25396. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0002956. This material is partially supported by DOE Office of Science Graduate Student Research (SCGSR) program.

  18. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    PubMed

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

    We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported in our initial GPU implementation that did not include a water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints on all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).

  19. A multidisciplinary audit of clinical coding accuracy in otolaryngology: financial, managerial and clinical governance considerations under payment-by-results.

    PubMed

    Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S

    2009-02-01

    To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly-selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially-missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%); 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution, the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue, and given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These problems can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.

  20. Convolutional coding combined with continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Pizzi, S. V.; Wilson, S. G.

    1985-01-01

    Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper, the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint-length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase in bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.

  1. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and a limited number of single-character grammar (G) variables. For the first issue, we discovered a feature that can simplify the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and that it has error-tolerance capabilities similar to those of LZW coding under similar circumstances.
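
    As a rough illustration of the grammar-transform idea sketched above (repeatedly replacing repeated substrings with new grammar variables before entropy coding), here is a toy digram-substitution pass in Python. It is a hedged sketch of the general intuition only, not the Yang-Kieffer irreducible-grammar algorithm or the paper's extended double-character scheme.

    ```python
    from collections import Counter

    def toy_grammar_transform(seq):
        """Repeatedly replace the most frequent adjacent pair by a new variable.

        Returns the reduced sequence and the production rules (Re-Pair style);
        this only sketches the idea behind grammar-based codes.
        """
        rules = {}
        next_var = 0
        seq = list(seq)
        while True:
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            pair, count = pairs.most_common(1)[0]
            if count < 2:       # no repeated digram left -> grammar is irreducible
                break
            var = f"G{next_var}"
            next_var += 1
            rules[var] = pair
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                    out.append(var)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            seq = out
        return seq, rules

    # Example: reduced, rules = toy_grammar_transform("abababcabab")
    ```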

  2. Ideal MHD Stability Prediction and Required Power for EAST Advanced Scenario

    NASA Astrophysics Data System (ADS)

    Chen, Junjie; Li, Guoqiang; Qian, Jinping; Liu, Zixi

    2012-11-01

    The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak with a D-shaped cross-sectional plasma presently in operation. The ideal magnetohydrodynamic (MHD) stability and required power for the EAST advanced tokamak (AT) scenario with negative central shear and double transport barriers (DTB) are investigated. With the equilibrium code TOQ and stability code GATO, the ideal MHD stability is analyzed. It is shown that a moderate ratio of the edge transport barrier (ETB) height to the internal transport barrier (ITB) height is beneficial to ideal MHD stability. The normalized beta βN limit is about 2.20 (without a wall) and 3.70 (with an ideal wall). With the scaling law of energy confinement time, the required heating power for the EAST AT scenario is calculated. The total heating power Pt increases as the toroidal magnetic field BT or the normalized beta βN is increased.

  3. PopCORN: Hunting down the differences between binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org

  4. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE PAGES

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...

    2018-06-14

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
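
    Among the benchmark observables listed above, the Feynman histogram is built from the distribution of neutron counts in fixed time gates, and its variance-to-mean excess (Feynman-Y) rises above zero when the detected neutrons are correlated. The minimal sketch below, assuming only a list of detection time stamps, illustrates that observable; it is not taken from any of the codes being benchmarked.

    ```python
    import numpy as np

    def feynman_y(times, gate_width, t_end):
        """Feynman-Y (excess variance-to-mean ratio) for a train of detection times.

        times      : 1D array of detection time stamps
        gate_width : width of the non-overlapping counting gates (same units)
        t_end      : end of the measurement interval
        Y = var(counts) / mean(counts) - 1, which is 0 for purely Poisson counts.
        """
        edges = np.arange(0.0, t_end + gate_width, gate_width)
        counts, _ = np.histogram(times, bins=edges)
        mean = counts.mean()
        return counts.var() / mean - 1.0 if mean > 0 else 0.0
    ```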

  5. Validating the performance of correlated fission multiplicity implementation in radiation transport codes with subcritical neutron multiplication benchmark experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson

    Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.

  6. An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2007-01-01

    An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.

  7. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  8. Application of the A.C. Admittance Technique to Double Layer Studies on Polycrystalline Gold Electrodes

    DTIC Science & Technology

    1992-02-24

    Unclassified. A detailed examination of the dependence of the a.c. admittance of the double layer at the gold/solution interface. Subject terms: double layer at gold/solution interface, a.c. admittance techniques, constant phase element model. Department of Chemistry, University of California, Davis, CA 95616, U.S.A. (On leave from the Instituto de Fisica e Quimica de Sao Carlos, USP, Sao Carlos, SP 13560.)

  9. Impacts of Model Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  10. Mechanisms for the Dissipation of Alfven Waves in Near-Earth Space Plasma

    NASA Technical Reports Server (NTRS)

    Singh, Nagendra; Khazanov, George; Krivorutsky, E. N.; Davis, John M. (Technical Monitor)

    2002-01-01

    Alfven waves are a major mechanism for the transport of electromagnetic energy from the distant part of the magnetosphere to near-Earth space. This is especially true for the auroral and polar regions of the Earth. However, the mechanisms for their dissipation have remained elusive. One of the mechanisms is the formation of double layers when the current associated with Alfven waves in the inertial regime interacts with density cavities, which either are generated nonlinearly by the waves themselves or are a part of the ambient plasma turbulence. Depending on the strength of the cavities, weak and strong double layers could form. Such double layers are transient; their lifetimes depend on those of the cavities. Thus they impulsively accelerate ions and electrons. Another mechanism is the resonant absorption of broadband Alfven-wave noise by the ions at the ion cyclotron frequencies. But this resonant absorption may not be possible for very low frequency waves, and it may be more suited for electromagnetic ion cyclotron waves. A third mechanism is the excitation of secondary waves by the drifts of electrons and ions in the Alfven wave fields. It is found that under suitable conditions, the relative drifts between different ion species and/or between electrons and ions are large enough to drive lower hybrid waves, which could cause transverse accelerations of ions and parallel accelerations of electrons. This mechanism is being further studied by means of kinetic simulations using 2.5- and 3-D particle-in-cell codes. The ongoing modeling efforts on space weather require quantitative estimates of energy inputs of various kinds, including the electromagnetic energy. The studies described here contribute to methods for estimating the input from ubiquitous Alfven waves.

  11. Reservoir simulation with MUFITS code: Extension for double porosity reservoirs and flows in horizontal wells

    NASA Astrophysics Data System (ADS)

    Afanasyev, Andrey

    2017-04-01

    Numerical modelling of multiphase flows in porous media is necessary in many applications concerning subsurface utilization. An incomplete list of those applications includes oil and gas field exploration, underground carbon dioxide storage and geothermal energy production. The numerical simulations are conducted using complicated computer programs called reservoir simulators. A robust simulator should include a wide range of modelling options covering various exploration techniques, rock and fluid properties, and geological settings. In this work we present a recent development of new options in the MUFITS code [1]. The first option concerns modelling of multiphase flows in double-porosity double-permeability reservoirs. We describe the internal representation of reservoir models in MUFITS, which are constructed as a 3D graph of grid blocks, pipe segments, interfaces, etc. In the case of a double-porosity reservoir, two linked nodes of the graph correspond to each grid cell. We simulate the 6th SPE comparative problem [2] and a five-spot geothermal production problem to validate the option. The second option concerns modelling of flows in a porous medium coupled with flows in horizontal wells that are represented in the 3D graph as a sequence of pipe segments linked with pipe junctions. The well completions link the pipe segments with the reservoir. The hydraulics in the wellbore, i.e. the frictional pressure drop, is calculated in accordance with Haaland's formula. We validate the option against the 7th SPE comparative problem [3]. We acknowledge financial support by the Russian Foundation for Basic Research (project No RFBR-15-31-20585). References [1] Afanasyev, A. MUFITS Reservoir Simulation Software (www.mufits.imec.msu.ru). [2] Firoozabadi A. et al. Sixth SPE Comparative Solution Project: Dual-Porosity Simulators // J. Petrol. Tech. 1990. V.42. N.6. P.710-715. [3] Nghiem L., et al. Seventh SPE Comparative Solution Project: Modelling of Horizontal Wells in Reservoir Simulation // SPE Symp. Res. Sim., 1991. DOI: 10.2118/21221-MS.
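
    The wellbore friction model mentioned above relies on Haaland's explicit approximation to the Colebrook equation for the Darcy friction factor. The snippet below sketches that standard textbook formula; it is illustrative only and is not the MUFITS implementation.

    ```python
    import math

    def haaland_friction_factor(reynolds, roughness, diameter):
        """Darcy friction factor from Haaland's explicit approximation.

        1/sqrt(f) = -1.8 * log10[ (eps/D / 3.7)^1.11 + 6.9/Re ]
        Valid for turbulent pipe flow (roughly Re > 4000).
        """
        rel_rough = roughness / diameter
        inv_sqrt_f = -1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / reynolds)
        return 1.0 / inv_sqrt_f ** 2

    # Example (illustrative values): f = haaland_friction_factor(1e5, 4.5e-5, 0.1)
    ```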

  12. Country Report on Building Energy Codes in Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shui, Bin; Evans, Meredydd; Somasundaram, Sriram

    2009-04-02

    This report is part of a series of reports on building energy efficiency codes in countries associated with the Asian Pacific Partnership (APP) - Australia, South Korea, Japan, China, India, and the United States of America (U.S.). This report gives an overview of the development of building energy codes in Australia, including national energy policies related to building energy codes, history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy codes (such as building envelope, HVAC, and lighting) for commercial and residential buildings in Australia.

  13. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

    Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually along its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary. Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could only be replicated poorly, as a consequence of the simulations being very sensitive to the machine's numerical background noise. In practice, as the particles are simulated for billions and billions of steps, a small difference in the initial conditions due to the finite precision of double-precision reals can have macroscopic effects in the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it, or the hardware architecture where the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been re-written in C++, and there is no longer any need for cross FORTRAN-C++ compilation.
A quadruple-precision version of the code is provided alongside the original double-precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system look neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
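
    The acceptance/rejection collision test described in the solution method can be written in a few lines: a candidate pair collides when a uniform random number falls below the ratio of its collision probability (proportional to cross section times relative speed) to an assumed maximum. The sketch below is a generic Direct Simulation Monte Carlo acceptance step, not code extracted from COOL.

    ```python
    import numpy as np

    rng = np.random.default_rng()

    def accept_collision(v1, v2, cross_section, sigma_vr_max):
        """DSMC-style acceptance test for a candidate collision pair.

        The pair is accepted when a uniform random number falls below
        (sigma * v_rel) / (sigma * v_rel)_max, the standard
        acceptance/rejection criterion of Direct Simulation Monte Carlo.
        """
        v_rel = np.linalg.norm(np.asarray(v1) - np.asarray(v2))
        return rng.random() < (cross_section * v_rel) / sigma_vr_max
    ```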

  14. Neutron production from beam-modifying devices in a modern double scattering proton therapy beam delivery system.

    PubMed

    Pérez-Andújar, Angélica; Newhauser, Wayne D; Deluca, Paul M

    2009-02-21

    In this work, the neutron production in a passive beam delivery system was investigated. Secondary particles including neutrons are created as the proton beam interacts with beam-shaping devices in the treatment head. Stray neutron exposure to the whole body may increase the risk that the patient develops a radiogenic cancer years or decades after radiotherapy. We simulated a passive proton beam delivery system with double scattering technology to determine the neutron production and energy distribution at 200 MeV proton energy. Specifically, we studied the neutron absorbed dose per therapeutic absorbed dose, the neutron absorbed dose per source particle and the neutron energy spectrum at various locations around the nozzle. We also investigated the neutron production along the nozzle's central axis. The absorbed doses and neutron spectra were simulated with the MCNPX Monte Carlo code. The simulations revealed that the range modulation wheel (RMW) is the most intense neutron source of any of the beam spreading devices within the nozzle. This finding suggests that it may be helpful to refine the design of the RMW assembly, e.g., by adding local shielding, to suppress neutron-induced damage to components in the nozzle and to reduce the shielding thickness of the treatment vault. The simulations also revealed that the neutron dose to the patient is dominated by neutrons produced in the field-defining collimator assembly, located just upstream of the patient.

  15. Analysis and Simulation of a Blue Energy Cycle

    DOE PAGES

    Sharma, Ms. Ketki; Kim, Yong-Ha; Yiacoumi, Sotira; ...

    2016-01-30

    The mixing process of fresh water and seawater releases a significant amount of energy and is a potential source of renewable energy. The so-called 'blue energy' or salinity-gradient energy can be harvested by a device consisting of carbon electrodes immersed in an electrolyte solution, based on the principle of capacitive double layer expansion (CDLE). In this study, we have investigated the feasibility of energy production based on the CDLE principle. Experiments and computer simulations were used to study the process. Mesoporous carbon materials, synthesized at the Oak Ridge National Laboratory, were used as electrode materials in the experiments. Neutron imaging of the blue energy cycle was conducted with cylindrical mesoporous carbon electrodes and 0.5 M lithium chloride as the electrolyte solution. For experiments conducted at 0.6 V and 0.9 V applied potential, a voltage increase of 0.061 V and 0.054 V was observed, respectively. From sequences of neutron images obtained for each step of the blue energy cycle, information on the direction and magnitude of lithium ion transport was obtained. A computer code was developed to simulate the process. Experimental data and computer simulations allowed us to predict energy production.

  16. Country Report on Building Energy Codes in Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shui, Bin; Evans, Meredydd

    2009-04-06

    This report is part of a series of reports on building energy efficiency codes in countries associated with the Asian Pacific Partnership (APP) - Australia, South Korea, Japan, China, India, and the United States of America (U.S.). This report gives an overview of the development of building energy codes in Canada, including national energy policies related to building energy codes, history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy codes (such as building envelope, HVAC, lighting, and water heating) for commercial and residential buildings in Canada.

  17. Energy Savings Analysis of the Proposed NYStretch-Energy Code 2018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bing; Zhang, Jian; Chen, Yan

    This study was conducted by the Pacific Northwest National Laboratory (PNNL) in support of the stretch energy code development led by the New York State Energy Research and Development Authority (NYSERDA). In 2017 NYSERDA developed its 2016 Stretch Code Supplement to the 2016 New York State Energy Conservation Construction Code (hereinafter referred to as “NYStretch-Energy”). NYStretch-Energy is intended as a model energy code for statewide voluntary adoption that anticipates other code advancements culminating in the goal of a statewide Net Zero Energy Code by 2028. Since then, NYSERDA has continued to develop the NYStretch-Energy Code 2018 edition. To support the effort, PNNL conducted energy simulation analysis to quantify the energy savings of proposed commercial provisions of the NYStretch-Energy Code (2018) in New York. The focus of this project is the 20% improvement over existing commercial model energy codes. A key requirement of the proposed stretch code is that it be ‘adoptable’ as an energy code, meaning that it must align with current code scope and limitations, and primarily impact building components that are currently regulated by local building departments. It is largely limited to prescriptive measures, which are what most building departments and design projects are most familiar with. This report describes a set of energy-efficiency measures (EEMs) that demonstrate 20% energy savings over ANSI/ASHRAE/IES Standard 90.1-2013 (ASHRAE 2013) across a broad range of commercial building types and all three climate zones in New York. In collaboration with the New Building Institute, the EEMs were developed from national model codes and standards, high-performance building codes and standards, regional energy codes, and measures being proposed as part of the on-going code development process. PNNL analyzed these measures using whole-building energy models for selected prototype commercial buildings and multifamily buildings representing buildings in New York. Section 2 of this report describes the analysis methodology, including the building types and construction area weights update for this analysis, the baseline, and the method to conduct the energy saving analysis. Section 3 provides detailed specifications of the EEMs and bundles. Section 4 summarizes the results of individual EEMs and EEM bundles by building type, energy end-use and climate zone. Appendix A documents detailed descriptions of the selected prototype buildings. Appendix B provides energy end-use breakdown results by building type for both the baseline code and stretch code in all climate zones.

  18. Numerical Simulation of a Double-anode Magnetron Injection Gun for 110 GHz, 1 MW Gyrotron

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Kumar, Nitin; Purohit, L. P.; Sinha, Ashok K.

    2010-07-01

    A 40 A double-anode magnetron injection gun for a 1 MW, 110 GHz gyrotron has been designed. The preliminary design has been obtained by using some trade-off equations. The electron beam analysis has been performed by using the commercially available code EGUN and the in-house developed code MIGANS. The operating mode of the gyrotron is TE22,6 and it is operated in the fundamental harmonic. An electron beam with a low transverse velocity spread (δβ⊥,max = 2.26%) and a transverse-to-axial velocity ratio α = 1.37 is obtained. The simulated results of the MIG obtained with the EGUN code have been validated with another trajectory code, TRAK. The results on the design output parameters obtained by both codes are in good agreement. The sensitivity analysis has been carried out by changing the different gun parameters to determine the fabrication tolerances.

  19. Transformations, Inc. Net Zero Energy Communities, Devens, Easthampton, Townsend, Massachusetts (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-11-01

    In 2009, Transformations, Inc. partnered with U.S. Department of Energy (DOE) Building America team Building Science Corporation (BSC) to build new net zero energy houses in three developments in Massachusetts. The company has been developing strategies for cost-effective super-insulated homes in the New England market since 2006. After years of using various construction techniques, it has developed a specific set of assemblies and specifications that achieve a 44.9% reduction in energy use compared with a home built to the 2009 International Residential Code, qualifying the houses for the DOE's Challenge Home. The super-insulated houses provide data for several research topics in a cold climate. BSC studied the moisture risks in double stud walls insulated with open cell spray foam and cellulose. The mini-split air source heat pump (ASHP) research focused on the range of temperatures experienced in bedrooms as well as the homeowners' perceptions of equipment performance. BSC also examined the developer's financing options for the photovoltaic (PV) systems, which take advantage of Solar Renewable Energy Certificates, local incentives, and state and federal tax credits.

  20. Towards robust algorithms for current deposition and dynamic load-balancing in a GPU particle in cell code

    NASA Astrophysics Data System (ADS)

    Rossi, Francesco; Londrillo, Pasquale; Sgattoni, Andrea; Sinigardi, Stefano; Turchetti, Giorgio

    2012-12-01

    We present `jasmine', an implementation of a fully relativistic, 3D, electromagnetic Particle-In-Cell (PIC) code, capable of running simulations in various laser plasma acceleration regimes on Graphics-Processing-Unit (GPU) HPC clusters. Standard energy/charge preserving FDTD-based algorithms have been implemented using double precision and quadratic (or arbitrary-order) shape functions for the particle weighting. When porting a PIC scheme to the GPU architecture (or, in general, a shared memory environment), the particle-to-grid operations (e.g. the evaluation of the current density) require special care to avoid memory inconsistencies and conflicts. Here we present a robust implementation of this operation that is efficient for any number of particles per cell and particle shape function order. Our algorithm exploits the exposed GPU memory hierarchy and avoids the use of atomic operations, which can hurt performance especially when many particles lie in the same cell. We show the code's multi-GPU scalability results and present a dynamic load-balancing algorithm. The code is written using a python-based C++ meta-programming technique, which translates into a high level of modularity and allows for easy performance tuning and simple extension of the core algorithms to various simulation schemes.
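
    The particle-to-grid scatter singled out above as the troublesome step on shared-memory hardware looks, in its simplest serial 1D form, like the cloud-in-cell deposition below. On a GPU many particles can update the same grid cell concurrently, which is exactly the write conflict the paper's algorithm is designed to avoid; that conflict-free algorithm is not reproduced here.

    ```python
    import numpy as np

    def deposit_charge_cic(positions, weights, n_cells, dx):
        """1D cloud-in-cell (linear shape function) charge deposition.

        Each particle spreads its weight between the two nearest grid points
        of a periodic grid; returns the charge density on the grid.
        In a serial loop this is safe; in a shared-memory/GPU version the two
        '+=' scatters below are the operations that can collide between threads.
        """
        rho = np.zeros(n_cells)
        for x, w in zip(positions, weights):
            s = x / dx
            i = int(np.floor(s))
            frac = s - i
            rho[i % n_cells] += w * (1.0 - frac)
            rho[(i + 1) % n_cells] += w * frac
        return rho / dx
    ```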

  1. Oil and Gas field code master list 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This is the fourteenth annual edition of the Energy Information Administration's (EIA) Oil and Gas Field Code Master List. It reflects data collected through October 1995 and provides standardized field name spellings and codes for all identified oil and/or gas fields in the US. The Field Code Index, a listing of all field names and the States in which they occur, ordered by field code, has been removed from this year's publications to reduce printing and postage costs. Complete copies (including the Field Code Index) will be available on the EIA CD-ROM and the EIA World-Wide Web Site. Future editions of the complete Master List will be available on CD-ROM and other electronic media. There are 57,400 field records in this year's Oil and Gas Field Code Master List. As it is maintained by EIA, the Master List includes the following: field records for each State and county in which a field resides; field records for each offshore area block in the Gulf of Mexico in which a field resides; field records for each alias field name (see definition of alias below); and fields crossing State boundaries that may be assigned different names by the respective State naming authorities. Taking into consideration the double-counting of fields under such circumstances, EIA identifies 46,312 distinct fields in the US as of October 1995. This count includes fields that no longer produce oil or gas, and 383 fields used in whole or in part for oil or gas storage. 11 figs., 6 tabs.

  2. Analysis and Simulation of a Blue Energy Cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Ms. Ketki; Kim, Yong-Ha; Yiacoumi, Sotira

    The mixing process of fresh water and seawater releases a significant amount of energy and is a potential source of renewable energy. The so-called 'blue energy' or salinity-gradient energy can be harvested by a device consisting of carbon electrodes immersed in an electrolyte solution, based on the principle of capacitive double layer expansion (CDLE). In this study, we have investigated the feasibility of energy production based on the CDLE principle. Experiments and computer simulations were used to study the process. Mesoporous carbon materials, synthesized at the Oak Ridge National Laboratory, were used as electrode materials in the experiments. Neutron imaging of the blue energy cycle was conducted with cylindrical mesoporous carbon electrodes and 0.5 M lithium chloride as the electrolyte solution. For experiments conducted at 0.6 V and 0.9 V applied potential, a voltage increase of 0.061 V and 0.054 V was observed, respectively. From sequences of neutron images obtained for each step of the blue energy cycle, information on the direction and magnitude of lithium ion transport was obtained. A computer code was developed to simulate the process. Experimental data and computer simulations allowed us to predict energy production.

  3. Track structure of protons and other light ions in liquid water: applications of the LIonTrack code at the nanometer scale.

    PubMed

    Bäckström, G; Galassi, M E; Tilly, N; Ahnesjö, A; Fernández-Varea, J M

    2013-06-01

    The LIonTrack (Light Ion Track) Monte Carlo (MC) code for the simulation of H(+), He(2+), and other light ions in liquid water is presented together with the results of a novel investigation of energy-deposition site properties from single-ion tracks. The continuum distorted-wave formalism with the eikonal initial state approximation (CDW-EIS) is employed to generate the initial energy and angle of the electrons emitted in ionizing collisions of the ions with H2O molecules. The model of Dingfelder et al. ["Electron inelastic-scattering cross sections in liquid water," Radiat. Phys. Chem. 53, 1-18 (1998); "Comparisons of calculations with PARTRAC and NOREC: Transport of electrons in liquid water," Radiat. Res. 169, 584-594 (2008)] is linked to the general-purpose MC code PENELOPE/penEasy to simulate the inelastic interactions of the secondary electrons in liquid water. In this way, the extended PENELOPE/penEasy code may provide an improved description of the 3D distribution of energy deposits (EDs), making it suitable for applications at the micrometer and nanometer scales. Single-ionization cross sections calculated with the ab initio CDW-EIS formalism are compared to available experimental values, some of them reported very recently, and the theoretical electronic stopping powers are benchmarked against those recommended by the ICRU. The authors also analyze distinct aspects of the spatial patterns of EDs, such as the frequency of nearest-neighbor distances for various radiation qualities, and the variation of the mean specific energy imparted in nanoscopic targets located around the track. For 1 MeV/u particles, the C(6+) ions generate about 15 times more clusters of six EDs within an ED distance of 3 nm than H(+). On average, clusters of two to three EDs for 1 MeV/u H(+) and of four to five EDs for 1 MeV/u C(6+) can be expected within 3.4 nm, a distance used to model double-strand breaks.
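
    The nearest-neighbor distance frequencies analyzed in the paper can be obtained from a list of 3D energy-deposition coordinates with a standard spatial query. The sketch below uses scipy's KD-tree purely for illustration; it is not the LIonTrack analysis code.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbor_distances(points):
        """Distance from each energy deposit to its closest neighbor.

        points : (N, 3) array of energy-deposition coordinates (e.g. in nm).
        """
        tree = cKDTree(points)
        # k=2 because the first "neighbor" of each point is the point itself
        dists, _ = tree.query(points, k=2)
        return dists[:, 1]

    # Example: frequency histogram of nearest-neighbor distances
    # freq, edges = np.histogram(nearest_neighbor_distances(ed_coords), bins=50)
    ```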

  4. Energy Codes at a Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pamala C.; Richman, Eric E.

    2008-09-01

    Feeling dim from energy code confusion? Read on to give your inspections a charge. The U.S. Department of Energy’s Building Energy Codes Program addresses hundreds of inquiries from the energy codes community every year. This article offers clarification for topics of confusion submitted to BECP Technical Support that are of interest to electrical inspectors, focusing on the residential and commercial energy code requirements based on the most recently published 2006 International Energy Conservation Code® and ANSI/ASHRAE/IESNA Standard 90.1-2004.

  5. Numerical Modeling and Testing of an Inductively-Driven and High-Energy Pulsed Plasma Thrusters

    NASA Technical Reports Server (NTRS)

    Parma, Brian

    2004-01-01

    Pulsed Plasma Thrusters (PPTs) are advanced electric space propulsion devices that are characterized by simplicity and robustness. They suffer, however, from low thrust efficiencies. This summer, two approaches to improve the thrust efficiency of PPTs will be investigated through both numerical modeling and experimental testing. The first approach, an inductively-driven PPT, uses a double-ignition circuit to fire two PPTs in succession. This effectively changes the PPT's configuration from an LRC circuit to an LR circuit. The LR circuit is expected to provide better impedance matching and to improve the efficiency of the energy transfer to the plasma. An added benefit of the LR circuit is an exponential decay of the current, whereas a traditional PPT's underdamped LRC circuit experiences the characteristic "ringing" of its current. The exponential decay may provide improved lifetime and sustained electromagnetic acceleration. The second approach, a high-energy PPT, is a traditional PPT with a variable-size capacitor bank. This PPT will be simulated and tested at energy levels between 100 and 450 joules in order to investigate the relationship between efficiency and energy level. The Arbitrary Coordinate Hydromagnetic (MACH2) code is used. The MACH2 code, designed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory, has been used to gain insight into a variety of plasma problems, including electric plasma thrusters. The goals for this summer include numerical predictions of performance for both the inductively-driven PPT and the high-energy PPT, experimental validation of the numerical models, and numerical optimization of the designs. These goals will be met through numerical and experimental investigation of the PPTs' current waveforms, mass loss (or ablation), and impulse-bit characteristics.
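
    The contrast drawn above between the ringing of an underdamped LRC discharge and the smooth exponential decay of an LR circuit can be illustrated with the textbook waveforms below. The circuit values are made up for illustration and are not taken from the thruster design.

    ```python
    import numpy as np

    def lr_current(t, i0, R, L):
        """Current in an LR discharge: pure exponential decay."""
        return i0 * np.exp(-R * t / L)

    def underdamped_lrc_current(t, v0, R, L, C):
        """Current in an underdamped series LRC discharge: damped ringing."""
        alpha = R / (2.0 * L)                    # damping rate
        omega0 = 1.0 / np.sqrt(L * C)            # undamped angular frequency
        omega_d = np.sqrt(omega0**2 - alpha**2)  # ringing frequency (needs omega0 > alpha)
        return (v0 / (omega_d * L)) * np.exp(-alpha * t) * np.sin(omega_d * t)

    # Illustrative values only
    t = np.linspace(0.0, 20e-6, 500)
    ring = underdamped_lrc_current(t, v0=1500.0, R=0.03, L=100e-9, C=30e-6)
    decay = lr_current(t, i0=ring.max(), R=0.03, L=100e-9)
    ```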

  6. Theoretical studies of Resonance Enhanced Stimulated Raman Scattering (RESRS) of frequency doubled Alexandrite laser wavelength in cesium vapor

    NASA Technical Reports Server (NTRS)

    Lawandy, Nabil M.

    1987-01-01

    The third phase of research will focus on the propagation and energy extraction of the pump and SERS beams in a variety of configurations, including oscillator structures. In order to address these questions, a numerical code capable of allowing for saturation and full transverse beam evolution is required. The method proposed is based on a discretized propagation/energy-extraction model which uses a Kirchhoff integral propagator coupled to the three-level Raman model already developed. The model will have the resolution required by diffraction limits and will use the previous density-matrix results in the adiabatic-following limit. Owing to its large computational requirements, such a code must be implemented on a vector array processor. One code on the Cyber is being tested by using previously understood two-level laser models as guidelines for interpreting the results. Two tests were implemented: the evolution of modes in a passive resonator and the evolution of a stable state of the adiabatically eliminated laser equations. These results show mode shapes and diffraction losses for the first case and relaxation oscillations for the second one. Finally, in order to clarify the computing methodology used to exploit the Cyber's computational speed, the time required to run both of the computations mentioned above on the Cyber and on the VAX 730 must be measured. Also included is a short description of the current laser model (CAVITY.FOR) and a flow chart of the test computations.

  7. Building Energy Codes: Policy Overview and Good Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Sadie

    2016-02-19

    Globally, 32% of total final energy consumption is attributed to the building sector. To reduce energy consumption, energy codes set minimum energy efficiency standards for the building sector. With effective implementation, building energy codes can support energy cost savings and complementary benefits associated with electricity reliability, air quality improvement, greenhouse gas emission reduction, increased comfort, and economic and social development. This policy brief seeks to support building code policymakers and implementers in designing effective building code programs.

  8. Double emittance exchanger as a bunch compressor for the MaRIE XFEL electron beam line at 1 GeV

    NASA Astrophysics Data System (ADS)

    Malyzhenkov, Alexander; Carlsten, Bruce E.; Yampolsky, Nikolai A.

    2017-03-01

    We demonstrate an alternative realization of a bunch compressor (specifically, the second bunch compressor for the MaRIE XFEL beamline, 1GeV electron energy) using a double emittance exchanger (EEX) and a telescope in the transverse phase space. We compare our results with a traditional bunch compressor realized via a chicane, taking into account the nonlinear dynamics, Coherent Synchrotron Radiation (CSR) and Space Charge (SC) effects. In particular, we use the Elegant code for tracking particles through the beamline, and analyze the evolution of the eigen-emittances to separate the influence of the CSR/SC effects from the nonlinear dynamics effects. We optimize the scheme parameters to reach a desirable compression factor and minimize the emittance growth. We observe dominant CSR effects in our scheme, resulting in critical emittance growth, and introduce an alternative version of an emittance exchanger with a reduced number of bending magnets to minimize the impact of CSR effects.

  9. Double differential light charged particle emission cross sections for some structural fusion materials

    NASA Astrophysics Data System (ADS)

    Sarpün, Ismail Hakki; Aydın, Abdullah; Tel, Eyyup

    2017-09-01

    In fusion reactors, neutron-induced radioactivity strongly depends on the irradiated material, so a proper selection of structural materials can limit the radioactive inventory in a fusion reactor. First-wall and blanket components have high radioactivity concentrations because they are the most flux-exposed structures. The main objective of fusion structural material research is the development and selection of materials for reactor components with good thermo-mechanical and physical properties, coupled with low-activation characteristics. Double-differential light charged particle emission cross sections, which are fundamental data for determining nuclear heating and material damage in structural fusion material research, have been calculated for several target nuclei with the TALYS 1.8 nuclear reaction code at 14-15 MeV incident neutron energy and compared with available experimental data from the EXFOR library. The direct, compound, and pre-equilibrium reaction contributions have been calculated theoretically, and the dominant contribution has been determined for proton, deuteron, and alpha-particle emission.

  10. Double Emittance Exchanger as a Bunch Compressor for the MaRIE XFEL electron beam line at 1GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyzhenkov, Alexander; Yampolsky, Nikolai; Carlsten, Bruce Eric

    We demonstrate an alternative realization of a bunch compressor (specifically the second bunch compressor for the MaRIE XFEL beamline, 1 GeV electron energy) using a double emittance exchanger (EEX) and a telescope in the transverse phase space. We compare our results with a traditional bunch compressor realized via a chicane, taking into account the nonlinear dynamics, Coherent Synchrotron Radiation (CSR) and Space Charge (SC) effects. In particular, we use the Elegant code for tracking particles through the beam line and analyze the evolution of the eigen-emittances to separate the influence of the CSR/SC effects from the nonlinear dynamics effects. We optimize the scheme parameters to reach a desirable compression factor and minimize the emittance growth. We observe dominant CSR effects in our scheme, resulting in critical emittance growth, and introduce an alternative version of an emittance exchanger with a reduced number of bending magnets to minimize the impact of CSR effects.

  11. An international survey of building energy codes and their implementation

    DOE PAGES

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    2017-08-01

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy use in buildings. These elements and practices include: comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and building materials that are independently tested, rated, and labeled. Training and supporting tools are another element of successful code implementation. Some countries have also introduced compliance evaluation studies, which suggested that tightening energy requirements would only be meaningful when also addressing gaps in implementation (Pitt & Sherry, 2014; U.S. DOE, 2016b). This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  12. An international survey of building energy codes and their implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Roshchanka, Volha; Graham, Peter

    Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy use in buildings. These elements and practices include: comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and building materials that are independently tested, rated, and labeled. Training and supporting tools are another element of successful code implementation. Some countries have also introduced compliance evaluation studies, which suggested that tightening energy requirements would only be meaningful when also addressing gaps in implementation (Pitt & Sherry, 2014; U.S. DOE, 2016b). This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.

  13. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams.

    PubMed

    Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo

    2016-07-01

    The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the fluka code [A. Ferrari et al., "fluka: A multi-particle transport code," in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., "The fluka Code: Developments and challenges for high energy and medical applications," Nucl. Data Sheets 120, 211-214 (2014)], to partial fluence corrections measured experimentally. A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary particle fluence. A correction factor, F(d), has been established to relate fluence corrections defined theoretically to partial fluence corrections derived experimentally. The findings presented here are also relevant to water and tissue-equivalent-plastic materials given their carbon content.

  14. Double-layer neutron shield design as neutron shielding application

    NASA Astrophysics Data System (ADS)

    Sariyer, Demet; Küçer, Rahmi

    2018-02-01

    The shield design in particle accelerators and other high-energy facilities is mainly driven by high-energy neutrons. The deep penetration of neutrons through a massive shield is a serious problem. For shielding to be efficient, most of these neutrons should be confined to the shielding volume. When the interior space is limited, a multilayer shield of sufficient thickness must be used. Concrete and iron are widely used as multilayer shield materials. A two-layer shield was selected to guarantee radiation safety outside the shield against neutrons generated in interactions at different proton energies. The first layer was one meter of concrete; the second was an iron-containing material (FeB, Fe2B, or stainless steel) whose thickness was to be determined. The FLUKA Monte Carlo code was used for the shield design geometry and the required neutron dose distributions. The resulting two-layer shields show better performance than concrete alone, so the design could leave more space in the interior shielded areas.

  15. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system

    PubMed Central

    Ruymgaart, A. Peter; Cardenas, Alfredo E.; Elber, Ron

    2011-01-01

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a Graphics Processing Unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Energy conservation is especially critical for long simulations because of the phenomenon known as "energy drift," in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift, using a 1 fs time step while constraining the distances of all bonds, is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code. PMID:22328867
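
    As a rough illustration of the lookup-table trade-off mentioned above (this sketch is not taken from the MOIL source; the pair potential, grid sizes, and names are invented), the following compares linear interpolation with three-point quadratic interpolation in a tabulated nonbonded potential:

        import numpy as np

        def lookup_linear(r_grid, f_grid, r):
            """Linear interpolation in the tabulated values."""
            return np.interp(r, r_grid, f_grid)

        def lookup_quadratic(r_grid, f_grid, r):
            """Three-point (quadratic) interpolation around the nearest interior grid point."""
            h = r_grid[1] - r_grid[0]
            i = np.clip(((r - r_grid[0]) / h).astype(int), 1, len(r_grid) - 2)
            x = (r - r_grid[i]) / h                      # fractional offset from grid point i
            fm, f0, fp = f_grid[i - 1], f_grid[i], f_grid[i + 1]
            return f0 + 0.5 * x * (fp - fm) + 0.5 * x**2 * (fp - 2.0 * f0 + fm)

        lj = lambda r: 4.0 * (r**-12 - r**-6)            # Lennard-Jones 6-12, reduced units
        r_test = np.random.uniform(0.9, 2.5, 10000)
        for n in (64, 256, 1024):                        # hypothetical table sizes
            r_grid = np.linspace(0.8, 3.0, n)
            f_grid = lj(r_grid)
            err_lin = np.max(np.abs(lookup_linear(r_grid, f_grid, r_test) - lj(r_test)))
            err_quad = np.max(np.abs(lookup_quadratic(r_grid, f_grid, r_test) - lj(r_test)))
            print(f"n={n:5d}  max error: linear={err_lin:.2e}  quadratic={err_quad:.2e}")

    Consistent with the observation in the abstract, the quadratic table reaches a given accuracy with fewer grid points than the linear one.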

  16. Energy levels of double triangular graphene quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, F. X.; Jiang, Z. T., E-mail: ztjiang616@hotmail.com; Zhang, H. Y.

    2014-09-28

    We investigate theoretically the energy levels of the coupled double triangular graphene quantum dots (GQDs) based on the tight-binding Hamiltonian model. The double GQDs, including the ZZ-type, ZA-type, and AA-type GQDs with the two GQDs having zigzag or armchair boundaries, can be coupled together via different interdot connections, such as direct coupling, chains of benzene rings, and chains of carbon atoms. It is shown that the energy spectrum of the coupled double GQDs is the amalgamation of the spectra of the corresponding two isolated GQDs with modifications triggered by the interdot connections. The interdot connection is inclined to lift the degeneracies of the energy levels to different degrees, and as the connection changes from direct coupling to long chains, the removal of energy degeneracies is suppressed in ZZ-type and AA-type double GQDs, which indicates that the two coupled GQDs tend to become decoupled. Then we consider the influences on the spectra of the coupled double GQDs induced by electric fields applied on the GQDs or the connection, which manifest as a global spectrum redistribution or a local energy level shift. Finally, we study the symmetrical and asymmetrical energy spectra of the double GQDs caused by the substrates supporting the two GQDs, clearly demonstrating how the substrates affect the double GQDs' spectrum. This research elucidates the energy spectra of the coupled double GQDs, as well as the mechanisms for manipulating them by the electric field and the substrates, which would be a significant reference for designing GQD-based devices.

  17. Design of a double-anode magnetron-injection gun for the W-band gyrotron

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Ho; Choi, Jin Joo; So, Joon Ho

    2015-07-01

    A double-anode magnetron-injection gun (MIG) was designed for a W-band 10-kW gyrotron. Analytic equations based on adiabatic theory and angular momentum conservation were used to set the initial design parameters, such as the cathode angle and the radius of the beam-emitting surface. The MIG's performance was predicted with an electron trajectory code, EGUN. The axial velocity spread, Δvz/vz, obtained from the EGUN code was 1.34% at α = 1.3. Cathode edge emission and thermal effects were modeled; cathode edge emission was found to have the major effect on the velocity spread. The electron beam quality was significantly improved by affixing non-emissive cylinders to the cathode.

  18. Bilingual Voicing: A Study of Code-Switching in the Reported Speech of Finnish Immigrants in Estonia

    ERIC Educational Resources Information Center

    Frick, Maria; Riionheimo, Helka

    2013-01-01

    Through a conversation analytic investigation of Finnish-Estonian bilingual (direct) reported speech (i.e., voicing) by Finns who live in Estonia, this study shows how code-switching is used as a double contextualization device. The code-switched voicings are shaped by the on-going interactional situation, serving its needs by opening up a context…

  19. New Whole-House Case Study: Transformations, Inc. Net Zero Energy Communities, Devens, Easthampton, Townsend, Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-11-01

    In 2009, Transformations, Inc. partnered with Building America team Building Science Corporation (BSC) to build new net zero energy houses in three developments in Massachusetts. The company has been developing strategies for cost-effective super-insulated homes in the New England market since 2006. After years of using various construction techniques, it has developed a specific set of assemblies and specifications that achieve a 44.9% reduction in energy use compared with a home built to the 2009 International Residential Code, qualifying the houses for the DOE’s Challenge Home. The super-insulated houses provide data for several research topics in a cold climate. BSC studied the moisture risks in double stud walls insulated with open cell spray foam and cellulose. The mini-split air source heat pump (ASHP) research focused on the range of temperatures experienced in bedrooms as well as the homeowners’ perceptions of equipment performance. BSC also examined the developer’s financing options for the photovoltaic (PV) systems, which take advantage of Solar Renewable Energy Certificates, local incentives, and state and federal tax credits.

  20. A long-term, integrated impact assessment of alternative building energy code scenarios in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Eom, Jiyong; Evans, Meredydd

    2014-04-01

    China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption. Beginning in the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions by using a detailed building energy model that represents four distinct climate zones each with three building types, nested in a long-term integrated assessment framework GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13% - 22% depending on building code scenarios, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement.

  1. Elastic and Inelastic Scattering of Neutrons from Neon and Argon: Impact on Neutrinoless Double-Beta Decay and Dark Matter Experimental Programs

    NASA Astrophysics Data System (ADS)

    MacMullin, Sean Patrick

    In underground physics experiments, such as neutrinoless double-beta decay and dark matter searches, fast neutrons may be the dominant and potentially irreducible source of background. Experimental data for the elastic and inelastic scattering cross sections of neutrons from argon and neon, which are target and shielding materials of interest to the dark matter and neutrinoless double-beta decay communities, were previously unavailable. Unmeasured neutron scattering cross sections are often accounted for incorrectly in Monte-Carlo simulations. Elastic scattering cross sections were measured at the Triangle Universities Nuclear Laboratory (TUNL) using the neutron time-of-flight technique. Angular distributions for neon were measured at 5.0 and 8.0 MeV. One full angular distribution was measured for argon at 6.0 MeV. The cross-section data were compared to calculations using a global optical model. Data were also fit using the spherical optical model. These model fits were used to predict the elastic scattering cross section at unmeasured energies and also provide a benchmark where the global optical models are not well constrained. Partial gamma-ray production cross sections for (n,xnγ) reactions in natural argon and neon were measured using the broad spectrum neutron beam at the Los Alamos Neutron Science Center (LANSCE). Neutron energies were determined using time of flight and resulting gamma rays from neutron-induced reactions were detected using the GErmanium Array for Neutron Induced Excitations (GEANIE). Partial gamma-ray production cross sections for six transitions in 40Ar, two transitions in 39Ar, and the first excited-state transitions in 20Ne and 22Ne were measured from threshold to a neutron energy where the gamma-ray yield dropped below the detection sensitivity. Measured (n,xnγ) cross sections were compared with calculations using the TALYS and CoH3 nuclear reaction codes. These new measurements will help to identify potential backgrounds in neutrinoless double-beta decay and dark matter experiments that use argon or neon. The measurements will also aid in the identification of neutron interactions in these experiments through the detection of gamma rays produced by (n,xnγ) reactions.

  2. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...

  3. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Explanation of numbering system for codes. 434.99 Section 434.99 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS § 434.99 Explanation of numbering system for codes. (a) For...

  4. Potential Job Creation in Rhode Island as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  5. Potential Job Creation in Minnesota as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  6. Potential Job Creation in Tennessee as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  7. Potential Job Creation in Nevada as a Result of Adopting New Residential Building Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Michael J.; Niemeyer, Jackie M.

    Are there advantages to states that adopt the most recent model building energy codes other than saving energy? For example, can the construction activity and energy savings associated with code-compliant housing units become significant sources of job creation for states if new building energy codes are adopted to cover residential construction? The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) asked Pacific Northwest National Laboratory (PNNL) to research and ascertain whether jobs would be created in individual states based on their adoption of model building energy codes. Each state in the country is dealing with high levels of unemployment, so job creation has become a top priority. Many programs have been created to combat unemployment with various degrees of failure and success. At the same time, many states still have not yet adopted the most current versions of the International Energy Conservation Code (IECC) model building energy code, when doing so could be a very effective tool in creating jobs to assist states in recovering from this economic downturn.

  8. Neutron production from beam-modifying devices in a modern double scattering proton therapy beam delivery system

    PubMed Central

    Pérez-Andújar, Angélica; Newhauser, Wayne D; DeLuca, Paul M

    2014-01-01

    In this work the neutron production in a passive beam delivery system was investigated. Secondary particles including neutrons are created as the proton beam interacts with beam shaping devices in the treatment head. Stray neutron exposure to the whole body may increase the risk that the patient develops a radiogenic cancer years or decades after radiotherapy. We simulated a passive proton beam delivery system with double scattering technology to determine the neutron production and energy distribution at 200 MeV proton energy. Specifically, we studied the neutron absorbed dose per therapeutic absorbed dose, the neutron absorbed dose per source particle and the neutron energy spectrum at various locations around the nozzle. We also investigated the neutron production along the nozzle's central axis. The absorbed doses and neutron spectra were simulated with the MCNPX Monte Carlo code. The simulations revealed that the range modulation wheel (RMW) is the most intense neutron source of any of the beam spreading devices within the nozzle. This finding suggests that it may be helpful to refine the design of the RMW assembly, e.g., by adding local shielding, to suppress neutron-induced damage to components in the nozzle and to reduce the shielding thickness of the treatment vault. The simulations also revealed that the neutron dose to the patient is dominated by neutrons produced in the field-defining collimator assembly, located just upstream of the patient. PMID:19147903

  9. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    NASA Astrophysics Data System (ADS)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  10. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals.

    PubMed

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  11. Nuclear Computational Low Energy Initiative (NUCLEI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Sanjay K.

    This is the final report for the University of Washington for the NUCLEI SciDAC-3. The NUCLEI project, as defined by the scope of work, will develop, implement and run codes for large-scale computations of many topics in low-energy nuclear physics. The physics to be studied includes the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  12. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    PubMed

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
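
    For orientation, the inter-rater reliability figures quoted above can be approximated with a simple percent-agreement calculation over double-coded checklist items; the values below are invented, and the study does not specify that this exact statistic was used:

        def percent_agreement(coder_a, coder_b):
            """Fraction of checklist items coded identically by two raters."""
            assert len(coder_a) == len(coder_b)
            return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

        # Hypothetical binary checklist (1 = skill observed) for one double-coded video.
        rater_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
        rater_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
        print(f"agreement = {percent_agreement(rater_1, rater_2):.0%}")   # -> 80%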

  13. Secondary Neutron Doses to Pediatric Patients During Intracranial Proton Therapy: Monte Carlo Simulation of the Neutron Energy Spectrum and its Organ Doses.

    PubMed

    Matsumoto, Shinnosuke; Koba, Yusuke; Kohno, Ryosuke; Lee, Choonsik; Bolch, Wesley E; Kai, Michiaki

    2016-04-01

    Proton therapy has the physical advantage of a Bragg peak that can provide a better dose distribution than conventional x-ray therapy. However, radiation exposure of normal tissues cannot be ignored because it is likely to increase the risk of secondary cancer. Evaluating secondary neutrons generated by the interaction of the proton beam with the treatment beam-line structure is necessary; thus, performing the optimization of radiation protection in proton therapy is required. In this research, the organ dose and energy spectrum were calculated from secondary neutrons using Monte Carlo simulations. The Monte Carlo code known as the Particle and Heavy Ion Transport code System (PHITS) was used to simulate the transport proton and its interaction with the treatment beam-line structure that modeled the double scattering body of the treatment nozzle at the National Cancer Center Hospital East. The doses of the organs in a hybrid computational phantom simulating a 5-y-old boy were calculated. In general, secondary neutron doses were found to decrease with increasing distance to the treatment field. Secondary neutron energy spectra were characterized by incident neutrons with three energy peaks: 1×10, 1, and 100 MeV. A block collimator and a patient collimator contributed significantly to organ doses. In particular, the secondary neutrons from the patient collimator were 30 times higher than those from the first scatter. These results suggested that proactive protection will be required in the design of the treatment beam-line structures and that organ doses from secondary neutrons may be able to be reduced.

  14. New Modeling Approaches to Study DNA Damage by the Direct and Indirect Effects of Ionizing Radiation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2012-01-01

    DNA is damaged both by the direct and indirect effects of radiation. In the direct effect, the DNA itself is ionized, whereas the indirect effect involves the radiolysis of the water molecules surrounding the DNA and the subsequent reaction of the DNA with radical products. While this problem has been studied for many years, many unknowns still exist. To study this problem, we have developed the computer code RITRACKS [1], which simulates the radiation track structure for heavy ions and electrons, calculating all energy deposition events and the coordinates of all species produced by the water radiolysis. In this work, we plan to simulate DNA damage by using the crystal structure of a nucleosome and calculations performed by RITRACKS. The energy deposition events are used to calculate the dose deposited in nanovolumes [2] and therefore can be used to simulate the direct effect of the radiation. Using the positions of the radiolytic species with a radiation chemistry code [3] it will be possible to simulate DNA damage by indirect effect. The simulation results can be compared with results from previous calculations such as the frequencies of simple and complex strand breaks [4] and with newer experimental data using surrogate markers of DNA double-strand breaks such as γ-H2AX foci [5].

  15. Study of anyon condensation and topological phase transitions from a Z4 topological phase using the projected entangled pair states approach

    NASA Astrophysics Data System (ADS)

    Iqbal, Mohsin; Duivenvoorden, Kasper; Schuch, Norbert

    2018-05-01

    We use projected entangled pair states (PEPS) to study topological quantum phase transitions. The local description of topological order in the PEPS formalism allows us to set up order parameters which measure condensation and deconfinement of anyons and serve as substitutes for conventional order parameters. We apply these order parameters, together with anyon-anyon correlation functions and some further probes, to characterize topological phases and phase transitions within a family of models based on a Z4 symmetry, which contains Z4 quantum double, toric code, double semion, and trivial phases. We find a diverse phase diagram which exhibits a variety of different phase transitions of both first and second order which we comprehensively characterize, including direct transitions between the toric code and the double semion phase.

  16. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  17. National Cost-effectiveness of ASHRAE Standard 90.1-2010 Compared to ASHRAE Standard 90.1-2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian; Halverson, Mark A.; Myer, Michael

    Pacific Northwest National Laboratory (PNNL) completed this project for the U.S. Department of Energy’s (DOE’s) Building Energy Codes Program (BECP). DOE’s BECP supports upgrading building energy codes and standards, and the states’ adoption, implementation, and enforcement of upgraded codes and standards. Building energy codes and standards set minimum requirements for energy-efficient design and construction for new and renovated buildings, and impact energy use and greenhouse gas emissions for the life of buildings. Continuous improvement of building energy efficiency is achieved by periodically upgrading energy codes and standards. Ensuring that changes in the code that may alter costs (for building components, initial purchase and installation, replacement, maintenance and energy) are cost-effective encourages their acceptance and implementation. ANSI/ASHRAE/IESNA Standard 90.1 is the energy standard for commercial and multi-family residential buildings over three floors.

  18. Cost-effectiveness of ASHRAE Standard 90.1-2010 Compared to ASHRAE Standard 90.1-2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian A.; Halverson, Mark A.; Myer, Michael

    Pacific Northwest National Laboratory (PNNL) completed this project for the U.S. Department of Energy’s (DOE’s) Building Energy Codes Program (BECP). DOE’s BECP supports upgrading building energy codes and standards, and the states’ adoption, implementation, and enforcement of upgraded codes and standards. Building energy codes and standards set minimum requirements for energy-efficient design and construction for new and renovated buildings, and impact energy use and greenhouse gas emissions for the life of buildings. Continuous improvement of building energy efficiency is achieved by periodically upgrading energy codes and standards. Ensuring that changes in the code that may alter costs (for building components, initial purchase and installation, replacement, maintenance and energy) are cost-effective encourages their acceptance and implementation. ANSI/ASHRAE/IESNA Standard 90.1 is the energy standard for commercial and multi-family residential buildings over three floors.

  19. Laser Energy Monitor for Double-Pulsed 2-Micrometer IPDA Lidar Application

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong; Singh, Upendra N.

    2014-01-01

    Integrated path differential absorption (IPDA) lidar is a remote sensing technique for monitoring different atmospheric species. The technique relies on wavelength differentiation between strong and weak absorbing features normalized to the transmitted energy. 2-micron double-pulsed IPDA lidar is best suited for atmospheric carbon dioxide measurements. In that case, the transmitter produces two successive laser pulses separated by a short interval (200 microseconds) at a low repetition rate (10 Hz). Conventional laser energy monitors, based on thermal detectors, are suitable for low-repetition-rate, single-pulse lasers. Due to the short pulse interval in double-pulsed lasers, thermal energy monitors underestimate the total transmitted energy, which leads to measurement biases and errors in the double-pulsed IPDA technique. The design and calibration of a 2-micron double-pulse laser energy monitor are presented. The design is based on a high-speed, extended-range InGaAs pin quantum detector suitable for separating the two pulse events. Pulse integration is applied to convert the detected pulse power into energy. Results are compared to a photo-electro-magnetic (PEM) detector for impulse response verification. Calibration included comparing the three detection technologies in single-pulsed mode, then comparing the pin and PEM detectors in double-pulsed mode. Energy monitor linearity is also addressed.
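
    A minimal sketch of the pulse-integration idea described above: digitize the detector output, integrate each pulse over its own window, and convert the integrals to energy with a calibration constant. The sampling rate, pulse shapes, window boundaries, and calibration factor are invented for illustration:

        import numpy as np

        def pulse_energy(t, v, t_start, t_stop, cal_joules_per_volt_second):
            """Integrate the detector voltage over one pulse window and apply an energy calibration."""
            sel = (t >= t_start) & (t < t_stop)
            return cal_joules_per_volt_second * np.trapz(v[sel], t[sel])

        fs = 50e6                                      # 50 MS/s digitizer (assumed)
        t = np.arange(0.0, 400e-6, 1.0 / fs)           # 400 us record covering both pulses
        tau = 1e-6                                     # detector response time constant (assumed)
        v = np.exp(-np.maximum(t - 10e-6, 0.0) / tau) * (t >= 10e-6) \
            + 0.8 * np.exp(-np.maximum(t - 210e-6, 0.0) / tau) * (t >= 210e-6)
        cal = 2.5                                      # J per V*s, hypothetical calibration constant
        e1 = pulse_energy(t, v, 0.0, 100e-6, cal)      # first pulse window
        e2 = pulse_energy(t, v, 100e-6, 400e-6, cal)   # second pulse, 200 us later
        print(f"pulse 1: {e1*1e6:.2f} uJ, pulse 2: {e2*1e6:.2f} uJ, total: {(e1+e2)*1e6:.2f} uJ")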

  20. Standardizing Methods for Weapons Accuracy and Effectiveness Evaluation

    DTIC Science & Technology

    2014-06-01

    [Front matter only: the record reproduces table-of-contents and figure-list fragments covering a Monte Carlo approach, the expected value theorem, a PHIT/PNM methodology, MATLAB codes (SR_CDF_DATA, GE_EXTRACT, PHIT/PNM), and figures of Normal and Double Normal fits to test data.]

  1. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. A construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep-space applications was investigated, and a formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied; a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
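
    As a toy illustration of the rate-1/n convolutional codes and the free-distance computation mentioned above (this is the standard textbook rate-1/2, constraint-length-3 code with octal generators (7, 5), not one of the codes constructed in the report), the sketch below encodes a bit sequence and estimates the free distance by brute force over short terminated inputs:

        from itertools import product

        GENERATORS = (0b111, 0b101)   # rate-1/2, constraint length 3 (octal 7, 5)

        def conv_encode(bits, generators=GENERATORS):
            """Encode a bit sequence with a feedforward rate-1/n convolutional code."""
            k = max(g.bit_length() for g in generators)      # constraint length
            state = [0] * (k - 1)                            # shift register, most recent bit first
            out = []
            for b in bits:
                window = [b] + state
                for g in generators:
                    taps = [(g >> (k - 1 - i)) & 1 for i in range(k)]
                    out.append(sum(t & w for t, w in zip(taps, window)) % 2)
                state = window[:-1]
            return out

        def free_distance_estimate(max_len=10):
            """Minimum Hamming weight over all nonzero terminated inputs up to max_len bits."""
            best = None
            for n in range(1, max_len + 1):
                for bits in product([0, 1], repeat=n):
                    if not any(bits):
                        continue
                    weight = sum(conv_encode(list(bits) + [0, 0]))   # two tail bits flush the register
                    best = weight if best is None else min(best, weight)
            return best

        print("estimated free distance:", free_distance_estimate())  # 5 for the (7, 5) code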

  2. Numerical Simulation of Single-anode and Double-anode Magnetron Injection Guns for 127.5 GHz 1 MW Gyrotron

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Kumar, Nitin; Kumar, Anil; Purohit, Laxmi Prasad; Sinha, Ashok Kumar

    2011-07-01

    This paper presents the design of two types of magnetron injection guns (MIGs) for a 1 MW, 127.5 GHz gyrotron. The TE24,8 mode has been chosen as the operating mode. The in-house developed code MIGSYN has been used to estimate the initial gun parameters. The electron trajectory tracing program EGUN and the in-house developed code MIGANS have been used to optimize the single-anode and double-anode designs for an 80 kV, 40 A MIG. A parametric analysis of the MIG is also presented. The advantages and disadvantages of each configuration are critically examined.

  3. ICF Implosions, Space-Charge Electric Fields, and Their Impact on Mix and Compression

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Chacon, Luis; Simakov, Andrei

    2013-10-01

    The single-fluid, quasi-neutral, radiation hydrodynamics codes, used to design the NIF targets, predict thermonuclear ignition for the conditions that have been achieved experimentally. A logical conclusion is that the physics model used in these codes is missing one, or more, key phenomena. Two key model-experiment inconsistencies on NIF are: 1) a lower implosion velocity than predicted by the design codes, and 2) transport of pusher material deep into the hot spot. We hypothesize that both of these model-experiment inconsistencies may be a result of a large, space-charge, electric field residing on the distinct interfaces in a NIF target. Large space-charge fields have been experimentally observed in Omega experiments. Given our hypothesis, this presentation will: 1) Develop a more complete physics picture of initiation, sustainment, and dissipation of a current-driven plasma sheath / double-layer at the Fuel-Pusher interface of an ablating plastic shell implosion on Omega, 2) Characterize the mix that can result from a double-layer field at the Fuel-Pusher interface, prior to the onset of fluid instabilities, and 3) Quantify the impact of the double-layer induced surface tension at the Fuel-Pusher interface on the peak observed implosion velocity in Omega.

  4. SU-F-T-154: An Evaluation and Quantification of Secondary Neutron Radiation Dose Due to Double Scatter and Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glick, A; Diffenderfer, E

    2016-06-15

    Proton radiation therapy can deliver high radiation doses to tumors while sparing normal tissue. However, protons yield secondary neutron and gamma radiation that is difficult to detect, small in comparison to the prescribed dose, and not accounted for in most treatment planning systems. The risk for secondary malignancies after proton therapy may be dependent on the quality of this dose. Consequently, there is interest in characterizing the secondary radiation. Previously, we used the dual ionization chamber method to measure the separate absorbed dose from gamma-rays and neutrons secondary to the proton beam [1], relying on characterization of ionization chamber response in the unknown neutron spectrum from Monte Carlo simulation. We developed a procedure to use Shieldwerx activation foils, with neutron activation energies ranging from 0.025 eV to 13.5 MeV, to measure the neutron energy spectrum from double scattering (DS) and pencil beam scanning (PBS) protons outside of the treatment volume in a water tank. The activated foils are transferred to a NaI well chamber for gamma-ray spectroscopy and activity measurement. Since PBS treats in layers, the switching time between layers is used to correct for the decay of the activated foils and the relative dose per layer is assumed to be proportional to the neutron fluence per layer. MATLAB code was developed to incorporate the layer delivery and switching time into a calculation of foil activity, which is then used to determine the neutron energy fluence from tabulated foil activation energy thresholds. [1] Diffenderfer et al., Med. Phys. 38(11), 2011.
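
    The layer-by-layer decay correction described above can be sketched as follows; activation in each layer is taken proportional to that layer's dose and decays during all subsequent beam-on and switching times. All numbers, the function name, and the isotope half-life below are placeholders, not values from the study:

        import math

        def end_of_delivery_activity(layer_doses, layer_times, switch_times, half_life):
            """Activity (arbitrary units) remaining when delivery ends, summing the activation
            produced in each layer and decaying it through the rest of the delivery."""
            lam = math.log(2.0) / half_life
            total = 0.0
            for i, dose in enumerate(layer_doses):
                # time from the end of layer i to the end of the whole field
                t_decay = sum(layer_times[i + 1:]) + sum(switch_times[i:])
                total += dose * math.exp(-lam * t_decay)
            return total

        # Hypothetical 4-layer PBS field: relative doses, beam-on times (s), switching gaps (s),
        # and a foil activation product with a 10-minute half-life.
        print(end_of_delivery_activity([0.4, 0.3, 0.2, 0.1], [2.0, 1.5, 1.0, 0.8],
                                       [5.0, 5.0, 5.0, 0.0], 600.0))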

  5. Constructing topological models by symmetrization: A projected entangled pair states study

    NASA Astrophysics Data System (ADS)

    Fernández-González, Carlos; Mong, Roger S. K.; Landon-Cardinal, Olivier; Pérez-García, David; Schuch, Norbert

    2016-10-01

    Symmetrization of topologically ordered wave functions is a powerful method for constructing new topological models. Here we study wave functions obtained by symmetrizing quantum double models of a group G in the projected entangled pair states (PEPS) formalism. We show that symmetrization naturally gives rise to a larger symmetry group G̃ which is always non-Abelian. We prove that by symmetrizing on sufficiently large blocks, one can always construct wave functions in the same phase as the double model of G̃. In order to understand the effect of symmetrization on smaller patches, we carry out numerical studies for the toric code model, where we find strong evidence that symmetrizing on individual spins gives rise to a critical model which is at the phase transitions of two inequivalent toric codes, obtained by anyon condensation from the double model of G̃.

  6. Deformed quantum double realization of the toric code and beyond

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Pramod; Ibieta-Jimenez, Juan Pablo; Bernabe Ferreira, Miguel Jorge; Teotonio-Sobrinho, Paulo

    2016-09-01

    Quantum double models, such as the toric code, can be constructed from transfer matrices of lattice gauge theories with discrete gauge groups and parametrized by the center of the gauge group algebra and its dual. For general choices of these parameters the transfer matrix contains operators acting on links which can also be thought of as perturbations to the quantum double model driving it out of its topological phase and destroying the exact solvability of the quantum double model. We modify these transfer matrices with perturbations and extract exactly solvable models which remain in a quantum phase, thus nullifying the effect of the perturbation. The algebra of the modified vertex and plaquette operators now obey a deformed version of the quantum double algebra. The Abelian cases are shown to be in the quantum double phase whereas the non-Abelian phases are shown to be in a modified phase of the corresponding quantum double phase. These are illustrated with the groups Zn and S3. The quantum phases are determined by studying the excitations of these systems namely their fusion rules and the statistics. We then go further to construct a transfer matrix which contains the other Z2 phase namely the double semion phase. More generally for other discrete groups these transfer matrices contain the twisted quantum double models. These transfer matrices can be thought of as being obtained by introducing extra parameters into the transfer matrix of lattice gauge theories. These parameters are central elements belonging to the tensor products of the algebra and its dual and are associated to vertices and volumes of the three dimensional lattice. As in the case of the lattice gauge theories we construct the operators creating the excitations in this case and study their braiding and fusion properties.

  7. Stirling cryocooler test results and design model verification

    NASA Astrophysics Data System (ADS)

    Shimko, Martin A.; Stacy, W. D.; McCormick, John A.

    A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.

  8. CNOT sequences for heterogeneous spin qubit architectures in a noisy environment

    NASA Astrophysics Data System (ADS)

    Ferraro, Elena; Fanciulli, Marco; de Michielis, Marco

    Explicit CNOT gate sequences for two-qubit mixed architectures are presented in view of applications to large-scale quantum computation. Different kinds of coded spin qubits are combined, allowing the favorable physical properties of each to be exploited. The building blocks for such composite systems are qubit architectures based on the electronic spin in electrostatically defined semiconductor quantum dots: the single-quantum-dot spin qubit, the double-quantum-dot singlet-triplet qubit, and the double-quantum-dot hybrid qubit. Effective Hamiltonian models expressed solely through exchange interactions between pairs of electrons are exploited in different geometrical configurations. A numerical genetic algorithm that takes into account the realistic physical parameters involved is adopted. Gate operations are addressed by modulating the tunneling barriers and the energy offsets between different pairs of quantum dots. Gate infidelities are calculated considering limitations due to non-ideal control of gate-sequence pulses, hyperfine interaction, and unwanted charge coupling.

  9. Fluctuations in the DNA double helix

    NASA Astrophysics Data System (ADS)

    Peyrard, M.; López, S. C.; Angelov, D.

    2007-08-01

    DNA is not the static entity suggested by the famous double helix structure. It shows large fluctuational openings, in which the bases, which contain the genetic code, are temporarily open. Therefore it is an interesting system to study the effect of nonlinearity on the physical properties of a system. A simple model for DNA, at a mesoscopic scale, can be investigated by computer simulation, in the same spirit as the original work of Fermi, Pasta and Ulam. These calculations raise fundamental questions in statistical physics because they show a temporary breaking of equipartition of energy, regions with large amplitude fluctuations being able to coexist with regions where the fluctuations are very small, even when the model is studied in the canonical ensemble. This phenomenon can be related to nonlinear excitations in the model. The ability of the model to describe the actual properties of DNA is discussed by comparing theoretical and experimental results for the probability that base pairs open at a given temperature in specific DNA sequences. These studies give us indications on the proper description of the effect of the sequence in the mesoscopic model.
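
    The mesoscopic DNA model referred to above is commonly written in the Peyrard-Bishop-Dauxois form; the Hamiltonian below is the standard statement of that model (parameter values are sequence- and fit-dependent and are not given in this abstract):

        H = \sum_n \left[ \frac{p_n^2}{2m}
              + D_n \left( e^{-a_n y_n} - 1 \right)^2
              + \frac{K}{2} \left( 1 + \rho\, e^{-\alpha (y_n + y_{n-1})} \right) (y_n - y_{n-1})^2 \right]

    where y_n is the stretching of the n-th base pair, the Morse term models the hydrogen bonds within a pair, and the nonlinear stacking term couples neighboring base pairs.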

  10. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE PAGES

    Motl, Patrick M.; Frank, Juhan; Staff, Jan; ...

    2017-03-29

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume "grid" code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. Here, we also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  11. GEOtop, a model with coupled water and energy budgets and non linear hydrological interactions. (Invited)

    NASA Astrophysics Data System (ADS)

    Endrizzi, S.; Gruber, S.; Dall'Amico, M.; Rigon, R.

    2013-12-01

    This contribution describes the new version of GEOtop, which emerges after almost eight years of development from the original version. GEOtop now integrates the 3D Richards equation with a new numerical method, and the treatment of surface waters was improved by using the shallow water equation. The freezing-soil module was greatly improved, and the evapotranspiration-vegetation modelling is now based on a double-layer scheme. Here we discuss the rationale for each choice that was made, and we compare the current solutions with the old ones. In doing so we highlight the issues faced during development, including the trade-off between complexity and simplicity of the code, the requirements of shared development, the different branches that were opened during the evolution of the code, and why we think a code like GEOtop is indeed necessary. Models in which the hydrological cycle is simplified can be built on perceptual models that neglect some fundamental aspects of the hydrological processes, of which some examples are presented. At the same time, process-based models like GEOtop can also neglect some fundamental process, but this becomes evident in the comparison with measurements, especially when data are imposed ex ante rather than calibrated.
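
    For reference, the 3D Richards equation that GEOtop integrates is usually written in the mixed form below (this is the textbook statement, not GEOtop's particular discretization):

        \frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \left[ K(\psi)\, \nabla (\psi + z) \right] + S

    where θ is the volumetric water content, ψ the pressure head, K(ψ) the hydraulic conductivity, z the vertical coordinate, and S a source/sink term.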

  12. betaFIT: A computer program to fit pointwise potentials to selected analytic functions

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Pashov, Asen

    2017-01-01

    This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
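
    As an example of the functional forms listed above, the Expanded Morse Oscillator (EMO) potential fitted by betaFIT has the standard form (quoted here for orientation; the expansion order N and the radial-variable power p are choices made in each fit):

        V_{\mathrm{EMO}}(r) = D_e \left[ 1 - e^{-\beta(r)\,(r - r_e)} \right]^2,
        \qquad
        \beta(r) = \sum_{i=0}^{N} \beta_i \, y_p(r)^i,
        \qquad
        y_p(r) = \frac{r^p - r_e^p}{r^p + r_e^p}

    where D_e is the well depth and r_e the equilibrium distance.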

  13. Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Liu, Bing

    This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
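
    A minimal sketch of the life-cycle cost comparison implied by the three steps above (the discount rate, analysis period, and all dollar figures are placeholders, not values from the DOE methodology):

        def life_cycle_cost(incremental_first_cost, annual_energy_cost_savings,
                            replacement_costs, discount_rate, years):
            """Net present cost of a code change: first cost plus discounted replacements minus
            discounted energy cost savings. A negative result indicates cost-effectiveness."""
            pv_savings = sum(annual_energy_cost_savings / (1.0 + discount_rate) ** y
                             for y in range(1, years + 1))
            pv_replacements = sum(cost / (1.0 + discount_rate) ** year
                                  for year, cost in replacement_costs)
            return incremental_first_cost + pv_replacements - pv_savings

        # Hypothetical code change: $1,200 extra first cost, $150/yr energy cost savings,
        # one $300 replacement in year 15, 3% real discount rate, 30-year analysis period.
        lcc = life_cycle_cost(1200.0, 150.0, [(15, 300.0)], 0.03, 30)
        print(f"net life-cycle cost: ${lcc:,.0f} ({'cost-effective' if lcc < 0 else 'not cost-effective'})")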

  14. The Marriage of Residential Energy Codes and Rating Systems: Conflict Resolution or Just Conflict?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Zachary T.; Mendon, Vrushali V.

    2014-08-21

    After three decades of coexistence at a distance, model residential energy codes and residential energy rating systems have come together in the 2015 International Energy Conservation Code. At the October 2013 International Code Council Public Comment Hearing, a new compliance path based on an Energy Rating Index was added to the IECC. Although not specifically named in the code, RESNET’s HERS rating system is the likely candidate Index for most jurisdictions. While HERS has been a mainstay in various beyond-code programs for many years, its direct incorporation into the most popular model energy code raises questions about the equivalence of a HERS-based compliance path and the traditional IECC performance compliance path, especially because the two approaches use different efficiency metrics, are governed by different simulation rules, and have different scopes with regard to energy impacting house features. A detailed simulation analysis of more than 15,000 house configurations reveals a very large range of HERS Index values that achieve equivalence with the IECC’s performance path. This paper summarizes the results of that analysis and evaluates those results against the specific Energy Rating Index values required by the 2015 IECC. Based on the home characteristics most likely to result in disparities between HERS-based compliance and performance path compliance, potential impacts on the compliance process, state and local adoption of the new code, energy efficiency in the next generation of homes subject to this new code, and future evolution of model code formats are discussed.

  15. Double layers and circuits in astrophysics

    NASA Technical Reports Server (NTRS)

    Alfven, Hannes

    1986-01-01

    As the rate of energy release in a double layer with voltage ΔV is P ≈ I ΔV, a double layer must be treated as part of a circuit which delivers the current I. As neither the double layer nor the circuit can be derived from magnetofluid models of a plasma, such models are useless for treating energy transfer by means of double layers; they must be replaced by particle models and circuit theory. A simple circuit is suggested and applied to the energizing of auroral particles, to solar flares, and to intergalactic double radio sources. Application to the heliospheric current system leads to the prediction of two double layers on the Sun's axis which may give radiation detectable from Earth. Double layers in space should be classified as a new type of celestial object (one example is the double radio sources). It is tentatively suggested that X-ray and gamma-ray bursts may be due to exploding double layers (although annihilation is an alternative energy source). A study is made of how a number of the most used textbooks in astrophysics treat important concepts like double layers, critical velocity, pinch effects, and circuits.

  16. Preserving Envelope Efficiency in Performance Based Code Compliance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Brian A.; Sullivan, Greg P.; Rosenberg, Michael I.

    2015-06-20

    The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle in order to understand the implications of one potential direction for its code development: limiting trade-offs in which long-lived building envelope components that are less stringent than the prescriptive code envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.

  17. Chemical Energy Release in Several Recently Discovered Detonation and Deflagration Flows

    NASA Astrophysics Data System (ADS)

    Tarver, Craig M.

    2010-10-01

    Several recent experiments on complex detonation and deflagration flows are analyzed in terms of the chemical energy release required to sustain these flows. The observed double cellular structures in detonating gaseous nitromethane-oxygen and NO2-fuel (H2, CH4, and C2H6) mixtures are explained by the amplification of two distinct pressure wave frequencies by two exothermic reactions, the faster reaction forming vibrationally excited NO* and the slower reaction forming highly vibrationally excited N2**. The establishment of a Chapman-Jouguet (C-J) deflagration behind a weak shock wave, the C-J detonation established after a head-on collision with a shock front, and the C-J detonation conditions established in reactive supersonic flows are quantitatively calculated using the chemical energy release of a H2 + Cl2 mixture. For these three reactive flows, these calculations illustrate that different fractions of the exothermic chemical energy are used to sustain steady-state propagation. C-J detonation calculations on the various initial states using the CHEETAH chemical equilibrium code are shown to be in good agreement with experimental detonation velocity measurements for the head-on collision and supersonic flow detonations.

  18. A new time dependent density functional algorithm for large systems and plasmons in metal clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baseggio, Oscar; Fronzoni, Giovanna; Stener, Mauro, E-mail: stener@univ.trieste.it

    2015-07-14

    A new algorithm to solve the Time Dependent Density Functional Theory (TDDFT) equations in the space of the density fitting auxiliary basis set has been developed and implemented. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea which made the present scheme very efficient consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, allowing an easy calculation of such a matrix as a linear combination of constant matrices with photon energy dependent coefficients. The method has been applied to very different systems in nature and size (from H2 to [Au147]−). In all cases, the maximum deviations found for the excitation energies with respect to the Amsterdam density functional code are below 0.2 eV. The new algorithm has the merit not only to calculate the spectrum at any photon energy but also to allow a deep analysis of the results, in terms of transition contribution maps, Jacob plasmon scaling factor, and induced density analysis, which have all been implemented.
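
    Schematically, and in generic notation not taken from the paper, the simplification can be written as

      \[ M(\omega) \;=\; \sum_{ia} v_{ia}\, v_{ia}^{\dagger}\, f(\Delta\varepsilon_{ia},\omega), \qquad f(\Delta,\omega) \;=\; \frac{2\Delta}{\Delta^{2}-(\omega+i\eta)^{2}}, \]

    where i runs over occupied and a over virtual orbitals. Interpolating the scalar factor f on a fixed grid of transition energies gives

      \[ M(\omega) \;\approx\; \sum_{k} c_{k}(\omega)\, M_{k}, \]

    so the M_k are constant matrices assembled once and only the coefficients c_k(ω) depend on the photon energy; the spectrum then follows from the imaginary part of the polarizability at each requested ω without any diagonalization.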

  19. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  20. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    The Double Linear Damage Rule (DLDR) is a method for use by structural designers to determine fatigue-crack-initiation life when a structure is subjected to unsteady, variable-amplitude cyclic loadings. The method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. The approach is intended for eventual use in the design of high-performance systems and for incorporation into design handbooks and codes.
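
    As a reminder of the bookkeeping behind the rule (a generic statement of two-phase linear damage accumulation, not the specific knee-point equations of the DLDR), the life to crack initiation is split into Phase I and Phase II portions, and damage is summed linearly within each phase:

      \[ \sum_{i} \frac{n_i}{N_{\mathrm{I},i}} = 1 \ \ \text{(end of Phase I)}, \qquad \sum_{i} \frac{n_i'}{N_{\mathrm{II},i}} = 1 \ \ \text{(crack initiation)}, \]

    in contrast to the single linear (Miner) rule, which applies \( \sum_i n_i/N_{f,i} = 1 \) over the entire life.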

  1. 76 FR 12786 - Culturally Significant Objects Imported for Exhibition Determinations: “Double Sexus”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-08

    ... ``Double Sexus,'' imported from abroad for temporary exhibition within the United States, are of cultural... also determine that the exhibition or display of the exhibit objects at the Wexner Center for the Arts... Educational and Cultural Affairs, Department of State. [FR Doc. 2011-5240 Filed 3-7-11; 8:45 am] BILLING CODE...

  2. Analytical Energy Gradients for Excited-State Coupled-Cluster Methods

    NASA Astrophysics Data System (ADS)

    Wladyslawski, Mark; Nooijen, Marcel

    The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit equations for the wavefunction amplitudes, the Lagrange multipliers, and the analytical gradient via the perturbation-independent generalized Hellmann-Feynman effective density matrix. This systematic automated derivation procedure is applied to obtain the detailed gradient equations for the excitation energy (EE-), double ionization potential (DIP-), and double electron affinity (DEA-) similarity transformed equation-of-motion coupled-cluster singles-and-doubles (STEOM-CCSD) methods. In addition, the derivatives of the closed-shell-reference excitation energy (EE-), ionization potential (IP-), and electron affinity (EA-) equation-of-motion coupled-cluster singles-and-doubles (EOM-CCSD) methods are derived. Furthermore, the perturbative EOM-PT and STEOM-PT gradients are obtained. The algebraic derivative expressions for these dozen methods are all derived here uniformly through the automated Lagrange multiplier process and are expressed compactly in a chain-rule/intermediate-density formulation, which facilitates a unified modular implementation of analytic energy gradients for CCSD/PT-based electronic methods. The working equations for these analytical gradients are presented in full detail, and their factorization and implementation into an efficient computer code are discussed.
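
    In compact form, the strategy that SMART automates can be summarized as follows (generic notation, not the paper's working equations). Given amplitude and eigenvector equations g(t; χ) = 0 that determine a non-variational energy E(t; χ), one forms the Lagrangian

      \[ \mathcal{L}(t,\lambda;\chi) \;=\; E(t;\chi) \;+\; \lambda \cdot g(t;\chi), \]

    requires \( \partial\mathcal{L}/\partial t = 0 \), which yields a perturbation-independent linear system for the multipliers λ, and then evaluates the gradient through the generalized Hellmann-Feynman relation

      \[ \frac{dE}{d\chi} \;=\; \frac{\partial \mathcal{L}}{\partial \chi} \;=\; \frac{\partial E}{\partial \chi} \;+\; \lambda \cdot \frac{\partial g}{\partial \chi}, \]

    so that no derivatives of the wavefunction amplitudes with respect to the perturbation are needed.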

  3. Development and validation of a complementary map to enhance the existing 1998 to 2008 Abbreviated Injury Scale map

    PubMed Central

    2011-01-01

    Introduction: Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Methods: Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. Results: The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. Conclusions: The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available. PMID:21548991

  4. Development and validation of a complementary map to enhance the existing 1998 to 2008 Abbreviated Injury Scale map.

    PubMed

    Palmer, Cameron S; Franklyn, Melanie; Read-Allsopp, Christine; McLellan, Susan; Niggemeyer, Louise E

    2011-05-08

    Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available.
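
    A minimal sketch of the conversion logic implied by the study, assuming hypothetical code values and table names (the real dictionary and enhanced maps are published elsewhere): look an AIS98 code up in the dictionary map first, fall back to the enhanced (Delphi-derived) map for the 153 unmapped codes, and optionally refine the choice using the free-text description.

      # Hypothetical AIS98 -> AIS08 conversion: dictionary map first, enhanced map fallback.
      DICTIONARY_MAP = {"450203.2": "450203.2"}               # placeholder entries only
      ENHANCED_MAP = {"853151.2": ["853141.2", "853161.2"]}   # Delphi-derived candidates (illustrative)

      def convert_ais98(code98, free_text=None):
          """Return a single AIS08 code for an AIS98 code, or None if unmappable."""
          if code98 in DICTIONARY_MAP:                  # official AIS dictionary map
              return DICTIONARY_MAP[code98]
          candidates = ENHANCED_MAP.get(code98, [])     # complementary (enhanced) map
          if not candidates:
              return None
          if free_text:                                 # crude free-text refinement
              for cand in candidates:
                  if cand.split(".")[0] in free_text:
                      return cand
          return candidates[0]                          # default to first plausible equivalent

      print(convert_ais98("853151.2", free_text="description mentioning 853141"))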

  5. 78 FR 23550 - Department of Energy's (DOE) Participation in Development of the International Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-19

    ...: Notice. SUMMARY: The DOE participates in the code development process of the International Code Council... notice outlines the process by which DOE produces code change proposals, and participates in the ICC code development process. FOR FURTHER INFORMATION CONTACT: Jeremiah Williams, U.S. Department of Energy, Office of...

  6. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stationary while scanning, and coordinate transformation parameters between different stations must be measured. To make scanning measurement more intelligent and rapid, this paper develops a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement with a handheld scanner without additional complex work. The double camera on the laser scanner photographs artificial target points, designed with random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points to determine the orientation of the scanner by a least-squares common-points transformation. After that, the double camera can directly measure the laser point cloud on the surface of the object and obtain point cloud data in a unified coordinate system. The paper makes three major contributions. First, a laser scanner based on binocular vision is designed with a double camera and one laser head, so that real-time orientation of the scanner is realized and efficiency is improved. Second, coded markers are introduced to solve the data-matching problem, and a random coding method is proposed; compared with other coding methods, these markers are simple to match and avoid shading of the object. Finally, a recognition method for the coded markers is proposed that uses distance recognition and is more efficient. The method presented here can be used widely for measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency. The results of experiments and theoretical analysis demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and that the method is reasonable and efficient.
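
    The least-squares common-points transformation step can be illustrated with a standard rigid-body (Kabsch/Procrustes) fit between matched coded target points; this generic sketch is not the authors' implementation.

      import numpy as np

      def rigid_transform(src, dst):
          """Least-squares rotation R and translation t with dst ~ R @ src + t.
          src, dst: (N, 3) arrays of matched 3D target coordinates."""
          src_c = src - src.mean(axis=0)
          dst_c = dst - dst.mean(axis=0)
          H = src_c.T @ dst_c                      # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          t = dst.mean(axis=0) - R @ src.mean(axis=0)
          return R, t

      # Toy example: recover a known rotation about z and a translation.
      rng = np.random.default_rng(0)
      pts = rng.random((5, 3))
      theta = 0.3
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
      moved = pts @ R_true.T + np.array([0.5, -0.2, 1.0])
      R_est, t_est = rigid_transform(pts, moved)
      print(np.allclose(R_est, R_true), np.round(t_est, 3))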

  7. An energy-optimal solution for transportation control of cranes with double pendulum dynamics: Design and experiments

    NASA Astrophysics Data System (ADS)

    Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun

    2018-03-01

    Underactuated cranes play an important role in modern industry. Specifically, in most situations of practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is reflected by being experimentally compared with some comparative controllers.

  8. Delamination Modeling of Composites for Improved Crash Analysis

    NASA Technical Reports Server (NTRS)

    Fleming, David C.

    1999-01-01

    Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated, including a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature. Examples show that it is possible to accurately model delamination propagation in this case. However, the computational demands required for accurate solution are great and reliable property data may not be available to support general crash modeling efforts. Additional examples are modeled including an impact-loaded beam, damage initiation in laminated crushing specimens, and a scaled aircraft subfloor structure in which composite sandwich structures are used as energy-absorbing elements. These examples illustrate some of the difficulties in modeling delamination as part of a finite element crash analysis.

  9. Hybrid MPI/OpenMP Implementation of the ORAC Molecular Dynamics Program for Generalized Ensemble and Fast Switching Alchemical Simulations.

    PubMed

    Procacci, Piero

    2016-06-27

    We present a new release (6.0β) of the ORAC program [Marsili et al. J. Comput. Chem. 2010, 31, 1106-1116] with a hybrid OpenMP/MPI (open multiprocessing message passing interface) multilevel parallelism tailored for generalized ensemble (GE) and fast switching double annihilation (FS-DAM) nonequilibrium technology aimed at evaluating the binding free energy in drug-receptor systems on high performance computing platforms. The production of the GE or FS-DAM trajectories is handled using a weak scaling parallel approach on the MPI level only, while a strong scaling force decomposition scheme is implemented for intranode computations with shared memory access at the OpenMP level. The efficiency, simplicity, and inherent parallel nature of the ORAC implementation of the FS-DAM algorithm project the code as a possible effective tool for second-generation high-throughput virtual screening in drug discovery and design. The code, along with documentation, testing, and ancillary tools, is distributed under the provisions of the General Public License and can be freely downloaded at www.chim.unifi.it/orac.

  10. Maintenance Energy Requirements of Double-Muscled Belgian Blue Beef Cows

    PubMed Central

    Fiems, Leo O.; De Boever, Johan L.; Vanacker, José M.; De Campeneere, Sam

    2015-01-01

    Simple Summary: Double-muscled Belgian Blue animals are extremely lean, characterized by a deviant muscle fiber type with more fast-glycolytic fibers, compared to non-double-muscled animals. This fiber type may result in lower maintenance energy requirements. On the other hand, lean meat animals mostly have a higher rate of protein turnover, which requires more energy for maintenance. Therefore, maintenance requirements of Belgian Blue cows were investigated based on a zero body weight gain. This technique showed that maintenance energy requirements of double-muscled Belgian Blue beef cows were close to the mean requirements of cows of other beef genotypes. Abstract: Sixty non-pregnant, non-lactating double-muscled Belgian Blue (DMBB) cows were used to estimate the energy required to maintain body weight (BW). They were fed one of three energy levels for 112 or 140 days, corresponding to approximately 100%, 80% or 70% of their total energy requirements. The relationship between daily energy intake and BW and daily BW change was developed using regression analysis. Maintenance energy requirements were estimated from the regression equation by setting BW gain to zero. Metabolizable and net energy for maintenance amounted to 0.569 ± 0.001 and 0.332 ± 0.001 MJ per kg BW0.75/d, respectively. Maintenance energy requirements were not dependent on energy level (p > 0.10). Parity affected maintenance energy requirements (p < 0.001), although the small numerical differences between parities may hardly be nutritionally relevant. Maintenance energy requirements of DMBB beef cows were close to the mean energy requirements of other beef genotypes reported in the literature. PMID:26479139
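
    The zero-gain regression technique can be sketched as follows, assuming a linear model of intake on metabolic body weight and daily gain, with made-up numbers rather than the study's measurements.

      import numpy as np

      # Illustrative data: metabolizable energy intake (MJ/d), BW (kg), daily BW change (kg/d).
      bw     = np.array([620., 650., 700., 680., 640., 710.])
      gain   = np.array([0.20, -0.10, 0.35, 0.05, -0.25, 0.15])
      intake = np.array([78.0, 69.0, 92.0, 76.0, 62.0, 85.0])

      # Model: intake = a * BW^0.75 + b * gain; the maintenance requirement per kg BW^0.75
      # is the intake predicted at zero gain, i.e. the coefficient a.
      X = np.column_stack([bw ** 0.75, gain])
      a, b = np.linalg.lstsq(X, intake, rcond=None)[0]
      print(f"maintenance ~ {a:.3f} MJ ME per kg BW^0.75 per day")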

  11. Mechanism on brain information processing: Energy coding

    NASA Astrophysics Data System (ADS)

    Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa

    2006-09-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a brand new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Due to the energy coding model's ability to reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research of cognitive function.

  12. Energy coding in biological neural networks

    PubMed Central

    Zhang, Zhikang

    2007-01-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a brand new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Due to the energy coding model’s ability to reveal mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research of cognitive function. PMID:19003513

  13. Residential Building Energy Code Field Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Bartlett, M. Halverson, V. Mendon, J. Hathaway, Y. Xie

    This document presents a methodology for assessing baseline energy efficiency in new single-family residential buildings and quantifying related savings potential. The approach was developed by Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE) Building Energy Codes Program with the objective of assisting states as they assess energy efficiency in residential buildings and implementation of their building energy codes, as well as to target areas for improvement through energy codes and broader energy-efficiency programs. It is also intended to facilitate a consistent and replicable approach to research studies of this type and establish a transparent data set to represent baseline construction practices across U.S. states.

  14. Parallel electric fields in extragalactic jets - Double layers and anomalous resistivity in symbiotic relationships

    NASA Technical Reports Server (NTRS)

    Borovsky, J. E.

    1986-01-01

    After examining the properties of Coulomb-collision resistivity, anomalous (collective) resistivity, and double layers, a hybrid anomalous-resistivity/double-layer model is introduced. In this model, beam-driven waves on both sides of a double layer provide electrostatic plasma-wave turbulence that greatly reduces the mobility of charged particles. These regions then act to hold open a density cavity within which the double layer resides. In the double layer, electrical energy is dissipated with 100 percent efficiency into high-energy particles, creating conditions optimal for the collective emission of polarized radio waves.

  15. Production mechanism of new neutron-rich heavy nuclei in the 136Xe +198Pt reaction

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wen, Peiwei; Li, Jingjing; Zhang, Gen; Li, Bing; Xu, Xinxin; Liu, Zhong; Zhu, Shaofei; Zhang, Feng-Shou

    2018-01-01

    The multinucleon transfer reaction of 136Xe +198Pt at Elab = 7.98 MeV/nucleon is investigated by using the improved quantum molecular dynamics model. The quasielastic, deep-inelastic, and quasifission collision mechanisms are studied via analyzing the angular distributions of fragments and the energy dissipation processes during the collisions. The measured isotope production cross sections of projectile-like fragments are reasonably well reproduced by the calculation of the ImQMD model together with the GEMINI code. The isotope production cross sections for the target-like fragments and double differential cross sections of 199Pt, 203Pt, and 208Pt are calculated. It is shown that about 50 new neutron-rich heavy nuclei can be produced via deep-inelastic collision mechanism, where the production cross sections are from 10-3 to 10-6 mb. The corresponding emission angle and the kinetic energy for these new neutron-rich nuclei locate at 40∘-60∘ and 100-200 MeV, respectively.

  16. Pulsational Pair-instability Model for Superluminous Supernova PTF12dam: Interaction and Radioactive Decay

    NASA Astrophysics Data System (ADS)

    Tolstov, Alexey; Nomoto, Ken'ichi; Blinnikov, Sergei; Sorokina, Elena; Quimby, Robert; Baklanov, Petr

    2017-02-01

    Being a superluminous supernova, PTF12dam can be explained by a 56Ni-powered model, a magnetar-powered model, or an interaction model. We propose that PTF12dam is a pulsational pair-instability supernova, where the outer envelope of a progenitor is ejected during the pulsations. Thus, it is powered by a double energy source: radioactive decay of 56Ni and a radiative shock in a dense circumstellar medium. To describe multicolor light curves and spectra, we use radiation-hydrodynamics calculations of the STELLA code. We found that light curves are well described in the model with 40 M⊙ ejecta and 20-40 M⊙ circumstellar medium. The ejected 56Ni mass is about 6 M⊙, which results from explosive nucleosynthesis with large explosion energy (2-3) × 1052 erg. In comparison with alternative scenarios of pair-instability supernova and magnetar-powered supernova, in the interaction model, all the observed main photometric characteristics are well reproduced: multicolor light curves, color temperatures, and photospheric velocities.

  17. 75 FR 20833 - Building Energy Codes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2010-BT-BC-0012] Building Energy Codes AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Request for Information. SUMMARY: The U.S. Department of Energy (DOE) is soliciting...

  18. Clean Energy in City Codes: A Baseline Analysis of Municipal Codification across the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeffrey J.; Aznar, Alexandra; Dane, Alexander

    Municipal governments in the United States are well positioned to influence clean energy (energy efficiency and alternative energy) and transportation technology and strategy implementation within their jurisdictions through planning, programs, and codification. Municipal governments are leveraging planning processes and programs to shape their energy futures. There is limited understanding in the literature related to codification, the primary way that municipal governments enact enforceable policies. The authors fill the gap in the literature by documenting the status of municipal codification of clean energy and transportation across the United States. More directly, we leverage online databases of municipal codes to develop national and state-specific representative samples of municipal governments by population size. Our analysis finds that municipal governments with the authority to set residential building energy codes within their jurisdictions frequently do so. In some cases, communities set codes higher than their respective state governments. Examination of codes across the nation indicates that municipal governments are employing their code as a policy mechanism to address clean energy and transportation.

  19. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift codes (2D-DCS), is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
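
    The 2D-DCS construction itself is not reproduced here, but the basic idea of building a code family from cyclic shifts of a seed sequence can be illustrated generically (one-dimensional, with an illustrative weight and length chosen for this sketch only).

      import numpy as np

      def cyclic_shift_family(seed, n_users):
          """Generate a code family by cyclically shifting a binary seed sequence."""
          seed = np.asarray(seed)
          return np.array([np.roll(seed, k) for k in range(n_users)])

      codes = cyclic_shift_family([1, 1, 0, 1, 0, 0, 0, 0], n_users=4)
      # In-phase cross-correlation between distinct users (lower is better for MAI).
      xcorr = codes @ codes.T
      print(codes)
      print(xcorr)   # diagonal = code weight, off-diagonal = overlap between users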

  20. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    NASA Astrophysics Data System (ADS)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-06-01

    This manuscript deals with analysis of Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that number of users and type of codes used for optical system directly decide the performance of system. MAI can be restricted by efficient designing of optical codes and implementing them with unique architecture to accommodate more number of users. Hence, it is a necessity to design a technique like spectral direct detection (SDD) technique with modified double weight code, which can provide better cardinality and good correlation property.

  1. Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.

    This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.

  2. Environmental impact assessment of double- and relay-cropping with winter camelina in the northern Great Plains, USA

    USDA-ARS?s Scientific Manuscript database

    Recent findings indicate that double- or relay-cropping winter camelina (Camelina sativa L. Crantz.) with feed or food crops can increase yield per area, improve energy balance, and provide several ecosystem services. Double-cropping can help balance food and energy production. The objective of this...

  3. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
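
    For context, a single-subsystem statistical energy analysis estimate of the kind mentioned above reduces to the standard SEA power balance (standard relations, not taken from the paper):

      \[ P_{\mathrm{in}} \;=\; \omega\,\eta\,E, \qquad E \;=\; m\,\langle v^{2}\rangle \;\;\Rightarrow\;\; \langle v^{2}\rangle \;=\; \frac{P_{\mathrm{in}}}{\omega\,\eta\,m}, \]

    with P_in the band-averaged input power from the shaker, ω the band center frequency, η the damping loss factor, m the plate mass, and ⟨v²⟩ the space- and time-averaged mean square velocity.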

  4. Application of Electric Double-layer Capacitors for Energy Storage on Electric Railway

    NASA Astrophysics Data System (ADS)

    Hase, Shin-Ichi; Konishi, Takeshi; Okui, Akinobu; Nakamichi, Yoshinobu; Nara, Hidetaka; Uemura, Tadashi

    Methods to stabilize power sources, which are measures against voltage drop, power loading fluctuation, loss of regenerative power and so on, have been important issues in DC feeding circuits. Therefore, an energy storage medium that uses power efficiently and reduces the above-mentioned problems has attracted much attention. In recent years, the development of energy storage media for the drive-power supplies of electric vehicles has been remarkable. A number of applications of energy storage, for instance batteries and flywheels, have been investigated so far. A large-scale electric double-layer capacitor, which can be rapidly charged and discharged and offers long life, maintenance-free operation, low pollution and high efficiency, has been developed in a wide range of sizes. We compared the charging capability of batteries and electric double-layer capacitors and carried out fundamental studies of electric double-layer capacitors and their control. We then produced a prototype energy storage system for the DC electric railway that consists of electric double-layer capacitors, diode bridge rectifiers, a chopper system and PWM converters. Useful information was obtained from charge and discharge tests of the prototype. This paper describes the characteristics and experimental results of the energy storage system.
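
    The sizing logic for such a capacitor bank follows from the usual energy relations (standard formulas, not values from the prototype):

      \[ E_{\mathrm{stored}} \;=\; \tfrac{1}{2} C V^{2}, \qquad E_{\mathrm{usable}} \;=\; \tfrac{1}{2} C \left(V_{\max}^{2}-V_{\min}^{2}\right), \]

    so the energy that can be absorbed between the maximum line voltage V_max and the minimum voltage from which the chopper/converter can still operate, V_min, fixes the capacitance required for a given regenerated-energy pulse.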

  5. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motl, Patrick M.; Frank, Juhan; Clayton, Geoffrey C.

    2017-04-01

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  6. Single-Photon, Double Photodetachment of Nickel Phthalocyanine Tetrasulfonic Acid 4- Anions.

    PubMed

    Daly, Steven; Girod, Marion; Vojkovic, Marin; Giuliani, Alexandre; Antoine, Rodolphe; Nahon, Laurent; O'Hair, Richard A J; Dugourd, Philippe

    2016-07-07

    Single-photon, two-electron photodetachment from nickel phthalocyanine tetrasulfonic acid tetra-anions, [NiPc](4-), was examined in the gas phase using a linear ion trap coupled to the DESIRS VUV beamline of the SOLEIL Synchrotron. This system was chosen since it has a low detachment energy, known charge localization, and well-defined geometrical and electronic structures. A threshold for two-electron loss is observed at 10.2 eV, around 1 eV lower than previously observed double detachment thresholds on multiply charged protein anions. The photodetachment energy of [NiPc](4-) has been previously determined to be 3.5 eV and the photodetachment energy of [NiPc](3-•) is determined in this work to be 4.3 eV. The observed single photon double electron detachment threshold is hence 5.9 eV higher than the energy required for sequential single electron loss. Possible mechanisms for double photodetachment are discussed. These observations pave the way toward new, exciting experiments for probing double photodetachment at relatively low energies, including correlation measurements on emitted photoelectrons.

  7. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two dimensional flow field which results when supersonic flow encounters double wedge configurations whose angles are such that a type 4 pattern occurs. The flow field model included the shock interaction phenomena for a delta wing orbiter. Two numerical codes were developed, one which used the perfect gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  8. Performance Analysis of OCDMA Based on AND Detection in FTTH Access Network Using PIN & APD Photodiodes

    NASA Astrophysics Data System (ADS)

    Aldouri, Muthana; Aljunid, S. A.; Ahmad, R. Badlishah; Fadhil, Hilal A.

    2011-06-01

    To compare PIN photodetectors with avalanche photodiodes (APDs) in a system using the double weight (DW) code, the performance of spectral-amplitude-coding optical CDMA is evaluated in an FTTH network with a point-to-multipoint (P2MP) application. The performance of PIN versus APD detection is compared through simulation using OptiSystem software version 7. Two networks are designed, one using PIN photodetectors and the second using APD photodiodes, each simulated with and without an erbium-doped fiber amplifier (EDFA). It is found that the APD photodiode outperforms the PIN photodetector in all simulation results. The conversion used a Mach-Zehnder interferometer (MZI) wavelength converter. We also study a detection scheme known as the AND subtraction detection technique, implemented with fiber Bragg gratings (FBGs) acting as encoder and decoder. The FBGs are used to encode and decode the spectral amplitude coding, namely the double weight (DW) code, in Optical Code Division Multiple Access (OCDMA). Performance is characterized through the bit error rate (BER), the bit rate (BR), and the received power at various bit rates.

  9. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
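
    For reference, the single-step VCCT expressions evaluated in such post-processing have the generic textbook form (straight crack front, crack-advance length Δa and width b associated with the front node; this is not necessarily the exact routine used in the study):

      \[ G_{\mathrm{I}} \;=\; \frac{F_{n}\,\Delta w}{2\,b\,\Delta a}, \]

    where F_n is the nodal force at the crack tip normal to the crack plane and Δw is the relative opening displacement of the node pair immediately behind the tip. Analogous expressions with the sliding and tearing force and displacement components give G_II and G_III, and the total energy release rate is G_T = G_I + G_II + G_III.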

  10. Energetic properties' investigation of removing flattening filter at phantom surface: Monte Carlo study using BEAMnrc code, DOSXYZnrc code and BEAMDP code

    NASA Astrophysics Data System (ADS)

    Bencheikh, Mohamed; Maghnouj, Abdelmajid; Tajmouati, Jaouad

    2017-11-01

    The Monte Carlo method is considered the most accurate method for dose calculation in radiotherapy and for beam characterization. In this study, the Varian Clinac 2100 medical linear accelerator was modelled with and without the flattening filter (FF). The objective was to determine the impact of the flattening filter on the energy properties of particles at the phantom surface, in terms of energy fluence, mean energy, and energy fluence distribution. The Monte Carlo codes used in this study were BEAMnrc for simulating the linac head, DOSXYZnrc for simulating the absorbed dose in a water phantom, and BEAMDP for extracting the energy properties. The field size was 10 × 10 cm2, the simulated photon beam energy was 6 MV, and the SSD was 100 cm. The Monte Carlo geometry was validated by a gamma index acceptance rate of 99% in PDD and 98% in dose profiles; the gamma criteria were 3% for dose difference and 3 mm for distance to agreement. Without the FF, the energy properties changed as follows: the electron contribution increased by more than 300% in energy fluence, almost 14% in mean energy, and 1900% in energy fluence distribution, whereas the photon contribution increased by 50% in energy fluence, almost 18% in mean energy, and almost 35% in energy fluence distribution. Removing the flattening filter thus increases the electron contamination energy relative to the photon energy; this study can contribute to the evolution of flattening-filter-free configurations in future linacs.
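
    The 3%/3 mm gamma criterion used for validation can be illustrated with a minimal one-dimensional implementation (a generic Low-type gamma index; the profile data are made up and the routine is not the one used in the study).

      import numpy as np

      def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
          """1D gamma index: dd = dose-difference criterion (fraction of max reference dose),
          dta = distance-to-agreement criterion in mm. Returns gamma at each reference point."""
          dmax = ref_dose.max()
          gammas = np.empty_like(ref_dose)
          for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
              dist2 = ((eval_pos - r) / dta) ** 2
              dose2 = ((eval_dose - d) / (dd * dmax)) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return gammas

      x = np.linspace(-50, 50, 201)                     # positions in mm
      ref = np.exp(-(x / 30.0) ** 2)                    # synthetic reference profile
      ev  = np.exp(-((x - 1.0) / 30.0) ** 2) * 1.01     # shifted, rescaled evaluated profile
      g = gamma_1d(x, ref, x, ev)
      print(f"pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")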

  11. New Approach for Nuclear Reaction Model in the Combination of Intra-nuclear Cascade and DWBA

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Iwamoto, O.; Iwamoto, Y.; Sato, T.; Niita, K.

    2014-04-01

    We applied a new nuclear reaction model that is a combination of the intra nuclear cascade model and the distorted wave Born approximation (DWBA) calculation to estimate neutron spectra in reactions induced by protons incident on 7Li and 9Be targets at incident energies below 50 MeV, using the particle and heavy ion transport code system (PHITS). The results obtained by PHITS with the new model reproduce the sharp peaks observed in the experimental double-differential cross sections as a result of taking into account transitions between discrete nuclear states in the DWBA. An excellent agreement was observed between the calculated results obtained using the combination model and experimental data on neutron yields from thick targets in the inclusive (p, xn) reaction.

  12. Energy Referencing in LANL HE-EOS Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiding, Jeffery Allen; Coe, Joshua Damon

    2017-10-19

    Here, we briefly describe the choice of energy referencing in LANL's HE-EOS codes, HEOS and MAGPIE. Understanding this is essential to comparing energies produced by different EOS codes, as well as to the correct calculation of shock Hugoniots of HEs and other materials. In all equations after (3) throughout this report, all energies, enthalpies and volumes are assumed to be molar quantities.
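
    The reason the reference choice matters can be seen in the Rankine-Hugoniot energy equation, restated here in its standard form for context:

      \[ E - E_{0} \;=\; \tfrac{1}{2}\,(P + P_{0})\,(V_{0} - V), \]

    so any constant offset applied to the reference internal energy E_0 of the unreacted material must be applied consistently to the equation of state of the products; otherwise the computed Hugoniot states, and hence detonation properties, are shifted.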

  13. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus

    PubMed Central

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-01-01

    Abstract: DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455

  14. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus.

    PubMed

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-05-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure.
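
    A much-simplified sketch of the kind of estimate described, assuming Poisson statistics for DSB induction with a mean yield proportional to both the DNA content of the nucleus and the specific energy it absorbs (the coefficient and input distributions are illustrative, not the paper's values):

      import numpy as np

      rng = np.random.default_rng(1)
      N_CELLS = 10000
      DSB_PER_GY_PER_GBP = 5.5        # illustrative yield coefficient (DSB / Gy / Gbp of DNA)

      # Per-nucleus DNA content (Gbp) varies with cell-cycle phase; specific energy (Gy)
      # fluctuates around the macroscopic dose of 1 Gy. Both distributions are made up.
      dna_gbp = rng.choice([6.0, 12.0], size=N_CELLS, p=[0.6, 0.4])   # e.g. G1 vs late S/G2
      spec_energy = rng.gamma(shape=20.0, scale=0.05, size=N_CELLS)   # mean 1 Gy

      mean_dsb = DSB_PER_GY_PER_GBP * dna_gbp * spec_energy
      dsb_counts = rng.poisson(mean_dsb)

      print("mean DSBs per nucleus:", dsb_counts.mean())
      hist, edges = np.histogram(dsb_counts, bins=20)
      print(hist)   # crude probability distribution of DSB number per nucleus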

  15. Strategy Plan A Methodology to Predict the Uniformity of Double-Shell Tank Waste Slurries Based on Mixing Pump Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.A. Bamberger; L.M. Liljegren; P.S. Lowery

    This document presents an analysis of the mechanisms influencing mixing within double-shell slurry tanks. A research program to characterize mixing of slurries within tanks has been proposed. The research program presents a combined experimental and computational approach to produce correlations describing the tank slurry concentration profile (and therefore uniformity) as a function of mixer pump operating conditions. The TEMPEST computer code was used to simulate both a full-scale (prototype) and scaled (model) double-shell waste tank to predict flow patterns resulting from a stationary jet centered in the tank. The simulation results were used to evaluate flow patterns in the tank and to determine whether flow patterns are similar between the full-scale prototype and an existing 1/12-scale model tank. The flow patterns were sufficiently similar to recommend conducting scoping experiments at 1/12-scale. Also, TEMPEST-modeled velocity profiles of the near-floor jet were compared to experimental measurements of the near-floor jet with good agreement. Reported values of physical properties of double-shell tank slurries were analyzed to evaluate the range of properties appropriate for conducting scaled experiments. One-twelfth scale scoping experiments are recommended to confirm the prioritization of the dimensionless groups (gravitational settling, Froude, and Reynolds numbers) that affect slurry suspension in the tank. Two of the proposed 1/12-scale test conditions were modeled using the TEMPEST computer code to observe the anticipated flow fields. This information will be used to guide selection of sampling probe locations. Additional computer modeling is being conducted to model a particulate-laden, rotating jet centered in the tank. The results of this modeling effort will be compared to the scaled experimental data to quantify the agreement between the code and the 1/12-scale experiment. The scoping experiment results will guide selection of parameters to be varied in the follow-on experiments. Data from the follow-on experiments will be used to develop correlations to describe slurry concentration profile as a function of mixing pump operating conditions. These data will also be used to further evaluate the computer model applications. If the agreement between the experimental data and the code predictions is good, the computer code will be recommended for use to predict slurry uniformity in the tanks under various operating conditions. If the agreement between the code predictions and experimental results is not good, the experimental data correlations will be used to predict slurry uniformity in the tanks within the range of correlation applicability.

  16. Energy Efficiency Program Administrators and Building Energy Codes

    EPA Pesticide Factsheets

    Explore how energy efficiency program administrators have helped advance building energy codes at federal, state, and local levels—using technical, institutional, financial, and other resources—and discusses potential next steps.

  17. Increasing Flexibility in Energy Code Compliance: Performance Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Rosenberg, Michael I.

    Energy codes and standards have provided significant increases in building efficiency over the last 38 years, since the first national energy code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. As the code matures, the prescriptive path becomes more complicated, and also more restrictive. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. Performance code paths are increasing in popularity; however, there remains a significant design team overhead in following the performance path, especially for smaller buildings. This paper focuses on development of one alternative format, prescriptive packages. A method to develop building-specific prescriptive packages is reviewed, based on multiple runs of prototypical building models that feed a parametric decision analysis to determine a set of packages with equivalent energy performance. The approach is designed to be cost-effective and flexible for the design team while achieving a desired level of energy efficiency performance. A demonstration of the approach based on mid-sized office buildings with two HVAC system types is shown, along with a discussion of potential applicability in the energy code process.
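
    A minimal sketch of the package-screening idea, assuming a parametric sweep of prototype-model simulations whose results are filtered against a reference package; the response function, measure options, and target below are invented for illustration and are not taken from the paper.

        from itertools import product

        # Hypothetical stand-in for batch simulation output: annual energy use
        # intensity (kWh/m2/yr) of the prototype building as a function of measures.
        def simulated_eui(wall_r, window_u, lpd):
            return 180 - 2.0 * wall_r - 25 * (2.0 - window_u) - 12 * (1.0 - lpd)

        wall_r_options = [15, 20, 25]          # insulation R-values
        window_u_options = [2.0, 1.6, 1.2]     # window U-factors
        lpd_options = [1.0, 0.8, 0.6]          # lighting power density (W/ft2)

        # performance of a reference (baseline) package defines the target
        target_eui = simulated_eui(20, 1.6, 0.8)

        packages = [
            dict(wall_r=r, window_u=u, lpd=l, eui=simulated_eui(r, u, l))
            for r, u, l in product(wall_r_options, window_u_options, lpd_options)
            if simulated_eui(r, u, l) <= target_eui
        ]
        # 'packages' is the menu of equivalent-or-better prescriptive packages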

  18. Calculation of the Frequency Distribution of the Energy Deposition in DNA Volumes by Heavy Ions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2012-01-01

    Radiation quality effects are largely determined by energy deposition in small volumes of characteristic sizes less than 10 nm representative of short segments of DNA, the DNA nucleosome, or molecules initiating oxidative stress in the nucleus, mitochondria, or extra-cellular matrix. On this scale, qualitatively distinct types of molecular damage are possible for high linear energy transfer (LET) radiation such as heavy ions compared to low LET radiation. Unique types of DNA lesions or oxidative damages are the likely outcome of the energy deposition. The frequency distribution for energy imparted to 1-20 nm targets per unit dose or particle fluence is a useful descriptor and can be evaluated as a function of impact parameter from an ion's track. In this work, the simulation of 1-Gy irradiation of a cubic volume of 5 micron side by: 1) 450 1H+ ions, 300 MeV; 2) 10 12C6+ ions, 290 MeV/amu; and 3) 56Fe26+ ions, 1000 MeV/amu was done with the Monte-Carlo simulation code RITRACKS. Cylindrical targets are generated in the irradiated volume, with random orientation. The frequency distribution curves of the energy deposited in the targets are obtained. For small targets (i.e. <25 nm size), the probability of an ion to hit a target is very small; therefore a large number of tracks and targets as well as a large number of histories are necessary to obtain statistically significant results. This simulation is very time-consuming and is difficult to perform by using the original version of RITRACKS. Consequently, the code RITRACKS was adapted to use multiple CPUs on a workstation or on a computer cluster. To validate the simulation results, similar calculations were performed using targets with fixed position and orientation, for which experimental data are available [5]. Since the probability of single- and double-strand breaks in DNA as a function of energy deposited is well known, the results that were obtained can be used to estimate the yield of DSB, and can be extended to include other targeted or non-targeted effects.
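
    A minimal sketch of the target-scoring step, assuming a list of track energy-deposition points is already available; spherical targets are used here to keep the example short, whereas the study uses randomly oriented cylinders, and the point cloud below is a toy stand-in rather than RITRACKS output.

        import numpy as np

        rng = np.random.default_rng(1)

        def deposit_frequency(points, energies, box_nm, target_radius_nm, n_targets):
            """Frequency distribution of energy imparted to randomly placed
            spherical targets. 'points' (N,3) and 'energies' (N,) would come from
            a track-structure code; any point cloud can be supplied."""
            centers = rng.uniform(0.0, box_nm, size=(n_targets, 3))
            per_target = np.zeros(n_targets)
            for i, c in enumerate(centers):
                d2 = np.sum((points - c) ** 2, axis=1)
                per_target[i] = energies[d2 <= target_radius_nm**2].sum()
            hit = per_target[per_target > 0.0]          # keep only targets that were hit
            return np.histogram(hit, bins=50)

        # toy stand-in for one track: 10^5 deposition events in a 5000 nm box
        pts = rng.uniform(0, 5000, size=(100_000, 3))
        eng = rng.exponential(40.0, size=100_000)       # eV per event, illustrative
        hist, edges = deposit_frequency(pts, eng, box_nm=5000,
                                        target_radius_nm=10, n_targets=2000)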

  19. Hydrogen Generation by KOH-Ethanol Plasma Electrolysis Using Double Compartment Reactor

    NASA Astrophysics Data System (ADS)

    Saksono, Nelson; Sasiang, Johannes; Dewi Rosalina, Chandra; Budikania, Trisutanti

    2018-03-01

    This study has successfully investigated the generation of hydrogen using a double compartment reactor with a plasma electrolysis process. The double compartment reactor is designed to achieve a high discharge voltage and high concentration, and also to reduce the energy consumption. The experimental results showed that the use of the double compartment reactor increased the productivity ratio to 90 times that of the Faraday electrolysis process. The highest hydrogen production obtained is 26.50 mmol/min, while the energy consumption can reach 1.71 kJ/mmol H2 at 0.01 M KOH solution. It was shown that KOH concentration, addition of ethanol, cathode depth, and temperature have important effects on hydrogen production, energy consumption, and process efficiency.
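
    The productivity ratio and specific energy consumption quoted above can be formed from first principles by comparing the measured hydrogen rate with the Faraday-law prediction; a minimal sketch follows, with placeholder current and voltage values that are not the paper's raw data.

        FARADAY = 96485.0  # C/mol

        def faraday_h2_rate(current_a):
            """H2 production predicted by Faraday's law (2 electrons per H2), in mol/s."""
            return current_a / (2.0 * FARADAY)

        def plasma_electrolysis_metrics(current_a, voltage_v, h2_mmol_per_min):
            """Productivity ratio relative to Faraday electrolysis and specific
            energy consumption in kJ per mmol of H2."""
            measured_mol_s = h2_mmol_per_min / 1000.0 / 60.0
            ratio = measured_mol_s / faraday_h2_rate(current_a)
            energy_kj_per_mmol = voltage_v * current_a / (measured_mol_s * 1e3) / 1000.0
            return ratio, energy_kj_per_mmol

        # Placeholder operating point (not the paper's data); measured rate from abstract
        ratio, spec_energy = plasma_electrolysis_metrics(current_a=1.5, voltage_v=700,
                                                         h2_mmol_per_min=26.5)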

  20. Compliance Verification Paths for Residential and Commercial Energy Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.; Makela, Eric J.; Fannin, Jerica D.

    2011-10-10

    This report looks at different ways to verify energy code compliance and to ensure that the energy efficiency goals of an adopted document are achieved. Conformity assessment is the body of work that ensures compliance, including activities that can ensure residential and commercial buildings satisfy energy codes and standards. This report identifies and discusses conformity-assessment activities and provides guidance for conducting assessments.

  1. High Performance Double-null Plasma Operation Under Radiating Divertor Conditions

    NASA Astrophysics Data System (ADS)

    Petrie, T. W.; Osborne, T.; Leonard, A. W.; Luce, T. C.; Petty, C. C.; Fenstermacher, M. E.; Lasnier, C. J.; Turco, F.; Watkins, J. G.

    2017-10-01

    We report on heat flux reduction experiments in which deuterium/neon- or deuterium/argon-based radiating mantle/divertor approaches were applied to high performance double-null (DN) plasmas (H98 = 1.4-1.7, βN ≈ 4, q95 ≈ 6) with a combined neutral beam and ECH power input PIN ≈ 15 MW. When the radial location of the ECH deposition is close to the magnetic axis (e.g., ρ ≤ 0.20), the radial profiles of both injected and intrinsic impurities are flat to somewhat hollow. For deposition farther out (e.g., ρ = 0.45), the impurity profiles are highly peaked on axis, which would make high performance DN operation with impurity injection more problematical. Comparison of neon with argon 'seeding' with respect to core dilution, energy confinement, and heat flux reduction under these conditions favors argon. Conditions that lead to an improved τE as predicted previously from ELITE code analysis, i.e., very high PIN, proximity to magnetic balance, and higher q95, are largely consistent with these data. Work was supported by the US DOE under DE-FC02-04ER54698, DE-AC52-07NA27344, DE-FG02-04ER54761, and DE-AC04-94AL85000.

  2. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
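
    A minimal sketch of the erasure-marking idea described above: bits whose decoder reliability falls below a threshold are erased before Reed-Solomon decoding, and a block is correctable when twice the number of errors plus the number of erasures stays within the code's minimum distance. This is the generic errors-and-erasures condition, not the paper's specific decoder.

        def mark_erasures(decoded_bits, reliabilities, threshold):
            """Replace low-reliability bits with None (an erasure) before RS decoding."""
            return [b if r >= threshold else None
                    for b, r in zip(decoded_bits, reliabilities)]

        def rs_block_decodable(n_errors, n_erasures, d_min):
            """Errors-and-erasures condition for a Reed-Solomon code:
            correctable iff 2*errors + erasures <= d_min - 1."""
            return 2 * n_errors + n_erasures <= d_min - 1

        # e.g. an RS(255,223) code has d_min = 33, so 10 errors plus 12 erasures is fine
        assert rs_block_decodable(10, 12, d_min=33)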

  3. Interlimb relation during the double support phase of gait: an electromyographic, mechanical and energy-based analysis.

    PubMed

    Sousa, Andreia S P; Silva, Augusta; Tavares, João Manuel R S

    2013-03-01

    The purpose of this study is to analyse the interlimb relation and the influence of mechanical energy on metabolic energy expenditure during gait. In total, 22 subjects were monitored for electromyographic activity, ground reaction forces and VO2 consumption (metabolic power) during gait. The results demonstrate a moderate negative correlation between the activity of tibialis anterior, biceps femoris and vastus medialis of the trailing limb during the transition between mid-stance and double support and that of the leading limb during double support for the same muscles, and between these and gastrocnemius medialis and soleus of the trailing limb during double support. Trailing limb soleus during the transition between mid-stance and double support was positively correlated to leading limb tibialis anterior, vastus medialis and biceps femoris during double support. Also, the trailing limb centre of mass mechanical work was strongly influenced by the leading limb, although only the mechanical power related to forward progression of both limbs was correlated to metabolic power. These findings demonstrate a consistent interlimb relation in terms of electromyographic activity and centre of mass mechanical work, with the relations occurring in the plane of forward progression being the most important for gait energy expenditure.

  4. Modeling of the EAST ICRF antenna with ICANT Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin Chengming; Zhao Yanping; Colas, L.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  5. Modeling of the EAST ICRF antenna with ICANT Code

    NASA Astrophysics Data System (ADS)

    Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.

    2007-09-01

    A Resonant Double Loop (RDL) antenna for ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  6. Theoretical Thermal Evaluation of Energy Recovery Incinerators

    DTIC Science & Technology

    1985-12-01


  7. A-to-I editing of coding and non-coding RNAs by ADARs

    PubMed Central

    Nishikura, Kazuko

    2016-01-01

    Adenosine deaminases acting on RNA (ADARs) convert adenosine to inosine in double-stranded RNA. This A-to-I editing occurs not only in protein-coding regions of mRNAs, but also frequently in non-coding regions that contain inverted Alu repeats. Editing of coding sequences can result in the expression of functionally altered proteins that are not encoded in the genome, whereas the significance of Alu editing remains largely unknown. Certain microRNA (miRNA) precursors are also edited, leading to reduced expression or altered function of mature miRNAs. Conversely, recent studies indicate that ADAR1 forms a complex with Dicer to promote miRNA processing, revealing a new function of ADAR1 in the regulation of RNA interference. PMID:26648264

  8. Main doorway to the display area, straight ahead. Double doors ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Main doorway to the display area, straight ahead. Double doors with "top secret" alert lights, coded doorbell, and one way mirror. Stairway to second floor and basement is at the left, as well as the secondary entrance at the east part of the north front. View to east - March Air Force Base, Strategic Air Command, Combat Operations Center, 5220 Riverside Drive, Moreno Valley, Riverside County, CA

  9. Photodissociation dynamics of Mo(CO)6 at 266 and 355 nm: CO photofragment kinetic-energy and internal-state distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntin, S.A.; Cavanagh, R.R.; Richter, L.J.

    1991-06-15

    The internal-state and kinetic-energy distributions of the CO photofragments from the 266 and 355 nm photolysis of Mo(CO)6 have been measured under collision-free conditions using vacuum-ultraviolet laser-induced fluorescence. The rotational-state distributions for CO(v″=0) and (v″=1) are well represented by Boltzmann distributions with effective rotational "temperatures" of Tr(v″=0) = 950±70 K and Tr(v″=1) = 935±85 K for 266 nm and Tr(v″=0) = 750±70 K and Tr(v″=1) = 1150±250 K for 355 nm photolysis. The CO(v″=1)/(v″=0) vibrational-state ratios for 266 and 355 nm photolysis are 0.19±0.03 and 0.09±0.02, respectively. The Doppler-broadened CO photofragment line shapes indicate that the translational energy distributions are isotropic and Maxwellian. There is no photolysis-laser wavelength or internal-state dependence to the extracted translational "temperatures." The observed energy partitioning and kinetic-energy distributions are inconsistent with an impulsive ejection of a single CO ligand. CO photofragment line shapes for 266 nm photolysis are not consistent with a mechanism involving the repulsive ejection of the first CO ligand, followed by the statistical decomposition of the Mo(CO)5 fragment. While phase-space theories do not predict quantitatively the energy disposal, the photodissociation mechanism appears to be dominated by statistical considerations. The results also suggest that the photodissociation of Mo(CO)6 at 266 and 355 nm involves a common initial "state" and that similar exit channel effects are operative.

  10. Evolution of a double-front Rayleigh-Taylor system using a graphics-processing-unit-based high-resolution thermal lattice-Boltzmann model.

    PubMed

    Ripesi, P; Biferale, L; Schifano, S F; Tripiccione, R

    2014-04-01

    We study the turbulent evolution originating from a system subjected to a Rayleigh-Taylor instability with a double density interface, at high resolution, in a two-dimensional geometry, using a highly optimized thermal lattice-Boltzmann code for GPUs. Our initial condition, given by the superposition of three layers with three different densities, leads to the development of two Rayleigh-Taylor fronts that expand upward and downward and collide in the middle of the cell. By using high-resolution numerical data we highlight the effects induced by the collision of the two turbulent fronts in the long-time asymptotic regime. We also provide details on the optimized lattice-Boltzmann code that we have run on a cluster of GPUs.

  11. Propagation of spiking regularity and double coherence resonance in feedforward networks.

    PubMed

    Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok

    2012-03-01

    We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. It is interesting that double coherence resonance (DCR) with the combination of synaptic input correlation and noise intensity is finally attained after processing layer by layer in FFNs. Furthermore, inhibitory connections also play essential roles in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters in modulating both rate coding and the order of temporal coding.

  12. Mandating better buildings: a global review of building codes and prospects for improvement in the United States

    DOE PAGES

    Sun, Xiaojing; Brown, Marilyn A.; Cox, Matt; ...

    2015-03-11

    This paper provides a global overview of the design, implementation, and evolution of building energy codes. Reflecting alternative policy goals, building energy codes differ significantly across the United States, the European Union, and China. This review uncovers numerous innovative practices including greenhouse gas emissions caps per square meter of building space, energy performance certificates with retrofit recommendations, and inclusion of renewable energy to achieve “nearly zero-energy buildings”. These innovations motivated an assessment of an aggressive commercial building code applied to all US states, requiring both new construction and buildings with major modifications to comply with the latest version of the ASHRAE 90.1 Standards. Using the National Energy Modeling System (NEMS), we estimate that by 2035, such building codes in the United States could reduce energy for space heating, cooling, water heating and lighting in commercial buildings by 16%, 15%, 20% and 5%, respectively. Impacts on different fuels and building types, energy rates and bills as well as pollution emission reductions are also examined.

  13. Axisymmetric Shearing Box Models of Magnetized Disks

    NASA Astrophysics Data System (ADS)

    Guan, Xiaoyue; Gammie, Charles F.

    2008-01-01

    The local model, or shearing box, has proven a useful model for studying the dynamics of astrophysical disks. Here we consider the evolution of magnetohydrodynamic (MHD) turbulence in an axisymmetric local model in order to evaluate the limitations of global axisymmetric models. An exploration of the model parameter space shows the following: (1) The magnetic energy and α decay approximately exponentially after an initial burst of turbulence. For our code, HAM, the decay time τ ∝ Res, where Res/2 is the number of zones per scale height. (2) In the initial burst of turbulence the magnetic energy is amplified by a factor proportional to Res^(3/4) λR, where λR is the radial scale of the initial field. This scaling applies only if the most unstable wavelength of the magnetorotational instability is resolved and the final field is subthermal. (3) The shearing box is a resonant cavity and in linear theory exhibits a discrete set of compressive modes. These modes are excited by the MHD turbulence and are visible as quasi-periodic oscillations (QPOs) in temporal power spectra of fluid variables at low spatial resolution. At high resolution the QPOs are hidden by a noise continuum. (4) In axisymmetry disk turbulence is local. The correlation function of the turbulence is limited in radial extent, and the peak magnetic energy density is independent of the radial extent of the box LR for LR > 2H. (5) Similar results are obtained for the HAM, ZEUS, and ATHENA codes; ATHENA has an effective resolution that is nearly double that of HAM and ZEUS. (6) Similar results are obtained for 2D and 3D runs at similar resolution, but only for particular choices of the initial field strength and radial scale of the initial magnetic field.
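
    The quoted scaling of the decay time with resolution can be checked by fitting an exponential to the post-burst magnetic-energy history of each run; a minimal sketch with a toy time series (not simulation output) follows.

        import numpy as np

        def decay_time(t, magnetic_energy):
            """Least-squares fit of E_B(t) ~ exp(-t/tau) on the decaying part."""
            slope, _ = np.polyfit(t, np.log(magnetic_energy), 1)
            return -1.0 / slope

        # toy data standing in for a run's post-burst magnetic-energy history
        t = np.linspace(0, 50, 200)
        noise = 1 + 0.05 * np.random.default_rng(2).standard_normal(t.size)
        e_b = 1e-3 * np.exp(-t / 12.0) * noise
        tau = decay_time(t, e_b)   # ~12 here; repeating per resolution tests tau ∝ Res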

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This document contains the State Building Energy Codes Status prepared by Pacific Northwest National Laboratory for the U.S. Department of Energy under Contract DE-AC06-76RL01830 and dated September 1996. The U.S. Department of Energy's Office of Codes and Standards has developed this document to provide an information resource for individuals interested in energy efficiency of buildings and the relevant building energy codes in each state and U.S. territory. This is considered to be an evolving document and will be updated twice a year. In addition, special state updates will be issued as warranted.

  15. Electric and magnetic field modulated energy dispersion, conductivity and optical response in double quantum wire with spin-orbit interactions

    NASA Astrophysics Data System (ADS)

    Karaaslan, Y.; Gisi, B.; Sakiroglu, S.; Kasapoglu, E.; Sari, H.; Sokmen, I.

    2018-02-01

    We study the influence of electric field on the electronic energy band structure, zero-temperature ballistic conductivity and optical properties of a double quantum wire. The system, described by a double-well anharmonic confinement potential, is exposed to a perpendicular magnetic field and Rashba and Dresselhaus spin-orbit interactions. Numerical results show that the combined effects of internal and external agents cause the formation of crossing, anticrossing and camel-back/anomaly structures and lateral, downward/upward shifts in the energy dispersion. The anomalies in the energy subbands give rise to oscillation patterns in the ballistic conductance, and the energy shifts bring about shifts in the peak positions of the optical absorption coefficients and refractive index changes.

  16. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  17. Capacitance of carbon-based electrical double-layer capacitors.

    PubMed

    Ji, Hengxing; Zhao, Xin; Qiao, Zhenhua; Jung, Jeil; Zhu, Yanwu; Lu, Yalin; Zhang, Li Li; MacDonald, Allan H; Ruoff, Rodney S

    2014-01-01

    Experimental electrical double-layer capacitances of porous carbon electrodes fall below ideal values, thus limiting the practical energy densities of carbon-based electrical double-layer capacitors. Here we investigate the origin of this behaviour by measuring the electrical double-layer capacitance in one to five-layer graphene. We find that the capacitances are suppressed near neutrality, and are anomalously enhanced for thicknesses below a few layers. We attribute the first effect to quantum capacitance effects near the point of zero charge, and the second to correlations between electrons in the graphene sheet and ions in the electrolyte. The large capacitance values imply gravimetric energy storage densities in the single-layer graphene limit that are comparable to those of batteries. We anticipate that these results shed light on developing new theoretical models in understanding the electrical double-layer capacitance of carbon electrodes, and on opening up new strategies for improving the energy density of carbon-based capacitors.
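
    The suppression near neutrality can be pictured with the usual series combination of quantum and double-layer capacitance; the sketch below uses the textbook zero-temperature expression for single-layer graphene and an illustrative double-layer value, not parameters extracted from the measurements reported above.

        import numpy as np

        E_CHARGE = 1.602e-19   # C
        HBAR = 1.055e-34       # J*s
        V_F = 1.0e6            # graphene Fermi velocity, m/s

        def quantum_capacitance(e_fermi_ev):
            """Ideal single-layer-graphene quantum capacitance per area, F/m^2:
            C_Q = 2 e^2 |E_F| / (pi (hbar v_F)^2)  (textbook zero-temperature form)."""
            e_f = abs(e_fermi_ev) * E_CHARGE
            return 2 * E_CHARGE**2 * e_f / (np.pi * (HBAR * V_F) ** 2)

        def total_capacitance(e_fermi_ev, c_dl):
            """Series combination of quantum and electric double-layer capacitance."""
            c_q = quantum_capacitance(e_fermi_ev)
            return c_q * c_dl / (c_q + c_dl)

        c_dl = 0.20   # F/m^2 (20 uF/cm^2), an illustrative double-layer value
        for ef in (0.02, 0.1, 0.3):   # eV away from the neutrality point
            print(ef, total_capacitance(ef, c_dl))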

  18. PTF11mnb: First analog of supernova 2005bf: Long-rising, double-peaked supernova Ic from a massive progenitor*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddia, F.; Sollerman, J.; Fremling, C.

    The aim is to study PTF11mnb, a He-poor supernova (SN) whose light curves resemble those of SN 2005bf, a peculiar double-peaked stripped-envelope (SE) SN, until the declining phase after the main peak. We investigate the mechanism powering its light curve and the nature of its progenitor star. Methods. Optical photometry and spectroscopy of PTF11mnb are presented. We compared light curves, colors and spectral properties to those of SN 2005bf and normal SE SNe. We built a bolometric light curve and modeled it with the SuperNova Explosion Code (SNEC) hydrodynamical code, using the explosion of a MESA progenitor star, and with semi-analytic models. Results. The light curve of PTF11mnb turns out to be similar to that of SN 2005bf until ~50 d, when the main (secondary) peak occurs at -18.5 mag. The early peak occurs at ~20 d and is about 1.0 mag fainter. After the main peak, the decline rate of PTF11mnb is remarkably slower than what was observed in SN 2005bf, and it traces well the 56Co decay rate. The spectra of PTF11mnb reveal a SN Ic and show no traces of He, unlike in the case of SN Ib 2005bf, although they have velocities comparable to those of SN 2005bf. The whole evolution of the bolometric light curve is well reproduced by the explosion of a massive (Mej = 7.8 M⊙), He-poor star characterized by a double-peaked 56Ni distribution, a total 56Ni mass of 0.59 M⊙, and an explosion energy of 2.2 × 10^51 erg. Alternatively, a normal SN Ib/c explosion (M(56Ni) = 0.11 M⊙, EK = 0.2 × 10^51 erg, Mej = 1 M⊙) can power the first peak while a magnetar, with a magnetic field characterized by B = 5.0 × 10^14 G and a rotation period of P = 18.1 ms, provides energy for the main peak. The early g-band light curve can be fit with a shock-breakout cooling tail or with an extended envelope model from which a radius of at least 30 R⊙ is obtained. Conclusions. We presented a scenario where PTF11mnb was the explosion of a massive, He-poor star characterized by a double-peaked 56Ni distribution. In this case, the ejecta mass and the absence of He imply a large ZAMS mass (~85 M⊙) for the progenitor, which most likely was a Wolf-Rayet star surrounded by an extended envelope formed either by a pre-SN eruption or due to a binary configuration. Alternatively, PTF11mnb could be powered by a SE SN with a less massive progenitor during the first peak and by a magnetar afterward.

  19. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to as large systems as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.

  20. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  1. 76 FR 13101 - Building Energy Codes Program: Presenting and Receiving Comments to DOE Proposed Changes to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    .... The IgCC is intended to provide green model building code provisions for new and existing commercial... DEPARTMENT OF ENERGY 10 CFR Part 430 [Docket No. EERE-2011-BT-BC-0009] Building Energy Codes Program: Presenting and Receiving Comments to DOE Proposed Changes to the International Green Construction...

  2. Software Tools for Stochastic Simulations of Turbulence

    DTIC Science & Technology

    2015-08-28

    client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code FLASH; and two locally constructed fluid codes.

  3. Preliminary calibration of the ACP safeguards neutron counter

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.

    2007-10-01

    The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates of the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code, and the results for the FT8 CAP multiplicity tally option with the values of ε, fd, and ft measured with a strong source most closely match the measurement results, to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf, and the calibration coefficient for a non-multiplying sample is 2.78×10^5 (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
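
    With the calibration coefficient quoted for non-multiplying samples, a measured Doubles rate converts directly to a 244Cm mass; a minimal sketch follows, valid only in that non-multiplying limit and using placeholder count rates.

        CAL_COEFF = 2.78e5   # Doubles counts/s per g of 244Cm (non-multiplying samples)

        def cm244_mass_g(doubles_rate, doubles_rate_err=0.0):
            """Convert a measured Doubles rate (counts/s) to grams of 244Cm,
            assuming the ASNC non-multiplying calibration line through the origin."""
            mass = doubles_rate / CAL_COEFF
            return mass, doubles_rate_err / CAL_COEFF

        # placeholder rates, not measured values
        mass, err = cm244_mass_g(doubles_rate=1390.0, doubles_rate_err=14.0)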

  4. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency domain modelling tools were not included in the WEC3 project.

  5. Energy and Environment Guide to Action - Chapter 4.3: Building Codes for Energy Efficiency

    EPA Pesticide Factsheets

    Provides guidance and recommendations for establishing, implementing, and evaluating state building codes for energy efficiency, which improve energy efficiency in new construction and major renovations. State success stories are included for reference.

  6. Assessing Potential Energy Cost Savings from Increased Energy Code Compliance in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.

    The US Department of Energy’s most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code, and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study using a single building type and climate zone to answer those questions.

  7. A Computational Model for Observation in Quantum Mechanics.

    DTIC Science & Technology

    1987-03-16

    Contents include the double slit interferometer experiment, the EPR paradox experiment, an overview of the computational model, and its implementation, with code for the EPR paradox experiment and the double slit interferometer experiment. ... The EPR paradox experiment (see section 2.3) is hard to resolve with this class of models, collectively called hidden-variable models.

  8. The problem with brain GUTs: conflation of different senses of "prediction" threatens metaphysical disaster.

    PubMed

    Anderson, Michael L; Chemero, Tony

    2013-06-01

    Clark appears to be moving toward epistemic internalism, which he once rightly rejected. This results from a double over-interpretation of predictive coding's significance. First, Clark argues that predictive coding offers a Grand Unified Theory (GUT) of brain function. Second, he over-reads its epistemic import, perhaps even conflating causal and epistemic mediators. We argue instead for a plurality of neurofunctional principles.

  9. Analysis of unmitigated large break loss of coolant accidents using MELCOR code

    NASA Astrophysics Data System (ADS)

    Pescarini, M.; Mascari, F.; Mostacci, D.; De Rosa, F.; Lombardo, C.; Giannetti, F.

    2017-11-01

    In the framework of severe accident research activity developed by ENEA, a MELCOR nodalization of a generic Pressurized Water Reactor of 900 MWe has been developed. The aim of this paper is to present the analysis of MELCOR code calculations concerning two independent unmitigated large break loss of coolant accident transients occurring in the cited type of reactor. In particular, the analysis and comparison of the transients initiated by an unmitigated double-ended cold leg rupture and an unmitigated double-ended hot leg rupture in loop 1 of the primary cooling system is presented herein. This activity has been performed focusing specifically on the in-vessel phenomenology that characterizes this kind of accident. The analysis of the thermal-hydraulic transient phenomena and the core degradation phenomena is therefore presented here. The analysis of the calculated data shows the capability of the code to reproduce the phenomena typical of these transients and permits their phenomenological study. A first sequence of main events is presented and shows that the cold leg break transient evolves faster than the hot leg break transient because of the position of the break. Further analyses are in progress to quantitatively assess the results of the code nodalization for accident management strategy definition and fission product source term evaluation.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael; Jonlin, Duane; Nadel, Steven

    Today’s building energy codes focus on prescriptive requirements for features of buildings that are directly controlled by the design and construction teams and verifiable by municipal inspectors. Although these code requirements have had a significant impact, they fail to influence a large slice of the building energy use pie – including not only miscellaneous plug loads, cooking equipment and commercial/industrial processes, but the maintenance and optimization of the code-mandated systems as well. Currently, code compliance is verified only through the end of construction, and there are no limits or consequences for the actual energy use in an occupied building. In the future, our suite of energy regulations will likely expand to include building efficiency, energy use or carbon emission budgets over their full life cycle. Intelligent building systems, extensive renewable energy, and a transition from fossil fuel to electric heating systems will likely be required to meet ultra-low-energy targets. This paper lays out the authors’ perspectives on how buildings may evolve over the course of the 21st century and the roles that codes and regulations will play in shaping those buildings of the future.

  11. Soft-photon emission effects and radiative corrections for electromagnetic processes at very high energies

    NASA Technical Reports Server (NTRS)

    Gould, R. J.

    1979-01-01

    Higher-order electromagnetic processes involving particles at ultrahigh energies are discussed, with particular attention given to Compton scattering with the emission of an additional photon (double Compton scattering). Double Compton scattering may have significance in the interaction of a high-energy electron with the cosmic blackbody photon gas. At high energies the cross section for double Compton scattering is large, though this effect is largely canceled by the effects of radiative corrections to ordinary Compton scattering. A similar cancellation takes place for radiative pair production and the associated radiative corrections to the radiationless process. This cancellation is related to the well-known cancellation of the infrared divergence in electrodynamics.

  12. Self-Calibration and Laser Energy Monitor Validations for a Double-Pulsed 2-Micron CO2 Integrated Path Differential Absorption Lidar Application

    NASA Technical Reports Server (NTRS)

    Refaat, Tamer F.; Singh, Upendra N.; Petros, Mulugeta; Remus, Ruben; Yu, Jirong

    2015-01-01

    Double-pulsed 2-micron integrated path differential absorption (IPDA) lidar is well suited for atmospheric CO2 remote sensing. The IPDA lidar technique relies on wavelength differentiation between strong and weak absorbing features of the gas normalized to the transmitted energy. In the double-pulse case, each shot of the transmitter produces two successive laser pulses separated by a short interval. Calibration of the transmitted pulse energies is required for accurate CO2 measurement. Design and calibration of a 2-micron double-pulse laser energy monitor is presented. The design is based on an InGaAs pin quantum detector. A high-speed photo-electromagnetic quantum detector was used for laser-pulse profile verification. Both quantum detectors were calibrated using a reference pyroelectric thermal detector. Calibration included comparing the three detection technologies in the single-pulsed mode, then comparing the quantum detectors in the double-pulsed mode. In addition, a self-calibration feature of the 2-micron IPDA lidar is presented. This feature allows one to monitor the transmitted laser energy, through residual scattering, with a single detection channel. This reduces the CO2 measurement uncertainty. IPDA lidar ground validation for CO2 measurement is presented for both calibrated energy monitor and self-calibration options. The calibrated energy monitor resulted in a lower CO2 measurement bias, while self-calibration resulted in a better CO2 temporal profiling when compared to the in situ sensor.
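
    In the IPDA technique the retrieved quantity is the differential absorption optical depth formed from the on-line and off-line returns, each normalized by its monitored transmitted pulse energy; the sketch below uses the standard IPDA relation with placeholder numbers, not the instrument's calibration data.

        import math

        def daod(p_on, p_off, e_on, e_off):
            """One-way differential absorption optical depth for an IPDA lidar:
            DAOD = 0.5 * ln[(P_off / E_off) / (P_on / E_on)],
            with hard-target returns P normalized by transmitted pulse energies E."""
            return 0.5 * math.log((p_off / e_off) / (p_on / e_on))

        # placeholder hard-target returns and monitored pulse energies
        print(daod(p_on=0.62, p_off=1.35, e_on=90e-3, e_off=88e-3))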

  13. Chromatic aberration and the roles of double-opponent and color-luminance neurons in color vision.

    PubMed

    Vladusich, Tony

    2007-03-01

    How does the visual cortex encode color? I summarize a theory in which cortical double-opponent color neurons perform a role in color constancy and a complementary set of color-luminance neurons function to selectively correct for color fringes induced by chromatic aberration in the eye. The theory may help to resolve an ongoing debate concerning the functional properties of cortical receptive fields involved in color coding.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Evans, Meredydd; Shi, Qing

    China will account for about half of the new construction globally in the coming decade. Its floorspace doubled from 1996 to 2011, and Chinese rural buildings alone have as much floorspace as all of U.S. residential buildings. Building energy consumption has also grown, increasing by over 40% since 1990. To curb building energy demand, the Chinese government has launched a series of policies and programs. Combined, this growth in buildings and renovations, along with the policies to promote green buildings, is creating a large market for energy efficiency products and services. This report assesses the impact of China’s policies on building energy efficiency and on the market for energy efficiency in the future. The first chapter of this report introduces the trends in China, drawing on both historical analysis and detailed modeling of the drivers behind changes in floorspace and building energy demand, such as economic and population growth, urbanization, and policy. The analysis describes the trends by region, building type and energy service. The second chapter discusses China’s policies to promote green buildings. China began developing building energy codes in the 1980s. Over time, the central government has increased the stringency of the code requirements and the extent of enforcement. The codes are mandatory in all new buildings and major renovations in China’s cities, and they have been a driving force behind the expansion of China’s markets for insulation, efficient windows, and other green building materials. China also has several other important policies to encourage efficient buildings, including the Three-Star Rating System (somewhat akin to LEED), financial incentives tied to efficiency, appliance standards, a phasing out of incandescent bulbs and promotion of efficient lighting, and several policies to encourage retrofits in existing buildings. In the third chapter, we take “deep dives” into the trends affecting key building components. This chapter examines insulation in walls and roofs; efficient windows and doors; heating, air conditioning and controls; and lighting. These markets have seen significant growth because of the strength of the construction sector but also because of the specific policies that require and promote efficient building components. At the same time, as requirements have become more stringent, there has been fierce competition, and quality has at times suffered, which in turn has created additional challenges. Next we examine existing buildings in chapter four. China has many Soviet-style, inefficient buildings built before stringent requirements for efficiency were more widely enforced. As a result, there are several specific market opportunities related to retrofits. These fall into three categories. First, China now has a code for retrofitting residential buildings in the north. Local governments have targets for the number of buildings they must retrofit each year, and they help finance the changes. The requirements focus on insulation, windows, and heat distribution. Second, the Chinese government recently decided to increase the scale of its retrofits of government and state-owned buildings. It hopes to achieve large-scale changes through energy service contracts, which creates an opportunity for energy service companies. Third, there is also a small but growing trend to apply energy service contracts to large commercial and residential buildings. This report assesses the impacts of China’s policies on building energy efficiency.
By examining the existing literature and interviewing stakeholders from the public, academic, and private sectors, the report seeks to offer in-depth insights into the opportunities and barriers for major market segments related to building energy efficiency. The report also discusses trends in building energy use, policies promoting building energy efficiency, and energy performance contracting for public building retrofits.

  15. 76 FR 57982 - Building Energy Codes Cost Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2011-BT-BC-0046] Building Energy Codes Cost Analysis Correction In notice document 2011-23236 beginning on page... heading “Table 1. Cash flow components” should read “Table 7. Cash flow components”. [FR Doc. C1-2011...

  16. Modification of the band offset in boronitrene

    NASA Astrophysics Data System (ADS)

    Obodo, K. O.; Andrew, R. C.; Chetty, N.

    2011-10-01

    Using density functional methods within the generalized gradient approximation implemented in the Quantum Espresso codes, we modify the band offset in a single layer of boronitrene by substituting a double line of carbon atoms. This effectively introduces a line of dipoles at the interface. We considered various junctions of this system within the zigzag and armchair orientations. Our results show that the “zigzag-short” structure is energetically most stable, with a formation energy of 0.502 eV and with a band offset of 1.51 eV. The “zigzag-long” structure has a band offset of 1.99 eV. The armchair structures are nonpolar, while the zigzag-single structures show a charge accumulation for the C-substituted B and charge depletion for the C-substituted N at the junction. Consequently there is no shifting of the bands.

  17. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and guidance on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  18. Building Standards and Codes for Energy Conservation

    ERIC Educational Resources Information Center

    Gross, James G.; Pierlert, James H.

    1977-01-01

    Current activity intended to lead to energy conservation measures in building codes and standards is reviewed by members of the Office of Building Standards and Codes Services of the National Bureau of Standards. For journal availability see HE 508 931. (LBH)

  19. Tunneling effect on double potential barriers GaAs and PbS

    NASA Astrophysics Data System (ADS)

    Prastowo, S. H. B.; Supriadi, B.; Ridlo, Z. R.; Prihandono, T.

    2018-04-01

    A simple model of the transport phenomenon of the tunnelling effect through a double barrier structure was developed. In this research we concentrate on how the variation of the energy of electrons entering the double potential barriers affects the transmission coefficient. The barriers use the semiconductor materials GaAs (gallium arsenide), with a band-gap energy of 1.424 eV and a lattice distance of 0.565 nm, and PbS (lead sulphide), with a band-gap energy of 0.41 eV and a lattice distance of 18 nm. The analysis of the tunnelling effect through the double potentials of GaAs and PbS uses Schrödinger's equation, continuity conditions, and the propagation matrix method to obtain the transmission coefficient. The maximum electron energy used is 1.0 eV, observed over the range 0.0025 eV-1.0 eV. The results show that the highest transmission coefficient is 0.9982, at an electron energy of 0.5123 eV, meaning the electron can pass through the barriers with a probability of 99.82%. Semiconductors made from GaAs and PbS are among the materials selected for designing semiconductor devices because the transmission coefficient is directly proportional to the bias voltage of the device. The theoretical analysis of the resonant tunnelling effect in double barriers was applied to design and develop new structures and combinations of materials for semiconductor devices (diodes, transistors, and integrated circuits).
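
    A minimal sketch of a propagation-matrix transmission calculation for a rectangular double barrier, assuming a single effective mass throughout (a simplification for a GaAs/PbS stack); the barrier heights, widths, and effective mass below are illustrative, not the paper's parameters.

        import numpy as np

        HBAR = 1.054571817e-34   # J*s
        M_E = 9.1093837015e-31   # kg
        EV = 1.602176634e-19     # J

        def transmission(E_eV, regions, m_eff=0.067 * M_E):
            """Transmission coefficient through piecewise-constant barriers by the
            propagation (transfer) matrix method. 'regions' is a list of
            (potential_eV, width_m) between two semi-infinite leads at zero potential.
            A single effective mass is assumed everywhere."""
            E = E_eV * EV

            def k_of(V_eV):
                return np.sqrt(2 * m_eff * (E - V_eV * EV) + 0j) / HBAR

            def D(k, x):   # maps (A, B) amplitudes to (psi, psi') at position x
                return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                                 [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

            potentials = [0.0] + [V for V, _ in regions] + [0.0]
            widths = [w for _, w in regions]
            boundaries = np.concatenate(([0.0], np.cumsum(widths)))
            ks = [k_of(V) for V in potentials]

            M = np.eye(2, dtype=complex)
            for j, x in enumerate(boundaries):       # match psi, psi' at each interface
                M = np.linalg.inv(D(ks[j + 1], x)) @ D(ks[j], x) @ M
            t = np.linalg.det(M) / M[1, 1]           # transmitted amplitude for unit input
            return float(np.real(ks[-1] / ks[0]) * abs(t) ** 2)

        # Two 0.4 eV barriers, 2 nm wide, separated by a 4 nm well (illustrative numbers)
        double_barrier = [(0.4, 2e-9), (0.0, 4e-9), (0.4, 2e-9)]
        for e in (0.05, 0.2, 0.35, 0.5):
            print(e, transmission(e, double_barrier))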

  20. WEC-SIM Phase 1 Validation Testing -- Numerical Modeling of Experiments: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley; Michelen, Carlos; Bosma, Bret

    2016-08-01

    The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation is necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.

  1. Study of a new design of p-N semiconductor detector array for nuclear medicine imaging by monte carlo simulation codes.

    PubMed

    Hajizadeh-Safar, M; Ghorbani, M; Khoshkharam, S; Ashrafi, Z

    2014-07-01

    Gamma camera is an important apparatus in nuclear medicine imaging. Its detection part consists of a scintillation detector with a heavy collimator. Substitution of semiconductor detectors for the scintillator in these cameras has been studied effectively. This study aims to introduce a new design of P-N semiconductor detector array for nuclear medicine imaging. A P-N semiconductor detector composed of N-type SnO2:F and P-type NiO:Li was introduced through simulation with the MCNPX Monte Carlo code. Its sensitivity to different factors such as thickness, dimension, and direction of the emission photons was investigated. It was then used to configure a new design of a one-dimensional array and to study its spatial resolution for nuclear medicine imaging. A one-dimensional array with 39 detectors was simulated to measure a predefined linear distribution of Tc-99m activity and its spatial resolution. The activity distribution was calculated from the detector responses through mathematical linear optimization using the LINPROG code in MATLAB. Three different configurations of the one-dimensional detector array were simulated: horizontal, vertical single-sided, and vertical double-sided. In all of these configurations, the energy windows around the photopeak were ±1%. The results show that the detector response increases with increasing dimension and thickness of the detector, with the highest sensitivity for emission photons 15-30° above the surface. The horizontal configuration of the detector array is not suitable for imaging of line activity sources. The activity distribution measured with the vertical double-sided configuration has no similarity with the emission sources and hence is not suitable for imaging purposes. The activity distribution measured with the vertical single-sided configuration has a good similarity with the sources. Therefore, it could be introduced as a suitable configuration for nuclear medicine imaging. It has been shown that using semiconductor P-N detectors such as P-NiO:Li and N-SnO2:F for gamma detection could possibly be applicable to the design of a one-dimensional array configuration with a suitable spatial resolution of 2.7 mm for nuclear medicine imaging.
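
    A minimal sketch of the reconstruction step, recovering a non-negative activity distribution from detector counts by minimizing the L1 misfit with a linear program (analogous in spirit to the MATLAB LINPROG step mentioned above); the response matrix and source values are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def recover_activity(response_matrix, measured_counts):
            """Recover a non-negative source activity distribution from detector
            counts by minimising the L1 misfit |R a - m| via a linear program."""
            R = np.asarray(response_matrix, dtype=float)
            m = np.asarray(measured_counts, dtype=float)
            n_det, n_src = R.shape
            # variables: [a (n_src), t (n_det)]; minimise sum(t) with |R a - m| <= t, a >= 0
            c = np.concatenate([np.zeros(n_src), np.ones(n_det)])
            A_ub = np.block([[R, -np.eye(n_det)], [-R, -np.eye(n_det)]])
            b_ub = np.concatenate([m, -m])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (n_src + n_det))
            return res.x[:n_src]

        # toy example: 5 detectors viewing 3 source pixels (response matrix is made up)
        R = np.array([[0.9, 0.1, 0.0],
                      [0.4, 0.5, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.0, 0.5, 0.5],
                      [0.0, 0.1, 0.9]])
        true_a = np.array([2.0, 0.0, 1.0])
        print(recover_activity(R, R @ true_a))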

  2. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  3. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun

    2004-05-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.

  4. Performance tuning of N-body codes on modern microprocessors: I. Direct integration with a hermite scheme on x86_64 architecture

    NASA Astrophysics Data System (ADS)

    Nitadori, Keigo; Makino, Junichiro; Hut, Piet

    2006-12-01

    The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code, running on these chips, can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N²-type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops, for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combinations of N log N codes, single-precision implementations, and performance on other microprocessors.
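
    The pair-wise force kernel referred to above is the part such tuned assembly accelerates. Below is a minimal, illustrative Python sketch of the quantities a fourth-order Hermite integrator needs per pair (the acceleration and its time derivative, the "jerk"); the softening and particle data are assumptions, and the tuned x86_64 code described in the paper computes the same sums far faster.

```python
# Minimal sketch of the pair-wise kernel evaluated by a 4th-order Hermite
# N-body integrator: softened acceleration and jerk accumulated over all pairs.
# Softening eps and the particle data are illustrative assumptions.
import numpy as np

def acc_jerk(pos, vel, mass, eps=1e-4):
    n = len(mass)
    acc = np.zeros((n, 3))
    jerk = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]                 # relative position
            dv = vel[j] - vel[i]                 # relative velocity
            r2 = dr @ dr + eps * eps             # softened squared distance
            inv_r3 = r2 ** -1.5
            rv = (dr @ dv) / r2
            acc[i] += mass[j] * inv_r3 * dr
            jerk[i] += mass[j] * inv_r3 * (dv - 3.0 * rv * dr)
    return acc, jerk

rng = np.random.default_rng(1)
pos, vel = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
acc0, jerk0 = acc_jerk(pos, vel, np.full(8, 1.0 / 8))
```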

  5. Conductance of graphene-based double-barrier nanostructures.

    PubMed

    Setare, M R; Jahani, D

    2010-12-22

    The effect of a mass gap on the conductance of graphene double-barrier heterojunctions is studied. By obtaining the 2D expression for the electronic transport of the low energy excitations of pure graphene through double-barrier systems, it is found that the conductivity of these structures does not depend on the type of charge carriers in the zones of the electric field. However, a finite induced gap in the graphene spectrum makes conductivity dependent on the energy band index. We also discuss a few controversies concerning double-barrier systems stemming from an improper choice of the scattering angle. Then it is observed that, for some special values of the incident energy and potential's height, graphene junctions behave like left-handed materials, resulting in a maximum value for the conductivity.

  6. Pulsational Pair-instability Model for Superluminous Supernova PTF12dam: Interaction and Radioactive Decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolstov, Alexey; Nomoto, Ken’ichi; Blinnikov, Sergei

    2017-02-01

    Being a superluminous supernova, PTF12dam can be explained by a ⁵⁶Ni-powered model, a magnetar-powered model, or an interaction model. We propose that PTF12dam is a pulsational pair-instability supernova, where the outer envelope of a progenitor is ejected during the pulsations. Thus, it is powered by a double energy source: radioactive decay of ⁵⁶Ni and a radiative shock in a dense circumstellar medium. To describe multicolor light curves and spectra, we use radiation-hydrodynamics calculations of the STELLA code. We found that light curves are well described in the model with 40 M⊙ ejecta and 20–40 M⊙ circumstellar medium. The ejected ⁵⁶Ni mass is about 6 M⊙, which results from explosive nucleosynthesis with large explosion energy (2–3)×10⁵² erg. In comparison with alternative scenarios of pair-instability supernova and magnetar-powered supernova, in the interaction model all the observed main photometric characteristics are well reproduced: multicolor light curves, color temperatures, and photospheric velocities.

  7. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    PubMed

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level that can inform the size and allocation of these incentives are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m² of new construction per year (California) to 270 MJ/m² of new construction per year (North Dakota). Annual benefits from the more stringent code total $506 million for all states, of which $372 million are from reductions in energy bills and $134 million are from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.2 per square meter annually in California to $2.5 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of social benefits.

  8. Overcoming Codes and Standards Barriers to Innovations in Building Energy Efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pamala C.; Gilbride, Theresa L.

    2015-02-15

    In this journal article, the authors discuss approaches to overcoming building code barriers to energy-efficiency innovations in home construction. Building codes have been a highly motivational force for increasing the energy efficiency of new homes in the United States in recent years. But as quickly as the codes seem to be changing, new products are coming to the market at an even more rapid pace, sometimes offering approaches and construction techniques unthought of when the current code was first proposed, which might have been several years before its adoption by various jurisdictions. Due to this delay, the codes themselves can become barriers to innovations that might otherwise be helping to further increase the efficiency, comfort, health or durability of new homes. The U.S. Department of Energy’s Building America, a program dedicated to improving the energy efficiency of America’s housing stock through research and education, is working with the U.S. housing industry through its research teams to help builders identify and remove code barriers to innovation in the home construction industry. The article addresses several approaches that builders use to achieve approval for innovative building techniques when code barriers appear to exist.

  9. Alternative Formats to Achieve More Efficient Energy Codes for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.; Rosenberg, Michael I.; Halverson, Mark A.

    2013-01-26

    This paper identifies and examines several formats or structures that could be used to create the next generation of more efficient energy codes and standards for commercial buildings. Pacific Northwest National Laboratory (PNNL) is funded by the U.S. Department of Energy’s Building Energy Codes Program (BECP) to provide technical support to the development of ANSI/ASHRAE/IES Standard 90.1. While the majority of PNNL’s ASHRAE Standard 90.1 support focuses on developing and evaluating new requirements, a portion of its work involves consideration of the format of energy standards. In its current working plan, the ASHRAE 90.1 committee has approved an energy goal of 50% improvement in Standard 90.1-2013 relative to Standard 90.1-2004, and will likely be considering higher improvement targets for future versions of the standard. To cost-effectively achieve the 50% goal in a manner that can gain stakeholder consensus, formats other than prescriptive must be considered. Alternative formats that reduce the reliance on prescriptive requirements may make it easier to achieve these aggressive efficiency levels in new codes and standards. The focus on energy code and standard formats is meant to explore approaches to presenting the criteria that will foster compliance, enhance verification, and stimulate innovation while saving energy in buildings. New formats may also make it easier for building designers and owners to design and build to the levels of efficiency called for in the new codes and standards. This paper examines a number of potential formats and structures, including prescriptive, performance-based (with sub-formats of performance equivalency and performance targets), capacity constraint-based, and outcome-based. The paper also discusses the pros and cons of each format from the viewpoint of code users and of code enforcers.

  10. Pseudocapacitive and hierarchically ordered porous electrode materials for supercapacitors

    NASA Astrophysics Data System (ADS)

    Saruhan, B.; Gönüllü, Y.; Arndt, B.

    2013-05-01

    Commercially available double-layer capacitors store energy in an electrostatic field. This field is formed as a double layer of charged particles arranged on two electrodes consisting mostly of activated carbon. Such double-layer capacitors exhibit a low energy density, so components with large electrode areas are required to obtain large capacitance. Our research focuses on the development of new electrode materials to realize the production of electrical energy storage systems with high energy density and high power density. Metal oxide based electrodes increase the energy density and the capacitance by adding pseudocapacitance to the static capacitance provided by the double-layer supercapacitor electrodes. The so-called hybrid asymmetric cell capacitors combine both types of energy storage in a single component. In this work, the production routes followed in our laboratories for the synthesis of nano-porous and aligned metal oxide electrodes, using electrochemical and sputter deposition as well as anodization methods, will be described. Our characterisation studies concentrate on electrodes having redox metal oxides (e.g. MnOx and WOx) and hierarchically aligned nano-porous Li-doped TiO2-NTs. The material-specific and electrochemical properties achieved with these electrodes will be presented.

  11. Comparison of direct DNA strand breaks induced by low energy electrons with different inelastic cross sections

    NASA Astrophysics Data System (ADS)

    Li, Jun-Li; Li, Chun-Yan; Qiu, Rui; Yan, Cong-Chong; Xie, Wen-Zhang; Zeng, Zhi; Tung, Chuan-Jong

    2013-09-01

    In order to study the influence of inelastic cross sections on the simulation of direct DNA strand breaks induced by low energy electrons, six different sets of inelastic cross section data were calculated and loaded into the Geant4-DNA code to calculate the DNA strand break yields under the same conditions. The six sets of the inelastic cross sections were calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with two different optical datasets and three different dispersion models, using the same Born corrections. Results show that the inelastic cross sections have a notable influence on the direct DNA strand break yields. The yields simulated with the inelastic cross sections based on Hayashi's optical data are greater than those based on Heller's optical data. The discrepancies are about 30-45% for the single strand break yields and 45-80% for the double strand break yields. Among the yields simulated with cross sections of the three different dispersion models, generally the greatest are those of the extended-Drude dispersion model, the second are those of the extended-oscillator-Drude dispersion model, and the last are those of the Ashley's δ-oscillator dispersion model. For the single strand break yields, the differences between the first two are very little and the differences between the last two are about 6-57%. For the double strand break yields, the biggest difference between the first two can be about 90% and the differences between the last two are about 17-70%.

  12. Reed Solomon codes for error control in byte organized computer memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8-bit bytes. Byte-oriented codes such as Reed Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
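
    The idea of finding error locations and values directly from the syndrome can be illustrated for the simplest case. The sketch below is not the extended RS construction of the report: it works over GF(2^8) with primitive polynomial 0x11d, assumes a zero codeword (so the received word equals the error pattern), and corrects a single byte error without any iterative locator-polynomial step.

```python
# Minimal sketch of decoding "directly from the syndrome": GF(2^8) arithmetic,
# a zero codeword assumed for illustration, and one injected byte error.
PRIM = 0x11d
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):          # extend so exponents up to 508 are valid
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return EXP[(LOG[a] - LOG[b]) % 255]

def syndrome(received, k):
    """S_k = sum_i r_i * alpha^(k*i), computed directly from the received word."""
    s = 0
    for i, r in enumerate(received):
        if r:
            s ^= EXP[(LOG[r] + k * i) % 255]
    return s

received = [0] * 255               # zero codeword ...
received[42] ^= 0x5A               # ... hit by a single byte error

S1, S2 = syndrome(received, 1), syndrome(received, 2)
loc = LOG[gf_div(S2, S1)]          # error location: alpha^j = S2/S1
val = gf_div(gf_mul(S1, S1), S2)   # error value:    e = S1^2/S2
received[loc] ^= val               # correct in place, no iterative algorithm
assert loc == 42 and not any(received)
```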

  13. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    NASA Astrophysics Data System (ADS)

    Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.

    2017-11-01

    This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW (APD) system showed a minimum effective received power (Psr) of -71 dBm at the receiver side, compared with only -61 dBm for the 2-D MDW (PIN) system. The results show that 2-D MDW (APD) has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The results also show that the MDW code has the capability to suppress PIIN and MAI.

  14. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    NASA Astrophysics Data System (ADS)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  15. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
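
    A least-squares surface fit in an orthogonal polynomial basis, of the kind used to replace the double interpolation, might look like the following sketch; the tabulated function, grid, and polynomial degrees are assumptions for illustration (the report's fits are for equation-of-state tables).

```python
# Minimal sketch (assumed data): replacing table look-up / double interpolation
# with a least-squares surface fit in an orthogonal (Chebyshev) polynomial basis.
import numpy as np
from numpy.polynomial import chebyshev as C

# hypothetical tabulated samples f(x, y) standing in for an equation-of-state table
x = np.linspace(-1, 1, 25)
y = np.linspace(-1, 1, 25)
X, Y = np.meshgrid(x, y, indexing="ij")
F = np.exp(-X) * (1.0 + 0.5 * Y + 0.1 * Y**2)

deg = (6, 6)                                   # polynomial degrees in x and y
V = C.chebvander2d(X.ravel(), Y.ravel(), deg)  # design matrix of basis products
coef, *_ = np.linalg.lstsq(V, F.ravel(), rcond=None)
coef = coef.reshape(deg[0] + 1, deg[1] + 1)

# evaluate the fitted surface anywhere in the table range, no interpolation needed
f_fit = C.chebval2d(0.3, -0.2, coef)
f_true = np.exp(-0.3) * (1.0 + 0.5 * (-0.2) + 0.1 * 0.04)
print("fit error at a test point:", abs(f_fit - f_true))
```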

  16. Recognition of Double Stranded RNA by Guanidine-Modified Peptide Nucleic Acids (GPNA)

    PubMed Central

    Gupta, Pankaj; Muse, Oluwatoyosi; Rozners, Eriks

    2011-01-01

    Double helical RNA has become an attractive target for molecular recognition because many non-coding RNAs play important roles in control of gene expression. Recently, we discovered that short peptide nucleic acids (PNA) bind strongly and sequence selectively to a homopurine tract of double helical RNA via triple helix formation. Herein we tested if the molecular recognition of RNA can be enhanced by α-guanidine modification of PNA. Our study was motivated by the discovery of Ly and co-workers that the guanidine modification greatly enhances the cellular delivery of PNA. Isothermal titration calorimetry showed that the guanidine-modified PNA (GPNA) had reduced affinity and sequence selectivity for triple helical recognition of RNA. The data suggested that in contrast to unmodified PNA, which formed a 1:1 PNA-RNA triple helix, GPNA preferred a 2:1 GPNA-RNA triplex-invasion complex. Nevertheless, promising results were obtained for recognition of biologically relevant double helical RNA. Consistent with enhanced strand invasion ability, GPNA derived from D-arginine recognized the transactivation response element (TAR) of HIV-1 with high affinity and sequence selectivity, presumably via Watson-Crick duplex formation. On the other hand, strong and sequence selective triple helices were formed by unmodified and nucelobase-modified PNAs and the purine rich strand of bacterial A-site. These results suggest that appropriate chemical modifications of PNA may enhance molecular recognition of complex non-coding RNAs. PMID:22146072

  17. Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport

    NASA Technical Reports Server (NTRS)

    Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.

    2008-01-01

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.

  18. Double Resonances and Spectral Scaling in the Weak Turbulence Theory of Rotating and Stratified Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1999-01-01

    In rotating turbulence, stably stratified turbulence, and in rotating stratified turbulence, heuristic arguments concerning the turbulent time scale suggest that the inertial range energy spectrum scales as k^(-2). From the viewpoint of weak turbulence theory, there are three possibilities which might invalidate these arguments: four-wave interactions could dominate three-wave interactions leading to a modified inertial range energy balance, double resonances could alter the time scale, and the energy flux integral might not converge. It is shown that although double resonances exist in all of these problems, they do not influence overall energy transfer. However, the resonance conditions cause the flux integral for rotating turbulence to diverge logarithmically when evaluated for a k^(-2) energy spectrum; therefore, this spectrum requires logarithmic corrections. Finally, the role of four-wave interactions is briefly discussed.
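
    As a minimal illustration of the kind of divergence referred to (assuming, purely for illustration, that inserting a k^(-2) spectrum reduces the flux integrand to something proportional to 1/k):

```latex
% illustration only, under the stated assumption about the reduced integrand
\varepsilon(K) \;\propto\; \int_{k_0}^{K} \frac{dk}{k}
  \;=\; \ln\frac{K}{k_0} \;\to\; \infty \quad \text{as } K \to \infty ,
```

    so a pure k^(-2) spectrum cannot carry a constant flux without a logarithmic correction, which is the conclusion stated above.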

  19. City Reach Code Technical Support Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Chen, Yan; Zhang, Jian

    This report describes and analyzes a set of energy efficiency measures that will save 20% energy over ASHRAE Standard 90.1-2013. The measures will be used to formulate a Reach Code for cities aiming to go beyond national model energy codes. A coalition of U.S. cities together with other stakeholders wanted to facilitate the development of voluntary guidelines and standards that can be implemented in stages at the city level to improve building energy efficiency. The coalition's efforts are being supported by the U.S. Department of Energy via Pacific Northwest National Laboratory (PNNL) and in collaboration with the New Buildings Institute.

  20. Accurate double many-body expansion potential energy surface for the 2¹A' state of N2O.

    PubMed

    Li, Jing; Varandas, António J C

    2014-08-28

    An accurate double many-body expansion potential energy surface is reported for the 2¹A' state of N2O. The new double many-body expansion (DMBE) form has been fitted to a wealth of ab initio points that have been calculated at the multi-reference configuration interaction level using the full-valence-complete-active-space wave function as reference and the cc-pVQZ basis set, and subsequently corrected semiempirically via the double many-body expansion-scaled external correlation method to extrapolate the calculated energies to the limit of a complete basis set and, most importantly, the limit of an infinite configuration interaction expansion. The topographical features of the novel potential energy surface are then examined in detail and compared with corresponding attributes of other potential functions available in the literature. Exploratory trajectories have also been run on this DMBE form with the quasiclassical trajectory method, with the thermal rate constant so determined at room temperature significantly enhancing agreement with experimental data.

  1. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  2. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Lulu; Zhang, Ming; Rassoul, Hamid K., E-mail: lzhao@fit.edu

    Previous investigations of the energy spectra of solar energetic particle (SEP) events revealed that the energy spectra observed at 1 au often show double power laws with break energies from one to tens of MeV/nuc. In order to determine whether the double power-law features result from the SEP source or from the interplanetary transport process from the Sun to 1 au, we separately analyze the SEP spectra in the decay phase, during which the transport effect is minimal. In this paper, we report three events observed by the Interplanetary Monitoring Platform 8 spacecraft, which occurred on 1977 September 19, 1977 November 22, and 1979 March 1. For the first two events, the event-integrated spectra of protons possess double power-law profiles with break energies in a range of several MeV to tens of MeV, while the spectra integrated over the decay (reservoir) phase yield single power laws. Moreover, a general trend from a double power law in the rising phase to a single power law in the decay phase is observed. For the third event, both the event-integrated and the reservoir spectra show double power-law features. However, the difference between the low- and high-energy power-law indices is smaller for the reservoir spectrum than for the event-integrated spectrum. These features were reproduced by solving the 1D diffusion equation analytically, and we suggest that the transport process, especially the diffusion process, plays an important role in breaking the energy spectra.
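
    A double (broken) power law of the kind described can be fitted as in the sketch below; the spectral form, parameter values, and synthetic data are assumptions for illustration, not the IMP-8 measurements.

```python
# Minimal sketch with synthetic data: fit a double (broken) power law
# J(E) = A * E**(-g1) for E < Eb and A * Eb**(g2-g1) * E**(-g2) for E >= Eb,
# recovering the two spectral indices and the break energy.
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(E, A, g1, g2, Eb):
    low = A * E**(-g1)
    high = A * Eb**(g2 - g1) * E**(-g2)   # continuous at the break energy Eb
    return np.where(E < Eb, low, high)

E = np.geomspace(1.0, 100.0, 40)                   # energies in MeV (assumed)
true = broken_power_law(E, 1e4, 1.5, 3.0, 10.0)
rng = np.random.default_rng(2)
J = true * rng.lognormal(sigma=0.05, size=E.size)  # 5% multiplicative scatter

p0 = (1e4, 1.0, 2.5, 5.0)                          # initial guess
popt, _ = curve_fit(broken_power_law, E, J, p0=p0, sigma=0.05 * J)
print("fitted indices and break energy:", popt[1], popt[2], popt[3])
```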

  4. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE PAGES

    Makwana, K. D.; Zhdankin, V.; Li, H.; ...

    2015-04-10

    We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  5. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D.; Zhdankin, V.; Li, H.

    We performed simulations of decaying magnetohydrodynamic (MHD) turbulence with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(-1.3). The kinetic code shows a spectral slope of k⊥^(-1.5) for the smaller simulation domain, and k⊥^(-1.3) for the larger domain. We then estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. Finally, this work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  6. Suppression of suprathermal ions from a colloidal microjet target containing SnO2 nanoparticles by using double laser pulses

    NASA Astrophysics Data System (ADS)

    Higashiguchi, Takeshi; Kaku, Masanori; Katto, Masahito; Kubodera, Shoichi

    2007-10-01

    We have demonstrated suppression of suprathermal ions from a colloidal microjet target plasma containing tin-dioxide (SnO2) nanoparticles irradiated by double laser pulses. We observed a significant decrease of the tin and oxygen ion signals in the charge-state-separated energy spectra when double laser pulses were applied. The peak energy of the singly ionized tin ions decreased from 9 to 3 keV when a preplasma was produced. The decrease in the ion energy, considered as debris suppression, is attributed to the interaction between an expanding low-density preplasma and the main laser pulse.

  7. The histone codes for meiosis.

    PubMed

    Wang, Lina; Xu, Zhiliang; Khawar, Muhammad Babar; Liu, Chao; Li, Wei

    2017-09-01

    Meiosis is a specialized process that produces haploid gametes from diploid cells by a single round of DNA replication followed by two successive cell divisions. It contains many special events, such as programmed DNA double-strand break (DSB) formation, homologous recombination, crossover formation and resolution. These events are associated with dynamically regulated chromosomal structures; the dynamic transcriptional regulation and chromatin remodeling are mainly modulated by histone modifications, termed 'histone codes'. The purpose of this review is to summarize the histone codes that are required for meiosis during spermatogenesis and oogenesis, involving meiosis resumption, meiotic asymmetric division and other cellular processes. We not only systematically review the functional roles of histone codes in meiosis but also discuss future trends and perspectives in this field. © 2017 Society for Reproduction and Fertility.

  8. Double-Row Capsulolabral Repair Increases Load to Failure and Decreases Excessive Motion.

    PubMed

    McDonald, Lucas S; Thompson, Matthew; Altchek, David W; McGarry, Michelle H; Lee, Thay Q; Rocchi, Vanna J; Dines, Joshua S

    2016-11-01

    Using a cadaver shoulder instability model and load-testing device, we compared biomechanical characteristics of double-row and single-row capsulolabral repairs. We hypothesized a greater reduction in glenohumeral motion and translation and a higher load to failure in a mattress double-row capsulolabral repair than in a single-row repair. In 6 matched pairs of cadaveric shoulders, a capsulolabral injury was created. One shoulder was repaired with a single-row technique, and the other with a double-row mattress technique. Rotational range of motion, anterior-inferior translation, and humeral head kinematics were measured. Load-to-failure testing measured stiffness, yield load, deformation at yield load, energy absorbed at yield load, load to failure, deformation at ultimate load, and energy absorbed at ultimate load. Double-row repair significantly decreased external rotation and total range of motion compared with single-row repair. Both repairs decreased anterior-inferior translation compared with the capsulolabral-injured condition; however, no differences existed between repair types. Yield load in the single-row group was 171.3 ± 110.1 N, and in the double-row group it was 216.1 ± 83.1 N (P = .02). Ultimate load to failure in the single-row group was 224.5 ± 121.0 N, and in the double-row group it was 373.9 ± 172.0 N (P = .05). Energy absorbed at ultimate load in the single-row group was 1,745.4 ± 1,462.9 N-mm, and in the double-row group it was 4,649.8 ± 1,930.8 N-mm (P = .02). In cases of capsulolabral disruption, double-row repair techniques may result in decreased shoulder rotational range of motion and improved load-to-failure characteristics. In cases of capsulolabral disruption, repair techniques with double-row mattress repair may provide more secure fixation. Double-row capsulolabral repair decreases shoulder motion and increases load to failure, yield load, and energy absorbed at yield load more than single-row repair. Published by Elsevier Inc.

  9. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
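
    Loop fusion, one of the optimizations named above, is illustrated schematically below. LULESH itself is C++; Python is used here only to show the transformation, and the field names and sizes are assumptions.

```python
# Schematic illustration of loop fusion: two passes over the same arrays are
# merged into one, so each element is touched once, reducing memory traffic.
import random

n = 100_000
p = [random.random() for _ in range(n)]   # pressure-like field
q = [random.random() for _ in range(n)]   # artificial-viscosity-like field
v = [random.random() for _ in range(n)]   # relative-volume-like field

def unfused(p, q, v):
    e = [0.0] * len(p)
    for i in range(len(p)):               # pass 1: streams p, q and writes e
        e[i] = p[i] + q[i]
    for i in range(len(p)):               # pass 2: re-reads e and v, writes e again
        e[i] *= v[i]
    return e

def fused(p, q, v):
    e = [0.0] * len(p)
    for i in range(len(p)):               # single pass over all three arrays
        e[i] = (p[i] + q[i]) * v[i]
    return e

assert unfused(p, q, v) == fused(p, q, v)   # same result, fewer memory passes
```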

  10. Effect of two doses of ginkgo biloba extract (EGb 761) on the dual-coding test in elderly subjects.

    PubMed

    Allain, H; Raoul, P; Lieury, A; LeCoz, F; Gandon, J M; d'Arbigny, P

    1993-01-01

    The subjects of this double-blind study were 18 elderly men and women (mean age, 69.3 years) with slight age-related memory impairment. In a crossover-study design, each subject received placebo or an extract of Ginkgo biloba (EGb 761) (320 mg or 600 mg) 1 hour before performing a dual-coding test that measures the speed of information processing; the test consists of several coding series of drawings and words presented at decreasing times of 1920, 960, 480, 240, and 120 ms. The dual-coding phenomenon (a break point between coding verbal material and images) was demonstrated in all the tests. After placebo, the break point was observed at 960 ms and dual coding beginning at 1920 ms. After each dose of the ginkgo extract, the break point (at 480 ms) and dual coding (at 960 ms) were significantly shifted toward a shorter presentation time, indicating an improvement in the speed of information processing.

  11. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  12. Procedure of recovery of pin-by-pin fields of energy release in the core of VVER-type reactor for the BIPR-8 code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.

    2014-12-15

    The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, together with the BIPR-8 algorithm that is used in the nodal computation of the reactor core and on which the recovery of the pin-by-pin energy-release fields is based. A description and results of the verification using the pin-by-pin energy-release recovery module and the TVS-M program are given.

  13. Potential Energy Cost Savings from Increased Commercial Energy Code Compliance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.

    2016-08-22

    An important question for commercial energy code compliance is: “How much energy cost savings can better compliance achieve?” This question is in sharp contrast to prior efforts that used a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. A field investigation method is being developed that goes beyond the binary approach to determine how much energy cost savings is not realized. Prototype building simulations were used to estimate the energy cost impact of varying levels of non-compliance for newly constructed office buildings in climate zone 4C. Field data collected from actual buildings on specific conditions relative to code requirements was then applied to the simulation results to find the potential lost energy savings for a single building or for a sample of buildings. This new methodology was tested on nine office buildings in climate zone 4C. The amount of additional energy cost savings they could have achieved had they complied fully with the 2012 International Energy Conservation Code is determined. This paper will present the results of the test, lessons learned, describe follow-on research that is needed to verify that the methodology is both accurate and practical, and discuss the benefits that might accrue if the method were widely adopted.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Soumya; Soudackov, Alexander V.; Hammes-Schiffer, Sharon

    Electron transfer and proton coupled electron transfer (PCET) reactions at electrochemical interfaces play an essential role in a broad range of energy conversion processes. The reorganization energy, which is a measure of the free energy change associated with solute and solvent rearrangements, is a key quantity for calculating rate constants for these reactions. We present a computational method for including the effects of the double layer and ionic environment of the diffuse layer in calculations of electrochemical solvent reorganization energies. This approach incorporates an accurate electronic charge distribution of the solute within a molecular-shaped cavity in conjunction with a dielectric continuum treatment of the solvent, ions, and electrode using the integral equations formalism polarizable continuum model. The molecule-solvent boundary is treated explicitly, but the effects of the electrode-double layer and double layer-diffuse layer boundaries, as well as the effects of the ionic strength of the solvent, are included through an external Green’s function. The calculated total reorganization energies agree well with experimentally measured values for a series of electrochemical systems, and the effects of including both the double layer and ionic environment are found to be very small. This general approach was also extended to electrochemical PCET and produced total reorganization energies in close agreement with experimental values for two experimentally studied PCET systems. This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center, funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences.

  15. Michigan Energy and Cost Savings for New Single- and Multifamily Homes: 2012 IECC as Compared to the Michigan Uniform Energy Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.

    2012-07-03

    The 2012 International Energy Conservation Code (IECC) yields positive benefits for Michigan homeowners. Moving to the 2012 IECC from the Michigan Uniform Energy Code is cost-effective over a 30-year life cycle. On average, Michigan homeowners will save $10,081 with the 2012 IECC. Each year, the reduction to energy bills will significantly exceed increased mortgage costs. After accounting for up-front costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $604 for the 2012 IECC.

  16. Accuracy of the lattice-Boltzmann method using the Cell processor

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6–350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony PlayStation 3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor.

  17. Implementation of the direct S ( α , β ) method in the KENO Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Maldonado, G. Ivan

    The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived from the ENDF data files directly instead of being converted to double differential cross sections. This allows the size of the cross section data on the disk to be reduced substantially. KENO has also been updated to allow interpolation in temperature on these data so that problems can be run at any temperature. Results are shown for several simplified problems for a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with those from previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run-time of the Monte Carlo calculation, but memory usage is decreased substantially.
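
    The simplest form of temperature interpolation on tabulated thermal-scattering data is sketched below; the tables and the linear scheme are assumptions for illustration, and the actual KENO/AMPX treatment may differ in detail.

```python
# Minimal sketch (assumed data, simplified scheme): interpolate tabulated
# S(alpha, beta) values in temperature between bracketing library temperatures,
# so a problem can be run at a temperature not present in the library.
import numpy as np

temps = np.array([293.6, 400.0, 600.0])          # library temperatures (K), assumed
alpha = np.linspace(0.1, 10.0, 50)
# hypothetical S(alpha) tables at fixed beta, one row per library temperature
tables = np.array([np.exp(-alpha / (0.5 + 0.001 * T)) for T in temps])

def s_alpha_at(T, tables, temps):
    """Linear interpolation in temperature between the two bracketing tables."""
    i = np.searchsorted(temps, T) - 1
    i = np.clip(i, 0, len(temps) - 2)
    w = (T - temps[i]) / (temps[i + 1] - temps[i])
    return (1.0 - w) * tables[i] + w * tables[i + 1]

s_500 = s_alpha_at(500.0, tables, temps)          # table for a 500 K problem
```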

  18. Implementation of the direct S ( α , β ) method in the KENO Monte Carlo code

    DOE PAGES

    Hart, Shane W. D.; Maldonado, G. Ivan

    2016-11-25

    The Monte Carlo code KENO contains thermal scattering data for a wide variety of thermal moderators. These data are processed from Evaluated Nuclear Data Files (ENDF) by AMPX and stored as double differential probability distribution functions. The method examined in this study uses S(α,β) probability distribution functions derived from the ENDF data files directly instead of being converted to double differential cross sections. This allows the size of the cross section data on the disk to be reduced substantially. KENO has also been updated to allow interpolation in temperature on these data so that problems can be run at any temperature. Results are shown for several simplified problems for a variety of moderators. In addition, benchmark models based on the KRITZ reactor in Sweden were run, and the results are compared with those from previous versions of KENO without the direct S(α,β) method. Results from the direct S(α,β) method compare favorably with the original results obtained using the double differential cross sections. Finally, sampling the data increases the run-time of the Monte Carlo calculation, but memory usage is decreased substantially.

  19. Challenges facing lithium batteries and electrical double-layer capacitors.

    PubMed

    Choi, Nam-Soon; Chen, Zonghai; Freunberger, Stefan A; Ji, Xiulei; Sun, Yang-Kook; Amine, Khalil; Yushin, Gleb; Nazar, Linda F; Cho, Jaephil; Bruce, Peter G

    2012-10-01

    Energy-storage technologies, including electrical double-layer capacitors and rechargeable batteries, have attracted significant attention for applications in portable electronic devices, electric vehicles, bulk electricity storage at power stations, and "load leveling" of renewable sources, such as solar energy and wind power. Transforming lithium batteries and electric double-layer capacitors requires a step change in the science underpinning these devices, including the discovery of new materials, new electrochemistry, and an increased understanding of the processes on which the devices depend. The Review will consider some of the current scientific issues underpinning lithium batteries and electric double-layer capacitors. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. NIF Double Shell outer/inner shell collision experiments

    NASA Astrophysics Data System (ADS)

    Merritt, E. C.; Loomis, E. N.; Wilson, D. C.; Cardenas, T.; Montgomery, D. S.; Daughton, W. S.; Dodd, E. S.; Desjardins, T.; Renner, D. B.; Palaniyappan, S.; Batha, S. H.; Khan, S. F.; Smalyuk, V.; Ping, Y.; Amendt, P.; Schoff, M.; Hoppe, M.

    2017-10-01

    Double shell capsules are a potential low convergence path to substantial alpha-heating and ignition on NIF, since they are predicted to ignite and burn at relatively low temperatures via volume ignition. Current LANL NIF double shell designs consist of a low-Z ablator, low-density foam cushion, and high-Z inner shell with liquid DT fill. Central to the Double Shell concept is kinetic energy transfer from the outer to inner shell via collision. The collision determines maximum energy available for compression and implosion shape of the fuel. We present results of a NIF shape-transfer study: two experiments comparing shape and trajectory of the outer and inner shells at post-collision times. An outer-shell-only target shot measured the no-impact shell conditions, while an `imaging' double shell shot measured shell conditions with impact. The `imaging' target uses a low-Z inner shell and is designed to perform in similar collision physics space to a high-Z double shell but can be radiographed at 16 keV, near the viable 2DConA BL energy limit. Work conducted under the auspices of the U.S. DOE by LANL under contract DE-AC52-06NA25396.

  1. Energy efficient rateless codes for high speed data transfer over free space optical channels

    NASA Astrophysics Data System (ADS)

    Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.

    2015-03-01

    Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal power overheads can be used. We also employ a combination of Binary Phase Shift Keying (BPSK) with provision for modification of the threshold and optimized LT codes with belief propagation for decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability. The performance of ARQ is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes over ARQ for FSO links to be used in optical wireless sensor networks within eye-safety limits.
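
    The rateless property — generating as many code words as needed, each an XOR of randomly chosen source packets — can be sketched as follows. This is a minimal LT encoder with a simplified (ideal-soliton) degree distribution; the belief-propagation decoder, the modified-threshold BPSK layer, and the optimized degree distributions of the paper are omitted.

```python
# Minimal LT-encoder sketch: each output packet XORs a randomly chosen set of
# source packets; its degree is drawn from a simplified soliton-like distribution.
import random

def soliton_degrees(k):
    """Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2."""
    probs = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    total = sum(probs)
    return [p / total for p in probs]

def lt_encode(source, n_out, seed=0):
    k = len(source)
    rng = random.Random(seed)
    probs = soliton_degrees(k)
    encoded = []
    for _ in range(n_out):
        d = rng.choices(range(1, k + 1), weights=probs, k=1)[0]  # packet degree
        idx = rng.sample(range(k), d)                            # neighbours
        pkt = 0
        for i in idx:
            pkt ^= source[i]          # XOR of the chosen source packets
        encoded.append((idx, pkt))    # receiver needs the index set (or the seed)
    return encoded

message = [random.randrange(256) for _ in range(32)]   # 32 one-byte source packets
code_words = lt_encode(message, n_out=48)              # rateless: make as many as needed
```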

  2. Recombinant Vaccinia Viruses Coding Transgenes of Apoptosis-Inducing Proteins Enhance Apoptosis But Not Immunogenicity of Infected Tumor Cells

    PubMed Central

    Tkachenko, Anastasiya; Richter, Vladimir

    2017-01-01

    Genetic modifications of the oncolytic vaccinia virus (VV) improve selective tumor cell infection and death, as well as activation of antitumor immunity. We have engineered a double recombinant VV coding human GM-CSF and the apoptosis-inducing protein apoptin (VV-GMCSF-Apo) for comparison with the earlier constructed double recombinant VV-GMCSF-Lact, which codes another apoptosis-inducing protein, lactaptin, that activates different cell death pathways than apoptin. We showed that both these recombinant VVs activated a set of critical apoptosis markers in infected cells more considerably than the recombinant VV coding GM-CSF alone (VV-GMCSF-dGF): phosphatidylserine externalization, caspase-3 and caspase-7 activation, DNA fragmentation, and upregulation of the proapoptotic protein BAX. However, only VV-GMCSF-Lact efficiently decreased the mitochondrial membrane potential of infected cancer cells. Investigating immunogenic cell death markers in cancer cells infected with the recombinant VVs, we demonstrated that all tested recombinant VVs were efficient in calreticulin and HSP70 externalization, decrease of cellular HMGB1, and ATP secretion. The comparison of antitumor activity against advanced MDA-MB-231 tumors revealed that both recombinants, VV-GMCSF-Lact and VV-GMCSF-Apo, efficiently delay tumor growth. Our results demonstrate that the combination of GM-CSF and apoptosis-inducing proteins in the VV genome is a very efficient tool for specific killing of cancer cells and for activation of antitumor immunity. PMID:28951871

  3. Biosynthesis of reovirus-specified polypeptides: the reovirus s1 mRNA encodes two primary translation products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, B.L.; Samuel, C.E.

    1985-05-01

    Reovirus serotypes 1 (Lang strain) and 3 (Dearing strain) code for a hitherto unrecognized low-molecular-weight polypeptide of Mr approximately 12,000. This polypeptide (p12) was synthesized in vitro in L-cell-free protein synthesizing systems programmed with either reovirus serotype 1 mRNA, reovirus serotype 3 mRNA, or with denatured reovirus genome double-stranded RNA, and in vivo in L-cell cultures infected with either reovirus serotype. Pulse-chase experiments in vivo, and the relative kinetics of synthesis of p12 in vitro, indicate that it is a primary translation product. Fractionation of reovirus mRNAs by velocity sedimentation and translation of separated mRNAs in vitro suggests that p12 is coded for by the s1 mRNA, which also codes for the previously recognized sigma 1 polypeptide. Synthesis of both p12 and sigma 1 in vitro in L-cell-free protein synthesizing systems programmed with denatured reovirus genome double-stranded RNA also suggests that these two polypeptides can be coded by the same mRNA species. It is proposed that the Mr approximately 12,000 polypeptide encoded by the S1 genome segment be designated sigma 1bNS, and that the polypeptide previously designated sigma 1 be renamed sigma 1a.

  4. Comparing Different Strategies in Directed Evolution of Enzyme Stereoselectivity: Single- versus Double-Code Saturation Mutagenesis.

    PubMed

    Sun, Zhoutong; Lonsdale, Richard; Li, Guangyue; Reetz, Manfred T

    2016-10-04

    Saturation mutagenesis at sites lining the binding pockets of enzymes constitutes a viable protein engineering technique for enhancing or inverting stereoselectivity. Statistical analysis shows that oversampling in the screening step (the bottleneck) increases astronomically as the number of residues in the randomization site increases, which is the reason why reduced amino acid alphabets have been employed, in addition to splitting large sites into smaller ones. Limonene epoxide hydrolase (LEH) has previously served as the experimental platform in these methodological efforts, enabling comparisons between single-code saturation mutagenesis (SCSM) and triple-code saturation mutagenesis (TCSM); these employ either only one or three amino acids, respectively, as building blocks. In this study the comparative platform is extended by exploring the efficacy of double-code saturation mutagenesis (DCSM), in which the reduced amino acid alphabet consists of two members, chosen according to the principles of rational design on the basis of structural information. The hydrolytic desymmetrization of cyclohexene oxide is used as the model reaction, with formation of either (R,R)- or (S,S)-cyclohexane-1,2-diol. DCSM proves to be clearly superior to the likewise tested SCSM, affording both R,R- and S,S-selective mutants. These variants are also good catalysts in reactions of further substrates. Docking computations reveal the basis of enantioselectivity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
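
    The "astronomical" growth of the screening effort can be made concrete with the standard library-statistics estimate used in directed evolution (a generic calculation, not taken from this paper): for V equally probable variants, roughly T = -V·ln(1 - P) transformants must be screened to reach coverage P, i.e. about 3V for 95%. The sketch below assumes each randomized position is either wild type or one of the building-block residues, which is an illustrative simplification.

```python
import math

def library_size(n_positions, alphabet_size):
    """Number of protein variants when n_positions are randomized with a
    (reduced) amino acid alphabet of the given size at each position."""
    return alphabet_size ** n_positions

def clones_for_coverage(variants, coverage=0.95):
    """Approximate clones to screen so each variant is sampled at least once
    with the requested probability: T = -V * ln(1 - P)."""
    return math.ceil(-variants * math.log(1.0 - coverage))

# Screening effort at 95% coverage for a hypothetical 10-residue site
# (alphabet = building blocks + wild type; NNK shown for contrast):
for name, alphabet in [("SCSM", 2), ("DCSM", 3), ("TCSM", 4), ("full alphabet", 20)]:
    v = library_size(10, alphabet)
    print(f"{name:>13}: {v:>18,} variants, ~{clones_for_coverage(v):,} clones")
```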

  5. Studies of Silicon-Refractory Metal Interfaces: Photoemission Study of Interface Formation and Compound Nucleation.

    DTIC Science & Technology

    1984-10-29

    Photoelectron energy analysis was performed on Si s states 10–14 eV below E_F. For CoSi2 and NiSi2, a commercial double-pass electron energy analyzer was used. ... Collaborative studies with theorists gave rise to modeling of interfaces and calculation of electronic energy states for ordered silicides. ... analyzed by a double-pass cylindrical mirror energy analyzer. There is no evidence of Cr outdiffusion into the Au, and the overall resolution ...

  6. Recombination energy in double white dwarf formation

    NASA Astrophysics Data System (ADS)

    Nandez, J. L. A.; Ivanova, N.; Lombardi, J. C.

    2015-06-01

    In this Letter, we investigate the role of recombination energy during a common envelope event. We confirm that taking this energy into account helps to avoid the formation of the circumbinary envelope commonly found in previous studies. For the first time, we can model a complete common envelope event, with a clean compact double white dwarf binary system formed at the end. The resulting binary orbit is almost perfectly circular. In addition to considering recombination energy, we also show that between 1/4 and 1/2 of the released orbital energy is taken away by the ejected material. We apply this new method to the case of the double white dwarf system WD 1101+364, and we find that the progenitor system at the start of the common envelope event consisted of an ˜1.5 M⊙ red giant star in an ˜30 d orbit with a white dwarf companion.

  7. Detailed study of the water trimer potential energy surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, J.E.; Schaefer, H.F. III

    The potential energy surface of the water trimer has been studied through the use of ab initio quantum mechanical methods. Five stationary points were located, including one minimum and two transition states. All geometries were optimized at levels up to the double-ζ plus polarization plus diffuse (DZP + diff) single and double excitation coupled cluster (CCSD) level of theory. CCSD single-point energies were obtained for the minimum, two transition states, and the water monomer using the triple-ζ plus double polarization plus diffuse (TZ2P + diff) basis at the geometries predicted by the DZP + diff CCSD method. Reported are the following: geometrical parameters, total and relative energies, harmonic vibrational frequencies and infrared intensities for the minimum, and zero point vibrational energies for the minimum, two transition states, and three separated water molecules. 27 refs., 5 figs., 10 tabs.

  8. Blast-Wave Generation and Propagation in Rapidly Heated Laser-Irradiated Targets

    NASA Astrophysics Data System (ADS)

    Ivancic, S. T.; Stillman, C. R.; Nilson, P. M.; Solodov, A. A.; Froula, D. H.

    2017-10-01

    Time-resolved extreme ultraviolet (XUV) spectroscopy was used to study the creation and propagation of a >100-Mbar blast wave in a target irradiated by an intense (>10^18 W/cm^2) laser pulse. Blast waves provide a platform to generate immense pressures in the laboratory. A temporal double flash of XUV radiation was observed when viewing the rear side of the target, which is attributed to the emergence of a blast wave following rapid heating by a fast-electron beam generated from the laser pulse. The time-history of XUV emission in the photon energy range of 50 to 200 eV was recorded with an x-ray streak camera with 7-ps temporal resolution. The heating and expansion of the target was simulated with an electron transport code coupled to 1-D radiation-hydrodynamics simulations. The temporal delay between the two flashes measured in a systematic study of target thickness and composition was found to evolve in good agreement with a Sedov-Taylor blast-wave solution. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944 and Department of Energy Office of Science Award Number DE-SC-0012317.
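
    As an aside, the scaling behind the thickness study can be sketched with the classical (spherical, strong-shock) Sedov-Taylor solution, R(t) = ξ(Et²/ρ)^(1/5); inverting it gives the arrival time at a given depth. The numbers and the assumption of spherical geometry below are purely illustrative and are not the experimental parameters.

```python
import math

XI = 1.15  # dimensionless Sedov constant, of order unity (exact value depends on gamma)

def sedov_radius(E, rho, t):
    """Blast-wave radius for energy E (erg) deposited in a uniform medium of
    density rho (g/cm^3) at time t (s): R = xi * (E * t**2 / rho)**(1/5)."""
    return XI * (E * t**2 / rho) ** 0.2

def traversal_time(E, rho, depth_cm):
    """Time for the blast wave to reach a given depth; inverse of sedov_radius,
    so the delay scales as depth**(5/2)."""
    return math.sqrt(depth_cm**5 * rho / (XI**5 * E))

E_blast = 1.0e9   # erg, illustrative only
rho_target = 2.7  # g/cm^3, e.g. an aluminium foil
for d_um in (10, 20, 40):
    print(f"{d_um:>3} um -> ~{traversal_time(E_blast, rho_target, d_um * 1e-4) * 1e12:.1f} ps")
```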

  9. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zehtabian, M; Zaker, N; Sina, S

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
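
    For readers unfamiliar with the formalism, the TG-43 quantities compared here combine into the dose rate via the standard point-source approximation, D(r) = S_K·Λ·(r0/r)²·g(r)·φ_an(r). The sketch below uses hypothetical tabulated values purely for illustration; it shows why a 17-28% discrepancy in g(r) at 6 cm propagates directly into the computed dose.

```python
import numpy as np

R0 = 1.0  # TG-43 reference distance, cm

def dose_rate(r, S_K, Lam, g_table, phi_an_table):
    """TG-43 point-source dose rate at distance r (cm).
    g_table and phi_an_table are (radii, values) pairs, e.g. Monte Carlo output."""
    g = np.interp(r, *g_table)
    phi = np.interp(r, *phi_an_table)
    return S_K * Lam * (R0 / r) ** 2 * g * phi

# Hypothetical tabulated data for illustration only (not a validated source model):
radii = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
g_r = (radii, np.array([1.05, 1.00, 0.85, 0.55, 0.30]))     # radial dose function
phi_an = (radii, np.array([0.95, 0.96, 0.97, 0.98, 0.98]))  # anisotropy factor

print(dose_rate(6.0, S_K=1.0, Lam=0.65, g_table=g_r, phi_an_table=phi_an))
```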

  10. MEAM interatomic force calculation subroutine for LAMMPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stukowski, A.

    2010-10-25

    Interatomic force and energy calculation subroutine to be used with the molecular dynamics simulation code LAMMPS (Ref a.). The code evaluates the total energy and atomic forces (energy gradient) according to a cubic spline-based variant (Ref b.) of the Modified Embedded Atom Method (MEAM).

  11. Porting a Hall MHD Code to a Graphic Processing Unit

    NASA Technical Reports Server (NTRS)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a second-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
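
    As background to the numerics being ported, the HLL approximate Riemann solver reduces to a single algebraic flux formula per cell interface, which is part of what makes it well suited to GPU parallelization. The following is a generic sketch of that formula (not the authors' GPU code); the wave-speed estimates S_L and S_R are assumed to be supplied by the scheme.

```python
import numpy as np

def hll_flux(U_L, U_R, F_L, F_R, S_L, S_R):
    """HLL approximate Riemann flux for conserved states U and physical fluxes F,
    given estimates of the fastest left/right signal speeds S_L, S_R."""
    if S_L >= 0.0:
        return F_L
    if S_R <= 0.0:
        return F_R
    return (S_R * F_L - S_L * F_R + S_L * S_R * (U_R - U_L)) / (S_R - S_L)

# Toy scalar advection example (u_t + a*u_x = 0, flux F = a*u) with a > 0,
# where HLL correctly reduces to the upwind flux:
a = 1.0
U_L, U_R = np.array([1.0]), np.array([0.2])
print(hll_flux(U_L, U_R, a * U_L, a * U_R, S_L=min(a, 0.0), S_R=max(a, 0.0)))
```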

  12. Discrete Spring Model for Predicting Delamination Growth in Z-Fiber Reinforced DCB Specimens

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; O'Brien, T. Kevin

    2004-01-01

    Beam theory analysis was applied to predict delamination growth in Double Cantilever Beam (DCB) specimens reinforced in the thickness direction with pultruded pins, known as Z-fibers. The specimen arms were modeled as cantilever beams supported by discrete springs, which were included to represent the pins. A bi-linear, irreversible damage law was used to represent Z-fiber damage, the parameters of which were obtained from previous experiments. Closed-form solutions were developed for specimen compliance and displacements corresponding to Z-fiber row locations. A solution strategy was formulated to predict delamination growth, in which the parent laminate mode I critical strain energy release rate was used as the criterion for delamination growth. The solution procedure was coded into FORTRAN 90, giving a dedicated software tool for performing the delamination prediction. Comparison of analysis results with previous analysis and experiment showed good agreement, yielding an initial verification for the analytical procedure.

  13. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.

    PubMed

    Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  14. Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, Daniel, E-mail: daniel.berger@ch.tum.de; Oberhofer, Harald; Reuter, Karsten

    2014-07-14

    We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).

  15. Contributions to the NUCLEI SciDAC-3 Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogner, Scott; Nazarewicz, Witek

    This is the Final Report for Michigan State University for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, has developed, implemented and run codes for large-scale computations of many topics in low-energy nuclear physics. Physics studied included the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques used included Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program emphasized areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS at ANL and FRIB at MSU (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrinoless double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  16. Secondary bremsstrahlung and the energy-conservation aspects of kerma in photon-irradiated media.

    PubMed

    Kumar, Sudhir; Nahum, Alan E

    2016-02-07

    Kerma, collision kerma and absorbed dose in media irradiated by megavoltage photons are analysed with respect to energy conservation. The user-code DOSRZnrc was employed to compute absorbed dose D, kerma K and a special form of kerma, K ncpt, obtained by setting the charged-particle transport energy cut-off very high, thereby preventing the generation of 'secondary bremsstrahlung' along the charged-particle paths. The user-code FLURZnrc was employed to compute photon fluence, differential in energy, from which collision kerma, K col and K were derived. The ratios K/D, K ncpt/D and K col/D have thereby been determined over very large volumes of water, aluminium and copper irradiated by broad, parallel beams of 0.1 to 25 MeV monoenergetic photons, and 6, 10 and 15 MV 'clinical' radiotherapy qualities. Concerning depth-dependence, the 'area under the kerma, K, curve' exceeded that under the dose curve, demonstrating that kerma does not conserve energy when computed over a large volume. This is due to the 'double counting' of the energy of the secondary bremsstrahlung photons, this energy being (implicitly) included in the kerma 'liberated' in the irradiated medium, at the same time as this secondary bremsstrahlung is included in the photon fluence which gives rise to kerma elsewhere in the medium. For 25 MeV photons this 'violation' amounts to 8.6%, 14.2% and 25.5% in large volumes of water, aluminium and copper respectively but only 0.6% for a 'clinical' 6 MV beam in water. By contrast, K col/D and K ncpt/D, also computed over very large phantoms of the same three media, for the same beam qualities, are equal to unity within (very low) statistical uncertainties, demonstrating that collision kerma and the special type of kerma, K ncpt, do conserve energy over a large volume. A comparison of photon fluence spectra for the 25 MeV beam at a depth of ≈51 g cm−2 for both very high and very low charged-particle transport cut-offs reveals the considerable contribution to the total photon fluence by secondary bremsstrahlung in the latter case. Finally, a correction to the 'kerma integral' has been formulated to account for the energy transferred to charged particles by photons with initial energies below the Monte-Carlo photon transport cut-off PCUT; for 25 MeV photons this 'photon track end' correction is negligible for all PCUT below 10 keV.
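
    The distinction drawn here between kerma and collision kerma can be summarized with the standard relations (stated as background, not taken from the paper):

```latex
K = \Psi \left(\frac{\mu_{\mathrm{tr}}}{\rho}\right), \qquad
K_{\mathrm{col}} = \Psi \left(\frac{\mu_{\mathrm{en}}}{\rho}\right) = K\,(1-\bar{g}),
```

    where Ψ is the photon energy fluence and ḡ is the mean fraction of the liberated charged-particle kinetic energy re-emitted as bremsstrahlung. Integrating K over a large volume counts the bremsstrahlung energy once through ḡK and again when those photons contribute to Ψ, and hence to kerma, elsewhere; K col (and K ncpt) omit that energy, which is why they balance with the absorbed dose.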

  17. An adaptive distributed data aggregation based on RCPC for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hua, Guogang; Chen, Chang Wen

    2006-05-01

    One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we proposed in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data. When the data correlation is high, a higher compression ratio can be achieved. Otherwise, a lower compression ratio will be achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput and low-energy-consumption data collection for wireless sensor networks.
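
    The rate-compatibility idea is that one mother code is encoded once and then punctured to different rates, so a node can send increments of redundancy only as needed. Below is a generic sketch (not the paper's code): a rate-1/2 feedforward convolutional encoder with generators (7,5) octal, rather than the recursive systematic code used in the paper, followed by periodic puncturing to rate 2/3.

```python
# Rate-1/2 convolutional mother code (constraint length 3, generators 7,5 octal),
# punctured to a higher rate as in RCPC. Illustrative sketch only.
G = [0b111, 0b101]  # generator polynomials

def conv_encode(bits):
    """Encode with the rate-1/2 mother code; returns the interleaved coded bits."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for g in G:
            out.append(bin(state & g).count("1") % 2)
    return out

def puncture(coded, pattern):
    """Drop coded bits where the puncturing pattern is 0; the pattern repeats.
    e.g. [1, 1, 1, 0] turns the rate-1/2 stream into a rate-2/3 stream."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

data = [1, 0, 1, 1, 0, 0, 1, 0]
mother = conv_encode(data)                 # 16 coded bits, rate 1/2
rate_2_3 = puncture(mother, [1, 1, 1, 0])  # 12 coded bits, rate 2/3
```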

  18. The international implications of national and local coordination on building energy codes: Case studies in six cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Yu, Sha; Staniszewski, Aaron

    Building energy efficiency is an important strategy for reducing greenhouse gas emissions globally. In fact, 55 countries have included building energy efficiency in their Nationally Determined Contributions (NDCs) under the Paris Agreement. This research uses building energy code implementation in six cities across different continents as case studies to assess what it may take for countries to implement the ambitions of their energy efficiency goals. Specifically, we look at the cases of Bogota, Colombia; Da Nang, Vietnam; Eskisehir, Turkey; Mexico City, Mexico; Rajkot, India; and Tshwane, South Africa, all of which are “deep dive” cities under the Sustainable Energy for All's Building Efficiency Accelerator. The research focuses on understanding the baseline with existing gaps in implementation and coordination. The methodology used a combination of surveys on code status and interviews with stakeholders at the local and national level, as well as review of published documents. We looked at code development, implementation, and evaluation. The cities are all working to improve implementation, however, the challenges they currently face include gaps in resources, capacity, tools, and institutions to check for compliance. Better coordination between national and local governments could help improve implementation, but that coordination is not yet well established. For example, all six of the cities reported that there was little to no involvement of local stakeholders in development of the national code; only one city reported that it had access to national funding to support code implementation. More robust coordination could better link cities with capacity building and funding for compliance, and ensure that the code reflects local priorities. By understanding gaps in implementation, it can also help in designing more targeted interventions to scale up energy savings.

  19. The international implications of national and local coordination on building energy codes: Case studies in six cities

    DOE PAGES

    Evans, Meredydd; Yu, Sha; Staniszewski, Aaron; ...

    2018-04-17

    Building energy efficiency is an important strategy for reducing greenhouse gas emissions globally. In fact, 55 countries have included building energy efficiency in their Nationally Determined Contributions (NDCs) under the Paris Agreement. This research uses building energy code implementation in six cities across different continents as case studies to assess what it may take for countries to implement the ambitions of their energy efficiency goals. Specifically, we look at the cases of Bogota, Colombia; Da Nang, Vietnam; Eskisehir, Turkey; Mexico City, Mexico; Rajkot, India; and Tshwane, South Africa, all of which are “deep dive” cities under the Sustainable Energy for All's Building Efficiency Accelerator. The research focuses on understanding the baseline with existing gaps in implementation and coordination. The methodology used a combination of surveys on code status and interviews with stakeholders at the local and national level, as well as review of published documents. We looked at code development, implementation, and evaluation. The cities are all working to improve implementation, however, the challenges they currently face include gaps in resources, capacity, tools, and institutions to check for compliance. Better coordination between national and local governments could help improve implementation, but that coordination is not yet well established. For example, all six of the cities reported that there was little to no involvement of local stakeholders in development of the national code; only one city reported that it had access to national funding to support code implementation. More robust coordination could better link cities with capacity building and funding for compliance, and ensure that the code reflects local priorities. By understanding gaps in implementation, it can also help in designing more targeted interventions to scale up energy savings.

  20. Efficient red organic electroluminescent devices by doping platinum(II) Schiff base emitter into two host materials with stepwise energy levels.

    PubMed

    Zhou, Liang; Kwok, Chi-Chung; Cheng, Gang; Zhang, Hongjie; Che, Chi-Ming

    2013-07-15

    In this work, organic electroluminescent (EL) devices with double light-emitting layers (EMLs) having stepwise energy levels were designed to improve the EL performance of a red-light-emitting platinum(II) Schiff base complex. A series of devices with single or double EML(s) were fabricated and characterized. Compared with single-EML devices, double-EML devices showed improved EL efficiency and brightness, attributed to better balance in carriers. In addition, the stepwise distribution in energy levels of host materials is instrumental in broadening the recombination zone, thus delaying the roll-off of EL efficiency. The highest EL current efficiency and power efficiency of 17.36 cd/A and 14.73 lm/W, respectively, were achieved with the optimized double-EML devices. At high brightness of 1000 cd/m², EL efficiency as high as 8.89 cd/A was retained.

  1. Energy balance in TM-1-MH Tokamak (ohmic heating)

    NASA Astrophysics Data System (ADS)

    Stoeckel, J.; Koerbel, S.; Kryska, L.; Kopecky, V.; Dadalec, V.; Datlov, J.; Jakubka, K.; Magula, P.; Zacek, F.; Pereverzev, G. V.

    1981-10-01

    Plasma in the TM-1-MH Tokamak was studied experimentally in the parameter range: toroidal magnetic field B = 1.3 T, plasma current I_p = 14 kA, electron density n_e of 3×10^19 m^-3. Two numerical codes are available for comparison with the experimental data. The TOKATA code solves simplified energy balance equations for the electron and ion components. The TOKSAS code solves the detailed energy balance of the ion component.

  2. An implementation framework for the feedback of individual research results and incidental findings in research.

    PubMed

    Thorogood, Adrian; Joly, Yann; Knoppers, Bartha Maria; Nilsson, Tommy; Metrakos, Peter; Lazaris, Anthoula; Salman, Ayat

    2014-12-23

    This article outlines procedures for the feedback of individual research data to participants. This feedback framework was developed in the context of a personalized medicine research project in Canada. Researchers in this domain have an ethical obligation to return individual research results and/or material incidental findings that are clinically significant, valid and actionable to participants. Communication of individual research data must proceed in an ethical and efficient manner. Feedback involves three procedural steps: assessing the health relevance of a finding, re-identifying the affected participant, and communicating the finding. Re-identification requires researchers to break the code in place to protect participant identities. Coding systems replace personal identifiers with a numerical code. Double coding systems provide added privacy protection by separating research data from personal identifying data with a third "linkage" database. A trusted and independent intermediary, the "keyholder", controls access to this linkage database. Procedural guidelines for the return of individual research results and incidental findings are lacking. This article outlines a procedural framework for the three steps of feedback: assessment, re-identification, and communication. This framework clarifies the roles of the researcher, Research Ethics Board, and keyholder in the process. The framework also addresses challenges posed by coding systems. Breaking the code involves privacy risks and should only be carried out in clearly defined circumstances. Where a double coding system is used, the keyholder plays an important role in balancing the benefits of individual feedback with the privacy risks of re-identification. Feedback policies should explicitly outline procedures for the assessment of findings, and the re-identification and contact of participants. The responsibilities of researchers, the Research Ethics Board, and the keyholder must be clearly defined. We provide general guidelines for keyholders involved in feedback. We also recommend that Research Ethics Boards should not be directly involved in the assessment of individual findings. Hospitals should instead establish formal, interdisciplinary clinical advisory committees to help researchers determine whether or not an uncertain finding should be returned.
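
    The double-coding architecture described above can be pictured as three separate stores with the keyholder as the only bridge between them. The sketch below is hypothetical (names and fields are invented for illustration) and captures only the control point: the linkage table is consulted solely when re-identification has been authorized.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Keyholder:
    """Independent intermediary controlling the linkage database."""
    _linkage: Dict[str, str] = field(default_factory=dict)  # research code -> identity code

    def link(self, research_code: str, identity_code: str) -> None:
        self._linkage[research_code] = identity_code

    def reidentify(self, research_code: str, approved: bool) -> str:
        """Break the code only when feedback of a finding has been authorized."""
        if not approved:
            raise PermissionError("re-identification not authorised")
        return self._linkage[research_code]

# Research data and personal identifiers never share a key directly:
research_db = {"R-1047": {"finding": "clinically significant, valid, actionable"}}
identity_db = {"P-0032": {"contact": "participant contact details"}}

kh = Keyholder()
kh.link("R-1047", "P-0032")
# Step 1: assess the finding; Step 2: keyholder re-identifies; Step 3: communicate.
participant = identity_db[kh.reidentify("R-1047", approved=True)]
```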

  3. Quantum scattering in one-dimensional systems satisfying the minimal length uncertainty relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph

    In quantum gravity theories, when the scattering energy is comparable to the Planck energy the Heisenberg uncertainty principle breaks down and is replaced by the minimal length uncertainty relation. In this paper, the consequences of the minimal length uncertainty relation on one-dimensional quantum scattering are studied using an approach involving a recently proposed second-order differential equation. An exact analytical expression for the tunneling probability through a locally-periodic rectangular potential barrier system is obtained. Results show that the existence of a non-zero minimal length uncertainty tends to shift the resonant tunneling energies to the positive direction. Scattering through a locally-periodic potential composed of double-rectangular potential barriers shows that the first band of resonant tunneling energies widens for minimal length cases when the double-rectangular potential barrier is symmetric but narrows down when the double-rectangular potential barrier is asymmetric. A numerical solution which exploits the use of Wronskians is used to calculate the transmission probabilities through the Pöschl–Teller well, Gaussian barrier, and double-Gaussian barrier. Results show that the probability of passage through the Pöschl–Teller well and Gaussian barrier is smaller in the minimal length cases compared to the non-minimal length case. For the double-Gaussian barrier, the probability of passage for energies that are more positive than the resonant tunneling energy is larger in the minimal length cases compared to the non-minimal length case. The approach is exact and applicable to many types of scattering potential.
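
    For orientation, the non-minimal-length baseline against which these shifts are measured is ordinary quantum tunneling; for a single rectangular barrier it has the closed-form transmission T(E) = [1 + V0² sinh²(κa)/(4E(V0−E))]⁻¹ with κ = √(2m(V0−E))/ħ. The sketch below evaluates only this standard case (in natural units) and does not include the minimal-length-modified equation studied in the paper.

```python
import numpy as np

HBAR = M = 1.0  # natural units

def single_barrier_T(E, V0, a):
    """Standard-QM transmission through one rectangular barrier of height V0
    and width a, for 0 < E < V0 (no minimal-length correction)."""
    kappa = np.sqrt(2.0 * M * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * a) ** 2) / (4.0 * E * (V0 - E)))

# Transmission rises steeply as E approaches the barrier top; chaining several
# such barriers produces the resonant-tunneling bands discussed above.
for E in (0.2, 0.5, 0.8):
    print(E, round(single_barrier_T(E, V0=1.0, a=2.0), 4))
```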

  4. Detecting RNA/DNA hybridization using double-labeled donor probes with enhanced fluorescence resonance energy transfer signals.

    PubMed

    Okamura, Yukio; Watanabe, Yuichiro

    2006-01-01

    Fluorescence resonance energy transfer (FRET) occurs when two fluorophores are in close proximity, and the emission energy of a donor fluorophore is transferred to excite an acceptor fluorophore. Using such fluorescently labeled oligonucleotides as FRET probes makes possible the specific detection of RNA molecules even if similar sequences are present in the environment. A higher ratio of signal to background fluorescence is required for more sensitive probe detection. We found that double-labeled donor probes labeled with BODIPY dye resulted in a remarkable increase in fluorescence intensity compared to single-labeled donor probes used in conventional FRET. Application of this double-labeled donor system can improve a variety of FRET techniques.
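
    The proximity requirement mentioned above follows from the Förster relation E = 1/[1 + (r/R0)⁶], which makes the transfer efficiency fall off with the sixth power of the donor-acceptor distance. The sketch below uses an illustrative Förster radius of 5 nm; the actual value for a given BODIPY donor-acceptor pair would differ.

```python
def fret_efficiency(r, r0):
    """Foerster energy-transfer efficiency for donor-acceptor distance r and
    Foerster radius r0 (same units): E = 1 / (1 + (r / r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# Efficiency collapses within a few nanometres, which is why FRET only reports
# donor/acceptor probes hybridised next to each other on the same RNA target.
for r_nm in (3, 5, 7, 9):
    print(r_nm, "nm:", round(fret_efficiency(r_nm, r0=5.0), 3))
```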

  5. Data and code for the exploratory data analysis of the electrical energy demand in the time domain in Greece.

    PubMed

    Tyralis, Hristos; Karakatsanis, Georgios; Tzouka, Katerina; Mamassis, Nikos

    2017-08-01

    We present data and code for visualizing the electrical energy data and weather-, climate-related and socioeconomic variables in the time domain in Greece. The electrical energy data include hourly demand, weekly-ahead forecasted values of the demand provided by the Greek Independent Power Transmission Operator and pricing values in Greece. We also present the daily temperature in Athens and the Gross Domestic Product of Greece. The code combines the data to a single report, which includes all visualizations with combinations of all variables in multiple time scales. The data and code were used in Tyralis et al. (2017) [1].
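
    A minimal sketch of the kind of combination described (hourly demand joined with temperature and viewed at several time scales) is given below. The file and column names are assumptions for illustration; the published dataset's actual layout may differ.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
demand = pd.read_csv("hourly_demand.csv", parse_dates=["datetime"], index_col="datetime")
temperature = pd.read_csv("athens_temperature.csv", parse_dates=["date"], index_col="date")

# Combine the variables and examine them at multiple time scales,
# mirroring the single-report idea described above.
daily = demand["load_mw"].resample("D").mean().to_frame("mean_load_mw")
combined = daily.join(temperature["temp_c"], how="inner")

print(combined.corr())                              # demand-temperature relationship
combined.resample("M").mean().plot(subplots=True)   # monthly-scale view
```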

  6. Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium, Sc VI

    NASA Astrophysics Data System (ADS)

    El-Maaref, A. A.; Abou Halaka, M. M.; Saddeek, Yasser B.

    2017-09-01

    Energy levels, oscillator strengths, and transition probabilities for sulfur-like scandium are calculated using the CIV3 code. The calculations have been executed in an intermediate coupling scheme using the Breit-Pauli Hamiltonian. The present calculations have been compared with the experimental data and other theoretical calculations. The LANL code has been used to confirm the accuracy of the present calculations, and the calculations using the CIV3 code agree well with the corresponding values from the LANL code. The calculated energy levels and oscillator strengths are in reasonable agreement with the published experimental data and theoretical values. Lifetimes of some excited levels have also been calculated.

  7. P.L. 102-486, "Energy Policy Act" (1992)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-12-13

    Amends the Energy Conservation and Production Act to set a deadline by which each State must certify to the Secretary of Energy whether its energy efficiency standards with respect to residential and commercial building codes meet or exceed those of the Council of American Building Officials (CABO) Model Energy Code, 1992, and of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers, respectively.

  8. PTF11mnb: First analog of supernova 2005bf. Long-rising, double-peaked supernova Ic from a massive progenitor

    NASA Astrophysics Data System (ADS)

    Taddia, F.; Sollerman, J.; Fremling, C.; Karamehmetoglu, E.; Quimby, R. M.; Gal-Yam, A.; Yaron, O.; Kasliwal, M. M.; Kulkarni, S. R.; Nugent, P. E.; Smadja, G.; Tao, C.

    2018-01-01

    Aims: We study PTF11mnb, a He-poor supernova (SN) whose light curves resemble those of SN 2005bf, a peculiar double-peaked stripped-envelope (SE) SN, until the declining phase after the main peak. We investigate the mechanism powering its light curve and the nature of its progenitor star. Methods: Optical photometry and spectroscopy of PTF11mnb are presented. We compared light curves, colors and spectral properties to those of SN 2005bf and normal SE SNe. We built a bolometric light curve and modeled this light curve with the SuperNova Explosion Code (SNEC) hydrodynamical code explosion of a MESA progenitor star and semi-analytic models. Results: The light curve of PTF11mnb turns out to be similar to that of SN 2005bf until 50 d when the main (secondary) peaks occur at -18.5 mag. The early peak occurs at 20 d and is about 1.0 mag fainter. After the main peak, the decline rate of PTF11mnb is remarkably slower than what was observed in SN 2005bf, and it traces well the 56Co decay rate. The spectra of PTF11mnb reveal a SN Ic and have no traces of He unlike in the case of SN Ib 2005bf, although they have velocities comparable to those of SN 2005bf. The whole evolution of the bolometric light curve is well reproduced by the explosion of a massive (Mej = 7.8 M⊙), He-poor star characterized by a double-peaked 56Ni distribution, a total 56Ni mass of 0.59 M⊙, and an explosion energy of 2.2 × 1051 erg. Alternatively, a normal SN Ib/c explosion (M(56Ni) = 0.11 M⊙, EK = 0.2 × 1051 erg, Mej = 1 M⊙) can power the first peak while a magnetar, with a magnetic field characterized by B = 5.0 × 1014 G, and a rotation period of P = 18.1 ms, provides energy for the main peak. The early g-band light curve can be fit with a shock-breakout cooling tail or an extended envelope model from which a radius of at least 30 R⊙ is obtained. Conclusions: We presented a scenario where PTF11mnb was the explosion of a massive, He-poor star, characterized by a double-peaked 56Ni distribution. In this case, the ejecta mass and the absence of He imply a large ZAMS mass ( 85 M⊙) for the progenitor, which most likely was a Wolf-Rayet star, surrounded by an extended envelope formed either by a pre-SN eruption or due to a binary configuration. Alternatively, PTF11mnb could be powered by a SE SN with a less massive progenitor during the first peak and by a magnetar afterward. Photometric tables are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A106

  9. Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics

    NASA Astrophysics Data System (ADS)

    Ciappina, M. F.; Kirchner, T.; Schulz, M.

    2010-04-01

    We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, specially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summaryProgram title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very cpu intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo Event Generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and to incorporate additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes has no communication between processes it is possible to achieve an efficiency of a 100% (this number certainly will be penalized by the queuing waiting time). Running time: Times vary according to the process, single or double ionization, to be simulated, the number of processors and the type of theoretical model. 
The typical running time is between several hours and up to a few weeks.

  10. EDDIX--a database of ionisation double differential cross sections.

    PubMed

    MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H

    2011-02-01

    The use of Monte Carlo track structure is a method of choice in biophysical modelling and calculations. To precisely model 3D and 4D tracks, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present date, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of an eight-parameter function fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, electron energy and deflection angle. It is planned to freely distribute EDDIX and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.

  11. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    PubMed

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only different phase responses of the unit cells are considered, but also different phase sensitivities are required. Due to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent the low and high phase sensitivities, respectively. By this means, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
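
    The effect of encoding phase sensitivity rather than phase alone can be illustrated with a one-dimensional array-factor calculation: elements share a phase at the design frequency, but the "1"-coded elements accumulate extra phase as the frequency rises, steering or splitting the radiated beam. The sketch below assumes a linear phase dispersion and illustrative element spacing; it is not the metasurface model used in the paper.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def array_factor(phases, freq, spacing, angles):
    """Magnitude of the 1-D array factor for given element phases (rad)."""
    k = 2 * np.pi * freq / C
    n = np.arange(len(phases))
    return np.abs(np.exp(1j * (k * spacing * np.outer(np.sin(angles), n) + phases)).sum(axis=1))

f0, spacing = 10e9, 0.015                          # illustrative values
sensitivity = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # frequency-coding sequence
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)

for f in (f0, 12e9):
    phases = sensitivity * np.pi * (f - f0) / 2e9  # assumed linear phase dispersion
    pattern = array_factor(phases, f, spacing, angles)
    print(f / 1e9, "GHz: peak near", round(float(np.degrees(angles[np.argmax(pattern)])), 1), "deg")
```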

  12. Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.

    PubMed

    Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann

    2012-02-01

    During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  13. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    PubMed

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  14. Validation of a multi-layer Green's function code for ion beam transport

    NASA Astrophysics Data System (ADS)

    Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiations is needed. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and like its predecessor is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and down shift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings and also provides verification of the HZETRN propagator.

  15. RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

    DTIC Science & Technology

    2014-09-30

    Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs) ... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs, we see (Figure PROF) that for most of the execution time the ...

  16. A mobility based vibroacoustic energy transmission simulation into an enclosure through a double-wall panel.

    PubMed

    Sahu, Atanu; Bhattacharya, Partha; Niyogi, Arup Guha; Rose, Michael

    2017-06-01

    Double-wall panels are known for their superior sound insulation properties over single-wall panels as sound barriers. The sound transmission phenomenon through a double-wall structure is a complex process involving vibroacoustic interaction between the structural panels, the air-cushion in between, and the secondary acoustic domain. It is in this context that a versatile, fully coupled technique based on a finite-element-boundary-element model is developed, enabling estimation of sound transfer through a double-wall panel into an adjacent enclosure while satisfying displacement compatibility across the interface. The contribution of the individual components to the transmitted energy is identified through numerical simulations.

  17. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster with the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and the mass-injection scheme were investigated and showed trivial changes in the overall performance. An idealized model for these energy levels and propellants indicates that the energy expended in the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  18. User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E)

    DTIC Science & Technology

    2014-06-01

    By James P. Larentzos ..., US Army Research Laboratory, Aberdeen Proving Ground, MD 21005-5069, report ARL-SR-290, June 2014. Dates covered: September 2013–February 2014.

  19. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detections at a sub-second exposure time. We developed a method to perform the interference experiment by using an asymmetric double-slit fabricated by a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high-dose for calculation of electron probability distribution and low-dose for each single electron distribution. Finally, we exemplified the distribution of single electrons by color-coding according to the above three types of experiments as a composite image.

  20. Simulation of double-pass stimulated Raman backscattering

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, Q.; Morozov, A.; Suckewer, S.

    2018-04-01

    Experiments on Stimulated Raman Backscattering (SRBS) in plasma have demonstrated significantly higher energy conversion in a double-pass amplifier where the laser pulses go through the plasma twice compared with a single-pass amplifier with double the plasma length of a single pass. In this paper, the improvement in understanding recent experimental results is presented by considering quite in detail the effects of plasma heating on the modeling of SRBS. Our simulation results show that the low efficiency of single-pass amplifiers can be attributed to Landau damping and the frequency shift of Langmuir waves. In double-pass amplifiers, these issues can be avoided, to some degree, because pump-induced heating could be reduced, while the plasma cools down between the passes. Therefore, double-pass amplifiers yield considerably enhanced energy transfer from the pump to the seed, hence the output pulse intensity.

  1. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

    Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.2 is confirmed.
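
    The RLL(1,∞) constraint simply requires at least one 0 between consecutive 1s, with no upper bound on runs of 0s, which is what enlarges the effective pitch of recorded marks. The sketch below checks the constraint and applies a deliberately naive rate-1/2 mapping; the paper's trellis modulation achieves a much better rate and is not reproduced here.

```python
def satisfies_rll_1_inf(bits):
    """RLL(1, inf): at least one 0 between consecutive 1s (no bound on 0-runs)."""
    return all(not (a == 1 and b == 1) for a, b in zip(bits, bits[1:]))

def naive_rll_encode(bits):
    """Trivial rate-1/2 mapping that satisfies the d=1 constraint: 0 -> 00, 1 -> 10.
    (Practical RLL modulation codes achieve much higher rates; this is only a sketch.)"""
    out = []
    for b in bits:
        out += [b, 0]
    return out

data = [1, 1, 0, 1, 0, 0, 1]
coded = naive_rll_encode(data)
assert satisfies_rll_1_inf(coded)
```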

  2. Industrial Facility Combustion Energy Use

    DOE Data Explorer

    McMillan, Colin

    2016-08-01

    Facility-level industrial combustion energy use is calculated from greenhouse gas emissions data reported by large emitters (>25,000 metric tons CO2e per year) under the U.S. EPA's Greenhouse Gas Reporting Program (GHGRP, https://www.epa.gov/ghgreporting). The calculation applies EPA default emissions factors to reported fuel use by fuel type. Additional facility information is included with calculated combustion energy values, such as industry type (six-digit NAICS code), location (lat, long, zip code, county, and state), combustion unit type, and combustion unit name. Further identification of combustion energy use is provided by calculating energy end use (e.g., conventional boiler use, co-generation/CHP use, process heating, other facility support) by manufacturing NAICS code. Manufacturing facilities are matched by their NAICS code and reported fuel type with the proportion of combustion fuel energy for each end use category identified in the 2010 Energy Information Administration Manufacturing Energy Consumption Survey (MECS, http://www.eia.gov/consumption/manufacturing/data/2010/). MECS data are adjusted to account for data that were withheld or whose end use was unspecified following the procedure described in Fox, Don B., Daniel Sutter, and Jefferson W. Tester. 2011. The Thermal Spectrum of Low-Temperature Energy Use in the United States, NY: Cornell Energy Institute.
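
    The core back-calculation is simple: reported CO2 divided by a fuel-specific emission factor gives the combustion energy. The factors below are approximate defaults quoted for illustration only; the dataset itself uses the EPA GHGRP (40 CFR Part 98) default factors by fuel type.

```python
# energy [MMBtu] = CO2 [kg] / emission factor [kg CO2 per MMBtu]
# Approximate factors for illustration; consult Table C-1 of 40 CFR Part 98
# for the authoritative values applied in the dataset.
EMISSION_FACTORS_KG_PER_MMBTU = {
    "Natural Gas": 53.06,
    "Bituminous Coal": 93.28,
    "Distillate Fuel Oil No. 2": 73.96,
}

def combustion_energy_mmbtu(co2_metric_tons, fuel):
    """Combustion energy implied by reported CO2 emissions for the given fuel."""
    return co2_metric_tons * 1000.0 / EMISSION_FACTORS_KG_PER_MMBTU[fuel]

# A facility reporting 30,000 t CO2 from natural gas burned roughly 565,000 MMBtu:
print(round(combustion_energy_mmbtu(30_000, "Natural Gas")))
```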

  3. Peak Performance for Healthy Schools

    ERIC Educational Resources Information Center

    McKale, Chuck; Townsend, Scott

    2012-01-01

    Far from the limelight of LEED, Energy Star or Green Globes certifications are the energy codes developed and updated by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the International Code Council (ICC) through the support of the Department of Energy (DOE) as minimum guidelines for building envelope,…

  4. Metalloid Aluminum Clusters with Fluorine

    DTIC Science & Technology

    2016-12-01

    Keywords: molecular dynamics, binding energy, SIESTA code, density of states, projected density of states. ... high energy density compared to explosives, but typically release this energy slowly via diffusion-limited combustion. There is recent interest in using ... examine the cluster binding energy and electronic structure. Partial fluorine substitution in a prototypical aluminum-cyclopentadienyl cluster results ...

  5. Dynamics of vacuum-sealed, double-leaf partitions

    NASA Astrophysics Data System (ADS)

    Kavanaugh, Joshua Stephen

    The goal of this research is to investigate the feasibility and potential effectiveness of using vacuum-sealed, double-leaf partitions for applications in noise control. Substantial work has been done previously on double-leaf partitions where the acoustics of the inner chamber and mechanical vibrations of structural supports are passively and actively controlled. The work presented here is unique in that the proposed system aims to eliminate the need for active acoustic control of transmitted acoustic energy by removing all the air between the two panels of the double partition. Therefore, the only remaining energy paths would be along the boundary and at the points where there are intermediate structural supports connecting the two panels. The eventual goal of the research is to develop a high-loss double-leaf partition that simplifies active control by removing the need for control of the air cavity and channeling all the energy into discrete structural paths. The work presented here is a first step towards the goal of designing a high-loss, actively-controlled double-leaf partition with an air-evacuated inner chamber. One experiment is conducted to investigate the effects of various levels of vacuum on the response of a double-leaf partition whose panels are mechanically coupled only at the boundary. Another experiment is conducted which investigates the effect of changing the stiffness of an intermediate support coupling the two panels of a double-leaf partition in which a vacuum has been applied to the inner cavity. The available equipment was able to maintain a 99% vacuum between the panels. Both experiments are accompanied by analytical models used to investigate the importance of various dynamic parameters. Results indicate that the vacuum-sealed system shows some potential for increased transmission loss, primarily by changing the natural frequencies of the double-leaf partition.

  6. Monte Carlo calculations of initial energies of electrons in water irradiated by photons with energies up to 1GeV.

    PubMed

    Todo, A S; Hiromoto, G; Turner, J E; Hamm, R N; Wright, H A

    1982-12-01

    Previous calculations of the initial energies of electrons produced in water irradiated by photons are extended to 1 GeV by including pair and triplet production. Calculations were performed with the Monte Carlo computer code PHOEL-3, which replaces the earlier code, PHOEL-2. Tables of initial electron energies are presented for single interactions of monoenergetic photons at a number of energies from 10 keV to 1 GeV. These tables can be used to compute kerma in water irradiated by photons with arbitrary energy spectra to 1 GeV. In addition, separate tables of Compton- and pair-electron spectra are given over this energy range. The code PHOEL-3 is available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830.
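
    In symbols, such tables are typically used through the standard kerma relation (a textbook expression, not quoted from the paper):

        K = \int \Phi(E) \, \frac{\mu}{\rho}(E) \, \bar{E}_e(E) \, dE

    where Φ(E) is the photon fluence spectrum, μ/ρ(E) the total mass interaction coefficient of water, and Ē_e(E) the mean total initial electron energy per photon interaction taken from the tabulated spectra.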

  7. Lessons learned from new construction utility demand side management programs and their implications for implementing building energy codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wise, B.K.; Hughes, K.R.; Danko, S.L.

    1994-07-01

    This report was prepared for the US Department of Energy (DOE) Office of Codes and Standards by the Pacific Northwest Laboratory (PNL) through its Building Energy Standards Program (BESP). The purpose of this task was to identify demand-side management (DSM) strategies for new construction that utilities have adopted or developed to promote energy-efficient design and construction. PNL conducted a survey of utilities and used the information gathered to extrapolate lessons learned and to identify evolving trends in utility new-construction DSM programs. The ultimate goal of the task is to identify opportunities where states might work collaboratively with utilities to promote the adoption, implementation, and enforcement of energy-efficient building energy codes.

  8. Coupled Atom-Polar Molecule Condensate Systems: A Theoretical Adventure

    DTIC Science & Technology

    2014-07-14

    second uses the linear-response theory more familiar to people working in the field of condensed-matter physics. We have introduced a quasiparticle ... picture and found that in this picture the bare EIT model in Fig. 2 (a) can be compared to a double EIT system shown in Fig. 2 (b). The quasiparticle ... energy levels consist of a particle (with positive quasiparticle energy) and a hole (with negative quasiparticle energy) branch. The double EIT

  9. Double ionization of He(1s²) and He(1s2s ³S) by a single high-energy photon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, Z.; Shakeshaft, R.

    1994-05-01

    We have calculated the energy and angular distributions for double ionization of He(1s²) and He(1s2s ³S) by one photon, over a range of photon energies up to a few keV. The calculations were based on using a fairly accurate initial-state wave function, determined so as to exactly satisfy the Kato cusp conditions, and a final-state wave function which is a product of three Coulomb wave functions modified by a short-range correction term. There are at least three different mechanisms for double ionization, and each one leaves a mark on the angular distribution. When the energies of the two electrons are equal, the contribution of each mechanism to the angular asymmetry parameter can be estimated on theoretical grounds; we compare these estimates with the calculated results to give a further indication of the roles of the various mechanisms. Concerning the shapes of the energy and angular distributions, we find significant differences between double ionization of singlet and triplet helium; in particular, the probability for one high-energy photon to eject two equal-energy electrons from triplet helium nearly vanishes owing to the Pauli exclusion principle and to interference effects resulting from antisymmetrization. In two appendixes we present some details of the integration involved in the calculations.

  10. Families’ Experiences of Doubling Up After Homelessness

    PubMed Central

    Bush, Hannah; Shinn, Marybeth

    2017-01-01

    This study examined experiences of doubling up among families after episodes of homelessness. Doubling up refers to two or more adults or families residing in the same housing unit, which has been an increasing trend in the United States in recent decades. Within the past 14 years, the number of households containing more than one family, related or unrelated, has more than tripled. Although doubling up is increasingly common among families at all income levels, this study seeks to understand the experiences of doubling up among families who have been homeless. Through qualitative interviews with caregivers of 29 families, we analyzed advantages and disadvantages of doubling up with the caregiver’s parent, other family, and nonfamily. Experiences were rated on a four-point scale—(1) mostly negative, (2) negative mixed, (3) positive mixed, and (4) mostly positive—and coded for various positive and negative themes. Overall, we found that doubling up was a generally negative experience for families in our sample, regardless of their relationship to their hosts. Common themes included negative effects on children, undesirable environments, interpersonal tension, and feelings of impermanence and instability. For formerly sheltered families in this study, doubling up after shelter did not resolve their period of housing instability and may be only another stop in an ongoing cycle of homelessness. PMID:29326758

  11. Single-intensity-recording optical encryption technique based on phase retrieval algorithm and QR code

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi

    2014-12-01

    Based on a phase retrieval algorithm and QR codes, a new optical encryption technique that only needs to record one intensity distribution is proposed. In this encryption process, firstly, the QR code is generated from the information to be encrypted; then the generated QR code is placed in the input plane of a 4-f system and undergoes double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks, and suitable for harsh transmission conditions.
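
    The encryption step described above follows the classical double random phase encoding (DRPE) scheme; the sketch below illustrates that step with NumPy (a placeholder binary array stands in for the QR code, and the phase-retrieval decryption is not shown).

      import numpy as np

      rng = np.random.default_rng(0)
      qr = rng.integers(0, 2, size=(64, 64)).astype(float)   # placeholder for a QR code image

      # Random phase masks in the input and Fourier planes (the two keys)
      phi1 = np.exp(1j * 2 * np.pi * rng.random(qr.shape))
      phi2 = np.exp(1j * 2 * np.pi * rng.random(qr.shape))

      # 4-f double random phase encoding; only the output intensity is recorded
      field = np.fft.ifft2(np.fft.fft2(qr * phi1) * phi2)
      ciphertext_intensity = np.abs(field) ** 2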

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.

    As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because traditional mechanisms for code improvement, including envelope, mechanical and lighting, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufman 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft² of roof area or 0.5 W/ft² of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balance of energy measures when setting the code requirement levels. As the stringency of the code continues to increase however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs. One alternate compliance path includes purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent with REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods. The ways in which the existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence.
The price of RECs has shown high variation by market and over time, which makes it critical for code language tied to a renewable energy generation requirement to be updated frequently, or the requirement will not remain price-equivalent over time. Section 4 of the report provides a maximum case estimate for impact to the PV market and the REC market based on the Kaufmann et al. proposed requirement levels. If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement levels of 4 W/ft² of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
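
    The capacity figures above follow directly from the 4 W/ft² requirement; the short sketch below back-calculates the roof areas they imply (an illustrative consistency check, not an analysis from the report).

      REQ_W_PER_FT2 = 4.0            # proposed requirement: installed PV power per ft^2 of roof

      commercial_mw = 4_700          # estimated commercial-sector capacity quoted above
      residential_mw = 2_300         # estimated residential-sector capacity quoted above

      # Back-calculated new roof area implied by each capacity estimate
      commercial_roof_ft2 = commercial_mw * 1e6 / REQ_W_PER_FT2    # ~1.2e9 ft^2
      residential_roof_ft2 = residential_mw * 1e6 / REQ_W_PER_FT2  # ~5.8e8 ft^2

      print(f"{commercial_roof_ft2:.2e} ft^2 commercial, {residential_roof_ft2:.2e} ft^2 residential")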

  13. Haag duality for Kitaev’s quantum double model for abelian groups

    NASA Astrophysics Data System (ADS)

    Fiedler, Leander; Naaijkens, Pieter

    2015-11-01

    We prove Haag duality for cone-like regions in the ground state representation corresponding to the translational invariant ground state of Kitaev’s quantum double model for finite abelian groups. This property says that if an observable commutes with all observables localized outside the cone region, it actually is an element of the von Neumann algebra generated by the local observables inside the cone. This strengthens locality, which says that observables localized in disjoint regions commute. As an application, we consider the superselection structure of the quantum double model for abelian groups on an infinite lattice in the spirit of the Doplicher-Haag-Roberts program in algebraic quantum field theory. We find that, as is the case for the toric code model on an infinite lattice, the superselection structure is given by the category of irreducible representations of the quantum double.

  14. Cryptographic salting for security enhancement of double random phase encryption schemes

    NASA Astrophysics Data System (ADS)

    Velez Zea, Alejandro; Fredy Barrera, John; Torroba, Roberto

    2017-10-01

    Security in optical encryption techniques is a subject of great importance, especially in light of recent reports of successful attacks. We propose a new procedure to reinforce the ciphertexts generated in double random phase encrypting experimental setups. This ciphertext is protected by multiplexing with a ‘salt’ ciphertext coded with the same setup. We present an experimental implementation of the ‘salting’ technique. Thereafter, we analyze the resistance of the ‘salted’ ciphertext under some of the commonly known attacks reported in the literature, demonstrating the validity of our proposal.

  15. Flow and Temperature Distribution Evaluation on Sodium Heated Large-sized Straight Double-wall-tube Steam Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisohara, Naoyuki; Moribe, Takeshi; Sakai, Takaaki

    2006-07-01

    The sodium heated steam generator (SG) being designed in the feasibility study on commercialized fast reactor cycle systems is a straight double-wall-tube type. The SG is large sized to reduce its manufacturing cost through economies of scale. This paper addresses the temperature and flow multi-dimensional distributions at steady state to obtain the prospect of the SG. Large-sized heat exchanger components are prone to have non-uniform flow and temperature distributions. These phenomena might lead to tube buckling or tube to tube-sheet junction failure in straight tube type SGs, owing to differences in tube thermal expansion. The flow adjustment devices installed in the SG are optimized to prevent these issues, and the temperature distribution properties are uncovered by analysis methods. The analysis model of the SG consists of two parts, a sodium inlet distribution plenum (the plenum) and a heat transfer tubes bundle region (the bundle). The flow and temperature distributions in the plenum and the bundle are evaluated by the three-dimensional code 'FLUENT' and the two dimensional thermal-hydraulic code 'MSG', respectively. The MSG code is particularly developed for sodium heated SGs in JAEA. These codes have revealed that the sodium flow is distributed uniformly by the flow adjustment devices, and that the lateral tube temperature distributions remain within the allowable temperature range for the structural integrity of the tubes and the tube to tube-sheet junctions. (authors)

  16. Double Hits in Schizophrenia.

    PubMed

    Vorstman, Jacob A S; Olde Loohuis, Loes M; Kahn, René S; Ophoff, Roel A

    2018-05-14

    The co-occurrence of a Copy Number Variant (CNV) and a functional variant on the other allele may be a relevant genetic mechanism in schizophrenia. We hypothesized that the cumulative burden of such double hits - in particular those composed of a deletion and a coding single nucleotide variation (SNV) - is increased in patients with schizophrenia. We combined CNV data with coding variants data in 795 patients with schizophrenia and 474 controls. To limit false CNV detection, only CNVs called by two algorithms were included. CNV-affected genes were subsequently examined for coding SNVs, which we termed "CNV-SNVs". Correcting for total queried sequence, we assessed the CNV-SNV burden and the combined predicted deleterious effect. We estimated p-values by permutation of the phenotype. We detected 105 CNV-SNVs; 67 in duplicated and 38 in deleted genic sequence. While the difference in CNV-SNV rates was not significant, the combined deleteriousness inferred by CNV-SNVs in deleted sequence was almost fourfold higher in cases compared to controls (nominal p = 0.009). This effect may be driven by a higher number of CNV-SNVs and/or by a higher degree of predicted deleteriousness of CNV-SNVs. No such effect was observed for duplications. We provide early evidence that deletions co-occurring with a functional variant may be relevant, albeit of modest impact, for the genetic etiology of schizophrenia. Large-scale consortium studies are required to validate our findings. Sequence-based analyses would provide the best resolution for detection of CNVs as well as coding variants genome-wide.

  17. Oscillators: Old and new perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, Jayanta K.; Roy, Jyotirmoy

    We consider some of the well known oscillators in literature which are known to exhibit interesting effects of nonlinearity. We review the Lindstedt-Poincare technique for dealing with the nonlinear effects and then go on to introduce the relevance of the renormalization group for the oscillator following the pioneering work of Chen et al. It is pointed out that the traditional Lindstedt-Poincare and the renormalization group techniques have operational connections. We use this to find an unexpected mode softening in the double pendulum. This mode softening prompted us to look for chaos in the double pendulum at low energies - energies that are just sufficient to allow the outer pendulum to rotate (the double pendulum is known to be chaotic at high energies - energies that are greater than that needed to make both pendulums rotate). The emergence of the chaos is strongly dependent on initial conditions.

  18. Necessities for the First Life to Emerge

    NASA Astrophysics Data System (ADS)

    Ikehara, K.

    2017-07-01

    For the first life to emerge, the first protein must be produced by random joining of amino acids in protein 0th-order structure. In addition, the first genetic code and the first double-stranded gene must encode the protein 0th-order structure.

  19. Double Wall Framing Technique An Example of High Performance, Sustainable Building Envelope Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosny, Dr. Jan; Asiz, Andi; Shrestha, Som S

    2015-01-01

    Double wall technologies utilizing wood framing have been well-known and used in North American buildings for decades. Most double wall designs use only natural materials such as wood products, gypsum, and cellulose fiber insulation, making this one of the few building envelope technologies achieving high thermal performance without the use of plastic foams or fiberglass. Today, after several material and structural design modifications, these technologies are considered a highly thermally efficient, sustainable option for new construction and sometimes for retrofit projects. Following earlier analysis performed for the U.S. Department of Energy by Fraunhofer CSE, this paper discusses different ways to build double walls and to optimize their thermal performance to minimize space conditioning energy consumption. A description of structural configuration alternatives and a thermal performance analysis are presented as well. Laboratory tests to evaluate the thermal properties of the insulation used and the whole-wall thermal performance are also discussed in this paper. Finally, the thermal loads generated in field conditions by double walls are discussed using results from a joint project performed by the Zero Energy Building Research Alliance and Oak Ridge National Laboratory (ORNL), which made it possible to evaluate the market viability of low-energy homes built in the Tennessee Valley. Experimental data recorded in two of the test houses built during this field study are presented in this work.

  20. Energy transmission through a double-wall curved stiffened panel using Green's theorem

    NASA Astrophysics Data System (ADS)

    Ghosh, Subha; Bhattacharya, Partha

    2015-04-01

    It is a common practice in aerospace and automobile industries to use double wall panels as fuselage skins or in window panels to improve acoustic insulation. However, the scientific community is yet to develop a reliable prediction method for a suitable vibro-acoustic model of sound transmission through a curved double-wall panel. In this quest, the present work addresses the modeling of energy transmission through a double-wall curved panel. Subsequently, the radiation of sound power into the free field from the curved panel in the low to mid frequency range is also studied. In the developed model, to simulate a stiffened aircraft fuselage configuration, the outer wall is provided with longitudinal stiffeners. A modal expansion theory based on Green's theorem is implemented to model the energy transmission through an acoustically coupled double-wall curved panel. An elemental radiator approach is implemented to calculate the radiated energy from the curved surface into the free field. The developed model is first validated against various available numerical models. It has been observed in the present study that the radius of curvature of the surface has a prominent effect on the radiated sound power into the free field. The effect of the thickness of the air gap between the two curved surfaces on the sound power radiation has also been noted.
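
    For orientation, an elemental radiator estimate of radiated sound power is commonly written as W = vᴴ R v, with v the vector of element normal velocities and R the radiation resistance matrix; the sketch below implements the standard flat, baffled-panel version of that estimate (the paper treats a curved panel, so this is only an illustrative analogue with assumed dimensions and velocities).

      import numpy as np

      rho0, c = 1.21, 343.0                       # air density (kg/m^3) and speed of sound (m/s)
      f = 500.0                                   # analysis frequency (Hz)
      omega, k = 2 * np.pi * f, 2 * np.pi * f / c

      # Discretize a 0.5 m x 0.5 m baffled plate into elemental radiators
      n = 10
      xs, ys = np.meshgrid(np.linspace(0.0, 0.5, n), np.linspace(0.0, 0.5, n))
      pts = np.column_stack([xs.ravel(), ys.ravel()])
      S = (0.5 / n) ** 2                          # area of one element

      # Radiation resistance matrix, R_ij proportional to sin(k r_ij) / (k r_ij)
      r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      R = (omega**2 * rho0 * S**2 / (4 * np.pi * c)) * np.sinc(k * r / np.pi)

      v = np.random.default_rng(1).standard_normal(n * n) + 0j  # placeholder element velocities
      W = np.real(v.conj() @ R @ v)                             # radiated sound power (W)
      print(W)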

  1. Improvements to the nuclear model code GNASH for cross section calculations at higher energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, P.G.; Chadwick, M.B.

    1994-05-01

    The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV.

  2. IUTAM Symposium on Statistical Energy Analysis, 8-11 July 1997, Programme

    DTIC Science & Technology

    1997-01-01

    This was the first international scientific gathering devoted... Keywords: energy flow, continuum dynamics, vibrational energy, statistical energy analysis (SEA).

  3. Electron-helium S-wave model benchmark calculations. II. Double ionization, single ionization with excitation, and double excitation

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.

    2010-02-01

    The propagating exterior complex scaling (PECS) method is extended to all four-body processes in electron impact on helium in an S-wave model. Total and energy-differential cross sections are presented with benchmark accuracy for double ionization, single ionization with excitation, and double excitation (to autoionizing states) for incident-electron energies from threshold to 500 eV. While the PECS three-body cross sections for this model given in the preceding article [Phys. Rev. A 81, 022715 (2010)] are in good agreement with other methods, there are considerable discrepancies for these four-body processes. With this model we demonstrate the suitability of the PECS method for the complete solution of the electron-helium system.

  4. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials

    PubMed Central

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin

    2017-01-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only are different phase responses of the unit cells considered, but different phase sensitivities are also required. Due to different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits “0” and “1” to represent the low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments. PMID:28932671

  5. Earlier onset of motor deficits in mice with double mutations in Dyt1 and Sgce.

    PubMed

    Yokoi, Fumiaki; Yang, Guang; Li, Jindong; DeAndrade, Mark P; Zhou, Tong; Li, Yuqing

    2010-10-01

    DYT1 early-onset generalized torsion dystonia is an inherited movement disorder caused by mutations in DYT1 coding for torsinA with ∼30% penetrance. Most DYT1 dystonia patients exhibit symptoms during childhood and adolescence. On the other hand, DYT1 mutation carriers without symptoms during these periods mostly do not exhibit symptoms later in their life. Little is known about what controls the timing of the onset, a critical issue for DYT1 mutation carriers. DYT11 myoclonus-dystonia is caused by mutations in SGCE coding for ε-sarcoglycan. Two dystonia patients from a single family with double mutations in DYT1 and SGCE exhibited more severe symptoms. A recent study suggested that torsinA contributes to the quality control of ε-sarcoglycan. Here, we derived mice carrying mutations in both Dyt1 and Sgce and found that these double mutant mice showed earlier onset of motor deficits in the beam-walking test. A novel monoclonal antibody against mouse ε-sarcoglycan was developed by using Sgce knock-out mice to avoid immune tolerance. Western blot analysis suggested that functional deficits of torsinA and ε-sarcoglycan may independently cause motor deficits. Examining additional mutations in other dystonia genes may be beneficial to predict the onset in DYT1 mutation carriers.

  6. Data in support of energy performance of double-glazed windows.

    PubMed

    Shakouri, Mahmoud; Banihashemi, Saeed

    2016-06-01

    This paper provides the data used in a research project to propose a new simplified windows rating system based on saved annual energy ("Developing an empirical predictive energy-rating model for windows by using Artificial Neural Network" (Shakouri Hassanabadi and Banihashemi Namini, 2012) [1], "Climatic, parametric and non-parametric analysis of energy performance of double-glazed windows in different climates" (Banihashemi et al., 2015) [2]). A full factorial simulation study was conducted to evaluate the performance of 26 different types of windows in a four-story residential building. In order to generalize the results, the selected windows were tested in four climates of cold, tropical, temperate, and hot and arid; and four different main orientations of North, West, South and East. The accompanied datasets include the annual saved cooling and heating energy in different climates and orientations by using the selected windows. Moreover, a complete dataset is provided that includes the specifications of 26 windows, climate data, month, and orientation of the window. This dataset can be used to make predictive models for energy efficiency assessment of double glazed windows.

  7. Bond-rearrangement and ionization mechanisms in the photo-double-ionization of simple hydrocarbons (C 2H 4, C 2H 3F, and 1,1-C 2H 2F 2) near and above threshold

    DOE PAGES

    Gaire, B.; Gatton, A. S.; Wiegandt, F.; ...

    2016-09-14

    We have investigated bond-rearrangement driven by photo-double-ionization (PDI) near and above the double ionization threshold in a sequence of carbon-carbon double bonded hydrocarbon molecules: ethylene, fluoroethylene, and 1,1-difluoroethylene. We employ the kinematically complete cold target recoil ion momentum spectroscopy (COLTRIMS) method to resolve all photo-double-ionization events leading to two ionic fragments. We observe changes in the branching ratios of different dissociative ionization channels depending on the presence of none, one, or two fluorine atoms. The role of the fluorine atom in the bond-rearrangement channels is intriguing, as is evident from the re-ordering of the threshold energies of the PDI in the fluorinated molecules. These effects offer a compelling argument that the electronegativity of the fluorine (or the polarity of the molecule) strongly influences the potential energy surfaces of the molecules and drives bond-rearrangement during the dissociation process. The energy sharing and the relative angle between the 3D-momentum vectors of the two electrons provide clear evidence of direct and indirect PDI processes.

  8. Generation and Analysis of Subpicosecond Double Electron Bunch at the Brookhaven Accelerator Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babzien, M.; Kusche, K.; Yakimenko, V.

    2011-08-09

    Two compressed electron beam bunches from a single 60-MeV bunch have been generated in a reproducible manner during compression in the magnetic chicane - 'dog leg' arrangement at ATF. Measurements indicate they have comparable bunch lengths (~100-200 fs) and are separated in energy by ~1.8 MeV, with the higher-energy bunch preceding the lower-energy bunch by 0.5-1 ps. Some simulation results for analyzing the double-bunch formation process are also presented.

  9. Building Code Compliance and Enforcement: The Experience of San Francisco's Residential Energy Conservation Ordinance and California's Building Standards for New Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vine, E.

    1990-11-01

    As part of Lawrence Berkeley Laboratory's (LBL) technical assistance to the Sustainable City Project, compliance and enforcement activities related to local and state building codes for existing and new construction were evaluated in two case studies. The analysis of the City of San Francisco's Residential Energy Conservation Ordinance (RECO) showed that a limited, prescriptive energy conservation ordinance for existing residential construction can be enforced relatively easily with little administrative cost, and that compliance with such ordinances can be quite high. Compliance with the code was facilitated by extensive publicity, an informed public concerned with the cost of energy and knowledgeable about energy efficiency, the threat of punishment (Order of Abatement), the use of private inspectors, and training workshops for City and private inspectors. The analysis of California's Title 24 Standards for new residential and commercial construction showed that enforcement of this type of code for many climate zones is more complex and requires extensive administrative support for education and training of inspectors, architects, engineers, and builders. Under this code, prescriptive and performance approaches for compliance are permitted, resulting in the demand for alternative methods of enforcement: technical assistance, plan review, field inspection, and computer analysis. In contrast to existing construction, building design and new materials and construction practices are of critical importance in new construction, creating a need for extensive technical assistance and extensive interaction between enforcement personnel and the building community. Compliance problems associated with building design and installation did occur in both residential and nonresidential buildings. Because statewide codes are enforced by local officials, these problems may increase over time as energy standards change and become more complex and as other standards (e.g., health and safety codes) remain a higher priority. The California Energy Commission realizes that code enforcement by itself is insufficient and expects that additional educational and technical assistance efforts (e.g., manuals, training programs, and toll-free telephone lines) will ameliorate these problems.

  10. Ultrahigh contrast from a frequency-doubled chirped-pulse-amplification beamline.

    PubMed

    Hillier, David; Danson, Colin; Duffield, Stuart; Egan, David; Elsmere, Stephen; Girling, Mark; Harvey, Ewan; Hopps, Nicholas; Norman, Michael; Parker, Stefan; Treadwell, Paul; Winter, David; Bett, Thomas

    2013-06-20

    This paper describes frequency-doubled operation of a high-energy chirped-pulse-amplification beamline. Efficient type-I second-harmonic generation was achieved using a 3 mm thick, 320 mm aperture KDP crystal. Shots were fired at a range of energies, achieving more than 100 J in a subpicosecond, 527 nm laser pulse with a power contrast of 10^14.

  11. Progress on China nuclear data processing code system

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran-90. Ruler has been tested on 32-bit computers with Windows-XP and Linux operating systems. The verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code. The validation of Ruler has been performed by using the WIMSD5B code.

  12. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement in both physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm² is found when a 45-point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.

  13. Computer code for analyzing the performance of aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.

    1985-05-01

    A code called Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides an ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.

  14. Electromagnetic and Radiative Properties of Neutron Star Magnetospheres

    NASA Astrophysics Data System (ADS)

    Li, Jason G.

    2014-05-01

    Magnetospheres of neutron stars are commonly modeled as either devoid of plasma in "vacuum'' models or filled with perfectly conducting plasma with negligible inertia in "force-free'' models. While numerically tractable, neither of these idealized limits can simultaneously account for both the plasma currents and the accelerating electric fields that are needed to explain the morphology and spectra of high-energy emission from pulsars. In this work we improve upon these models by considering the structure of magnetospheres filled with resistive plasma. We formulate Ohm's Law in the minimal velocity fluid frame and implement a time-dependent numerical code to construct a family of resistive solutions that smoothly bridges the gap between the vacuum and force-free magnetosphere solutions. We further apply our method to create a self-consistent model for the recently discovered intermittent pulsars that switch between two distinct states: an "on'', radio-loud state, and an "off'', radio-quiet state with lower spin-down luminosity. Essentially, we allow plasma to leak off open field lines in the absence of pair production in the "off'' state, reproducing observed differences in spin-down rates. Next, we examine models in which the high-energy emission from gamma-ray pulsars comes from reconnecting current sheets and layers near and beyond the light cylinder. The reconnected magnetic field provides a reservoir of energy that heats particles and can power high-energy synchrotron radiation. Emitting particles confined to the sheet naturally result in a strong caustic on the skymap and double peaked light curves for a broad range of observer angles. Interpulse bridge emission likely arises from interior to the light cylinder, along last open field lines that traverse the space between the polar caps and the current sheet. Finally, we apply our code to solve for the magnetospheric structure of merging neutron star binaries. We find that the scaling of electromagnetic luminosity with orbital angular velocity varies between the power 4 for nonspinning stars and the power 1.5 for rapidly spinning millisecond pulsars near contact. Our derived scalings and magnetospheres can be used to help understand electromagnetic signatures from merging neutron stars to be observed by Advanced LIGO.

  15. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  16. Double-heterojunction nanorod light-responsive LEDs for display applications.

    PubMed

    Oh, Nuri; Kim, Bong Hoon; Cho, Seong-Yong; Nam, Sooji; Rogers, Steven P; Jiang, Yiran; Flanagan, Joseph C; Zhai, You; Kim, Jae-Hwan; Lee, Jungyup; Yu, Yongjoon; Cho, Youn Kyoung; Hur, Gyum; Zhang, Jieqian; Trefonas, Peter; Rogers, John A; Shim, Moonsub

    2017-02-10

    Dual-functioning displays, which can simultaneously transmit and receive information and energy through visible light, would enable enhanced user interfaces and device-to-device interactivity. We demonstrate that double heterojunctions designed into colloidal semiconductor nanorods allow both efficient photocurrent generation through a photovoltaic response and electroluminescence within a single device. These dual-functioning, all-solution-processed double-heterojunction nanorod light-responsive light-emitting diodes open feasible routes to a variety of advanced applications, from touchless interactive screens to energy harvesting and scavenging displays and massively parallel display-to-display data communication. Copyright © 2017, American Association for the Advancement of Science.

  17. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regards to the fit-basis functions need to be taken. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to that of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
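
    As a small illustration of using a fit-basis function with the same inherent shape as a potential cut, the sketch below fits a Morse form to synthetic single-point energies with SciPy (a one-dimensional toy under assumed parameter values, not the multidimensional framework described above, which optimizes non-linear parameters with a conjugate gradient method).

      import numpy as np
      from scipy.optimize import curve_fit

      def morse(r, D, a, r0):
          # Morse-type fit-basis function: D * (1 - exp(-a (r - r0)))^2
          return D * (1.0 - np.exp(-a * (r - r0))) ** 2

      # Synthetic "single point" energies on a 1D grid (stand-in for ADGA-generated points)
      r_grid = np.linspace(0.7, 3.0, 25)
      rng = np.random.default_rng(2)
      e_grid = morse(r_grid, 0.18, 1.2, 1.1) + 1e-4 * rng.standard_normal(r_grid.size)

      # Non-linear least-squares fit of the parameters (D, a, r0)
      params, _ = curve_fit(morse, r_grid, e_grid, p0=(0.2, 1.0, 1.0))
      print(params)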

  18. Transcription and DNA Damage: Holding Hands or Crossing Swords?

    PubMed

    D'Alessandro, Giuseppina; d'Adda di Fagagna, Fabrizio

    2017-10-27

    Transcription has classically been considered a potential threat to genome integrity. Collision between transcription and DNA replication machinery, and retention of DNA:RNA hybrids, may result in genome instability. On the other hand, it has been proposed that active genes repair faster and preferentially via homologous recombination. Moreover, while canonical transcription is inhibited in the proximity of DNA double-strand breaks, a growing body of evidence supports active non-canonical transcription at DNA damage sites. Small non-coding RNAs accumulate at DNA double-strand break sites in mammals and other organisms, and are involved in DNA damage signaling and repair. Furthermore, RNA binding proteins are recruited to DNA damage sites and participate in the DNA damage response. Here, we discuss the impact of transcription on genome stability, the role of RNA binding proteins at DNA damage sites, and the function of small non-coding RNAs generated upon damage in the signaling and repair of DNA lesions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  20. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  1. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  2. 12 CFR 1807.503 - Project completion.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... applicable: One of three model codes (Uniform Building Code (ICBO), National Building Code (BOCA), Standard (Southern) Building Code (SBCCI)); or the Council of American Building Officials (CABO) one or two family... must meet the current edition of the Model Energy Code published by the Council of American Building...

  3. Energy transfer, orbital angular momentum, and discrete current in a double-ring fiber array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexeyev, C. N.; Volyar, A. V.; Yavorsky, M. A.

    We study energy transfer and orbital angular momentum of supermodes in a double-ring array of evanescently coupled monomode optical fibers. The structure of supermodes and the spectra of their propagation constants are obtained. The geometrical parameters of the array, at which the energy is mostly confined within the layers, are determined. The developed method for finding the supermodes of concentric arrays is generalized for the case of multiring arrays. The orbital angular momentum carried by a supermode of a double-ring array is calculated. The discrete lattice current is introduced. It is shown that the sum of discrete currents over the array is a conserved quantity. The connection of the total discrete current with orbital angular momentum of discrete optical vortices is made.

  4. Localization or tunneling in asymmetric double-well potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Dae-Yup, E-mail: dsong@sunchon.ac.kr

    An asymmetric double-well potential is considered, assuming that the wells are parabolic around the minima. The WKB wave function of a given energy is constructed inside the barrier between the wells. By matching the WKB function to the exact wave functions of the parabolic wells on both sides of the barrier, for two almost degenerate states, we find a quantization condition for the energy levels which reproduces the known energy splitting formula between the two states. For the other low-lying non-degenerate states, we show that the eigenfunction should be primarily localized in one of the wells with negligible magnitude in the other. Using Dekker's method (Dekker, 1987), the present analysis generalizes earlier results for weakly biased double-well potentials to systems with arbitrary asymmetry.
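
    For reference, the familiar two-state splitting formula for a symmetric double well, which the quantization condition mentioned above is said to generalize, reads (a standard WKB result, not quoted from the paper):

        \Delta E \simeq \frac{\hbar \omega}{\pi} \exp\!\left( -\frac{1}{\hbar} \int_{x_1}^{x_2} |p(x)| \, dx \right)

    where ω is the oscillation frequency in each parabolic well and the integral of |p(x)| runs over the classically forbidden region between the turning points x_1 and x_2.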

  5. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.

  6. Incorporation of coupled nonequilibrium chemistry into a two-dimensional nozzle code (SEAGULL)

    NASA Technical Reports Server (NTRS)

    Ratliff, A. W.

    1979-01-01

    A two-dimensional multiple shock nozzle code (SEAGULL) was extended to include the effects of finite rate chemistry. The basic code that treats multiple shocks and contact surfaces was fully coupled with a generalized finite rate chemistry and vibrational energy exchange package. The modified code retains all of the original SEAGULL features plus the capability to treat chemical and vibrational nonequilibrium reactions. Any chemical and/or vibrational energy exchange mechanism can be handled as long as thermodynamic data and rate constants are available for all participating species.

  7. Two-dimensional Electronic Double-Quantum Coherence Spectroscopy

    PubMed Central

    Kim, Jeongho; Mukamel, Shaul

    2009-01-01

    CONSPECTUS The theory of electronic structure of many-electron systems like molecules is extraordinarily complicated. A lot can be learned by considering how electron density is distributed, on average, in the average field of the other electrons in the system. That is, mean field theory. However, to describe quantitatively chemical bonds, reactions, and spectroscopy requires consideration of the way that electrons avoid each other by the way they move; this is called electron correlation (or in physics, the many-body problem for fermions). While great progress has been made in theory, there is a need for incisive experimental tests that can be undertaken for large molecular systems in the condensed phase. Here we report a two-dimensional (2D) optical coherent spectroscopy that correlates the double excited electronic states to constituent single excited states. The technique, termed two-dimensional double-coherence spectroscopy (2D-DQCS), makes use of multiple, time-ordered ultrashort coherent optical pulses to create double- and single-quantum coherences over time intervals between the pulses. The resulting two-dimensional electronic spectrum maps the energy correlation between the first excited state and two-photon allowed double-quantum states. The principle of the experiment is that when the energy of the double-quantum state, viewed in simple models as a double HOMO to LUMO excitation, equals twice that of a single excitation, then no signal is radiated. However, electron-electron interactions—a combination of exchange interactions and electron correlation—in real systems generates a signal that reveals precisely how the energy of the double-quantum resonance differs from twice the single-quantum resonance. The energy shift measured in this experiment reveals how the second excitation is perturbed by both the presence of the first excitation and the way that the other electrons in the system have responded to the presence of that first excitation. We compare a series of organic dye molecules and find that the energy offset for adding a second electronic excitation to the system relative to the first excitation is on the order of tens of milli-electronvolts, and it depends quite sensitively on molecular geometry. These results demonstrate the effectiveness of 2D-DQCS for elucidating quantitative information about electron-electron interactions, many-electron wavefunctions, and electron correlation in electronic excited states and excitons. PMID:19552412
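
    In the notation used above (a paraphrase, not the paper's own equation), the quantity extracted from the 2D-DQCS spectrum is the correlation-induced shift

        \Delta = E_f - 2 E_e

    where E_f is the energy of the two-photon-allowed double-quantum state and E_e that of the single excitation; for non-interacting electrons Δ = 0 and no signal is radiated.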

  8. Development of Polyimide Foam for Aircraft Sidewall Applications

    NASA Technical Reports Server (NTRS)

    Silcox, Richard; Cano, Roberto J.; Howerton, Brian M.; Bolton, J. Stuart; Kim, Nicholas N.

    2013-01-01

    In this paper, the use of polyimide foam as a lining in double panel applications is considered. It is being investigated here as a replacement for aircraft grade glass fiber and has a number of attractive functional attributes, not the least of which is its high fire resistance. The test configuration studied here consisted of two 1 mm (0.04 in.) thick, flat aluminum panels separated by 12.7 cm (5.0 in.) with a 7.6 cm (3.0 in.) thick layer of foam centered in that space. Random incidence transmission loss measurements were conducted on this buildup, and conventional poro-elastic models were used to predict the performance of the lining material. Results from two densities of foam are considered. The Biot parameters of the foam were determined by a combination of direct measurement (for density, flow resistivity and Young's modulus) and inverse characterization procedures (for porosity, tortuosity, viscous and thermal characteristic length, Poisson's ratio and loss factor). The inverse characterization procedure involved matching normal incidence standing wave tube measurements of absorption coefficient and transmission loss of the isolated foam with finite element predictions. When the foam parameters determined in this way were used to predict the performance of the complete double panel system, reasonable agreement was obtained between the measured transmission loss and predictions made using a commercial statistical energy analysis code.

  9. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. But, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  10. First results from the new double velocity-double energy spectrometer VERDI

    NASA Astrophysics Data System (ADS)

    Frégeau, M. O.; Oberstedt, S.; Gamboni, Th.; Geerts, W.; Hambsch, F.-J.; Vidali, M.

    2016-05-01

    The VERDI spectrometer (VElocity foR Direct mass Identification) is a two arm time-of-flight spectrometer built at the European Commission Joint Research Centre IRMM. It determines fragment masses and kinetic energy distributions produced in nuclear fission by means of the double velocity and double energy (2v-2E) method. The simultaneous measurement of pre- and post-neutron fragment characteristics allows studying the sharing of excitation energy between the two fragments. In particular, the evolution of fission modes and neutron multiplicity may be studied as a function of the available excitation energy. Both topics are of great importance for the development of models used in the evaluation of nuclear data, and also have important implications for the fundamental understanding of the fission process. The development of VERDI focuses on maximum geometrical efficiency while striving for the highest possible mass resolution. An innovative transmission start detector, using electrons ejected from the target itself, was developed. Stop signal and kinetic energy of both fragments are provided by two arrays of silicon detectors. The present design provides about 200 times higher geometrical efficiency than that of the famous COSI FAN TUTTE spectrometer [Nuclear Instruments and Methods in Physics Research 219 (1984) 569]. We report about a commissioning experiment of the VERDI spectrometer, present first results from a 2v-2E measurement of 252Cf spontaneous fission and discuss the potential of this instrument to contribute to the investigation of prompt fission neutron characteristics as a function of fission fragment properties.
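
    For orientation, the standard 2v-2E relations (textbook kinematics stated in our notation, not quoted from the abstract) give the pre-neutron-emission masses from momentum and mass conservation and the post-neutron masses from each fragment's measured energy and velocity:

        \[
          m_{1}^{\mathrm{pre}} = M_{\mathrm{CN}}\,\frac{v_{2}}{v_{1}+v_{2}}, \qquad
          m_{2}^{\mathrm{pre}} = M_{\mathrm{CN}}\,\frac{v_{1}}{v_{1}+v_{2}}, \qquad
          m_{i}^{\mathrm{post}} = \frac{2E_{i}}{v_{i}^{2}} .
        \]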

  11. Effects from the Reduction of Air Leakage on Energy and Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hun, Diana E.; Childs, Phillip W.; Atchley, Jerald Allen

    2014-01-01

    Buildings are responsible for approximately 40% of the energy used in the US. Codes have been increasing building envelope requirements, and in particular those related to improving airtightness, in order to reduce energy consumption. The main goal of this research was to evaluate the effects from reductions in air leakage on energy loads and material durability. To this end, we focused on the airtightness and thermal resistance criteria set by the 2012 International Energy Conservation Code (IECC).

  12. Single and Double Photoionization of Mg

    NASA Astrophysics Data System (ADS)

    Abdel-Naby, Shahin; Pindzola, M. S.; Colgan, J.

    2014-05-01

    Single and double photoionization cross sections for Mg are calculated using a time-dependent close-coupling method. The correlation between the two 3s subshell electrons of Mg is obtained by relaxation of the close-coupled equations in imaginary time. An implicit method is used to propagate the close-coupled equations in real time to obtain single and double ionization cross sections for Mg. Energy and angle triple differential cross sections for double photoionization at equal energy sharing of E1 = E2 = 16.4 eV are compared with Elettra experiments and previous theoretical calculations. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California, NICS in Knoxville, Tennessee, and OLCF in Oak Ridge, Tennessee.

  13. Femtosecond laser-induced periodic surface structures on silicon upon polarization controlled two-color double-pulse irradiation.

    PubMed

    Höhm, Sandra; Herzlieb, Marcel; Rosenfeld, Arkadi; Krüger, Jörg; Bonse, Jörn

    2015-01-12

    Two-color double-fs-pulse experiments were performed on silicon wafers to study the temporally distributed energy deposition in the formation of laser-induced periodic surface structures (LIPSS). A Mach-Zehnder interferometer generated parallel or cross-polarized double-pulse sequences at 400 and 800 nm wavelength, with inter-pulse delays up to a few picoseconds between the sub-ablation 50-fs-pulses. Multiple two-color double-pulse sequences were collinearly focused by a spherical mirror to the sample. The resulting LIPSS characteristics (periods, areas) were analyzed by scanning electron microscopy. A wavelength-dependent plasmonic mechanism is proposed to explain the delay-dependence of the LIPSS. These two-color experiments extend previous single-color studies and prove the importance of the ultrafast energy deposition for LIPSS formation.

  14. A generic double-curvature piezoelectric shell energy harvester: Linear/nonlinear theory and applications

    NASA Astrophysics Data System (ADS)

    Zhang, X. F.; Hu, S. D.; Tzou, H. S.

    2014-12-01

    Converting vibration energy to useful electric energy has attracted much attention in recent years. Based on the electromechanical coupling of piezoelectricity, distributed piezoelectric zero-curvature type (e.g., beams and plates) energy harvesters have been proposed and evaluated. The objective of this study is to develop a generic linear and nonlinear piezoelectric shell energy harvesting theory based on a double-curvature shell. The generic piezoelectric shell energy harvester consists of an elastic double-curvature shell and piezoelectric patches laminated on its surface(s). With a current model in the closed-circuit condition, output voltages and energies across a resistive load are evaluated when the shell is subjected to harmonic excitations. Steady-state voltage and power outputs across the resistive load are calculated at resonance for each shell mode. The piezoelectric shell energy harvesting mechanism can be simplified to shell (e.g., cylindrical, conical, spherical, paraboloidal, etc.) and non-shell (beam, plate, ring, arch, etc.) distributed harvesters using two Lamé parameters and two curvature radii of the selected harvester geometry. To demonstrate the utility and simplification procedures, the generic linear/nonlinear shell energy harvester mechanism is simplified to three specific structures, i.e., a cantilever beam case, a circular ring case and a conical shell case. Results show the versatility of the generic linear/nonlinear shell energy harvesting mechanism and the validity of the simplification procedures.

  15. Extension of TOPAS for the simulation of proton radiation effects considering molecular and cellular endpoints

    NASA Astrophysics Data System (ADS)

    Polster, Lisa; Schuemann, Jan; Rinaldi, Ilaria; Burigo, Lucas; McNamara, Aimee L.; Stewart, Robert D.; Attili, Andrea; Carlson, David J.; Sato, Tatsuhiko; Ramos Méndez, José; Faddegon, Bruce; Perl, Joseph; Paganetti, Harald

    2015-07-01

    The aim of this work is to extend a widely used proton Monte Carlo tool, TOPAS, towards the modeling of relative biological effect (RBE) distributions in experimental arrangements as well as patients. TOPAS provides a software core which users configure by writing parameter files to, for instance, define application specific geometries and scoring conditions. Expert users may further extend TOPAS scoring capabilities by plugging in their own additional C++ code. This structure was utilized for the implementation of eight biophysical models suited to calculate proton RBE. As far as physics parameters are concerned, four of these models are based on the proton linear energy transfer, while the others are based on DNA double strand break induction and the frequency-mean specific energy, lineal energy, or delta electron generated track structure. The biological input parameters for all models are typically inferred from fits of the models to radiobiological experiments. The model structures have been implemented in a coherent way within the TOPAS architecture. Their performance was validated against measured experimental data on proton RBE in a spread-out Bragg peak using V79 Chinese Hamster cells. This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models.

  16. Extension of TOPAS for the simulation of proton radiation effects considering molecular and cellular endpoints

    PubMed Central

    Polster, Lisa; Schuemann, Jan; Rinaldi, Ilaria; Burigo, Lucas; McNamara, Aimee L.; Stewart, Robert D.; Attili, Andrea; Carlson, David J.; Sato, Tatsuhiko; Méndez, José Ramos; Faddegon, Bruce; Perl, Joseph; Paganetti, Harald

    2015-01-01

    The aim of this work is to extend a widely used proton Monte Carlo tool, TOPAS, towards the modeling of relative biological effect (RBE) distributions in experimental arrangements as well as patients. TOPAS provides a software core which users configure by writing parameter files to, for instance, define application specific geometries and scoring conditions. Expert users may further extend TOPAS scoring capabilities by plugging in their own additional C++ code. This structure was utilized for the implementation of eight biophysical models suited to calculate proton RBE. As far as physics parameters are concerned, four of these models are based on the proton linear energy transfer (LET), while the others are based on DNA Double Strand Break (DSB) induction and the frequency-mean specific energy, lineal energy, or delta electron generated track structure. The biological input parameters for all models are typically inferred from fits of the models to radiobiological experiments. The model structures have been implemented in a coherent way within the TOPAS architecture. Their performance was validated against measured experimental data on proton RBE in a spread-out Bragg peak using V79 Chinese Hamster cells. This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models. PMID:26061666

  17. Investigation of deformation effects on the decay properties of 12C + α cluster states in 16O

    NASA Astrophysics Data System (ADS)

    Soylu, A.; Koyuncu, F.; Coban, A.; Bayrak, O.; Freer, M.

    2018-04-01

    We have analyzed the elastic scattering angular distribution data of the α + 12C reaction over a wide energy range (Elab = 28.2 to 35.5 MeV) within the framework of the Optical Model formalism. A double folding (DF) type real potential was used with a phenomenological Woods-Saxon-squared (WS2) type imaginary potential. Good agreement between the calculations and experimental data was obtained. Using the real DF potential, we have calculated the properties of the α-cluster states in 16O with the Gamow code, as well as the α-decay widths with the WKB method. We implemented a 12C + α cluster framework for the calculation of the excitation energies and decay widths of 16O as a function of the orientation of the planar 12C nucleus with respect to the α-particle. These calculations showed strong sensitivity of the widths and excitation energies to the orientation. Branching ratios were also calculated and, though they are less sensitive to the 12C orientation, it was found that the 12Cgs + α structure, with the α-particle orbiting the 12C in its ground state, is dominant. This work demonstrates that deformation, and the orientation, of 12C plays a crucial role in the understanding of the nature of the α-cluster states in 16O.

  18. Broad ion energy distributions in helicon wave-coupled helium plasma

    NASA Astrophysics Data System (ADS)

    Woller, K. B.; Whyte, D. G.; Wright, G. M.

    2017-05-01

    Helium ion energy distributions were measured in helicon wave-coupled plasmas of the dynamics of ion implantation and sputtering of surface experiment using a retarding field energy analyzer. The shape of the energy distribution is a double-peak, characteristic of radiofrequency plasma potential modulation. The broad distribution is located within a radius of 0.8 cm, while the quartz tube of the plasma source has an inner radius of 2.2 cm. The ion energy distribution rapidly changes from a double-peak to a single peak in the radius range of 0.7-0.9 cm. The average ion energy is approximately uniform across the plasma column including the double-peak and single-peak regions. The widths of the broad distribution, ΔE, in the wave-coupled mode are large compared to the time-averaged ion energy, ⟨E⟩. On the axis (r = 0), ΔE/⟨E⟩ ≲ 3.4, and at a radius near the edge of the plasma column (r = 2.2 cm), ΔE/⟨E⟩ ~ 1.2. The discharge parameter space is scanned to investigate the effects of the magnetic field, input power, and chamber fill pressure on the wave-coupled mode that exhibits the sharp radial variation in the ion energy distribution.

  19. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  20. Validation of the "HAMP" mapping algorithm: a tool for long-term trauma research studies in the conversion of AIS 2005 to AIS 98.

    PubMed

    Adams, Derk; Schreuder, Astrid B; Salottolo, Kristin; Settell, April; Goss, J Richard

    2011-07-01

    There are significant changes in the abbreviated injury scale (AIS) 2005 system, which make it impractical to compare patients coded in AIS version 98 with patients coded in AIS version 2005. Harborview Medical Center created a computer algorithm "Harborview AIS Mapping Program (HAMP)" to automatically convert AIS 2005 to AIS 98 injury codes. The mapping was validated using 6 months of double-coded patient injury records from a Level I Trauma Center. HAMP was used to determine how closely individual AIS and injury severity scores (ISS) were converted from AIS 2005 to AIS 98 versions. The kappa statistic was used to measure the agreement between manually determined codes and HAMP-derived codes. Seven hundred forty-nine patient records were used for validation. For the conversion of AIS codes, the measure of agreement between HAMP and manually determined codes was κ = 0.84 (95% confidence interval, 0.82-0.86). The algorithm errors were smaller in magnitude than the manually determined coding errors. For the conversion of ISS, the agreement between HAMP-derived and manually determined ISS was κ = 0.81 (95% confidence interval, 0.78-0.84). The HAMP algorithm successfully converted injuries coded in AIS 2005 to AIS 98. This algorithm will be useful when comparing trauma patient clinical data across populations coded in different versions, especially for longitudinal studies.
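
    The agreement statistic used above is Cohen's kappa; a minimal sketch of the computation (toy data with hypothetical AIS codes, not the study's records) is:

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two equal-length lists of categorical codes."""
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
            return (observed - expected) / (1.0 - expected)

        # Hypothetical toy data: codes assigned manually vs. by the mapping tool.
        manual = ["140629", "450203", "140629", "853161", "140629"]
        mapped = ["140629", "450203", "140629", "140629", "140629"]
        print(f"kappa = {cohens_kappa(manual, mapped):.2f}")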

  1. 1W frequency-doubled VCSEL-pumped blue laser with high pulse energy

    NASA Astrophysics Data System (ADS)

    Van Leeuwen, Robert; Chen, Tong; Watkins, Laurence; Xu, Guoyang; Seurin, Jean-Francois; Wang, Qing; Zhou, Delai; Ghosh, Chuni

    2015-02-01

    We report on a Q-switched VCSEL side-pumped 946 nm Nd:YAG laser that produces high average power blue light with high pulse energy after frequency doubling in BBO. The gain medium was water cooled and symmetrically pumped by three 1 kW 808 nm VCSEL pump modules. More than 1 W blue output was achieved at 210 Hz with 4.9 mJ pulse energy and at 340 Hz with 3.2 mJ pulse energy, with 42% and 36% second harmonic conversion efficiency respectively. Higher pulse energy was obtained at lower repetition frequencies, up to 9.3 mJ at 70 Hz with 52% conversion efficiency.
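
    The quoted average powers are consistent with pulse energy times repetition rate:

        \[
          P_{\mathrm{avg}} = E_{\mathrm{pulse}} \, f_{\mathrm{rep}}:\quad
          4.9\,\mathrm{mJ}\times 210\,\mathrm{Hz}\approx 1.03\,\mathrm{W}, \qquad
          3.2\,\mathrm{mJ}\times 340\,\mathrm{Hz}\approx 1.09\,\mathrm{W}.
        \]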

  2. Hypothesis of Lithocoding: Origin of the Genetic Code as a "Double Jigsaw Puzzle" of Nucleobase-Containing Molecules and Amino Acids Assembled by Sequential Filling of Apatite Mineral Cellules.

    PubMed

    Skoblikow, Nikolai E; Zimin, Andrei A

    2016-05-01

    The hypothesis of direct coding, assuming the direct contact of pairs of coding molecules with amino acid side chains in hollow unit cells (cellules) of a regular crystal-structure mineral is proposed. The coding nucleobase-containing molecules in each cellule (named "lithocodon") partially shield each other; the remaining free space determines the stereochemical character of the filling side chain. Apatite-group minerals are considered as the most preferable for this type of coding (named "lithocoding"). A scheme of the cellule with certain stereometric parameters, providing for the isomeric selection of contacting molecules is proposed. We modelled the filling of cellules with molecules involved in direct coding, with the possibility of coding by their single combination for a group of stereochemically similar amino acids. The regular ordered arrangement of cellules enables the polymerization of amino acids and nucleobase-containing molecules in the same direction (named "lithotranslation") preventing the shift of coding. A table of the presumed "LithoCode" (possible and optimal lithocodon assignments for abiogenically synthesized α-amino acids involved in lithocoding and lithotranslation) is proposed. The magmatic nature of the mineral, abiogenic synthesis of organic molecules and polymerization events are considered within the framework of the proposed "volcanic scenario".

  3. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  4. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
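
    The statistical-physics flavor of the design can be sketched as a pairwise energy minimization under a fixed average symbol energy (an illustrative toy only, not the authors' EE algorithm; the 1/d potential, step size, and normalization are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)

        def optimize_constellation(m=16, dim=2, steps=2000, lr=0.01):
            """Toy constellation design: points repel via a pairwise 1/d potential
            while a normalization step enforces unit average symbol energy."""
            pts = rng.normal(size=(m, dim))
            for _ in range(steps):
                diff = pts[:, None, :] - pts[None, :, :]          # (m, m, dim)
                dist = np.linalg.norm(diff, axis=-1) + np.eye(m)  # avoid divide-by-zero
                force = (diff / dist[..., None] ** 3).sum(axis=1) # repulsive "force"
                pts += lr * force
                pts /= np.sqrt((pts ** 2).sum(axis=1).mean())     # unit average energy
            return pts

        constellation = optimize_constellation()
        print(constellation.shape)  # (16, 2)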

  5. Energy and Energy Cost Savings Analysis of the 2015 IECC for Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jian; Xie, YuLong; Athalye, Rahul A.

    As required by statute (42 USC 6833), DOE recently issued a determination that ANSI/ASHRAE/IES Standard 90.1-2013 would achieve greater energy efficiency in buildings subject to the code compared to the 2010 edition of the standard. Pacific Northwest National Laboratory (PNNL) conducted an energy savings analysis for Standard 90.1-2013 in support of its determination. While Standard 90.1 is the model energy standard for commercial and multi-family residential buildings over three floors (42 USC 6833), many states have historically adopted the International Energy Conservation Code (IECC) for both residential and commercial buildings. This report provides an assessment as to whether buildings constructed to the commercial energy efficiency provisions of the 2015 IECC would save energy and energy costs as compared to the 2012 IECC. PNNL also compared the energy performance of the 2015 IECC with the corresponding Standard 90.1-2013. The goal of this analysis is to help states and local jurisdictions make informed decisions regarding model code adoption.

  6. The effect of total noise on two-dimension OCDMA codes

    NASA Astrophysics Data System (ADS)

    Dulaimi, Layth A. Khalil Al; Badlishah Ahmed, R.; Yaakob, Naimah; Aljunid, Syed A.; Matem, Rima

    2017-11-01

    In this research, we evaluate the effect of total noise on the performance of two-dimensional (2-D) optical code-division multiple access (OCDMA) systems using the 2-D Modified Double Weight (MDW) code under various link parameters, including the impact of multiple-access interference (MAI) and other noise sources. The 2-D MDW code is compared mathematically with other codes that use similar techniques. We analyzed and optimized the data rate and effective received power. The performance and optimization of the MDW code in an OCDMA system are reported; the bit error rate (BER) can be significantly improved when the desired 2-D MDW code parameters, especially the cross-correlation properties, are selected. The code reduces MAI in the system and mitigates BER degradation and phase-induced intensity noise (PIIN) in incoherent OCDMA. The analysis permits a thorough understanding of the impact of PIIN, shot noise, and thermal noise on 2-D MDW OCDMA system performance; PIIN is the main noise factor in the OCDMA network.

  7. Electron temperature differences and double layers

    NASA Technical Reports Server (NTRS)

    Chan, C.; Hershkowitz, N.; Lonngren, K. E.

    1983-01-01

    Electron temperature differences across plasma double layers are studied experimentally. It is shown that the temperature differences across a double layer can be varied and are not a result of thermalization of the bump-on-tail distribution. The implications of these results for electron thermal energy transport in laser-pellet and tandem-mirror experiments are also discussed.

  8. DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei

    NASA Astrophysics Data System (ADS)

    Gontchar, I. I.; Chushnyakova, M. V.

    2016-09-01

    This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM) and in particular to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. Also we added an option for fitting the DFM potential by the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified. Catalog identifier: AEFH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 114404 Distribution format: tar.gz Programming language: C Computer: PC and Mac Operation system: Windows XP and higher, MacOS, Unix/Linux Memory required to execute with typical data: below 10 Mbyte Classification: 17.9 Catalog identifier of previous version: AEFH_v1_0 Journal reference of previous version: Comp. Phys. Comm. 181 (2010) 168 Does the new version supersede the previous version?: Yes Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center of mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (like, e.g. the range of the exchange part of the nuclear term) can be investigated. Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied. Typical running time: less than 1 minute. Reason for new version: Many users asked us how to implement their own density distributions in the DFMSPH. Now this option has been added. Also we found that the calculated Double-Folding Potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile. This option has been also added.
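
    The barrier-region fit mentioned in the summary can be sketched as a least-squares fit of a Woods-Saxon profile to tabulated potential values (the data below are hypothetical; in practice they would come from the DFMSPH output):

        import numpy as np
        from scipy.optimize import curve_fit

        def woods_saxon(r, V0, R0, a):
            """Woods-Saxon profile used to approximate the nuclear potential near the barrier."""
            return -V0 / (1.0 + np.exp((r - R0) / a))

        # Hypothetical tabulated potential values (MeV) vs. distance (fm).
        r = np.linspace(8.0, 14.0, 25)
        v_dfm = woods_saxon(r, 60.0, 9.5, 0.65) + 0.1 * np.random.default_rng(1).normal(size=r.size)

        popt, _ = curve_fit(woods_saxon, r, v_dfm, p0=(50.0, 9.0, 0.6))
        print("fitted V0, R0, a =", popt)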

  9. Defect-induced band-edge reconstruction of a bismuth-halide double perovskite for visible-light absorption

    DOE PAGES

    Slavney, Adam H.; Leppert, Linn; Bartesaghi, Davide; ...

    2017-03-29

    In this study, halide double perovskites have recently been developed as less toxic analogs of the lead perovskite solar-cell absorbers APbX3 (A = monovalent cation; X = Br or I). However, all known halide double perovskites have large bandgaps that afford weak visible-light absorption. The first halide double perovskite evaluated as an absorber, Cs2AgBiBr6 (1), has a bandgap of 1.95 eV. Here, we show that dilute alloying decreases 1’s bandgap by ca. 0.5 eV. Importantly, time-resolved photoconductivity measurements reveal long-lived carriers with microsecond lifetimes in the alloyed material, which is very promising for photovoltaic applications. The alloyed perovskite described herein is the first double perovskite to show comparable bandgap energy and carrier lifetime to those of (CH3NH3)PbI3. By describing how energy- and symmetry-matched impurity orbitals, at low concentrations, dramatically alter 1’s band edges, we open a potential pathway for the large and diverse family of halide double perovskites to compete with APbX3 absorbers.

  10. New precursors for direct synthesis of single phase Na- and K-β″-aluminas for use in AMTEC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, R.L.; MacQueen, D.B.; Bader, K.E.

    1997-12-31

    Alkali Metal Thermoelectric Converters (AMTEC) are efficient direct energy conversion devices that depend on the use of highly conductive beta-alumina membranes for their operation. The key component of the AMTEC system is a highly conductive Na-β″-alumina solid electrolyte which conducts sodium ions from the high to low temperature zone, thereby generating electricity. AMTEC cells convert thermal to electrical energy by using heat to produce and maintain an alkali metal concentration gradient across the ion transporting BASE membrane. They have developed a method for producing pure phase Na-β″-alumina and K-β″-alumina powders from single phase nano-sized carboxylato-alumoxane precursors. Sodium or potassium ions (the mobile ions) and either Mg2+ or Li+ ions (which stabilize the β″-alumina structure) can be atomically dispersed into the carboxylato-alumoxane lattice at low (< 100 °C) temperature. Calcination of the carboxylato-alumoxane precursors at 1,200-1,500 °C produces pure phase β″-alumina powders.

  11. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when wide fields of view are required, energies are too high for focusing or too low for the Compton/tracker techniques, and very good angular resolution is needed). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for the coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the Swift/BAT, and details of the design of the EXIST/HET.
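
    The correlation-based image recovery mentioned above can be illustrated with a toy one-dimensional decoding sketch (a generic cross-correlation reconstruction under an assumed random mask, not the INTEGRAL or EXIST/HET pipeline):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy 1-D coded-mask imaging: the detector records the sky seen through
        # shifted copies of the open/closed mask; correlating the record with a
        # balanced decoding pattern (+1 open, -1 closed) recovers the sky.
        n = 64
        mask = rng.integers(0, 2, n)                       # random open/closed mask
        sky = np.zeros(n); sky[20], sky[45] = 1.0, 0.4     # two point sources
        record = np.array([sky @ np.roll(mask, -s) for s in range(n)])
        decoder = 2 * mask - 1
        image = np.array([record @ np.roll(decoder, -t) for t in range(n)])
        print(int(np.argmax(image)))                       # brightest pixel at the stronger source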

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. But, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  13. Ionic Liquids as Electrolytes for Electrochemical Double-Layer Capacitors: Structures that Optimize Specific Energy.

    PubMed

    Mousavi, Maral P S; Wilson, Benjamin E; Kashefolgheta, Sadra; Anderson, Evan L; He, Siyao; Bühlmann, Philippe; Stein, Andreas

    2016-02-10

    Key parameters that influence the specific energy of electrochemical double-layer capacitors (EDLCs) are the double-layer capacitance and the operating potential of the cell. The operating potential of the cell is generally limited by the electrochemical window of the electrolyte solution, that is, the range of applied voltages within which the electrolyte or solvent is not reduced or oxidized. Ionic liquids are of interest as electrolytes for EDLCs because they offer relatively wide potential windows. Here, we provide a systematic study of the influence of the physical properties of ionic liquid electrolytes on the electrochemical stability and electrochemical performance (double-layer capacitance, specific energy) of EDLCs that employ a mesoporous carbon model electrode with uniform, highly interconnected mesopores (3DOm carbon). Several ionic liquids with structurally diverse anions (tetrafluoroborate, trifluoromethanesulfonate, trifluoromethanesulfonimide) and cations (imidazolium, ammonium, pyridinium, piperidinium, and pyrrolidinium) were investigated. We show that the cation size has a significant effect on the electrolyte viscosity and conductivity, as well as the capacitance of EDLCs. Imidazolium- and pyridinium-based ionic liquids provide the highest cell capacitance, and ammonium-based ionic liquids offer potential windows much larger than imidazolium and pyridinium ionic liquids. Increasing the chain length of the alkyl substituents in 1-alkyl-3-methylimidazolium trifluoromethanesulfonimide does not widen the potential window of the ionic liquid. We identified the ionic liquids that maximize the specific energies of EDLCs through the combined effects of their potential windows and the double-layer capacitance. The highest specific energies are obtained with ionic liquid electrolytes that possess moderate electrochemical stability, small ionic volumes, low viscosity, and hence high conductivity, the best performing ionic liquid tested being 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide.
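
    The role of the two key parameters is the standard capacitor relation (stated here for orientation): the stored energy grows linearly with the double-layer capacitance and quadratically with the cell voltage, so widening the window from, say, 2.5 V to 3.5 V at fixed capacitance raises the stored energy by a factor of (3.5/2.5)^2, i.e. roughly 2.

        \[
          E = \tfrac{1}{2}\, C \, V^{2}
        \]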

  14. B-DNA Structure and Stability as Function of Nucleic Acid Composition: Dispersion-Corrected DFT Study of Dinucleoside Monophosphate Single and Double Strands

    PubMed Central

    Barone, Giampaolo; Fonseca Guerra, Célia; Bickelhaupt, F Matthias

    2013-01-01

    We have computationally investigated the structure and stability of all 16 combinations of two out of the four natural DNA bases A, T, G and C in a di-2′-deoxyribonucleoside-monophosphate model DNA strand as well as in 10 double-strand model complexes thereof, using dispersion-corrected density functional theory (DFT-D). Optimized geometries with B-DNA conformation were obtained through the inclusion of implicit water solvent and, in the DNA models, of sodium counterions, to neutralize the negative charge of the phosphate groups. The results obtained allowed us to compare the relative stability of isomeric single and double strands. Moreover, the energy of the Watson–Crick pairing of complementary single strands to form double-helical structures was calculated. The latter furnished the following increasing stability trend of the double-helix formation energy: d(TpA)2

  15. Observation of double-well potential of NaH C 1Σ+ state: Deriving the dissociation energy of its ground state

    NASA Astrophysics Data System (ADS)

    Chu, Chia-Ching; Huang, Hsien-Yu; Whang, Thou-Jen; Tsai, Chin-Chun

    2018-03-01

    Vibrational levels (v = 6-42) of the NaH C 1Σ+ state including the inner and outer wells and the near-dissociation region were observed by pulsed optical-optical double resonance fluorescence depletion spectroscopy. The absolute vibrational quantum number is identified by comparing the vibrational energy difference of this experiment with the ab initio calculations. The outer well with v up to 34 is analyzed using the Dunham expansion and a Rydberg-Klein-Rees (RKR) potential energy curve is constructed. A hybrid double-well potential combined with the RKR potential, the ab initio calculation, and a long-range potential is able to describe the whole NaH C 1Σ+ state including the higher vibrational levels (v = 35-42). The dissociation energy of the NaH C 1Σ+ state is determined to be De(C) = 6595.10 ± 5 cm-1 and then the dissociation energy of the NaH ground state De(X) = 15 807.87 ± 5 cm-1 can be derived.

  16. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    PubMed

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes at different distances from the code source, then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27% and the energy utilization ratio is improved up to 583.42%, while the network lifetime can be prolonged up to 274.99%.
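
    A minimal sketch of the radius-assignment idea (hypothetical geometry and parameters; the full ABRCD scheme additionally constructs a broadcast backbone):

        def assign_broadcast_radius(nodes, r_nominal=1.0, r_max=3.0, hotspot_radius=2.0):
            """Toy version of the ABRCD idea: enlarge the broadcast radius only for
            nodes outside the energy-constrained hot-spot around the sink.
            `nodes` maps node id -> distance to the sink."""
            radii = {}
            for node, dist in nodes.items():
                if dist <= hotspot_radius:
                    radii[node] = r_nominal          # hot-spot: keep the nominal radius
                else:
                    # farther nodes have an energy surplus; scale the radius up
                    # (capped at r_max) so code packets reach the edge in fewer hops
                    radii[node] = min(r_max, r_nominal * dist / hotspot_radius)
            return radii

        print(assign_broadcast_radius({"n1": 1.0, "n2": 2.5, "n3": 6.0}))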

  17. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    PubMed Central

    Yu, Shidi; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes at different distances from the code source, then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27% and the energy utilization ratio is improved up to 583.42%, while the network lifetime can be prolonged up to 274.99%. PMID:29748525

  18. Improving building energy efficiency in India: State-level analysis of building energy efficiency policies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Tan, Qing; Evans, Meredydd

    India is expected to add 40 billion m² of new buildings by 2050. Buildings are responsible for one third of India’s total energy consumption today, and building energy use is expected to continue growing, driven by rapid income and population growth. The implementation of the Energy Conservation Building Code (ECBC) is one of the measures to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow by 15 times in commercial buildings and 4 times in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.

  19. Relay selection in energy harvesting cooperative networks with rateless codes

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiyan; Wang, Fei

    2018-04-01

    This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information once the number of collected code bits marginally surpasses the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that the proposed relay selection scheme outperforms other strategies.
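
    A toy version of the selection rule (the time-switching harvesting model, channel gains, and parameters below are assumptions, not the paper's exact formulation) picks the relay whose weaker hop supports the highest rate:

        import math

        def select_relay(relays, p_source=1.0, noise=1e-3, eta=0.6, alpha=0.3):
            """Each relay harvests energy from the source signal for a fraction
            alpha of the block (efficiency eta) and forwards with that power;
            the end-to-end rate is limited by the weaker hop.
            `relays` maps relay id -> (source-relay gain, relay-destination gain)."""
            best, best_rate = None, -1.0
            for rid, (h_sr, h_rd) in relays.items():
                p_relay = eta * alpha * p_source * h_sr / (1.0 - alpha)
                rate_sr = (1 - alpha) * math.log2(1 + p_source * h_sr / noise)
                rate_rd = (1 - alpha) * math.log2(1 + p_relay * h_rd / noise)
                rate = min(rate_sr, rate_rd)
                if rate > best_rate:
                    best, best_rate = rid, rate
            return best, best_rate

        print(select_relay({"r1": (0.8, 0.1), "r2": (0.3, 0.5)}))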

  20. CHMWTR: A Plasma Chemistry Code for Water Vapor

    DTIC Science & Technology

    2012-02-01

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6790--12-9383: CHMWTR: A Plasma Chemistry Code for Water Vapor, by Daniel F. Gordon, Michael H. Helle, Theodore G. Jones, et al. Keywords: plasma chemistry, breakdown field, conductivity.

  1. Minnesota Energy and Cost Savings for New Single- and Multifamily Homes: 2009 and 2012 IECC as Compared to the Minnesota Residential Energy Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.

    2012-04-01

    The 2009 and 2012 International Energy Conservation Codes (IECC) yield positive benefits for Minnesota homeowners. Moving to either the 2009 or 2012 IECC from the current Minnesota Residential Energy Code is cost effective over a 30-year life cycle. On average, Minnesota homeowners will save $1,277 over 30 years under the 2009 IECC, with savings still higher at $9,873 with the 2012 IECC. After accounting for upfront costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceed cumulative cash outlays) in 3 years for the 2009 IECC and 1 year for the 2012 IECC. Average annual energy savings are $122 for the 2009 IECC and $669 for the 2012 IECC.

  2. Simulations of neutron transport at low energy: a comparison between GEANT and MCNP.

    PubMed

    Colonna, N; Altieri, S

    2002-06-01

    The use of the simulation tool GEANT for neutron transport at energies below 20 MeV is discussed, in particular with regard to shielding and dose calculations. The reliability of the GEANT/MICAP package for neutron transport has been verified by comparing the results of simulations performed with this package over a wide energy range with the predictions of MCNP-4B, a code commonly used for neutron transport at low energy. Reasonable agreement between the results of the two codes is found for the neutron flux through a slab of material (iron and ordinary concrete), as well as for the dose released in soft tissue by neutrons. These results justify the use of the GEANT/MICAP code for neutron transport in a wide range of applications, including health physics problems.

  3. Absorptive coding metasurface for further radar cross section reduction

    NASA Astrophysics Data System (ADS)

    Sui, Sai; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Feng, Mingde; Xu, Zhuo; Qu, Shaobo

    2018-02-01

    Lossless coding metasurfaces and metamaterial absorbers have been widely used for radar cross section (RCS) reduction and stealth applications; they rely, respectively, on redirecting electromagnetic wave energy into various oblique angles or on absorbing electromagnetic energy. Here, an absorptive coding metasurface capable of both the flexible manipulation of backward scattering and further wideband bistatic RCS reduction is proposed. The idea is realized by utilizing absorptive elements, such as metamaterial absorbers, to establish a coding metasurface. We establish an analytical connection between an arbitrary arrangement of both the amplitude and phase of an absorptive coding metasurface and its far-field pattern. Then, as an example, an absorptive coding metasurface is demonstrated as a nonperiodic metamaterial absorber, which is expected to provide better RCS reduction than the traditional lossless coding metasurface and the periodic metamaterial absorber. Both theoretical analysis and full-wave simulation results show good agreement with experiment.
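
    The analytical far-field connection mentioned above can be illustrated with a simple array-factor sum over the coding matrix (a generic sketch; the element spacing, amplitudes, and checkerboard code are assumptions):

        import numpy as np

        def array_factor(gamma, d_over_lambda, theta, phi):
            """Far-field array factor of an N x N coding metasurface.
            gamma: complex element reflection coefficients (amplitude models the
            absorptive elements, phase models the 0/1 coding)."""
            n = gamma.shape[0]
            m, p = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            u = np.sin(theta) * np.cos(phi)
            v = np.sin(theta) * np.sin(phi)
            phase = 2j * np.pi * d_over_lambda * (m * u + p * v)
            return np.abs(np.sum(gamma * np.exp(phase)))

        # Hypothetical 8 x 8 checkerboard of lossy "0"/"1" elements (amplitude 0.3).
        n = 8
        code = np.indices((n, n)).sum(axis=0) % 2
        gamma = 0.3 * np.exp(1j * np.pi * code)
        print(array_factor(gamma, 0.5, theta=np.pi / 6, phi=0.0))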

  4. Double-lined eclipsing binary system KIC 2306740 with pulsating component discovered from Kepler space photometry

    NASA Astrophysics Data System (ADS)

    Yakut, Kadri

    2015-08-01

    We present a detailed study of KIC 2306740, an eccentric double-lined eclipsing binary system with a pulsating component. Archival Kepler satellite data were combined with newly obtained spectroscopic data from the 4.2 m William Herschel Telescope (WHT). This allowed us to determine rather precise orbital and physical parameters of this long-period, slightly eccentric, pulsating binary system. Duplicity effects are extracted from the light curve in order to estimate pulsation frequencies from the residuals. We modelled the detached binary system assuming non-conservative evolution models with the Cambridge STARS (TWIN) code.

  5. Testing Common Envelopes on Double White Dwarf Binaries

    NASA Astrophysics Data System (ADS)

    Nandez, Jose L. A.; Ivanova, Natalia; Lombardi, James C., Jr.

    2015-06-01

    The formation of a double white dwarf binary likely involves a common envelope (CE) event between a red giant and a white dwarf (WD) during the most recent episode of Roche lobe overflow mass transfer. We study the role of recombination energy with hydrodynamic simulations of such stellar interactions. We find that the recombination energy helps to expel the common envelope entirely, while if recombination energy is not taken into account, a significant fraction of the common envelope remains bound. We apply our numerical methods to constrain the progenitor system for WD 1101+364 - a double WD binary that has a well-measured mass ratio of q = 0.87 ± 0.03 and an orbital period of 0.145 days. Our best-fit progenitor for the pre-common envelope donor is a 1.5 M⊙ red giant.

  6. Double photoionization of the Be isoelectronic sequence

    NASA Astrophysics Data System (ADS)

    Barmaki, S.; Albert, M. A.; Belliveau, J.; Laulan, S.

    2018-05-01

    We investigate the double photoionization (DPI) process along the Be isoelectronic sequence (Be‑Ne6+) by solving the time-dependent Schrödinger equation with a spectral method of configuration interaction type. The DPI cross sections that we obtain are in good agreement with other reported data. We also present the first results of double-to-single photoionization cross-section ratios for Be-like ions in support of possible photofragmentation experiments with x-ray free electron lasers. Finally, we probe the mutual interaction of the valence electrons at different photon energies and examine the subsequent redistribution of the excess photon energy among them.

  7. Double-hybrid density-functional theory with meta-generalized-gradient approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souvi, Sidi M. O., E-mail: sidi.souvi@irsn.fr; Sharkas, Kamal; Toulouse, Julien, E-mail: julien.toulouse@upmc.fr

    2014-02-28

    We extend the previously proposed one-parameter double-hybrid density-functional theory [K. Sharkas, J. Toulouse, and A. Savin, J. Chem. Phys. 134, 064113 (2011)] to meta-generalized-gradient-approximation (meta-GGA) exchange-correlation density functionals. We construct several variants of one-parameter double-hybrid approximations using the Tao-Perdew-Staroverov-Scuseria (TPSS) meta-GGA functional and test them on test sets of atomization energies and reaction barrier heights. The most accurate variant uses the uniform coordinate scaling of the density and of the kinetic energy density in the correlation functional, and improves over both standard Kohn-Sham TPSS and second-order Møller-Plesset calculations.

  8. Overtaking collision effects in a cw double-pass proton linac

    DOE PAGES

    Tao, Yue; Qiang, Ji; Hwang, Kilean

    2017-12-22

    The recirculating superconducting proton linac has the advantage of reducing the number of cavities in the accelerator and the corresponding construction and operational costs. Beam dynamics simulations were done recently in a double-pass recirculating proton linac using a single proton beam bunch. For continuous wave (cw) operation, the high-energy proton bunch during the second pass through the linac will overtake and collide with the low-energy bunch during the first pass at a number of locations of the linac. These collisions might cause proton bunch emittance growth and beam quality degradation. Here, we study the collisional effects due to Coulomb space-charge forces between the high-energy bunch and the low-energy bunch. Our results suggest that these effects on the proton beam quality would be small and might not cause significant emittance growth or beam blowup through the linac. A 10 mA, 500 MeV cw double-pass proton linac is feasible without using extra hardware for phase synchronization.

  9. New Ways of Treating Data for Diatomic Molecule 'shelf' and Double-Minimum States

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Tao, Jason; Khanna, Shirin; Pashov, Asen; Tellinghuisen, Joel

    2017-06-01

    Electronic states whose potential energy functions have 'shelf' or double-minimum shapes have always presented special challenges because, as functions of vibrational quantum number, the vibrational energies/spacings and inertial rotational constants either have an abrupt change of character with discontinuous slope, or, past a given point, become completely chaotic. The present work shows that a 'traditional' methodology developed for deep 'regular' single-well potentials can also provide accurate 'parameter-fit' descriptions of the v-dependence of the vibrational energies and rotational constants of shelf-state potentials that allow a conventional RKR calculation of their potential energy functions. It is also shown that a merging of Pashov's uniquely flexible 'spline point-wise' potential function representation with Le Roy's 'Morse/Long-Range' (MLR) analytic functional form, which automatically incorporates the correct theoretically known long-range form, yields an analytic function that incorporates most of the advantages of both approaches. An illustrative application of this method to data for a double-minimum state of Na2 will be described.

  10. Overtaking collision effects in a cw double-pass proton linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Yue; Qiang, Ji; Hwang, Kilean

    The recirculating superconducting proton linac has the advantage of reducing the number of cavities in the accelerator and the corresponding construction and operational costs. Beam dynamics simulations were done recently in a double-pass recirculating proton linac using a single proton beam bunch. For continuous wave (cw) operation, the high-energy proton bunch during the second pass through the linac will overtake and collide with the low-energy bunch during the first pass at a number of locations of the linac. These collisions might cause proton bunch emittance growth and beam quality degradation. Here, we study the collisional effects due to Coulomb space-charge forces between the high-energy bunch and the low-energy bunch. Our results suggest that these effects on the proton beam quality would be small and might not cause significant emittance growth or beam blowup through the linac. A 10 mA, 500 MeV cw double-pass proton linac is feasible without using extra hardware for phase synchronization.

  11. Development of REBCO HTS Magnet of Magnetic Bearing for Large Capacity Flywheel Energy Storage System

    NASA Astrophysics Data System (ADS)

    Mukoyama, Shinichi; Matsuoka, Taro; Furukawa, Makoto; Nakao, Kengo; Nagashima, Ken; Ogata, Masafumi; Yamashita, Tomohisa; Hasegawa, Hitoshi; Yoshizawa, Kazuhiro; Arai, Yuuki; Miyazaki, Kazuki; Horiuchi, Shinichi; Maeda, Tadakazu; Shimizu, Hideki

    A flywheel energy storage system (FESS) is a promising electrical storage system that moderates fluctuations of electrical power from renewable energy sources. The FESS can repeatedly charge and discharge surplus electrical power using the energy stored in its rotation. In particular, a FESS that uses a high-temperature superconducting magnetic bearing (HTS bearing) has lower losses than a conventional FESS with mechanical bearings, and offers longer operating life than secondary batteries. The HTS bearing consists of an HTS bulk and double-pancake coils wound from second-generation (2G) REBCO wires. In this development, the HTS double-pancake coils were fabricated and used in a levitation test to verify the feasibility of the HTS bearing. We confirmed that the magnetic field reached its design value, and that the configuration of one YBCO bulk and five double-pancake coils produced a satisfactory levitation force of 39.2 kN (4 tons).

  12. Extension of applicable neutron energy of DARWIN up to 1 GeV.

    PubMed

    Satoh, D; Sato, T; Endo, A; Matsufuji, N; Takada, M

    2007-01-01

    The radiation-dose monitor DARWIN needs a set of response functions of its liquid organic scintillator to assess a neutron dose. SCINFUL-QMD is a Monte Carlo-based computer code to evaluate the response functions. In order to improve the accuracy of the code, a new light-output function based on experimental data was developed for the production and transport of protons, deuterons, tritons, 3He nuclei, and alpha particles, and incorporated into the code. The applicable energy of DARWIN was extended to 1 GeV using the response functions calculated by the modified SCINFUL-QMD code.

  13. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
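    For orientation, the straight-ahead approximation reduces the Boltzmann transport equation to a one-dimensional, coupled continuity form along the shield depth; a generic sketch of that form (the specific stopping powers and production cross sections are those of the BRYNTRN data base) is

        \left[ \frac{\partial}{\partial x} - \frac{\partial}{\partial E} S_j(E) + \sigma_j(E) \right] \phi_j(x, E)
        = \sum_k \int_E^{\infty} \sigma_{jk}(E, E') \, \phi_k(x, E') \, dE'

    where \phi_j is the flux of particle type j at depth x and energy E, S_j the stopping power, \sigma_j the total macroscopic cross section, and \sigma_{jk} the production cross sections coupling the baryon species.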

  14. Energy Cost Impact of Non-Residential Energy Code Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jian; Hart, Philip R.; Rosenberg, Michael I.

    2016-08-22

    The 2012 International Energy Conservation Code contains 396 separate requirements applicable to non-residential buildings; however, there is no systematic analysis of the energy cost impact of each requirement. Consequently, limited code department budgets for plan review, inspection, and training cannot be focused on the most impactful items. An inventory and ranking of code requirements based on their potential energy cost impact is under development. The initial phase focuses on office buildings with simple HVAC systems in climate zone 4C. Prototype building simulations were used to estimate the energy cost impact of varying levels of non-compliance. A preliminary estimate of the probability of occurrence of each level of non-compliance was combined with the estimated lost savings for each level to rank the requirements according to expected savings impact. The methodology for developing and refining further energy cost impacts, specific to building type, system type, and climate location, is demonstrated. As results are developed, an innovative alternative method for compliance verification can focus efforts so that only the most impactful requirements from an energy cost perspective are verified for every building, while a subset of the less impactful requirements is verified on a random basis across a building population. The results can be further applied in prioritizing training material development and specific areas of building official training.
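    The ranking step described above reduces to an expected-value calculation over non-compliance levels. A minimal sketch of that calculation is given below; the requirement names, probabilities, and dollar figures are illustrative placeholders, not values from the study.

        # Hypothetical sketch: rank code requirements by expected lost energy-cost savings.
        # Each requirement lists (probability of a non-compliance level, lost savings in $/yr).
        non_compliance = {
            "interior lighting power": [(0.10, 120.0), (0.05, 300.0)],
            "economizer high-limit":   [(0.20, 80.0)],
            "duct insulation":         [(0.30, 15.0)],
        }

        # Expected lost savings per requirement = sum over levels of probability * lost savings.
        expected_impact = {
            req: sum(p * loss for p, loss in levels)
            for req, levels in non_compliance.items()
        }

        for req, impact in sorted(expected_impact.items(), key=lambda kv: -kv[1]):
            print(f"{req:26s} expected lost savings: ${impact:6.2f}/yr")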

  15. Dissociative double-photoionization of butadiene in the 25-45 eV energy range using 3-D multi-coincidence ion momentum imaging spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oghbaie, Shabnam; Gisselbrecht, Mathieu; Laksman, Joakim

    Dissociative double-photoionization of butadiene in the 25-45 eV energy range has been studied with tunable synchrotron radiation using full three-dimensional ion momentum imaging. Using ab initio calculations, the electronic states of the molecular dication below 33 eV are identified. The results of the measurement and calculation show that double ionization from π orbitals selectively triggers twisting about the terminal or central C–C bonds. We show that this conformational rearrangement depends upon the dication electronic state, which effectively acts as a gateway for the dissociation reaction pathway. For photon energies above 33 eV, three-body dissociation channels where neutral H-atom evaporation precedes C–C charge-separation in the dication species appear in the correlation map. The fragment angular distributions support a model where the dication species is initially aligned with the molecular backbone parallel to the polarization vector of the light, indicating a high probability for double-ionization to the "gateway states" for molecules with this orientation.

  16. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  17. Through the Past Decade: How Advanced Energy Design Guides have influenced the Design Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Bing; Athalye, Rahul A.

    Advanced Energy Design Guides (AEDGs) were originally developed to provide a simple approach for building professionals seeking energy-efficient building designs better than ASHRAE Standard 90.1. Since the first guide was released in 2004, the AEDG series has provided inspiration for the design industry and has been seen by designers as a starting point for buildings intended to go beyond minimum codes and standards. In addition, the U.S. Department of Energy's successful Commercial Building Partnerships (CBP) program leveraged many of the recommendations from the AEDGs to achieve 50% energy savings over ASHRAE Standard 90.1-2004 for prototypical designs of large commercial entities in the retail, banking, and lodging sectors. Low-energy technologies and strategies developed during the CBP process have been applied by commercial partners throughout their national portfolios of buildings. Later, the AEDGs served as the perfect platform for both Standard 90.1 and ASHRAE's high-performance buildings standard, Standard 189.1. What was high performance a few years ago, however, has become minimum code today. Indeed, most of the prescriptive envelope component requirements in ASHRAE Standard 90.1-2013 are values recommended in the 50% AEDGs several years ago. Similarly, AEDG strategies and recommendations have penetrated the lighting and HVAC sections of both Standard 189.1 and Standard 90.1. Finally, as we look to the future of codes and standards, the AEDGs are serving as a blueprint for how minimum code requirements could be expressed. By customizing codes to specific building types, design strategies tailored for individual buildings could be prescribed as minimum code, just like in the AEDGs. This paper describes the impact that AEDGs have had over the last decade on the design industry and how they continue to influence the future of codes and standards. From design professionals to code officials, everyone in the building industry has been affected by the AEDGs.

  18. A Rapid Screen for Host-Encoded miRNAs with Inhibitory Effects against Ebola Virus Using a Transcription- and Replication-Competent Virus-Like Particle System.

    PubMed

    Wang, Zhongyi; Li, Jiaming; Fu, Yingying; Zhao, Zongzheng; Zhang, Chunmao; Li, Nan; Li, Jingjing; Cheng, Hongliang; Jin, Xiaojun; Lu, Bing; Guo, Zhendong; Qian, Jun; Liu, Linna

    2018-05-16

    MicroRNAs (miRNAs) may become efficient antiviral agents against the Ebola virus (EBOV) by targeting viral genomic RNAs or transcripts. We previously conducted a genome-wide search for differentially expressed miRNAs during viral replication and transcription. In this study, we established a rapid screen for miRNAs with inhibitory effects against EBOV using a tetracistronic transcription- and replication-competent virus-like particle (trVLP) system. This system uses a minigenome comprising an EBOV leader region, luciferase reporter, VP40, GP, VP24, EBOV trailer region, and three noncoding regions from the EBOV genome, and can be used to model the life cycle of EBOV under biosafety level (BSL) 2 conditions. Informatic analysis was performed to select up-regulated miRNAs targeting the coding regions of the minigenome with the highest binding energy for inhibitory-effect screening. Among these miRNAs, miR-150-3p had the most significant inhibitory effect. Reverse transcription polymerase chain reaction (RT-PCR), Western blot, and double fluorescence reporter experiments demonstrated that miR-150-3p inhibited the reproduction of trVLPs via the regulation of GP and VP40 expression by directly targeting the coding regions of GP and VP40. This novel, rapid, and convenient screening method will efficiently facilitate the exploration of miRNAs against EBOV under BSL-2 conditions.

  19. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
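    For reference, the Cummins impulse-response formulation that WEC-Sim solves in each degree of freedom has the standard time-domain form below (a sketch of the generic equation; the excitation, power take-off, and any additional viscous or mooring forces are configured per model):

        (m + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,d\tau + C_{hs}\,x(t) = F_{exc}(t) + F_{PTO}(t)

    Here A_\infty is the infinite-frequency added mass, K the radiation impulse-response kernel, and C_{hs} the hydrostatic restoring stiffness.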

  20. Trial of Naltrexone and Dextromethorphan for Gulf War Veterans' Illness

    DTIC Science & Technology

    2016-03-01

    The pills were administered in a randomized, double-blinded fashion. The code for the blinding was held by the research pharmacist, and randomization was performed by drawing a card from a box that specified the order of administration.

  1. English in Political Discourse of Post-Suharto Indonesia.

    ERIC Educational Resources Information Center

    Bernsten, Suzanne

    This paper illustrates increases in the use of English in political speeches in post-Suharto Indonesia by analyzing the phonological, morphological, and syntactic assimilation of loanwords (linguistic borrowing), as well as hybridization and code switching, and phenomena such as doubling and loan translations. The paper also examines the mixed…

  2. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed, as well as the way forward towards an integrated modern EMP code.

  3. Canonical formulation and conserved charges of double field theory

    DOE PAGES

    Naseer, Usman

    2015-10-26

    We provide the canonical formulation of double field theory. It is shown that this dynamics is subject to primary and secondary constraints. The Poisson bracket algebra of secondary constraints is shown to close on-shell according to the C-bracket. We also give a systematic way of writing boundary integrals in doubled geometry. Finally, by including appropriate boundary terms in the double field theory Hamiltonian, expressions for conserved energy and momentum of an asymptotically flat doubled space-time are obtained and applied to a number of solutions.

  4. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  5. Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldwin, G.T.; Craven, R.E.

    X-ray energy spectra of the Maxwell Laboratories MBS and Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate channel response functions, and the UFO code was used to unfold the spectrum.

  6. Critical Assessment of Theoretical Methods for Li3+ Collisions with He at Intermediate and High Impact Energies

    NASA Astrophysics Data System (ADS)

    Belkić, Dževad; Mančev, Ivan; Milojević, Nenad

    2013-09-01

    The total cross sections for the various processes for Li3+-He collisions at intermediate-to-high impact energies are compared with the corresponding theories. The possible reasons for the discrepancies among various theoretical predictions are thoroughly discussed. Special attention has been paid to single and double electron capture, simultaneous transfer and ionization, as well as to single and double ionization.

  7. Advanced Power Electronic Interfaces for Distributed Energy Systems, Part 2: Modeling, Development, and Experimental Evaluation of Advanced Control Functions for Single-Phase Utility-Connected Inverter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, S.; Kroposki, B.; Kramer, W.

    Integrating renewable energy and distributed generation into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to reaching successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected mode. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in the C programming language using Texas Instruments' Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to self-generate code for the DSP. The successful tests using Simulink self-generated DSP code show promise for fast prototyping of PE controls.

  8. Energy Storage System Safety: Plan Review and Inspection Checklist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pam C.; Conover, David R.

    Codes, standards, and regulations (CSR) governing the design, construction, installation, commissioning, and operation of the built environment are intended to protect the public health, safety, and welfare. While these documents change over time to address new technology and new safety challenges, there is generally some lag time between the introduction of a technology into the market and the time it is specifically covered in model codes and standards developed in the voluntary sector. After their development, there is also a timeframe of at least a year or two until the codes and standards are adopted. Until existing model codes and standards are updated or new ones are developed and then adopted, one seeking to deploy energy storage technologies or needing to verify the safety of an installation may be challenged in trying to apply currently implemented CSRs to an energy storage system (ESS). The Energy Storage System Guide for Compliance with Safety Codes and Standards (CG), developed in June 2016, is intended to help address the acceptability of the design and construction of stationary ESSs, their component parts, and the siting, installation, commissioning, operations, maintenance, and repair/renovation of ESS within the built environment.

  9. Maintenance Energy Requirements of Double-Muscled Belgian Blue Beef Cows.

    PubMed

    Fiems, Leo O; De Boever, Johan L; Vanacker, José M; De Campeneere, Sam

    2015-02-13

    Sixty non-pregnant, non-lactating double-muscled Belgian Blue (DMBB) cows were used to estimate the energy required to maintain body weight (BW). They were fed one of three energy levels for 112 or 140 days, corresponding to approximately 100%, 80% or 70% of their total energy requirements. The relationship between daily energy intake, BW, and daily BW change was developed using regression analysis. Maintenance energy requirements were estimated from the regression equation by setting BW gain to zero. Metabolizable and net energy for maintenance amounted to 0.569 ± 0.001 and 0.332 ± 0.001 MJ per kg BW^0.75/d, respectively. Maintenance energy requirements were not dependent on energy level (p > 0.10). Parity affected maintenance energy requirements (p < 0.001), although the small numerical differences between parities may hardly be nutritionally relevant. Maintenance energy requirements of DMBB beef cows were close to the mean energy requirements of other beef genotypes reported in the literature.
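    The estimation step described above, regressing energy intake on daily BW change and reading off the intake at zero gain, can be sketched as follows; the numbers are made-up placeholders, not the experimental data.

        # Hypothetical sketch: maintenance energy is the metabolizable-energy intake
        # (MJ per kg BW^0.75 per day) at which daily body-weight change is zero.
        import numpy as np

        intake = np.array([0.48, 0.53, 0.61, 0.70])   # MJ ME per kg BW^0.75 per day (illustrative)
        gain = np.array([-0.35, -0.10, 0.15, 0.40])   # kg BW change per day (illustrative)

        slope, intercept = np.polyfit(gain, intake, 1)  # linear fit: intake = slope*gain + intercept
        print(f"estimated ME for maintenance: {intercept:.3f} MJ per kg BW^0.75 per day")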

  10. Scaling Laws of the Two-Electron Sum-Energy Spectrum in Strong-Field Double Ionization.

    PubMed

    Ye, Difa; Li, Min; Fu, Libin; Liu, Jie; Gong, Qihuang; Liu, Yunquan; Ullrich, J

    2015-09-18

    The sum-energy spectrum of two correlated electrons emitted in nonsequential strong-field double ionization (SFDI) of Ar was studied for intensities of 0.3 to 2×10¹⁴ W/cm². We find that the mean sum energy, the maximum of the distributions, and the high-energy tail of the spectra (scaled to the ponderomotive energy) increase with decreasing intensity below the recollision threshold (BRT). At higher intensities the spectra collapse into a single distribution. This behavior can be well explained within a semiclassical model, providing clear evidence of the importance of multiple recollisions in the BRT regime. Here, ultrafast thermalization between both electrons is found to occur within only three optical cycles, leaving a clear footprint in the sum-energy spectra.

  11. Double Resummation for Higgs Production

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone

    2018-05-01

    We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.

  12. Measurements of the ¹¹⁶Cd(p,n) and ¹¹⁶Sn(n,p) reactions at 300 MeV for studying Gamow-Teller transition strengths in the intermediate nucleus of the ¹¹⁶Cd double-β decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasano, M.; Kuboki, H.; Sekiguchi, K.

    2009-11-09

    The double differential cross sections for the ¹¹⁶Cd(p,n) and ¹¹⁶Sn(n,p) reactions at 300 MeV have been measured over a wide excitation-energy region including the Gamow-Teller (GT) giant resonance (GTGR) for studying GT transition strengths in the intermediate nucleus of the ¹¹⁶Cd double-β decay, namely ¹¹⁶In. A large amount of strength in the β⁺ direction has been newly found in the energy region up to 30 MeV, which may imply that the GT strengths in the GTGR region contribute to the nuclear matrix element of the two-neutrino double-β decay.

  13. Modification of codes NUALGAM and BREMRAD, Volume 1

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Huang, R.; Firstenberg, H.

    1971-01-01

    The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.

  14. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all-or-nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  15. Double coding and mapping using Abbreviated Injury Scale 1998 and 2005: identifying issues for trauma data.

    PubMed

    Palmer, Cameron S; Niggemeyer, Louise E; Charman, Debra

    2010-09-01

    The 2005 version of the Abbreviated Injury Scale (AIS05) potentially represents a significant change in injury spectrum classification, due to a substantial increase in the codeset size and alterations to the agreed severity of many injuries compared to the previous version (AIS98). Whilst many trauma registries around the world are moving to adopt AIS05 or its 2008 update (AIS08), its effect on patient classification in existing registries, and the optimum method of comparing existing data collections with new AIS05 collections are unknown. The present study aimed to assess the potential impact of adopting the AIS05 codeset in an established trauma system, and to identify issues associated with this change. A current subset of consecutive major trauma patients admitted to two large hospitals in the Australian state of Victoria were double-coded in AIS98 and AIS05. Assigned codesets were also mapped to the other AIS version using code lists supplied in the AIS05 manual, giving up to four AIS codes per injury sustained. Resulting codesets were assessed for agreement in codes used, injury severity and calculated severity scores. 602 injuries sustained by 109 patients were compared. Adopting AIS05 would lead to a decrease in the number of designated major trauma patients in Victoria, estimated at 22% (95% confidence interval, 15-31%). Differences in AIS level between versions were significantly more likely to occur amongst head and chest injuries. Data mapped to a different codeset performed better in paired comparisons than raw AIS98 and AIS05 codesets, with data mapping of AIS05 codes back to AIS98 giving significantly higher levels of agreement in AIS level, ISS and NISS than other potential comparisons, and resulting in significantly fewer conversion problems than attempting to map AIS98 codes to AIS05. This study provides new insights into AIS codeset change impact. Adoption of AIS05 or AIS08 in established registries will decrease major trauma patient numbers. Code mapping between AIS versions can improve comparisons between datasets in different AIS versions, although the injury profile of a trauma population will affect the degree of comparability. At present, mapping AIS05 data back to AIS98 is recommended. 2009 Elsevier Ltd. All rights reserved.
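    The comparison logic used in such double-coding studies, mapping codes from one AIS version to the other and checking whether the severity digit agrees, can be sketched as below; the map entries and codes are illustrative placeholders, not values from the AIS manuals or the study data.

        # Hypothetical sketch: check severity agreement after mapping AIS05 codes back to AIS98.
        ais05_to_ais98 = {"140629.3": "140629.4", "450203.2": "450203.2"}  # placeholder map

        def severity(code: str) -> int:
            """The post-dot digit of an AIS code is its severity level."""
            return int(code.split(".")[1])

        def severity_agrees(ais98_code: str, ais05_code: str) -> bool:
            mapped = ais05_to_ais98.get(ais05_code)
            return mapped is not None and severity(mapped) == severity(ais98_code)

        # (AIS98, AIS05) pairs assigned to the same injuries by the two coders (illustrative).
        pairs = [("140629.4", "140629.3"), ("450203.2", "450203.2")]
        agreement = sum(severity_agrees(a98, a05) for a98, a05 in pairs) / len(pairs)
        print(f"severity agreement after mapping: {agreement:.0%}")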

  16. Fluid flow and heat transfer characteristics of an enclosure with fin as a top cover of a solar collector

    NASA Astrophysics Data System (ADS)

    Ambarita, H.; Ronowikarto, A. D.; Siregar, R. E. T.; Setyawan, E. Y.

    2018-03-01

    To reduce heat losses in a flat-plate solar collector, a double-glass cover is employed. Several studies show that the heat loss through the glass cover is still very significant in comparison with other losses. Here, a double-glass cover with attached fins is proposed. In the present work, the fluid flow and heat transfer characteristics of the enclosure between the double glass covers are investigated numerically. The objective is to examine the effect of the fin on the heat transfer rate of the cover. Two-dimensional governing equations are developed. The governing equations and the boundary conditions are solved using a commercial Computational Fluid Dynamics code. The fluid flow and heat transfer characteristics are plotted, and the numerical results are compared with an empirical correlation. The results show that the presence of the fin strongly affects the fluid flow and heat transfer characteristics. The fin can reduce the heat transfer rate by up to 22.42% in comparison with a double-glass cover without fins.

  17. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
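    The constrained finite-temperature minimization described above can be summarized schematically as minimizing a grand potential of the following form (a generic sketch; the precise functional, constraint operators, and quantum numbers are as defined in the code's documentation):

        \Omega = E_{HF}[\rho] - T S - \sum_{q=p,n} \mu_q N_q - \lambda \langle \hat{Q} \rangle,
        \qquad S = -\sum_k \left[ f_k \ln f_k + (1 - f_k) \ln (1 - f_k) \right]

    Here the f_k are single-particle occupation probabilities, the \mu_q are Lagrange multipliers fixing the proton and neutron numbers, and \lambda is the multiplier attached to the constraining single-particle operator \hat{Q} added to the single-particle Hamiltonian.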

  18. Computer codes for checking, plotting and processing of neutron cross-section covariance data and their application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sartori, E.; Roussin, R.W.

    This paper presents a brief review of computer codes concerned with checking, plotting, processing, and using covariances of neutron cross-section data. It concentrates on those available from the computer code information centers of the United States and the OECD/Nuclear Energy Agency. Emphasis is also placed on codes using covariances for specific applications such as uncertainty analysis, data adjustment, and data consistency analysis. Recent evaluations contain neutron cross-section covariance information for all isotopes of major importance for technological applications of nuclear energy. It is therefore important that the available software tools needed for taking advantage of this information are widely known, as they permit the determination of better safety margins and allow the optimization of more economical designs of nuclear energy systems.

  19. Nonperturbative methods in HZE ion transport

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.

  20. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; hide

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems, and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  1. Low inductance diode design of the Proto 2 accelerator for imploding plasma loads

    NASA Astrophysics Data System (ADS)

    Hsing, W. W.; Coats, R.; McDaniel, D. H.; Spielman, R. B.

    A new water transmission line convolute, single piece insulator, and double accelerator. The water transmission lines have a 5 cm gap to eliminate any water arcing. A two-dimensional magnetic field code was used to calculate the convolute inductance. An acrylic insulator was used as well as a single piece, laminated polycarbonate insulator. They have been successfully tested at over 90% of the Shipman criteria for classical insulator breakdown, although the laminations in the polycarbonate insulator failed after a few shots. The anode and cathode each have two pieces and are held together mechanically. The vacuum MITL tapers to a 3 mm minimum gap. The total inductance is 8.4 nH for gas puff loads and 7.8 nH for imploding foil loads. Out of a forward-going energy of 290 kJ, 175 kJ has been delivered past the insulator, and 100 kJ has been successfully delivered to the load.

  2. The LHCf experiment at the LHC: Physics Goals and Status

    NASA Astrophysics Data System (ADS)

    Tricomi, A.; Adriani, O.; Bonechi, L.; Bongi, M.; Castellini, G.; D'Alessandro, R.; Faus, A.; Fukui, K.; Haguenauer, M.; Itow, Y.; Kasahara, K.; Macina, D.; Mase, T.; Masuda, K.; Matsubara, Y.; Menjo, H.; Mizuishi, M.; Muraki, Y.; Papini, P.; Perrot, A. L.; Ricciarini, S.; Sako, T.; Shimizu, Y.; Taki, K.; Tamura, T.; Torii, S.; Turner, W. C.; Velasco, J.; Viciani, A.; Yoshida, K.

    2009-12-01

    The LHCf experiment is the smallest of the six experiments installed at the Large Hadron Collider (LHC). While the general purpose detectors have been mainly designed to answer the open questions of Elementary Particle Physics, LHCf has been designed as a fully devoted Astroparticle experiment at the LHC. Indeed, thanks to the excellent performance of its double-arm calorimeters, LHCf will be able to measure the flux of neutral particles produced in p-p collisions at the LHC in the very forward region, thus providing invaluable help in the calibration of air-shower Monte Carlo codes currently used for modeling cosmic-ray interactions in the Earth's atmosphere. Depending on the LHC machine schedule, LHCf will take data in an energy range from 900 GeV up to 14 TeV in the centre-of-mass system (equivalent to about 10^17 eV in the laboratory frame), thus covering one of the most interesting and debated regions of the cosmic ray spectrum, the region around and beyond the "knee".

  3. Studies on the coupling transformer to improve the performance of microwave ion source.

    PubMed

    Misra, Anuraag; Pandit, V S

    2014-06-01

    A 2.45 GHz microwave ion source has been developed and installed at the Variable Energy Cyclotron Centre to produce high intensity proton beam. It is operational and has already produced more than 12 mA of proton beam with just 350 W of microwave power. In order to optimize the coupling of microwave power to the plasma, a maximally flat matching transformer has been used. In this paper, we first describe an analytical method to design the matching transformer and then present the results of rigorous simulation performed using ANSYS HFSS code to understand the effect of different parameters on the transformed impedance and reflection and transmission coefficients. Based on the simulation results, we have chosen two different coupling transformers which are double ridged waveguides with ridge widths of 24 mm and 48 mm. We have fabricated these transformers and performed experiments to study the influence of these transformers on the coupling of microwave to plasma and extracted beam current from the ion source.
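    As background, a maximally flat multisection matching transformer follows the classic binomial design rule below; this is a textbook sketch for context only, since the double-ridged waveguide transformer in the paper is designed with the authors' own analytical method and then optimized in HFSS.

        \ln \frac{Z_{n+1}}{Z_n} \simeq 2^{-N} \binom{N}{n} \ln \frac{Z_L}{Z_0}, \qquad n = 0, 1, \dots, N

    Here Z_0 and Z_L are the source and load impedances, and each of the N sections is a quarter wavelength long at the design frequency, which is what makes the reflection response maximally flat there.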

  4. Studies on the coupling transformer to improve the performance of microwave ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Misra, Anuraag, E-mail: anuraag@vecc.gov.in; Pandit, V. S., E-mail: pandit@vecc.gov.in, vspandit12@gmail.com

    A 2.45 GHz microwave ion source has been developed and installed at the Variable Energy Cyclotron Centre to produce high intensity proton beam. It is operational and has already produced more than 12 mA of proton beam with just 350 W of microwave power. In order to optimize the coupling of microwave power to the plasma, a maximally flat matching transformer has been used. In this paper, we first describe an analytical method to design the matching transformer and then present the results of rigorous simulation performed using ANSYS HFSS code to understand the effect of different parameters on the transformed impedance and reflection and transmission coefficients. Based on the simulation results, we have chosen two different coupling transformers which are double ridged waveguides with ridge widths of 24 mm and 48 mm. We have fabricated these transformers and performed experiments to study the influence of these transformers on the coupling of microwave to plasma and extracted beam current from the ion source.

  5. Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.

    PubMed

    Wallace, Rodrick

    2015-06-01

    A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
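    For context, the Data Rate Theorem invoked above is usually stated, in its simplest control-theory form, as a lower bound on the information rate needed to stabilize an inherently unstable system; a common sketch of the statement (not the paper's specific formulation) is

        R > \sum_{i:\, |\lambda_i| \ge 1} \log_2 |\lambda_i|

    where the \lambda_i are the eigenvalues of the open-loop system matrix and R is the capacity of the control channel in bits per unit time; in the paper's argument, metabolic free energy plays the part of this stabilizing control resource.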

  6. Effects of alcohol and energy drink on mood and subjective intoxication: a double-blind, placebo-controlled, crossover study.

    PubMed

    Benson, Sarah; Scholey, Andrew

    2014-07-01

    There is concern that combining energy drinks with alcohol may 'mask' subjective intoxication leading to greater alcohol consumption. This study examines the effects of alcohol alone and combined with energy drink on objective and subjective intoxication and mood over the course of 3 h. Using a double-blind, placebo-controlled, balanced, crossover design, 24 participants (mean age 22.23 years) were administered with double placebo, 0.6 g/kg alcohol (mean peak blood alcohol content of 0.051%), 250 ml energy drink and alcohol/energy drink, according to a Latin square design, with a washout of >48 h. On each visit, they were breathalysed and rated themselves on a comprehensive battery of mood items at baseline and then at 45, 90 and 180 min post-drink. Blood alcohol and subjective intoxication were significantly increased following both alcohol alone and alcohol/energy drink. Both measures were statistically indistinguishable between alcohol conditions. In keeping with its (80 mg) caffeine content, the energy drink alone significantly increased self-rated 'alertness' and reduced 'depression-dejection' scores compared with the combined alcohol/energy drink. The alcohol/energy drink increased 'vigor' and 'contentment' at 45 min and decreased 'contentment' at 180 min. The co-ingestion of an energy drink with alcohol does not differently influence blood alcohol content recordings or subjective intoxication compared with alcohol alone, although some mood items are differentially affected. Copyright © 2014 John Wiley & Sons, Ltd.

  7. A three-dimensional code for muon propagation through the rock: MUSIC

    NASA Astrophysics Data System (ADS)

    Antonioli, P.; Ghetti, C.; Korolkova, E. V.; Kudryavtsev, V. A.; Sartorelli, G.

    1997-10-01

    We present a new three-dimensional Monte-Carlo code MUSIC (MUon SImulation Code) for muon propagation through the rock. All processes of muon interaction with matter with high energy loss (including the knock-on electron production) are treated as stochastic processes. The angular deviation and lateral displacement of muons due to multiple scattering, as well as bremsstrahlung, pair production and inelastic scattering are taken into account. The code has been applied to obtain the energy distribution and angular and lateral deviations of single muons at different depths underground. The muon multiplicity distributions obtained with MUSIC and CORSIKA (Extensive Air Shower simulation code) are also presented. We discuss the systematic uncertainties of the results due to different muon bremsstrahlung cross-sections.
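    For contrast with the fully stochastic treatment in MUSIC, the average muon energy loss is commonly parametrized in the continuous-slowing-down form below; this is only the baseline sketch that stochastic sampling of the radiative terms refines.

        \left\langle -\frac{dE}{dX} \right\rangle = a(E) + b(E)\,E
        \quad\Rightarrow\quad
        \langle E(X) \rangle \approx \left( E_0 + \frac{a}{b} \right) e^{-bX} - \frac{a}{b}

    Here a covers ionization losses, b the bremsstrahlung, pair-production, and photonuclear terms, E_0 is the surface energy, and X is the column depth of rock traversed.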

  8. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This combined simulation and experiment is a new experience for the Iranian group and serves to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates obtained from the NAA measurements and from the MCNP calculations are compared.
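    The activation analysis referred to above infers the fluence rate from the induced 198Au activity through the standard activation relation (generic form; the effective cross section and the timing factors are specific to the foils and counting schedule used):

        \varphi = \frac{A}{N \, \sigma_{eff} \left( 1 - e^{-\lambda t_{irr}} \right) e^{-\lambda t_d}}

    where A is the measured 198Au activity, N the number of 197Au atoms in the foil, \sigma_{eff} the effective capture cross section, \lambda the 198Au decay constant, t_{irr} the irradiation time, and t_d the decay time between irradiation and counting.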

  9. Recommendations on Implementing the Energy Conservation Building Code in Rajasthan, India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Makela, Eric J.; Evans, Meredydd

    India launched the Energy Conservation Building Code (ECBC) in 2007 and Indian Bureau of Energy Efficiency (BEE) recently indicated that it would move to mandatory implementation in the 12th Five-Year Plan. The State of Rajasthan adopted ECBC with minor modifications; the new regulation is known as the Energy Conservation Building Directives – Rajasthan 2011 (ECBD-R). It became mandatory in Rajasthan on September 28, 2011. This report provides recommendations on an ECBD-R enforcement roadmap for the State of Rajasthan.

  10. Three-dimensional Monte-Carlo simulation of gamma-ray scattering and production in the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, D.J.

    1989-05-15

    Monte Carlo codes have been developed to simulate gamma-ray scattering and production in the atmosphere. The scattering code simulates interactions of low-energy gamma rays (20 to several hundred keV) from an astronomical point source in the atmosphere; a modified code also simulates scattering in a spacecraft. Four incident spectra, typical of gamma-ray bursts, solar flares, and the Crab pulsar, and 511 keV line radiation have been studied. These simulations are consistent with observations of solar flare radiation scattered from the atmosphere. The production code simulates the interactions of cosmic rays which produce high-energy (above 10 MeV) photons and electrons. It has been used to calculate gamma-ray and electron albedo intensities at Palestine, Texas and at the equator; the results agree with observations in most respects. With minor modifications this code can be used to calculate intensities of other high-energy particles. Both codes are fully three-dimensional, incorporating a curved atmosphere; the production code also incorporates the variation with both zenith and azimuth of the incident cosmic-ray intensity due to geomagnetic effects. These effects are clearly reflected in the calculated albedo by intensity contrasts between the horizon and nadir, and between the east and west horizons.

  11. Amino acid fermentation at the origin of the genetic code.

    PubMed

    de Vladar, Harold P

    2012-02-10

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments into a proto-code that optimises the energetic yield. Monte Carlo simulations are performed to evaluate the establishment of these simple proto-codes, based on amino acid substitutions and codon swapping. In all cases, donor amino acids are assigned to anticodons composed of U+G, and have low redundancy (1-2 codons), whereas acceptor amino acids are assigned to the remaining codons. These bioenergetic and structural constraints allow for a metabolic role for amino acids before their co-option as catalyst cofactors.

  12. DOUBLE POWER LAWS IN THE EVENT-INTEGRATED SOLAR ENERGETIC PARTICLE SPECTRUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Lulu; Zhang, Ming; Rassoul, Hamid K., E-mail: lzhao@fit.edu

    2016-04-10

    A double power law, or a power law with an exponential rollover at a few to tens of MeV per nucleon, has been reported for the event-integrated differential spectra in many solar energetic particle (SEP) events. The rollover energies per nucleon of different elements correlate with a particle's charge-to-mass ratio (Q/A). The probable causes have been suggested to reside in finite shock lifetimes, finite shock sizes, shock geometry, and the adiabatic cooling effect. In this work, we conduct a numerical simulation to investigate a particle's transport process in the inner heliosphere. We solve the focused transport equation using a time-backward Markov stochastic approach. The convection, magnetic focusing, adiabatic cooling effect, and pitch-angle scattering are included. The effects that the interplanetary turbulence imposes on the shape of the resulting SEP spectra are examined. By assuming a pure power-law differential spectrum at the Sun, a perfect double-power-law feature with a break energy ranging from 10 to 120 MeV per nucleon is obtained at 1 au. We found that the double power law of the differential energy spectrum is a robust result of SEP interplanetary propagation. It works for many assumptions of interplanetary turbulence spectra that give various forms of momentum dependence of a particle's mean free path. The different spectral shapes in the low-energy and high-energy ends are not just a transition from convection-dominated propagation to diffusion-dominated propagation.
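    The 'double power law' discussed above is the broken power-law form commonly fitted to event-integrated fluence spectra (a generic sketch of the fitting function, not the simulation output itself):

        \frac{dJ}{dE} \propto
        \begin{cases}
        E^{-\gamma_1}, & E \le E_b, \\
        E_b^{\,\gamma_2 - \gamma_1} \, E^{-\gamma_2}, & E > E_b,
        \end{cases}

    with E_b the break (rollover) energy per nucleon and \gamma_2 > \gamma_1, so that the spectrum steepens above the break; the prefactor keeps the two branches continuous at E_b.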

  13. Mutational analysis of the multicopy hao gene coding for hydroxylamine oxidoreductase in Nitrosomonas sp. strain ENI-11.

    PubMed

    Yamagata, A; Hirota, R; Kato, J; Kuroda, A; Ikeda, T; Takiguchi, N; Ohtake, H

    2000-08-01

    The ammonia-oxidizing bacterium Nitrosomonas sp. strain ENI-11 contains three copies of the hao gene (hao1, hao2, and hao3) coding for hydroxylamine oxidoreductase (HAO). Three single mutants (hao1::kan, hao2::kan, or hao3::kan) had 68 to 75% of the wild-type growth rate and 58 to 89% of the wild-type HAO activity when grown under the same conditions. A double mutant (hao1::kan and hao3::amp) also had 68% of the wild-type growth rate and 37% of the wild-type HAO activity.

  14. Modeling interfacial fracture in Sierra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang

    2013-09-01

    This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.

  15. Field‐readable alphanumeric flags are valuable markers for shorebirds: use of double‐marking to identify cases of misidentification

    USGS Publications Warehouse

    Roche, Erin A.; Dovichin, Colin M.; Arnold, Todd W.

    2014-01-01

    Implicit assumptions for most mark-recapture studies are that individuals do not lose their markers and all observed markers are correctly recorded. If these assumptions are violated, e.g., due to loss or extreme wear of markers, estimates of population size and vital rates will be biased. Double-marking experiments have been widely used to estimate rates of marker loss and adjust for associated bias, and we extended this approach to estimate rates of recording errors. We double-marked 309 Piping Plovers (Charadrius melodus) with unique combinations of color bands and alphanumeric flags and used multi-state mark-recapture models to estimate the frequency with which plovers were misidentified. Observers were twice as likely to read and report an invalid color-band combination (2.4% of the time) as an invalid alphanumeric code (1.0%). Observers failed to read matching band combinations or alphanumeric flag codes 4.5% of the time. Unlike previous band resighting studies, use of two resightable markers allowed us to identify when resighting errors resulted in reports of combinations or codes that were valid, but still incorrect; our results suggest this may be a largely unappreciated problem in mark-resight studies. Field-readable alphanumeric flags offer a promising auxiliary marker for identifying and potentially adjusting for false-positive resighting errors that may otherwise bias demographic estimates.

  16. On the efficiency of the golf swing

    NASA Astrophysics Data System (ADS)

    White, Rod

    2006-12-01

    A non-driven double pendulum model is used to explain the principle underlying the surprising efficiency of the golf swing. The principle can be described as a parametric energy transfer between the arms and the club head due to the changing moment of inertia of the club. The transfer is a consequence of conservation of energy and angular momentum. Because the pendulum is not driven by an external force, it shows that the golfer need do little more than accelerate the arms with the wrists cocked and let the double pendulum transfer kinetic energy to the club head. A driven double pendulum model is used to study factors affecting the efficiency of a real golf swing. It is concluded that the wrist-cock angle is the most significant efficiency-determining parameter under the golfer's control and that improvements in golf technology have had a significant impact on driving distance.
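
    A minimal numerical version of the non-driven double pendulum argument can be written down directly; the sketch below integrates the standard planar two-point-mass double pendulum (a simplification of an arm-plus-club model, with purely illustrative masses, lengths and release angles) and tracks the kinetic energy of the outer mass, which plays the role of the club head.

      import numpy as np
      from scipy.integrate import solve_ivp

      g, m1, m2, l1, l2 = 9.81, 7.0, 0.4, 0.7, 1.1   # illustrative values (kg, m)

      def rhs(t, y):
          # Standard equations of motion of a planar double pendulum, angles from vertical.
          th1, w1, th2, w2 = y
          d = th1 - th2
          den = 2*m1 + m2 - m2*np.cos(2*d)
          a1 = (-g*(2*m1 + m2)*np.sin(th1) - m2*g*np.sin(th1 - 2*th2)
                - 2*np.sin(d)*m2*(w2**2*l2 + w1**2*l1*np.cos(d))) / (l1*den)
          a2 = (2*np.sin(d)*(w1**2*l1*(m1 + m2) + g*(m1 + m2)*np.cos(th1)
                + w2**2*l2*m2*np.cos(d))) / (l2*den)
          return [w1, a1, w2, a2]

      # Release from rest with the inner link horizontal and the "wrist" cocked.
      y0 = [np.pi/2, 0.0, np.pi/2 + 2.0, 0.0]
      sol = solve_ivp(rhs, (0.0, 1.0), y0, max_step=1e-3)

      th1, w1, th2, w2 = sol.y
      vx = l1*w1*np.cos(th1) + l2*w2*np.cos(th2)      # velocity of the outer mass
      vy = l1*w1*np.sin(th1) + l2*w2*np.sin(th2)
      ke_head = 0.5*m2*(vx**2 + vy**2)
      print("peak outer-mass kinetic energy (J):", ke_head.max())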

  17. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  18. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  19. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  20. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  1. 18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...

  2. Selected DOE headquarters publications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1979-04-01

    This publication provides listings of (mainly policy and programmatic) publications which have been issued by headquarters organizations of the Department of Energy; assigned a DOE/XXX- type report number code, where XXX is the 1- to 4-letter code for the issuing headquarters organization; received by the Energy Library; and made available to the public.

  3. Neutron displacement cross-sections for tantalum and tungsten at energies up to 1 GeV

    NASA Astrophysics Data System (ADS)

    Broeders, C. H. M.; Konobeyev, A. Yu.; Villagrasa, C.

    2005-06-01

    The neutron displacement cross-section has been evaluated for tantalum and tungsten at energies from 10 -5 eV up to 1 GeV. The nuclear optical model and the intranuclear cascade model combined with pre-equilibrium and evaporation models were used for the calculations. The number of defects produced by recoil atoms in materials was calculated with the Norgett-Robinson-Torrens model and with an approach combining calculations using the binary collision approximation model and the results of molecular dynamics simulation. The numerical calculations were done using the NJOY code, the ECIS96 code, the MCNPX code and the IOTA code.

  4. Thermodynamics of rough colloidal surfaces

    NASA Astrophysics Data System (ADS)

    Goldstein, Raymond E.; Halsey, Thomas C.; Leibig, Michael

    1991-03-01

    In Debye-Hückel theory, the free energy of an electric double layer near a colloidal (or any other) surface can be related to the statistics of random walks near that surface. We present a numerical method based on this correspondence for the calculation of the double-layer free energy for an arbitrary charged or conducting surface. For self-similar surfaces, we propose a scaling law for the behavior of the free energy as a function of the screening length and the surface dimension. This scaling law is verified by numerical computation. Capacitance measurements on rough surfaces of, e.g., colloids can test these predictions.

  5. Photoluminescence and structural properties of unintentional single and double InGaSb/GaSb quantum wells grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Ahia, Chinedu Christian; Tile, Ngcali; Botha, Johannes R.; Olivier, E. J.

    2018-04-01

    The structural and photoluminescence (PL) characterization of InGaSb quantum well (QW) structures grown on (100) GaSb substrates using atmospheric pressure Metalorganic Vapor Phase Epitaxy (MOVPE) is presented. Both structures (single and double InGaSb QWs) were inadvertently formed during an attempt to grow capped InSb/GaSb quantum dots (QDs). In this work, 10 K PL peak energies at 735 meV and 740 meV are suggested to be emissions from the single and double QWs, respectively. These lines exhibit red shifts, accompanied by a reduction in their full-widths at half-maximum (FWHM), as the excitation power decreases. The presence of a GaSb spacer in the double QW was found to increase the strength of the PL emission and consequently to reduce the blue-shift and broadening of the PL emission line observed for the double QW as the laser power increases. The low thermal activation energy for the quenching of the PL from the double QW is attributed to threading dislocations, as seen in the bright-field TEM image for this sample.

  6. Approximate Green's function methods for HZE transport in multilayered materials

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.

    1993-01-01

    A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.

  7. Double photoionization of Be-like (Be-F5+) ions

    NASA Astrophysics Data System (ADS)

    Abdel Naby, Shahin; Pindzola, Michael; Colgan, James

    2015-04-01

    The time-dependent close-coupling method is used to study the single photon double ionization of Be-like (Be - F5+) ions. Energy and angle differential cross sections are calculated to fully investigate the correlated motion of the two photoelectrons. Symmetric and antisymmetric amplitudes are presented along the isoelectronic sequence for different energy sharing of the emitted electrons. Our total double photoionization cross sections are in good agreement with available theoretical results and experimental measurements along the Be-like ions. This work was supported in part by grants from NSF and US DoE. Computational work was carried out at NERSC in Oakland, California and the National Institute for Computational Sciences in Knoxville, Tennessee.

  8. Measure Guideline: Deep Energy Enclosure Retrofit for Double-Stud Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loomis, H.; Pettit, B.

    2015-06-22

    This Measure Guideline describes a deep energy enclosure retrofit solution that provides insulation to the interior of the wall assembly with the use of a double-stud wall. The guide describes two approaches to retrofitting the existing walls: one that involves replacing the existing cladding and the other that leaves the cladding in place. This guideline also covers the design principles related to the use of various insulation types and provides strategies and procedures for implementing the double-stud wall retrofit. It also includes an evaluation of important moisture-related and indoor air quality measures that need to be implemented to achieve a durable high-performance wall.

  9. Measure Guideline: Deep Energy Enclosure Retrofit for Double-Stud Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loomis, H.; Pettit, B.

    2015-06-01

    This Measure Guideline describes a deep energy enclosure retrofit (DEER) solution that provides insulation to the interior of the wall assembly with the use of a double stud wall. The guide describes two approaches to retrofitting the existing walls: one involving replacement of the existing cladding, and the other leaving the existing cladding in place. It discusses the design principles related to the use of various insulation types, and provides strategies and procedures for implementing the double stud wall retrofit. It also evaluates important moisture-related and indoor air quality measures that need to be implemented to achieve a durable, high performance wall.

  10. Stable continuous-wave single-frequency Nd:YAG blue laser at 473 nm considering the influence of the energy-transfer upconversion.

    PubMed

    Wang, Yaoting; Liu, Jianli; Liu, Qin; Li, Yuanji; Zhang, Kuanshou

    2010-06-07

    We report a continuous-wave (cw) single-frequency Nd:YAG blue laser at 473 nm end-pumped by a laser diode. A ring laser resonator was designed, and the frequency-doubling efficiency and the length of the nonlinear crystal were optimized based on an investigation of the influence of the frequency-doubling efficiency on the thermal lensing effect induced by energy-transfer upconversion. By intracavity frequency doubling with a PPKTP crystal, an output power of 1 W of single-frequency cw blue light was achieved from the all-solid-state laser. The stability of the blue output power was better than +/- 1.8% over four hours.

  11. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  12. Modeling Laboratory Astrophysics Experiments in the High-Energy-Density Regime Using the CRASH Radiation-Hydrodynamics Model

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.

    2012-10-01

    The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52- 08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  13. Modeling and characterization of double resonant tunneling diodes for application as energy selective contacts in hot carrier solar cells

    NASA Astrophysics Data System (ADS)

    Jehl, Zacharie; Suchet, Daniel; Julian, Anatole; Bernard, Cyril; Miyashita, Naoya; Gibelli, Francois; Okada, Yoshitaka; Guillemolles, Jean-Francois

    2017-02-01

    Double resonant tunneling barriers are considered for an application as energy selective contacts in hot carrier solar cells. Experimental symmetric and asymmetric double resonant tunneling barriers are realized by molecular beam epitaxy and characterized by temperature dependent current-voltage measurements. The negative differential resistance signal is enhanced for asymmetric heterostructures, and remains unchanged between low and room temperature. Within the Tsu-Esaki description of the tunnel current, this observation can be explained by the voltage dependence of the tunnel transmission amplitude, which presents a resonance under finite bias for asymmetric structures. This effect is notably discussed with respect to series resistance. Different parameters related to the electronic transmission of the structure, and the influence of these parameters on the current-voltage characteristic, are investigated, bringing insight into the critical processes to optimize in double resonant tunneling barriers applied to hot carrier solar cells.

  14. Fundamental Study of Energy Storage for Electric Railway Combining Electric Double-layer Capacitors and Battery

    NASA Astrophysics Data System (ADS)

    Konishi, Takeshi; Hase, Shin-Ichi; Nakamichi, Yoshinobu; Nara, Hidetaka; Uemura, Tadashi

    Methods to stabilize power sources, i.e., measures against voltage drop, power loading fluctuation, regenerative power lapse, and so on, have been important issues in DC railway feeding circuits. An energy storage medium that uses power efficiently and mitigates these problems is therefore of great interest. Electric double-layer capacitors (EDLC) can be charged and discharged rapidly, in a short time and with large power. On the other hand, a battery has a high energy density, so it is suited to being charged and discharged over a long time. Therefore, from the viewpoint of the load pattern of an electric railway, a hybrid energy storage system combining both energy storage media may be effective. This paper introduces two methods for such a hybrid energy storage system theoretically and describes the results of fundamental tests.
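
    The two methods themselves are not spelled out in the abstract; purely as a generic illustration of how the two media can divide the work (not necessarily the methods introduced in this paper), the Python sketch below splits an invented feeder load with a first-order low-pass filter, so that the battery follows the slow component and the EDLC absorbs the fast acceleration and regeneration peaks. All signals and constants are made up for the example.

      import numpy as np

      dt = 1.0                                   # time step (s)
      t = np.arange(0, 600, dt)
      # Invented feeder load: slow drift plus short acceleration/braking peaks (kW).
      rng = np.random.default_rng(1)
      load = 200 + 50*np.sin(2*np.pi*t/600) + 300*(rng.random(t.size) < 0.05)

      tau = 60.0                                 # low-pass time constant (s)
      alpha = dt / (tau + dt)
      battery = np.empty_like(load)
      battery[0] = load[0]
      for k in range(1, t.size):                 # first-order low-pass filter
          battery[k] = battery[k-1] + alpha*(load[k] - battery[k-1])
      edlc = load - battery                      # EDLC absorbs the fast residual

      print("battery peak demand (kW):", battery.max())
      print("EDLC peak demand (kW):   ", edlc.max())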

  15. A Closed Parameterization of DNA–Damage by Charged Particles, as a Function of Energy — A Geometrical Approach

    PubMed Central

    Van den Heuvel, Frank

    2014-01-01

    Purpose: To present a closed formalism calculating charged-particle radiation damage induced in DNA. The formalism is valid for all types of charged particles and, due to its closed nature, is suited to provide fast conversion of dose to DNA damage. Methods: The induction of double strand breaks in DNA strings residing in irradiated cells is quantified using a single-particle model. This leads to a proposal to use the cumulative Cauchy distribution to express the mix of high- and low-LET-type damage probability generated by a single particle. A microscopic phenomenological Monte Carlo code is used to fit the parameters of the model, as a function of kinetic energy, to the damage to a DNA molecule embedded in a cell. The model is applied to four particles: electrons, protons, alpha particles, and carbon ions. A geometric interpretation of this observation, using the impact ionization mean free path as a quantifier, allows extension of the model to very low energies. Results: The mathematical expression describes the model adequately according to a chi-square test. This applies to all particle types, with an almost perfect fit for protons, while the other particles seem to result in some discrepancies at very low energies. The implementation, calculating a strict version of the RBE based on complex damage alone, is corroborated by experimental data from the measured RBE. The geometric interpretation generates a unique dimensionless parameter for each type of charged particle. In addition, it predicts a distribution of DNA damage which is different from the current models. PMID:25340636
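
    The cumulative Cauchy distribution mentioned in the Methods has a simple closed form; the sketch below uses it to express a probability of "high-LET-like" damage as a function of kinetic energy. The choice of log-energy as the argument and the location and scale values are assumptions for illustration, not the fitted parameterization of the paper.

      import numpy as np

      def cauchy_cdf(x, x0, gamma):
          # Cumulative Cauchy distribution F(x; x0, gamma).
          return 0.5 + np.arctan((x - x0) / gamma) / np.pi

      # Hypothetical parameters: damage becomes "low-LET-like" as kinetic energy rises.
      energy = np.logspace(-2, 2, 50)                       # MeV, illustrative grid
      p_low_let = cauchy_cdf(np.log10(energy), x0=0.0, gamma=0.5)
      p_high_let = 1.0 - p_low_let                          # classes assumed complementary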

  16. Measurements and Monte Carlo calculations of forward-angle secondary-neutron-production cross-sections for 137 and 200 MeV proton-induced reactions in carbon

    NASA Astrophysics Data System (ADS)

    Iwamoto, Yosuke; Hagiwara, Masayuki; Matsumoto, Tetsuro; Masuda, Akihiko; Iwase, Hiroshi; Yashima, Hiroshi; Shima, Tatsushi; Tamii, Atsushi; Nakamura, Takashi

    2012-10-01

    Secondary neutron-production double-differential cross-sections (DDXs) have been measured from interactions of 137 MeV and 200 MeV protons in a natural carbon target. The data were measured between 0° and 25° in the laboratory. DDXs were obtained with high energy resolution in the energy region from 3 MeV up to the maximum energy. The experimental data for 137 MeV protons at 10° and 25° were in good agreement with those for 113 MeV protons at 7.5° and 30° at LANSCE/WNR in the energy region below 80 MeV. Benchmark calculations were carried out with the PHITS code using the evaluated nuclear data files of JENDL/HE-2007 and ENDF/B-VII, and the theoretical models of Bertini-GEM and ISOBAR-GEM. For the 137 MeV proton incidence, calculations using JENDL/HE-2007 generally reproduced the shape and the intensity of the experimental spectra well, including the ground state of 12N produced by the 12C(p,n)12N reaction. For the 200 MeV proton incidence, all calculated results underestimated the experimental data by a factor of two, except for the calculation using the ISOBAR model. ISOBAR predicts nucleon emission at forward angles qualitatively better than the Bertini model. These experimental data will be useful for evaluating the carbon data and as benchmarks for investigating the validity of Monte Carlo simulations for the shielding design of accelerator facilities.

  17. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  18. Aluminum-carbon composite electrode

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    1998-07-07

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  19. Aluminum-carbon composite electrode

    DOEpatents

    Farahmandi, C.J.; Dispennette, J.M.

    1998-07-07

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg. 3 figs.

  20. Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Januszewski, M.; Kostur, M.

    2014-09-01

    We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes.
    Catalogue identifier: AETA_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 225864
    No. of bytes in distributed program, including test data, etc.: 46861049
    Distribution format: tar.gz
    Programming language: Python, CUDA C, OpenCL.
    Computer: Any with an OpenCL or CUDA-compliant GPU.
    Operating system: No limits (tested on Linux and Mac OS X).
    RAM: Hundreds of megabytes to tens of gigabytes for typical cases.
    Classification: 12, 6.5.
    External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy
    Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows.
    Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs.
    Restrictions: The lattice Boltzmann method works for low Mach number flows only.
    Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process.
    Additional comments: The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent.
    Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
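
    The run-time code generation approach described above can be illustrated in a few lines of Python: a symbolic update rule is converted to C with SymPy and spliced into a CUDA C kernel with a Mako template (both libraries appear in the external routines listed above). The kernel below is a stand-alone toy, a single-relaxation-time style update of one array, not code taken from Sailfish.

      import sympy as sp
      from mako.template import Template

      # Build the update rule symbolically, then convert it to C source at run time.
      f_i, feq_i, tau = sp.symbols("f_i feq_i tau")
      relax_expr = sp.ccode(f_i - (f_i - feq_i) / tau)   # BGK-style relaxation step

      kernel_template = Template("""
      __global__ void relax(float *f, const float *feq, float tau, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) {
              float f_i = f[i], feq_i = feq[i];
              f[i] = ${expr};
          }
      }
      """)

      # The CUDA C source is assembled as a string; a framework such as PyCUDA
      # could then compile and launch it.
      print(kernel_template.render(expr=relax_expr))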

  1. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors

    PubMed Central

    2017-01-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. PMID:28381642

  2. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors.

    PubMed

    Heras, Francisco J H; Anderson, John; Laughlin, Simon B; Niven, Jeremy E

    2017-04-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K + conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. © 2017 The Author(s).

  3. An Energy Model of Place Cell Network in Three Dimensional Space.

    PubMed

    Wang, Yihong; Xu, Xuying; Wang, Rubin

    2018-01-01

    Place cells are important elements in the spatial representation system of the brain. A considerable amount of experimental data and classical models have been accumulated in this area. However, an important question has not been addressed, which is how three dimensional space is represented by the place cells. This question is preliminarily investigated using the energy coding method in this research. The energy coding method argues that neural information can be expressed by neural energy and that it is convenient to model and compute for neural systems due to the global and linearly addable properties of neural energy. Nevertheless, models of functional neural networks based on the energy coding method have not been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. Then we define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells. Each individual place cell obtains unique spatial selectivity. The place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level and the simulated place field agrees with the experimental results. In conclusion, this is an effective model to represent three dimensional space by the energy method. The research verifies the energy efficiency principle of the brain during neural coding of three dimensional spatial information. It is a first step toward completing the three dimensional spatial representation system of the brain, and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning functions of the brain.
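
    A minimal sketch of the idea of representing 3D position with place-cell-like units is given below: each unit has a Gaussian place field in three dimensions, and position is decoded from the population activity as an activity-weighted average of the field centres. The field sizes, cell count and decoding rule are illustrative assumptions, not the model of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n_cells = 200
      box = 1.0                                          # 1 m cube
      centers = rng.uniform(0, box, size=(n_cells, 3))   # place field centres
      sigma = rng.uniform(0.05, 0.15, size=n_cells)      # field widths vary between cells

      def activity(pos):
          # Gaussian tuning of each cell to the animal's 3D position.
          d2 = np.sum((centers - pos)**2, axis=1)
          return np.exp(-d2 / (2 * sigma**2))

      def decode(act):
          # Population-vector estimate: activity-weighted mean of field centres.
          return act @ centers / act.sum()

      # Locating error averaged over random positions in the cube.
      errs = []
      for _ in range(1000):
          pos = rng.uniform(0, box, size=3)
          errs.append(np.linalg.norm(decode(activity(pos)) - pos))
      print("mean locating error (m):", np.mean(errs))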

  4. 76 FR 19971 - Notice of Proposed Changes to the National Handbook of Conservation Practices for the Natural...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-11

    ... 344), Silvopasture Establishment (Code 381), Tree/Shrub Establishment (Code 612), Waste Recycling... Criteria were added. Tree/Shrub Establishment (Code 612)--A new Purpose of ``Develop Renewable Energy...

  5. How the Geothermal Community Upped the Game for Computer Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Geothermal Technologies Office Code Comparison Study brought 11 research institutions together to collaborate on coupled thermal, hydrologic, geomechanical, and geochemical numerical simulators. These codes have the potential to help facilitate widespread geothermal energy development.

  6. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode results greatly resembled those of the other three codes, and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work provides better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications in mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.

  7. ASME Code Efforts Supporting HTGRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.K. Morton

    2010-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  8. ASME Code Efforts Supporting HTGRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.K. Morton

    2011-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  9. ASME Code Efforts Supporting HTGRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.K. Morton

    2012-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  10. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

    Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy Ion Transport code System (PHITS) for nuclear reactions, and modeled the interactions between high energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage structure evolution estimated by reaction kinetics analysis. The fourth part involved the estimation of mechanical property changes using three-dimensional discrete dislocation dynamics (DDD). Using the above four-part code, stress-strain curves for high energy proton-irradiated Ni were obtained.

  11. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS) and ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes, are a relatively recent computational tool that augments both the experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they can have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  12. How to differentiate collective variables in free energy codes: Computer-algebra code generation and automatic differentiation

    NASA Astrophysics Data System (ADS)

    Giorgino, Toni

    2018-07-01

    The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
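
    The first approach above (symbolic differentiation plus code generation) is easy to demonstrate on a simpler collective variable than the radius of curvature; the sketch below differentiates the distance between two atoms with respect to their Cartesian coordinates with SymPy and emits C code for the gradient. It is only meant to show the pattern, not PLUMED's actual interface.

      import sympy as sp

      # Coordinates of two atoms as symbols.
      coords = sp.symbols("x1 y1 z1 x2 y2 z2", real=True)
      x1, y1, z1, x2, y2, z2 = coords

      # The collective variable: interatomic distance.
      cv = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)

      # Analytical derivatives with respect to every coordinate, then C code generation.
      grads = [sp.simplify(sp.diff(cv, q)) for q in coords]
      print("double cv =", sp.ccode(cv), ";")
      for q, g in zip(coords, grads):
          print(f"double d_{q} =", sp.ccode(g), ";")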

  13. Flowable Conducting Particle Networks in Redox-Active Electrolytes for Grid Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatzell, K. B.; Boota, M.; Kumbur, E. C.

    2015-01-01

    This study reports a new hybrid approach toward achieving high volumetric energy and power densities in an electrochemical flow capacitor for grid energy storage. The electrochemical flow capacitor suffers from high self-discharge and low energy density because charge storage is limited to the available surface area (electric double layer charge storage). Here, we examine two carbon materials as conducting particles in a flow battery electrolyte containing the VO2+/VO2+ redox couple. Highly porous activated carbon spheres (CSs) and multi-walled carbon nanotubes (MWCNTs) are investigated as conducting particle networks that facilitate both faradaic and electric double layer charge storage. Charge storage contributions (electric double layer and faradaic) are distinguished for flow-electrodes composed of MWCNTs and activated CSs. A MWCNT flow-electrode based in a redox-active electrolyte containing the VO2+/VO2+ redox couple demonstrates 18% less self-discharge, 10 X more energy density, and 20 X greater power densities (at 20 mV s⁻¹) than one based on a non-redox active electrolyte. Furthermore, a MWCNT redox-active flow electrode demonstrates 80% capacitance retention, and >95% coulombic efficiency over 100 cycles, indicating the feasibility of utilizing conducting networks with redox chemistries for grid energy storage.

  14. Flowable conducting particle networks in redox-active electrolytes for grid energy storage

    DOE PAGES

    Hatzell, K. B.; Boota, M.; Kumbur, E. C.; ...

    2015-01-09

    This paper reports a new hybrid approach toward achieving high volumetric energy and power densities in an electrochemical flow capacitor for grid energy storage. The electrochemical flow capacitor suffers from high self-discharge and low energy density because charge storage is limited to the available surface area (electric double layer charge storage). Here, we examine two carbon materials as conducting particles in a flow battery electrolyte containing the VO2+/VO2+ redox couple. Highly porous activated carbon spheres (CSs) and multi-walled carbon nanotubes (MWCNTs) are investigated as conducting particle networks that facilitate both faradaic and electric double layer charge storage. Charge storage contributions (electric double layer and faradaic) are distinguished for flow-electrodes composed of MWCNTs and activated CSs. A MWCNT flow-electrode based in a redox-active electrolyte containing the VO2+/VO2+ redox couple demonstrates 18% less self-discharge, 10 X more energy density, and 20 X greater power densities (at 20 mV s⁻¹) than one based on a non-redox active electrolyte. Additionally, a MWCNT redox-active flow electrode demonstrates 80% capacitance retention, and >95% coulombic efficiency over 100 cycles, indicating the feasibility of utilizing conducting networks with redox chemistries for grid energy storage.

  15. Electronics Devices and Materials

    DTIC Science & Technology

    2008-03-17

    [Garbled extraction of an abbreviations list and facility description; recoverable items: molecular-beam epitaxy; MCNPX (software code); MISSE6 and MISSE7 (satellites expected to carry ORMatE-I); capabilities for patterning by electron beam lithography, class 1000 clean benches, and an appropriate mix of skilled technicians and professionals; processing of samples for projects such as antimonide-based high electron mobility transistors (HEMT) and double heterojunction bipolar transistors.]

  16. γ production and neutron inelastic scattering cross sections for 76Ge

    NASA Astrophysics Data System (ADS)

    Rouki, C.; Domula, A. R.; Drohé, J. C.; Koning, A. J.; Plompen, A. J. M.; Zuber, K.

    2013-11-01

    The 2040.7-keV γ ray from the 69th excited state of 76Ge was investigated in the interest of Ge-based double-β-decay experiments like the Germanium Detector Array (GERDA) experiment. The predicted transition could interfere with valid 0νββ events at 2039.0 keV, creating false signals in large-volume 76Ge enriched detectors. The measurement was performed with the Gamma Array for Inelastic Neutron Scattering (GAINS) at the Geel Electron Linear Accelerator (GELINA) white neutron source, using the (n,n'γ) technique and focusing on the strongest γ rays originating from the level. Upper limits obtained for the production cross section of the 2040.7-keV γ ray showed no possible influence on GERDA data. Additional analysis of the data yielded high-resolution cross sections for the low-lying states of 76Ge and related γ rays, improving the accuracy and extending existing data for five transitions and five levels. The inelastic scattering cross section for 76Ge was determined for incident neutron energies up to 2.23 MeV, significantly increasing the energy range for which experimental data are available. Comparisons with model calculations using the TALYS code are presented, indicating that accounting for the recently established asymmetric rotor structure should lead to an improved description of the data.

  17. Differential cross-sections measurements for hadrontherapy: 50 MeV/A 12C reactions on H, C, O, Al and natTi targets

    NASA Astrophysics Data System (ADS)

    Divay, C.; Colin, J.; Cussol, D.; Finck, Ch.; Karakaya, Y.; Labalme, M.; Rousseau, M.; Salvador, S.; Vanstalle, M.

    2017-09-01

    In order to retain the benefits of a carbon treatment, the dose and biological effects induced by secondary fragments must be taken into account when simulating the treatment plan. The Monte Carlo simulation codes used for this rely on nuclear models that are constrained by experimental data. It is hence necessary to have precise measurements of the production rates of these fragments all along the beam path and over its whole energy range. In this context, a series of experiments aiming to measure the double differential fragmentation cross-sections of carbon on thin targets of medical interest has been started by our collaboration. In March 2015, an experiment was performed with a 50 MeV/nucleon 12C beam at GANIL. During this experiment, energy and angular differential cross-section distributions on H, C, O, Al and natTi targets were measured. In the following, the experimental set-up and analysis process are briefly described and some experimental results are presented. Comparisons between several exit-channel models from PHITS and Geant4 show large discrepancies with the experimental data. Finally, the homemade Sliipie model is briefly presented and preliminary results are compared to the data with a promising outcome.

  18. Nonlinear Diamagnetic Stabilization of Double Tearing Modes in Cylindrical MHD Simulations

    NASA Astrophysics Data System (ADS)

    Abbott, Stephen; Germaschewski, Kai

    2014-10-01

    Double tearing modes (DTMs) may occur in reversed-shear tokamak configurations if two nearby rational surfaces couple and begin reconnecting. During the DTM's nonlinear evolution it can enter an ``explosive'' growth phase leading to complete reconnection, making it a possible driver for off-axis sawtooth crashes. Motivated by similarities between this behavior and that of the m = 1 kink-tearing mode in conventional tokamaks we investigate diamagnetic drifts as a possible DTM stabilization mechanism. We extend our previous linear studies of an m = 2 , n = 1 DTM in cylindrical geometry to the fully nonlinear regime using the MHD code MRC-3D. A pressure gradient similar to observed ITB profiles is used, together with Hall physics, to introduce ω* effects. We find the diamagnetic drifts can have a stabilizing effect on the nonlinear DTM through a combination of large scale differential rotation and mechanisms local to the reconnection layer. MRC-3D is an extended MHD code based on the libMRC computational framework. It supports nonuniform grids in curvilinear coordinates with parallel implicit and explicit time integration.

  19. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
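
    The "many independent generator instances under MPI" pattern can be sketched with mpi4py: each rank runs its own generator with a distinct seed and the per-rank yields are gathered at the end. The accept/reject loop below is a trivial stand-in for event generation and unweighting, not the actual Alpgen port; a typical launch would be something like "mpiexec -n 64 python generate.py".

      import random
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      size = comm.Get_size()

      random.seed(12345 + rank)        # distinct pseudorandom stream per rank
      n_trials = 100000

      # Stand-in for generation plus unweighting: keep a trial with probability 0.1.
      accepted = sum(1 for _ in range(n_trials) if random.random() < 0.1)

      counts = comm.gather(accepted, root=0)
      if rank == 0:
          print(f"{size} ranks produced {sum(counts)} unweighted events")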

  20. The possibility of applying spectral redundancy in DWDM systems on existing long-distance FOCLs for increasing the data transmission rate and decreasing nonlinear effects and double Rayleigh scattering without changes in the communication channel

    NASA Astrophysics Data System (ADS)

    Nekuchaev, A. O.; Shuteev, S. A.

    2014-04-01

    A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, e.g., uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (on average 16 ones and 16 zeros) and a transmission rate of 32 bits/cycle. In the new method, at every instant of 1/16 of a cycle one of 124 wavelengths is transmitted, each with a duration of one cycle and a capacity of 4 bits (at any time instant, no more than 16 different wavelengths are present), giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at the different wavelengths. The time redundancy (forward error correction, FEC) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
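
    The throughput comparison implied above is simple arithmetic; the snippet below restates it with the numbers given in the description (32 NRZ wavelengths at 1 bit per cycle versus 16 symbol slots of 4 bits each per cycle).

      # Existing method: 32 NRZ wavelengths, 1 bit per wavelength per cycle.
      old_rate = 32 * 1                              # bits per cycle

      # Proposed method: 16 time slots per cycle, one 4-bit symbol per slot.
      slots_per_cycle = 16
      bits_per_symbol = 4
      new_rate = slots_per_cycle * bits_per_symbol   # bits per cycle

      print(old_rate, new_rate)                      # 32 vs 64: the data rate doubles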
