Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.
Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun
2017-09-01
Metamaterials are artificial structures composed of subwavelength unit cells that control electromagnetic (EM) waves. The spatial coding representation of metamaterials describes the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiation with a fixed spatial coding pattern as the frequency changes. In this case, not only the phase responses of the unit cells but also their phase sensitivities must be considered. Because the unit cells have different frequency sensitivities, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiation by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
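The digit-dependent phase sensitivity described above can be illustrated with a toy one-dimensional array-factor calculation. The sketch below is an assumption-laden illustration (the element pitch, frequencies, and sensitivity values are invented, not taken from the paper): both digits share one phase at the initial frequency, so the fixed pattern radiates a single broadside beam there, while at a higher frequency the "1" elements accumulate extra phase and redistribute the radiated energy.

```python
import numpy as np

# Toy 1-D sketch of frequency-spatial coding (illustrative values, not the
# paper's unit-cell design): a fixed "00001111" spatial pattern whose "0"
# and "1" digits share the same phase at f0 but drift apart at a higher
# frequency because they have different phase sensitivities (deg/GHz).
C = 3e8                                        # speed of light, m/s
D = 0.006                                      # element pitch, m (assumed)
PATTERN = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # fixed spatial coding

def element_phases(f_ghz, f0_ghz=8.0, s0=20.0, s1=60.0):
    """Element phase (rad): digit '1' has the higher frequency sensitivity."""
    sens = np.where(PATTERN == 1, s1, s0)
    return np.deg2rad(sens * (f_ghz - f0_ghz))

def array_factor(f_ghz, theta):
    """Magnitude of the far-field array factor at observation angle theta."""
    k = 2 * np.pi * f_ghz * 1e9 / C
    n = np.arange(len(PATTERN))
    return np.abs(np.sum(np.exp(1j * (k * D * n * np.sin(theta)
                                      + element_phases(f_ghz)))))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af_f0 = np.array([array_factor(8.0, t) for t in theta])    # all phases equal
af_hi = np.array([array_factor(11.0, t) for t in theta])   # phases split
print(theta[np.argmax(af_f0)])   # peak at broadside (~0 rad)
```

At the initial frequency every element radiates in phase, so the pattern behaves like a uniform array; away from it, the "000 vs 111" phase split steers or splits the beam without changing the spatial pattern.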
A finite-temperature Hartree-Fock code for shell-model Hamiltonians
NASA Astrophysics Data System (ADS)
Bertsch, G. F.; Mehlhaff, J. M.
2016-10-01
The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
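The Lagrange-multiplier constraint described above can be sketched in a few lines. The toy below is not the HFgradZ.py/HFgradT.py code; it assumes a random symmetric single-particle Hamiltonian and constraint operator, fills the lowest orbitals of h - lam*Q, and bisects on lam until the expectation value of Q reaches a target, exploiting the fact that the sum of the lowest eigenvalues is concave in lam, so that expectation value is non-decreasing in lam.

```python
import numpy as np

# Toy sketch (assumed operators, not the HFgrad codes themselves): constrain
# the expectation value of a one-body operator Q by adding it to the
# single-particle Hamiltonian with a Lagrange multiplier lam, then tuning
# lam until <Q> over the occupied orbitals matches the target.
rng = np.random.default_rng(0)
dim, n_part = 6, 3
A = rng.normal(size=(dim, dim)); h = (A + A.T) / 2   # toy s.p. Hamiltonian
B = rng.normal(size=(dim, dim)); Q = (B + B.T) / 2   # toy constraint operator

def q_expect(lam):
    """Fill the n_part lowest orbitals of h - lam*Q and return <Q>."""
    w, v = np.linalg.eigh(h - lam * Q)     # eigh sorts eigenvalues ascending
    occ = v[:, :n_part]                    # columns = occupied orbitals
    return np.trace(occ.T @ Q @ occ)

def solve_lambda(target, lo=-10.0, hi=10.0, tol=1e-8):
    """Bisection on lam; <Q>(lam) is non-decreasing in lam."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q_expect(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam = solve_lambda(target=0.5)
print(lam, q_expect(lam))
```

Near a level crossing the constrained expectation value can jump, which is exactly the near-degeneracy situation the abstract says the constraints help resolve.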
Extension of applicable neutron energy of DARWIN up to 1 GeV.
Satoh, D; Sato, T; Endo, A; Matsufuji, N; Takada, M
2007-01-01
The radiation-dose monitor DARWIN needs a set of response functions of its liquid organic scintillator to assess a neutron dose. SCINFUL-QMD is a Monte Carlo based computer code to evaluate the response functions. To improve the accuracy of the code, a new light-output function based on experimental data was developed for the production and transport of protons, deuterons, tritons, (3)He nuclei and alpha particles, and incorporated into the code. The applicable energy of DARWIN was extended to 1 GeV using the response functions calculated by the modified SCINFUL-QMD code.
Aerial Measuring System Sensor Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. S. Detwiler
2002-04-01
This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products such as a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits for various ground contaminations and the necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response of the 3-element NaI detector pod and the High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m².
The helicopter calculations modeled the transport of americium-241 (241Am), as this is the "marker" isotope utilized by the system for Pu detection. The helicopter sensor array consists of two six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
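The altitude dependence that the MCNP calculations quantify is driven by two effects that a back-of-the-envelope model already captures: solid-angle dilution and air attenuation. The sketch below uses assumed round numbers (an approximate air mass attenuation coefficient near 1 MeV and sea-level air density), not the report's simulated responses.

```python
import math

# Minimal sketch (assumed numbers, not the MCNP model): relative photopeak
# count rate from a point source directly below the aircraft, combining the
# 1/(4*pi*r^2) geometric factor with exponential attenuation in air.
MU_RHO = 0.00636   # approx. mass attenuation coeff of air near 1 MeV, m^2/kg
RHO_AIR = 1.2      # air density near ground, kg/m^3

def relative_rate(altitude_m):
    mu = MU_RHO * RHO_AIR                 # linear attenuation coeff, 1/m
    return math.exp(-mu * altitude_m) / (4 * math.pi * altitude_m ** 2)

# Doubling altitude from 50 m to 100 m costs a factor > 4 (geometry alone
# gives exactly 4; air attenuation adds the rest).
ratio = relative_rate(50.0) / relative_rate(100.0)
print(ratio)
```

This is why both experimental data at altitude and laboratory benchmarks are needed: the exponential term dominates the altitude dependence at typical survey heights.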
Efficiency turns the table on neural encoding, decoding and noise.
Deneve, Sophie; Chalk, Matthew
2016-04-01
Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation.
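The "fixed decoding model" idea can be made concrete with a toy greedy spiking loop: the decoder D is fixed, and a neuron fires only when adding its decoding vector reduces the reconstruction error. This is an illustrative sketch, not any specific model from the review; the network size, weights, and stimulus are arbitrary assumptions.

```python
import numpy as np

# Toy sketch of the fixed-decoder idea: the stimulus is reconstructed as
# x_hat = D @ r, and each neuron fires only when its spike reduces the
# reconstruction error, so spike trains are variable while the decoder D
# stays fixed. (Illustrative; not the specific models reviewed above.)
rng = np.random.default_rng(3)
n_neurons, dim = 20, 2
D = rng.normal(size=(dim, n_neurons)) * 0.1        # fixed decoding weights
x = np.array([1.0, -0.5])                          # target stimulus

r = np.zeros(n_neurons)                            # spike counts
for _ in range(500):                               # greedy spiking loop
    err = x - D @ r
    # drop in squared error if neuron j spikes: 2*D_j.err - ||D_j||^2
    gains = D.T @ err - 0.5 * np.sum(D ** 2, axis=0)
    j = np.argmax(gains)
    if gains[j] <= 0:                              # no spike helps: silence
        break
    r[j] += 1.0
final_err = np.linalg.norm(x - D @ r)
print(final_err)
```

Which neurons end up firing depends on the whole population state, so no single neuron has an invariant tuning curve, yet the decoded stimulus is accurate.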
Fixed mesh refinement in the characteristic formulation of general relativity
NASA Astrophysics Data System (ADS)
Barreto, W.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2017-08-01
We implement a spatially fixed mesh refinement under spherical symmetry for the characteristic formulation of general relativity. The Courant-Friedrichs-Lewy condition lets us deploy an adaptive resolution in (retarded-like) time, even in the nonlinear regime. As test cases, we replicate the main features of the gravitational critical behavior and the spacetime structure at null infinity using the Bondi mass and the news function. Additionally, we obtain global energy conservation for an extreme situation, i.e., at the threshold of black hole formation. In principle, the calibrated code can be used in conjunction with an ADM 3+1 code to confirm the critical behavior recently reported in the gravitational collapse of a massless scalar field in an asymptotically anti-de Sitter spacetime. For the scenarios studied, the fixed mesh refinement offers improved runtime and results comparable to those of the code without mesh refinement.
Novel Spectro-Temporal Codes and Computations for Auditory Signal Representation and Separation
2013-02-01
responses are shown). Bottom right panel (c) shows the frequency responses of the tunable bandpass filter (BPF) triplets that adapt to the incoming...signal. One BPF triplet is associated with each fixed filter, such that coarse filtering of the fixed gammatone filters is followed by additional, finer...is achieved using a second layer of narrower bandpass filters (BPFs, Q=8) that emulate the filtering functions of outer hair cells (OHCs). In the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depriest, Kendall
Unsuccessful attempts by members of the radiation effects community to independently derive the Norgett-Robinson-Torrens (NRT) damage energy factors for silicon in ASTM standard E722-14 led to an investigation of the software coding and data that produced those damage energy factors. The ad hoc collaboration to discover the reason for the lack of agreement revealed a coding error and resulted in a report documenting the methodology to produce the response function for the standard. The recommended changes in the NRT damage energy factors for silicon are shown to have significant impact in a narrow energy region of the 1-MeV(Si) equivalent fluence response function. However, when evaluating integral metrics over all neutron energies in various spectra important to the SNL electronics testing community, the change in the response results in a small decrease in the total 1-MeV(Si) equivalent fluence of ~0.6% compared to the E722-14 response. Response functions based on the newly recommended NRT damage energy factors have been produced and are available for users of both the NuGET and MCNP codes.
Wind turbine design codes: A comparison of the structural response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buhl, M.L. Jr.; Wright, A.D.; Pierce, K.G.
2000-03-01
The National Wind Technology Center (NWTC) of the National Renewable Energy Laboratory is continuing a comparison of several computer codes used in the design and analysis of wind turbines. The second part of this comparison determined how well the programs predict the structural response of wind turbines. In this paper, the authors compare the structural response for four programs: ADAMS, BLADED, FAST_AD, and YawDyn. ADAMS is a commercial multibody-dynamics code from Mechanical Dynamics, Inc. BLADED is a commercial performance and structural-response code from Garrad Hassan and Partners Limited. FAST_AD is a structural-response code developed by Oregon State University and the University of Utah for the NWTC. YawDyn is a structural-response code developed by the University of Utah for the NWTC. ADAMS, FAST_AD, and YawDyn use the University of Utah's AeroDyn subroutine package for calculating aerodynamic forces. Although errors were found in all the codes during this study, once they were fixed, the codes agreed surprisingly well for most of the cases and configurations that were evaluated. One unresolved discrepancy between BLADED and the AeroDyn-based codes arose when there was blade and/or teeter motion in addition to a large yaw error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreyer, Jonathan G.; Wang, Tzu-Fang; Vo, Duc T.
Under a 2006 agreement between the Department of Energy (DOE) of the United States of America and the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) of France, the National Nuclear Security Administration (NNSA) within DOE and IRSN initiated a collaboration to improve isotopic identification and analysis of nuclear material [i.e., plutonium (Pu) and uranium (U)]. The specific aim of the collaborative project was to develop new versions of two types of isotopic identification and analysis software: (1) the fixed-energy response-function analysis for multiple energies (FRAM) codes and (2) multi-group analysis (MGA) codes. The project is entitled Action Sheet 4 – Cooperation on Improved Isotopic Identification and Analysis Software for Portable, Electrically Cooled, High-Resolution Gamma Spectrometry Systems (Action Sheet 4). FRAM and MGA/U235HI are software codes used to analyze isotopic ratios of U and Pu. FRAM is an application that uses parameter sets for the analysis of U or Pu. MGA and U235HI are two separate applications that analyze Pu or U, respectively. They have traditionally been used by safeguards practitioners to analyze gamma spectra acquired with high-resolution gamma spectrometry (HRGS) systems that are cooled by liquid nitrogen. However, it was discovered that these analysis programs were not as accurate when used on spectra acquired with a newer generation of more portable, electrically cooled HRGS (ECHRGS) systems. In response to this need, DOE/NNSA and IRSN collaborated to update the FRAM and U235HI codes to improve their performance with newer ECHRGS systems. Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL) performed this work for DOE/NNSA.
Silicon Drift Detector response function for PIXE spectra fitting
NASA Astrophysics Data System (ADS)
Calzolai, G.; Tapinassi, S.; Chiari, M.; Giannoni, M.; Nava, S.; Pazzi, G.; Lucarelli, F.
2018-02-01
The correct determination of X-ray peak areas in PIXE spectra by fitting with a computer program depends crucially on accurate parameterization of the detector peak response function. In the Guelph PIXE software package GUPIXWin, one of the most widely used PIXE spectrum analysis codes, the response of a semiconductor detector to monochromatic X-ray radiation is described by a linear combination of several analytical functions: a Gaussian profile for the X-ray line itself, plus additional tail contributions (exponential tails and step functions) on the low-energy side of the X-ray line to describe incomplete charge collection effects. The literature on the spectral response of silicon X-ray detectors for PIXE applications is rather scarce; in particular, data for Silicon Drift Detectors (SDDs) over a large range of X-ray energies are missing. Using a set of analytical functions, the SDD response functions were satisfactorily reproduced for the X-ray energy range 1-15 keV. The behaviour of the parameters involved in the SDD tailing functions with X-ray energy is described by simple polynomial functions, which permit an easy implementation in PIXE spectrum fitting codes.
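The line shape described above (a Gaussian plus an exponential tail plus a step, all smoothed by the detector resolution) can be written down directly. The sketch below uses the standard hypermet-style analytical forms with invented parameter values; the actual GUPIXWin parameterization and the fitted SDD parameters are in the paper.

```python
import math

# Hedged sketch of a GUPIXWin-style line shape (illustrative parameter
# values, not fitted SDD data): a Gaussian photopeak plus an exponential
# low-energy tail and a step, each smoothed by the Gaussian resolution.
def line_shape(E, E0, sigma, h_gauss=1.0, h_tail=0.05, beta=2.0, h_step=0.002):
    x = (E - E0) / sigma
    gauss = h_gauss * math.exp(-0.5 * x * x)
    # exponential tail convolved with the Gaussian ("hypermet" form)
    tail = (h_tail * math.exp((E - E0) / beta)
            * math.erfc(x / math.sqrt(2) + sigma / (beta * math.sqrt(2))))
    # incomplete-charge-collection step under and below the peak
    step = h_step * math.erfc(x / math.sqrt(2))
    return gauss + tail + step

# The low-energy side (tail + step) sits well above the high-energy side:
print(line_shape(6.0, 6.4, 0.08), line_shape(6.8, 6.4, 0.08))
```

Making the tail and step amplitudes polynomial functions of E0, as the abstract describes, is what lets a fitting code interpolate the shape across the 1-15 keV range.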
NASA Astrophysics Data System (ADS)
Kajimoto, Tsuyoshi; Shigyo, Nobuhiro; Sanami, Toshiya; Ishibashi, Kenji; Haight, Robert C.; Fotiades, Nikolaos
2011-02-01
Absolute neutron response functions and detection efficiencies of an NE213 liquid scintillator, 12.7 cm in diameter and 12.7 cm in thickness, were measured for neutron energies between 15 and 600 MeV at the Weapons Neutron Research facility of the Los Alamos Neutron Science Center. The experiment used the continuous-energy neutrons of a spallation source driven by 800-MeV protons. The incident neutron flux was measured using a 238U fission ionization chamber. Measured response functions and detection efficiencies were compared with corresponding calculations using the SCINFUL-QMD code. The calculated and experimental values were in good agreement below 70 MeV. However, there were discrepancies in the energy region between 70 and 150 MeV. Thus, the code was partly modified, and the revised code provided better agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Lu, Qiheng; Feng, Xiaoyun
2013-03-01
After analyzing the working principle of the four-aspect fixed autoblock system, an energy-saving control model was created based on the dynamic equations of the trains in order to study the energy-saving optimal control strategy of trains in a following operation. In addition to safety and punctuality, the model's main objectives were energy consumption and time error. Based on this model, the static and dynamic speed restraints under a four-aspect fixed autoblock system were put forward. A multi-dimensional parallel genetic algorithm (GA) with an external penalty function was adopted to solve this problem. By using real-number coding and a strategy of dividing ramps into three parts, the convergence of the GA was sped up and the length of the chromosomes was shortened. A vector of zero-mean Gaussian random disturbance was superposed on the mutation operator. The simulation results showed that the method could reduce energy consumption effectively while maintaining safety and punctuality.
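The optimization machinery the abstract names (real-number coding, an external penalty function, Gaussian mutation) can be sketched on a toy version of the problem. The dynamics below are deliberately crude assumptions, not the paper's train model: traction energy is approximated by the sum of squared segment speeds, and punctuality enters as a penalty on the trip-time error.

```python
import numpy as np

# Hedged GA sketch (toy cost, not the paper's train dynamics): real-number
# coded chromosomes, truncation selection with elitism, Gaussian mutation,
# and an external penalty on the trip-time error.
rng = np.random.default_rng(1)
SEG_LEN = 1000.0          # metres per segment (assumed)
TARGET_TIME = 300.0       # scheduled trip time, s
V_MAX = 25.0              # static speed restraint, m/s

def fitness(speeds):
    speeds = np.clip(speeds, 1.0, V_MAX)
    energy = np.sum(speeds ** 2)                      # toy traction energy
    time_err = abs(np.sum(SEG_LEN / speeds) - TARGET_TIME)
    return energy + 50.0 * max(0.0, time_err - 1.0)   # external penalty

def evolve(n_seg=4, pop=40, gens=200, sigma=1.0):
    P = rng.uniform(5.0, V_MAX, size=(pop, n_seg))    # real-number coding
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        parents = P[np.argsort(f)[:pop // 2]]         # keep best half
        kids = parents + rng.normal(0.0, sigma, parents.shape)  # Gaussian mutation
        P = np.vstack([parents, kids])
    f = np.array([fitness(x) for x in P])
    return P[np.argmin(f)]

best = evolve()
trip_time = np.sum(SEG_LEN / np.clip(best, 1.0, V_MAX))
print(trip_time)
```

Because the penalty term dominates the energy term near the deadline, the GA settles on the slowest speed profile that still arrives on time, which is the energy-saving behavior the abstract describes.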
Whole-genome resequencing reveals candidate mutations for pig prolificacy.
Li, Wen-Ting; Zhang, Meng-Meng; Li, Qi-Gang; Tang, Hui; Zhang, Li-Fan; Wang, Ke-Jun; Zhu, Mu-Zhen; Lu, Yun-Feng; Bao, Hai-Gang; Zhang, Yuan-Ming; Li, Qiu-Yan; Wu, Ke-Liang; Wu, Chang-Xin
2017-12-20
Changes in pig fertility have occurred as a result of domestication, but are not understood at the level of genetic variation. To identify variations potentially responsible for prolificacy, we sequenced the genomes of the highly prolific Taihu pig breed and four control breeds. Genes involved in embryogenesis and morphogenesis were targeted in the Taihu pig, consistent with the morphological differences observed between the Taihu pig and others during pregnancy. Additionally, excessive functional non-coding mutations have been specifically fixed or nearly fixed in the Taihu pig. We focused attention on an oestrogen response element (ERE) within the first intron of the bone morphogenetic protein receptor type-1B gene (BMPR1B) that overlaps with a known quantitative trait locus (QTL) for pig fecundity. Using 242 pigs from 30 different breeds, we confirmed that the genotype of the ERE was nearly fixed in the Taihu pig. ERE function was assessed by luciferase assays, examination of histological sections, chromatin immunoprecipitation, quantitative polymerase chain reactions, and western blots. The results suggest that the ERE may control pig prolificacy via the cis-regulation of BMPR1B expression. This study provides new insight into changes in reproductive performance and highlights the role of non-coding mutations in generating phenotypic diversity between breeds.
López-Igual, Rocío; Wilson, Adjélé; Bourcier de Carbon, Céline; Sutter, Markus; Turmo, Aiko
2016-01-01
The photoactive Orange Carotenoid Protein (OCP) is involved in cyanobacterial photoprotection. Its N-terminal domain (NTD) is responsible for interaction with the antenna and induction of excitation energy quenching, while the C-terminal domain is the regulatory domain that senses light and induces photoactivation. In most nitrogen-fixing cyanobacterial strains, there are one to four paralogous genes coding for homologs to the NTD of the OCP. The functions of these proteins are unknown. Here, we study the expression, localization, and function of these genes in Anabaena sp. PCC 7120. We show that the four genes present in the genome are expressed in both vegetative cells and heterocysts but do not seem to have an essential role in heterocyst formation. This study establishes that all four Anabaena NTD-like proteins can bind a carotenoid and the different paralogs have distinct functions. Surprisingly, only one paralog (All4941) was able to interact with the antenna and to induce permanent thermal energy dissipation. Two of the other Anabaena paralogs (All3221 and Alr4783) were shown to be very good singlet oxygen quenchers. The fourth paralog (All1123) does not seem to be involved in photoprotection. Structural homology modeling allowed us to propose specific features responsible for the different functions of these soluble carotenoid-binding proteins. PMID:27208286
Filter-fluorescer measurement of low-voltage simulator x-ray energy spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, G.T.; Craven, R.E.
X-ray energy spectra of the Maxwell Laboratories MBS and the Physics International Pulserad 737 were measured using an eight-channel filter-fluorescer array. The PHOSCAT computer code was used to calculate the channel response functions, and the UFO code to unfold the spectra.
NASA Astrophysics Data System (ADS)
Agosteo, S.; Bedogni, R.; Caresana, M.; Charitonidis, N.; Chiti, M.; Esposito, A.; Ferrarini, M.; Severino, C.; Silari, M.
2012-12-01
The accurate determination of the ambient dose equivalent in the mixed neutron-photon fields encountered around high-energy particle accelerators still represents a challenging task. The main complexity arises from the extreme variability of the neutron energy, which spans over 10 orders of magnitude or more. Operational survey instruments, whose response function attempts to mimic the fluence-to-ambient dose equivalent conversion coefficient up to GeV neutrons, are available on the market, but their response is not fully reliable over the entire energy range. Extended-range rem counters (ERRCs) do not require exact knowledge of the energy distribution of the neutron field, and the calibration can be done with a source spectrum. If the actual neutron field has an energy distribution different from the calibration spectrum, the measurement is affected by an added uncertainty related to the partial overlap of the fluence-to-ambient dose equivalent conversion curve and the response function. For this reason their operational use should always be preceded by an "in-field" calibration, i.e. a calibration made against a reference instrument exposed in the same field where the survey meter will be employed. In practice the extended-range Bonner Sphere Spectrometer (ERBSS) is the only device which can serve as a reference instrument in these fields, because of its wide energy range and the possibility to assess the neutron fluence and the ambient dose equivalent (H*(10)) values with the appropriate accuracy.
Nevertheless, the experience gained by a number of experimental groups suggests that mandatory conditions for obtaining accurate results in workplaces are: (1) the use of a well-established response matrix, thus implying validation campaigns in reference monochromatic neutron fields, (2) the expert and critical use of suitable unfolding codes, and (3) a performance test of the whole system (experimental set-up, elaboration and unfolding procedures) in a well-controlled workplace field. The CERF (CERN-EU high-energy reference field) facility is a unique example of such a field, where a number of experimental campaigns and Monte Carlo simulations have been performed over the past years. With the aim of performing this kind of workplace performance test, four different ERBSS with different degrees of validation, operated by three groups (CERN, INFN-LNF and Politecnico di Milano), were exposed in two fixed positions at CERF. Using different unfolding codes (MAXED, GRAVEL, FRUIT and FRUIT SGM), the experimental data were analyzed to provide the neutron spectra and the related dosimetric quantities. The results allow assessing the overall performance of each ERBSS and of the unfolding codes, as well as comparing the performance of three ERRCs when used in a neutron field with an energy distribution different from the calibration spectrum.
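Once an ERBSS unfolding yields a group fluence spectrum, the dose step is a simple fold with fluence-to-H*(10) conversion coefficients. The numbers below are illustrative placeholders, not ICRP tabulations.

```python
import numpy as np

# Hedged sketch of the dosimetric step described above: the ambient dose
# equivalent H*(10) is the fold of the unfolded group fluence spectrum with
# fluence-to-H*(10) conversion coefficients (all values here are toy
# placeholders, not ICRP tabulations or CERF data).
E = np.array([1e-8, 1e-6, 1e-3, 1.0, 100.0])          # group energies, MeV
phi = np.array([2.0, 5.0, 1.0, 0.5, 0.05])            # group fluence, cm^-2
h_coeff = np.array([10.0, 12.0, 40.0, 400.0, 300.0])  # pSv*cm^2, toy values
h_star = float(np.sum(phi * h_coeff))                 # H*(10), pSv
print(h_star)
```

The partial-overlap uncertainty the abstract mentions enters exactly here: a rem counter applies one effective coefficient to the whole spectrum, while the spectrometer applies the correct coefficient group by group.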
A new response matrix for a 6LiI scintillator BSS system
NASA Astrophysics Data System (ADS)
Lacerda, M. A. S.; Méndez-Villafañe, R.; Lorente, A.; Ibañez, S.; Gallego, E.; Vega-Carrillo, H. R.
2017-10-01
A new response matrix was calculated for a Bonner Sphere Spectrometer (BSS) with a 6LiI(Eu) scintillator, using the Monte Carlo N-Particle radiation transport code MCNPX. Responses were calculated for 6 spheres and the bare detector, for energies varying from 1.059E(-9) MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 221 energy groups. A comparison was made between the responses obtained in this work and others published elsewhere for the same detector model. The calculated response functions were inserted in the response input file of the MAXED code and used to unfold the total and direct neutron spectra generated by the 241Am-Be source of the Universidad Politécnica de Madrid (UPM). These spectra were compared with those obtained using the same unfolding code with the Mares and Schraube response matrix.
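The unfolding step itself can be illustrated with a SAND-II/GRAVEL-style multiplicative update (MAXED, used in the paper, is instead a maximum-entropy method; this simpler stand-in shows the same underdetermined inverse problem). The response matrix and spectrum below are random toys, not the calculated 6LiI(Eu) matrix.

```python
import numpy as np

# Hedged sketch of spectrum unfolding: given a response matrix R
# (detectors x energy groups) and measured counts m, recover the group
# fluence phi with a SAND-II/GRAVEL-style multiplicative update.
rng = np.random.default_rng(2)
n_det, n_grp = 7, 30
R = rng.uniform(0.1, 1.0, size=(n_det, n_grp))   # toy response matrix
phi_true = np.exp(-0.5 * ((np.arange(n_grp) - 12) / 4.0) ** 2)  # toy spectrum
m = R @ phi_true                                  # noiseless "measurement"

def unfold(R, m, iters=2000):
    phi = np.ones(R.shape[1])                     # flat starting guess
    for _ in range(iters):
        pred = R @ phi
        # multiplicative correction weighted by each detector's sensitivity
        corr = (R * (m / pred)[:, None]).sum(axis=0) / R.sum(axis=0)
        phi *= corr
    return phi

phi = unfold(R, m)
resid = np.linalg.norm(R @ phi - m) / np.linalg.norm(m)
print(resid)
```

With only 7 sphere readings and 30 groups the problem is underdetermined, which is why the choice of starting guess and unfolding code (MAXED, GRAVEL, FRUIT, ...) materially affects the recovered spectrum.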
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farawila, Y.; Gohar, Y.; Maynard, C.
1989-04-01
KAOS/LIB-V, a library of processed nuclear responses for neutronics analyses of nuclear systems, has been generated. The library was prepared using the KAOS-V code and nuclear data from ENDF/B-V. The library includes kerma (kinetic energy released in materials) factors and other nuclear response functions for all materials presently of interest in fusion and fission applications, covering 43 nonfissionable and 15 fissionable isotopes and elements. The nuclear response functions include gas production and tritium-breeding functions, and all important reaction cross sections. KAOS/LIB-V employs the VITAMIN-E weighting function and energy group structure of 174 neutron groups. Auxiliary nuclear data bases, e.g., the Japanese evaluated nuclear data library JENDL-2, were used as a source of isotopic cross sections when these data are not provided in the ENDF/B-V files for a natural element. These are needed mainly to estimate average quantities such as effective Q-values for the natural element. This analysis of local energy deposition was instrumental in detecting and understanding energy balance deficiencies and other problems in the ENDF/B-V data. Pertinent information about the library and a graphical display of the main nuclear response functions for all materials in the library are given. 35 refs.
NASA Astrophysics Data System (ADS)
Bertani, C.; Falcone, N.; Bersano, A.; Caramello, M.; Matsushita, T.; De Salve, M.; Panella, B.
2017-11-01
High safety and reliability of advanced nuclear reactors, Generation IV and Small Modular Reactors (SMRs), have a crucial role in the acceptance of these new plant designs. Among all the possible safety systems, particular efforts are dedicated to the study of passive systems because they rely on simple physical principles like natural circulation, without the need for an external energy source to operate. Taking inspiration from the second Decay Heat Removal system (DHR2) of ALFRED, the European Generation IV demonstrator of the fast lead-cooled reactor, an experimental facility has been built at the Energy Department of Politecnico di Torino (PROPHET facility) to study single- and two-phase-flow natural circulation. The facility behavior is simulated using the thermal-hydraulic system code RELAP5-3D, which is widely used in nuclear applications. In this paper, the effect of the initial water inventory on natural circulation is analyzed, along with the experimental time behaviors of temperatures and pressures. The experimental matrix covers water inventories between 69% and 93%; the influence of the opposing effects related to the increase of the volume available for expansion and the pressure rise due to phase change is discussed. Simulations of the experimental tests are carried out by using a 1D model at constant heat power and fixed liquid and air mass; the code predictions are compared with experimental results. Two typical responses are observed: subcooled or two-phase saturated circulation. The steady-state pressure is a strong function of the liquid and air mass inventory. The numerical results show that, at low initial liquid mass inventory, the natural circulation is not stable but pulsated.
NASA Astrophysics Data System (ADS)
Chatterjee, S.; Bakshi, A. K.; Tripathy, S. P.
2010-09-01
The response matrix for a CaSO4:Dy based neutron dosimeter was generated using the Monte Carlo code FLUKA in the energy range from thermal to 20 MeV for a set of eight Bonner spheres of diameter 3-12″, including the bare detector. The response of the neutron dosimeter was measured for the above set of spheres with a 241Am-Be neutron source covered with 2 mm of lead. An analytical expression for the response function was devised as a function of sphere mass. Using the Frascati Unfolding Iteration Tool (FRUIT) unfolding code, the neutron spectrum of 241Am-Be was unfolded and compared with the standard IAEA spectrum.
Schwab, Stefan; Ramos, Humberto J; Souza, Emanuel M; Pedrosa, Fábio O; Yates, Marshall G; Chubatsu, Leda S; Rigo, Liu U
2007-05-01
Random mutagenesis using transposons with promoterless reporter genes has been widely used to examine differential gene expression patterns in bacteria. Using this approach, we have identified 26 genes of the endophytic nitrogen-fixing bacterium Herbaspirillum seropedicae regulated in response to ammonium content in the growth medium. These include nine genes involved in the transport of nitrogen compounds, such as the high-affinity ammonium transporter AmtB, and uptake systems for alternative nitrogen sources; nine genes coding for proteins responsible for restoring intracellular ammonium levels through enzymatic reactions, such as nitrogenase, amidase, and arginase; and a third group includes metabolic switch genes, coding for sensor kinases or transcription regulation factors, whose role in metabolism was previously unknown. Also, four genes identified were of unknown function. This paper describes their involvement in response to ammonium limitation. The results provide a preliminary profile of the metabolic response of Herbaspirillum seropedicae to ammonium stress.
Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas
2015-01-01
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118
Durability of switchable QR code carriers under hydrolytic and photolytic conditions
NASA Astrophysics Data System (ADS)
Ecker, Melanie; Pretsch, Thorsten
2013-09-01
Following a guest diffusion approach, the surface of a shape memory poly(ester urethane) (PEU) was either black or blue colored. Bowtie-shaped quick response (QR) code carriers were then obtained from laser engraving and cutting, before thermo-mechanical functionalization (programming) was applied to stabilize the PEU in a thermo-responsive (switchable) state. The stability of the dye within the polymer surface and long-term functionality of the polymer were investigated against UVA and hydrolytic ageing. Spectrophotometric investigations verified UVA ageing-related color shifts from black to yellow-brownish and blue to petrol-greenish whereas hydrolytically aged samples changed from black to greenish and blue to light blue. In the case of UVA ageing, color changes were accompanied by dye decolorization, whereas hydrolytic ageing led to contrast declines due to dye diffusion. The Michelson contrast could be identified as an effective tool to follow ageing-related contrast changes between surface-dyed and laser-ablated (undyed) polymer regions. As soon as the Michelson contrast fell below a crucial value of 0.1 due to ageing, the QR code was no longer decipherable with a scanning device. Remarkably, the PEU information carrier base material could even then be adequately fixed and recovered. Hence, the surface contrast turned out to be the decisive parameter for QR code carrier applicability.
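The Michelson-contrast criterion used above is simple to state in code. A minimal sketch, where only the 0.1 threshold comes from the abstract and the luminance values are hypothetical:

```python
def michelson_contrast(l_dark, l_bright):
    """Michelson contrast between the dyed (dark) and laser-ablated (bright) regions."""
    return abs(l_bright - l_dark) / (l_bright + l_dark)

def qr_decipherable(l_dark, l_bright, threshold=0.1):
    """Per the study, the QR code stops being machine-readable below ~0.1 contrast."""
    return michelson_contrast(l_dark, l_bright) >= threshold

fresh = qr_decipherable(10.0, 80.0)   # strong contrast on an unaged carrier
aged = qr_decipherable(40.0, 48.0)    # dye diffusion has narrowed the luminance gap
```

The same check applies regardless of whether the contrast loss comes from dye decolorization (UVA) or dye diffusion (hydrolysis); only the measured luminances change.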
NASA Astrophysics Data System (ADS)
Takada, M.; Taniguchi, S.; Nakamura, T.; Nakao, N.; Uwamino, Y.; Shibata, T.; Fujitaka, K.
2001-06-01
We have developed a phoswich neutron detector consisting of an NE213 liquid scintillator surrounded by an NE115 plastic scintillator to distinguish photon and neutron events in a charged-particle mixed field. To obtain the energy spectra by unfolding, the response functions to neutrons and photons were obtained both experimentally and by calculation. The response functions to photons were measured with radionuclide sources and were calculated with the EGS4-PRESTA code. The response functions to neutrons were measured with a white neutron source, produced by the bombardment of 135 MeV protons onto a Be+C target, using a TOF method, and were calculated with the SCINFUL code, which we revised in order to calculate neutron response functions up to 135 MeV. Based on these experimental and calculated results, response matrices for photons up to 20 MeV and neutrons up to 132 MeV could finally be obtained.
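The unfolding step these response matrices feed into can be illustrated with a generic multiplicative (ML-EM-style) update. This is a stand-in sketch with a toy 3-bin response matrix, not the unfolding code used in the study:

```python
def unfold_mlem(measured, response, n_iter=2000):
    """Unfold a pulse-height distribution: measured[i] = sum_j response[i][j] * phi[j].
    Multiplicative updates keep the estimated spectrum phi nonnegative."""
    n_m, n_e = len(response), len(response[0])
    phi = [1.0] * n_e                                   # flat starting spectrum
    for _ in range(n_iter):
        # forward-fold the current estimate through the response matrix
        folded = [sum(response[i][j] * phi[j] for j in range(n_e)) for i in range(n_m)]
        for j in range(n_e):
            num = sum(response[i][j] * measured[i] / folded[i]
                      for i in range(n_m) if folded[i] > 0.0)
            den = sum(response[i][j] for i in range(n_m))
            phi[j] *= num / den
    return phi

# toy 3-bin response matrix and a known test spectrum (all values invented)
R = [[0.7, 0.2, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]
true_phi = [5.0, 2.0, 1.0]
measured = [sum(R[i][j] * true_phi[j] for j in range(3)) for i in range(3)]
est = unfold_mlem(measured, R)
refold = [sum(R[i][j] * est[j] for j in range(3)) for i in range(3)]
```

On noiseless, consistent data the refolded estimate reproduces the measurement; with real counting statistics the iteration count acts as a regularization knob.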
Stretching of short monatomic gold chains-some model calculations
NASA Astrophysics Data System (ADS)
Sumali, Priyanka, Verma, Veena; Dharamvir, Keya
2012-06-01
The Mechanical properties of zig-zag monatomic gold chains containing 5 and 7 atoms were studied using the Siesta Code (SC), which works within the framework of DFT formalism and Gupta Potential (GP), which is an effective atom-atom potential. The zig-zag chains were stretched by keeping the end atoms fixed while rest of the atoms were relaxed till minimum energy is obtained. Energy, Force and Young's Modulus found using GP and SC were plotted as functions of total length. It is found that the breaking force in case of GP is of order of 1.6nN while for SIESTA is of the order of 2.9nN for both the chains.
An examination of loads and responses of a wind turbine undergoing variable-speed operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.D.; Buhl, M.L. Jr.; Bir, G.S.
1996-11-01
The National Renewable Energy Laboratory has recently developed the ability to predict turbine loads and responses for machines undergoing variable-speed operation. The wind industry has debated the potential benefits of operating wind turbines at variable speeds for some time. Turbine system dynamic responses (structural response, resonance, and component interactions) are an important consideration for variable-speed operation of wind turbines. The authors have implemented simple variable-speed control algorithms for both the FAST and ADAMS dynamics codes. The control algorithm is a simple one, allowing the turbine to track the optimum power coefficient (Cp). The objective of this paper is to show turbine loads and responses for a particular two-bladed, teetering-hub, downwind turbine undergoing variable-speed operation. The authors examined the response of the machine to various turbulent wind inflow conditions. In addition, they compare the structural responses under fixed-speed and variable-speed operation. For this paper, they restrict their comparisons to those wind-speed ranges for which limiting power by some additional control strategy (blade pitch or aileron control, for example) is not necessary. The objective here is to develop a basic understanding of the differences in loads and responses between the fixed-speed and variable-speed operation of this wind turbine configuration.
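Tracking the optimum power coefficient is conventionally realized as a torque-proportional-to-speed-squared law. A sketch with hypothetical rotor parameters (the abstract does not give this turbine's numbers), showing how the gain follows from holding the optimum tip-speed ratio:

```python
import math

def optimal_torque_gain(rho, radius, cp_max, tsr_opt):
    """Gain K in the tau = K * omega^2 law that holds the rotor at Cp_max.

    At the optimum tip-speed ratio lambda* = omega*R/V, aerodynamic power is
    P = 0.5*rho*pi*R^2*Cp_max*V^3; substituting V = omega*R/lambda* gives
    P = K*omega^3 with K = 0.5*rho*pi*R^5*Cp_max/lambda*^3.
    """
    return 0.5 * rho * math.pi * radius**5 * cp_max / tsr_opt**3

# hypothetical machine: 10 m rotor radius, Cp_max = 0.45 at lambda* = 7
K = optimal_torque_gain(rho=1.225, radius=10.0, cp_max=0.45, tsr_opt=7.0)
omega = 3.0                   # rotor speed, rad/s
tau_demand = K * omega**2     # generator torque command, N*m
```

Below rated wind speed the rotor then settles wherever aerodynamic torque balances K*omega^2, which is exactly the Cp_max operating locus; above rated, the additional power-limiting control the paper sets aside would take over.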
Composition Dependence of the Properties of Noble-metal Nanoalloys
NASA Astrophysics Data System (ADS)
Fernández Seivane, Lucas; Barrón, Héctor; Benson, James; Weissker, Hans-Christian; López-Lozano, Xochitl
2012-03-01
Bimetallic nanostructured materials are of great interest from both the scientific and technological points of view due to their potential to improve the catalytic properties of materials. Their applicability as well as their performance depends critically on their size, shape, and composition, whether as alloy or core-shell structures. In this work, the structural, electronic, magnetic, and optical properties of bimetallic Au-Ag nanoclusters have been investigated through density-functional-theory-based calculations with the Siesta and Octopus codes. Different symmetries (tetrahedral, bipyramidal, decahedral, and icosahedral) of bimetallic nanoparticles of 4, 5, 7, and 13 atoms were taken into account, including all the possible Au:Ag ratio concentrations. In combination with a statistical analysis of the performed calculations and the concepts of the enthalpy of mixing and energy excess, we have been able to predict the most probable gap and magnetic moment for all the composition stoichiometries. This approach allows us to understand the energy differences due to cluster shape effects, stoichiometry, and segregation. In addition, we can also obtain the bulk energy and surface energy of Au-Ag nanoalloys by looking at a fixed number of atoms and fixed morphologies.
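The energy-excess bookkeeping referred to above can be written down compactly. A sketch with invented total energies; the reference is the composition-weighted energy of the pure clusters of the same size:

```python
def excess_energy(e_mixed, n_au, n_ag, e_pure_au_n, e_pure_ag_n):
    """Energy excess of an Au_n Ag_m cluster with N = n + m atoms:
    E_exc = E(Au_n Ag_m) - (n/N) E(Au_N) - (m/N) E(Ag_N),
    where E(Au_N) and E(Ag_N) are total energies of the pure N-atom clusters."""
    n_total = n_au + n_ag
    return e_mixed - (n_au / n_total) * e_pure_au_n - (n_ag / n_total) * e_pure_ag_n

# hypothetical 4-atom example: a negative excess marks an energetically
# favorable alloy relative to phase-separated pure clusters
e_exc = excess_energy(e_mixed=-10.5, n_au=2, n_ag=2,
                      e_pure_au_n=-8.0, e_pure_ag_n=-12.0)
```

By construction the excess vanishes at both pure-composition endpoints, so scanning it across all Au:Ag ratios directly exposes the most stable stoichiometries for each cluster shape.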
Impact of ASTM Standard E722 update on radiation damage metrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
DePriest, Kendall Russell
2014-06-01
The impact of recent changes to the ASTM Standard E722 is investigated. The methodological changes in the production of the displacement kerma factors for silicon have significant impact for some energy regions of the 1-MeV(Si) equivalent fluence response function. When evaluating the integral over all neutron energies in various spectra important to the SNL electronics testing community, the change in the response results in an increase in the total 1-MeV(Si) equivalent fluence of 2-7%. Response functions have been produced and are available for users of both the NuGET and MCNP codes.
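The quantity being compared above is a folded integral. A minimal sketch of the 1-MeV(Si) equivalent fluence calculation with invented bin values; real kerma factors come from the E722 tables:

```python
def equivalent_fluence_1mev(fluence, kerma_factor, kerma_1mev):
    """1-MeV(Si) equivalent fluence: total displacement kerma of the spectrum
    divided by the displacement kerma factor at 1 MeV (ASTM E722 convention)."""
    total = sum(f * k for f, k in zip(fluence, kerma_factor))
    return total / kerma_1mev

# hypothetical 3-group spectrum (n/cm^2) folded with old and revised kerma factors
phi_eq_old = equivalent_fluence_1mev([1e10, 5e9, 1e9], [0.50, 1.00, 2.00], 1.0)
phi_eq_new = equivalent_fluence_1mev([1e10, 5e9, 1e9], [0.52, 1.05, 2.10], 1.0)
change = phi_eq_new / phi_eq_old - 1.0   # a few-percent shift, as in the study
```

Because the change is spectrum-weighted, the same revision to the kerma factors shifts the equivalent fluence by different amounts in different test environments.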
Cao, Ou; Hoffman, Brad E; Moghimi, Babak; Nayak, Sushrusha; Cooper, Mario; Zhou, Shangzhen; Ertl, Hildegund C J; High, Katherine A; Herzog, Roland W
2009-10-01
Immune responses to factor IX (F.IX), a major concern in gene therapy for hemophilia, were analyzed for adeno-associated viral (AAV-2) gene transfer to skeletal muscle and liver as a function of the F9 underlying mutation. Vectors identical to those recently used in clinical trials were administered to four lines of hemophilia B mice on a defined genetic background [C3H/HeJ with deletion of endogenous F9 and transgenic for a range of nonfunctional human F.IX (hF.IX) variants]. The strength of the immune response to AAV-encoded F.IX inversely correlated with the degree of conservation of endogenous coding information and levels of endogenous antigen. Null mutation animals developed T- and B-cell responses in both protocols. However, inhibitor titers were considerably higher upon muscle gene transfer (or protein therapy). Transduced muscles of Null mice had strong infiltrates with CD8+ cells, which were much more limited in the liver and not seen for the other mutations. Sustained expression was achieved with liver transduction in mice with crm(-) nonsense and missense mutations, although they still formed antibodies upon muscle gene transfer. Therefore, endogenous expression prevented T-cell responses more effectively than antibody formation, and immune responses varied substantially depending on the protocol and the underlying mutation.
Study on Response Function of Organic Liquid Scintillator for High-Energy Neutrons
NASA Astrophysics Data System (ADS)
Satoh, Daiki; Sato, Tatsuhiko; Endo, Akira; Yamaguchi, Yasuhiro; Takada, Masashi; Ishibashi, Kenji
2005-05-01
Response functions of liquid organic scintillator for neutrons up to 800 MeV have been measured at the Heavy-Ion Medical Accelerator in Chiba (HIMAC) of National Institute of Radiological Sciences (NIRS). 800-MeV/u Si ions and 400-MeV/u C ions bombarded a thick carbon target to produce neutrons. The kinetic energies of emitted neutrons were determined by the time-of-flight (TOF) method. Light output for neutrons was evaluated by eliminating events due to gamma-rays and charged particles. The measured response functions were compared with calculations using SCINFUL-QMD and CECIL codes. It was found that SCINFUL-QMD reproduced our experimental data adequately.
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei
2015-06-01
The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends very strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA, and GEANT4 for benchmark verifications among them and carefully measured values for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons by using the optimal simple spherical and detailed IC models. The measurements were performed in the well-defined (a) four primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) primary 60Co calibration beam, (c) 6 MV and 10 MV photon, (d) 6 MeV and 18 MeV electron LINACs in hospital and (e) BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. In the Mg(Ar) chamber, MCNP5 showed lower response than the other codes for the photon energy region below 0.1 MeV and presented similar response above 0.2 MeV (agreeing within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 agreed with the measurement data within 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams.
For the Mg(Ar) chamber, however, the deviations reached 7.8-16.5% for X-ray beams below 120 kVp. In this study, we were especially interested in BNCT doses, where the low-energy photon contribution is small enough to ignore; the MCNP model is recognized as the most suitable to simulate the widely distributed photon-electron and neutron energy responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment via the detector's neutron and photon responses.
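The "chamber current from absorbed dose rate of cavity gas" conversion mentioned above follows from the definition of W, the mean energy expended per ion pair. A sketch with hypothetical cavity parameters; W/e = 33.97 J/C is the standard dry-air value, and the TE and Ar fill gases have their own values:

```python
def chamber_current(dose_rate_gy_s, gas_mass_kg, w_over_e_j_per_c):
    """Saturation current of an ionization chamber:
    I = (absorbed dose rate to the gas) * (gas mass) / (W/e),
    since dose * mass is energy deposited per second and W/e converts
    deposited energy to liberated charge."""
    return dose_rate_gy_s * gas_mass_kg / w_over_e_j_per_c

# hypothetical ~1 cm^3 air-filled cavity (~1.2 mg) in a 10 mGy/s beam
i_amps = chamber_current(1.0e-2, 1.2e-6, 33.97)   # a few hundred pA
```

Comparing such a computed current with the measured one is exactly the benchmark the abstract describes, with the Monte Carlo code supplying the cavity dose rate.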
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating for each voxel of a slice, from which position in the previous slice at a fixed position in the third dimension it has moved to this position. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method will achieve better compression ratios for medical volume data than with conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, particularly whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can benefit from this approach. We show that with discrete cosine as well as with Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
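Representing the displacement vector field through a scalar potential means the field is recovered as a discrete gradient. A minimal pure-Python sketch of that reconstruction step only; the paper additionally down-samples the potential and fits it by Gauss-Newton, which is not reproduced here:

```python
def displacement_from_potential(phi):
    """Displacement vector field (vy, vx) as the discrete gradient of a scalar
    potential sampled on a 2-D grid (central differences, zero at the border)."""
    h, w = len(phi), len(phi[0])
    vy = [[0.0] * w for _ in range(h)]
    vx = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vy[y][x] = 0.5 * (phi[y + 1][x] - phi[y - 1][x])
            vx[y][x] = 0.5 * (phi[y][x + 1] - phi[y][x - 1])
    return vy, vx

# a linear ramp potential yields a uniform unit shift in x
phi = [[float(x) for x in range(5)] for _ in range(5)]
vy, vx = displacement_from_potential(phi)
```

Storing one scalar per (down-sampled) grid point instead of two vector components is what makes the representation compact, at the price of restricting the field to be irrotational.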
Interface requirements for coupling a containment code to reactor system thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratta, A.J.
1997-07-01
To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed-pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
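The per-time-step data exchange that distinguishes linked codes from the old iterative workflow can be seen in a toy lumped model. Everything below is invented for illustration (conductances, stiffnesses, pressures); it only shows the exchange pattern, not any real system or containment code:

```python
def coupled_blowdown(p_primary, p_containment, steps, dt):
    """Toy per-time-step coupling: the 'system code' computes break flow from the
    current containment back-pressure, and the 'containment code' updates its
    pressure from the received mass. Replacing p_containment in the flow law
    with a fixed value reproduces the uncoupled, feedback-free workflow."""
    flow_coeff = 1.0e-4       # hypothetical break conductance, kg/(s*Pa)
    k_primary = 50.0          # hypothetical primary depressurization stiffness, Pa/kg
    k_containment = 0.2       # hypothetical containment pressurization stiffness, Pa/kg
    for _ in range(steps):
        mdot = flow_coeff * max(p_primary - p_containment, 0.0)   # system side
        p_primary -= mdot * dt * k_primary                        # primary vessel
        p_containment += mdot * dt * k_containment                # containment side
    return p_primary, p_containment

p_sys, p_cont = coupled_blowdown(70.0e5, 1.0e5, steps=2000, dt=0.1)
```

With feedback, the break flow tapers off as the containment pressurizes; the fixed-boundary shortcut overpredicts late-time flow, which is the disadvantage the abstract describes.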
Skyshine line-beam response functions for 20- to 100-MeV photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.
1996-06-01
The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
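Fitting a three-parameter empirical formula to Monte Carlo results, as done above, is a small least-squares exercise. The exact functional form belongs to the integral line-beam method itself, so the sketch below uses an assumed illustrative family R(d) = kappa * d^a * exp(b*d), which is linear in the log and can be fitted through the normal equations:

```python
import math

def solve3(a_mat, b_vec):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    m = [row[:] + [b_vec[i]] for i, row in enumerate(a_mat)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, 3):
            f = m[r][c] / m[c][c]
            for k in range(c, 4):
                m[r][k] -= f * m[c][k]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (m[r][3] - sum(m[r][k] * x[k] for k in range(r + 1, 3))) / m[r][r]
    return x

def fit_line_beam(distances, responses):
    """Fit ln R = ln(kappa) + a*ln(d) + b*d by the normal equations A^T A x = A^T y."""
    rows = [[1.0, math.log(d), d] for d in distances]
    y = [math.log(r) for r in responses]
    ata = [[sum(rows[i][p] * rows[i][q] for i in range(len(rows))) for q in range(3)]
           for p in range(3)]
    aty = [sum(rows[i][p] * y[i] for i in range(len(rows))) for p in range(3)]
    ln_kappa, a, b = solve3(ata, aty)
    return math.exp(ln_kappa), a, b

# synthetic data generated from known parameters, then recovered by the fit
d_vals = [50.0, 100.0, 200.0, 400.0, 700.0, 1000.0]
r_vals = [2.0 * d**-1.5 * math.exp(-0.004 * d) for d in d_vals]
kappa, a, b = fit_line_beam(d_vals, r_vals)
```

With noisy Monte Carlo tallies rather than exact model values, the same fit returns the least-squares parameters instead of recovering them exactly.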
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Xu, Minjie; Tian, Ailing
2017-04-01
A novel optical image encryption scheme is proposed based on a quick response code and a high-dimensional chaotic system, where only the intensity distribution of the encoded information is recorded as ciphertext. Initially, the quick response code is generated from the plain image and placed in the input plane of the double random phase encoding architecture. Then, the code is encrypted to a ciphertext with noise-like distribution by using two cascaded gyrator transforms. In the process of encryption, the parameters such as rotation angles and random phase masks are generated as interim variables and functions based on the Chen system. A new phase retrieval algorithm is designed to reconstruct the initial quick response code in the process of decryption, in which a priori information such as the three position detection patterns is used as the support constraint. The original image can be obtained without any energy loss by scanning the decrypted code with mobile devices. The ciphertext image is a real-valued function, which is more convenient for storage and transmission. Meanwhile, the security of the proposed scheme is greatly enhanced due to the high sensitivity to the initial values of the Chen system. Extensive cryptanalysis and simulation have been performed to demonstrate the feasibility and effectiveness of the proposed scheme.
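The double random phase encoding (DRPE) architecture at the heart of the scheme can be sketched in a few lines. This is the classical DRPE with Fourier transforms standing in for the paper's gyrator transforms, and it keeps the complex ciphertext so decryption is direct; the paper instead records only the intensity and recovers the code by phase retrieval with the QR position-detection patterns as the support constraint:

```python
import cmath
import random

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, sufficient for a short signal."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def random_phase_mask(n, rng):
    return [cmath.exp(2j * cmath.pi * rng.random()) for _ in range(n)]

def drpe_encrypt(signal, m1, m2):
    """Input-plane mask, transform, Fourier-plane mask, inverse transform."""
    spectrum = dft([s * p for s, p in zip(signal, m1)])
    return dft([v * p for v, p in zip(spectrum, m2)], inverse=True)

def drpe_decrypt(cipher, m2):
    """Undo the Fourier-plane mask; the modulus removes the input-plane mask."""
    spectrum = dft(cipher)
    field = dft([v * p.conjugate() for v, p in zip(spectrum, m2)], inverse=True)
    return [abs(v) for v in field]

rng = random.Random(7)                               # chaotic generator stand-in
signal = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]    # stand-in for binary QR modules
m1, m2 = random_phase_mask(8, rng), random_phase_mask(8, rng)
cipher = drpe_encrypt(signal, m1, m2)
recovered = drpe_decrypt(cipher, m2)
```

In the paper the mask values are derived from the Chen system's trajectory rather than a generic RNG, which is what ties the key space to the chaotic initial conditions.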
Bit-Wise Arithmetic Coding For Compression Of Data
NASA Technical Reports Server (NTRS)
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
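The interval-narrowing idea underlying arithmetic coding can be shown in a few lines. This is a generic floating-point arithmetic coder for a skewed (Laplacian-like) source; the NTRS scheme is bit-wise with fixed-length code words and integer arithmetic, which this sketch does not reproduce:

```python
def cumulative(probs):
    """Map each symbol to its half-open probability interval [lo, hi)."""
    bounds, c = {}, 0.0
    for s, p in probs.items():
        bounds[s] = (c, c + p)
        c += p
    return bounds

def arith_encode(symbols, probs):
    """Narrow [0,1) once per symbol; any number in the final interval codes the message."""
    low, high = 0.0, 1.0
    bounds = cumulative(probs)
    for s in symbols:
        span = high - low
        lo_f, hi_f = bounds[s]
        low, high = low + span * lo_f, low + span * hi_f
    return (low + high) / 2.0

def arith_decode(code, probs, n_symbols):
    """Replay the narrowing, picking at each step the symbol whose interval holds the code."""
    bounds = cumulative(probs)
    low, high, out = 0.0, 1.0, []
    for _ in range(n_symbols):
        span = high - low
        x = (code - low) / span
        for s, (lo_f, hi_f) in bounds.items():
            if lo_f <= x < hi_f:
                out.append(s)
                low, high = low + span * lo_f, low + span * hi_f
                break
    return out

msg = list("aababaaabaaab")                    # skewed source, p(a) >> p(b)
code = arith_encode(msg, {"a": 0.7, "b": 0.3})
```

A single float only has ~52 bits of interval to give away, so practical coders, including bit-wise variants, renormalize and emit bits incrementally instead of holding one shrinking interval.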
Fixed-Rate Tuition Plans: A Survey in Response to Senate Bill 806
ERIC Educational Resources Information Center
State Council of Higher Education for Virginia, 2015
2015-01-01
Legislation introduced in 2015, including Senate Bill 806, sought to amend the "Code of Virginia" regarding fixed four-year tuition and other costs. Eventually, Senate Bill 1183 was incorporated into Senate Bill 806; the substitute amendment directed the board of visitors of each four-year public institution with an in-state…
Implicit Coupling Approach for Simulation of Charring Carbon Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Gokcen, Tahir
2013-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface heating conditions, the implicit approach has to be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and the explicit approach then proves inappropriate, producing severe numerical oscillations of the predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory-based simulation of the Stardust fore-body heat shield is also conducted. The predicted stagnation-point total recession is compared with that predicted using the chemical equilibrium surface assumption.
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
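The fixed-point-plus-Anderson-mixing idea generalizes beyond SCF calculations; in one dimension, depth-1 Anderson mixing reduces to the secant update on the residual f(x) = g(x) - x. A self-contained toy comparison, where g = cos is merely a stand-in map, not the OF-DFT SCF operator:

```python
import math

def plain_fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Unaccelerated Picard iteration x <- g(x); returns (solution, iterations)."""
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iter:
        x, n = g(x), n + 1
    return x, n

def anderson1_fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Depth-1 Anderson mixing: a secant step on the residual f(x) = g(x) - x."""
    x_prev, x = x0, g(x0)
    f_prev = g(x0) - x0
    n = 1
    while abs(g(x) - x) > tol and n < max_iter:
        f = g(x) - x
        step = f * (x - x_prev) / (f - f_prev)   # secant correction
        x_prev, f_prev, x = x, f, x - step
        n += 1
    return x, n

x_plain, n_plain = plain_fixed_point(math.cos, 1.0)
x_and, n_and = anderson1_fixed_point(math.cos, 1.0)
```

Both converge to the same fixed point, but the accelerated iteration does so in far fewer evaluations, which is the payoff the paper reports for its SCF loop (there with a history depth greater than one and vector-valued residuals).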
An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks
Yu, Shidi; Liu, Xiao; Cai, Zhiping; Wang, Tian
2018-01-01
Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead, since some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone.
In the end, a comprehensive performance analysis and simulation shows that the proposed ABRCD scheme achieves better performance in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11~78.42%, the number of broadcasts is reduced by 36.18~94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%. PMID:29748525
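The effect of enlarging the broadcast radius in energy-rich regions can be seen in a toy line network. All numbers below are invented, and the real scheme's backbone construction and duty cycling are not modeled; the sketch only shows why a larger radius cuts both hop count and broadcast count:

```python
def broadcasts_to_cover(n_nodes, radius_of):
    """Broadcasts needed to push a code image from node 0 along a line network,
    where a broadcast from node i reaches radius_of(i) further hops."""
    frontier, broadcasts = 0, 0
    while frontier < n_nodes - 1:
        broadcasts += 1
        frontier += radius_of(frontier)
    return frontier and broadcasts

# 13 nodes in a line, code source at node 0; nodes with surplus energy
# (here, hypothetically, indices < 6) broadcast with a larger radius
uniform = broadcasts_to_cover(13, lambda i: 1)                   # fixed radius, MTB-style
adaptive = broadcasts_to_cover(13, lambda i: 2 if i < 6 else 1)  # ABRCD-style adaption
```

The adaptive assignment reaches the network edge in fewer transmissions, mirroring the reductions in broadcast count and delay reported above, while radius enlargement is confined to the energy-surplus region.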
NASA Astrophysics Data System (ADS)
Huang, Han-Xiong; Ruan, Xi-Chao; Chen, Guo-Chang; Zhou, Zu-Ying; Li, Xia; Bao, Jie; Nie, Yang-Bo; Zhong, Qi-Ping
2009-08-01
The light output function of a φ50.8 mm × 50.8 mm BC501A scintillation detector was measured in the neutron energy region of 1 to 30 MeV by fitting the pulse height (PH) spectra for neutrons with the simulations from the NRESP code at the edge range. Using the new light output function, the neutron detection efficiency was determined with two Monte-Carlo codes, NEFF and SCINFUL. The calculated efficiency was corrected by comparing the simulated PH spectra with the measured ones. The determined efficiency was verified at the near-threshold region and normalized with a Proton-Recoil-Telescope (PRT) at the 8-14 MeV energy region.
NASA Astrophysics Data System (ADS)
Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Fuchs, B.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2015-02-01
In previous studies of KASCADE-Grande data, a Monte Carlo simulation code based on the GEANT3 program has been developed to describe the energy deposited by EAS particles in the detector stations. In an attempt to decrease the simulation time and ensure compatibility with the geometry description in standard KASCADE-Grande analysis software, several structural elements have been neglected in the implementation of the Grande station geometry. To improve the agreement between experimental and simulated data, a more accurate simulation of the response of the KASCADE-Grande detector is necessary. A new simulation code has been developed based on the GEANT4 program, including a realistic geometry of the detector station with structural elements that have not been considered in previous studies. The new code is used to study the influence of a realistic detector geometry on the energy deposited in the Grande detector stations by particles from EAS events simulated by CORSIKA. Lateral Energy Correction Functions are determined and compared with previous results based on GEANT3.
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen
2017-04-01
Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed on the basis of the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed codes based on the GMDH and DT algorithms use data for the training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse height distributions simulated by MCNPX-ESUT for each energy spectrum are used as the output and input data, respectively. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The 241Am-9Be and 252Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for the considered fast neutron sources have excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than those obtained from the DT. The results obtained in the present study compare well with a previously published paper based on the logsig and tansig transfer functions.
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals, since they carry nucleotides as contents. Nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF), which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e. 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms conventional filters in processing DNA signal contents.
The algorithm, applied to a variety of DNA datasets, produced noteworthy discrimination between coding and non-coding regions in contrast to fixed-length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
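A minimal sketch of the window filtering idea is given below (Python; the standard fuzzy s-shaped and z-shaped membership functions are used, but the window size, the binary nucleotide indicator signal, and the plain median step are simplifications of the actual FAWMF, not the paper's algorithm):

```python
def smf(x, a, b):
    """Standard s-shaped fuzzy membership function on [a, b]."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def zmf(x, a, b):
    """Z-shaped membership function: the mirror image of the s-shape."""
    return 1.0 - smf(x, a, b)

def window_median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def filter_signal(indicator, win=9):
    """Median-filter the binary indicator sequence of one nucleotide
    over a sliding window, suppressing isolated spikes (noise)."""
    half = win // 2
    out = []
    for i in range(len(indicator)):
        w = indicator[max(0, i - half): i + half + 1]
        out.append(window_median(w))
    return out
```

An isolated `1` in a run of `0`s (a noise spike in the nucleotide indicator) is removed by the median step, which is the basic noise-suppression mechanism the filter builds on.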
Comparing Geant4 hadronic models for the WENDI-II rem meter response function.
Vanaudenhove, T; Dubus, A; Pauly, N
2013-01-01
The WENDI-II rem meter is one of the most popular neutron dosemeters used to assess a useful quantity in radiation protection, namely the ambient dose equivalent. This is due to its high sensitivity and its energy response, which approximately follows the conversion function between neutron fluence and ambient dose equivalent in the range from thermal energies to 5 GeV. Simulation of the WENDI-II response function with the Geant4 toolkit is thus perfectly suited to comparing the low- and high-energy hadronic models provided by this Monte Carlo code. The results showed that the thermal treatment of hydrogen in polyethylene for neutrons below 4 eV has a great influence over the whole detector range. Above 19 MeV, both the Bertini Cascade and Binary Cascade models show good agreement with the results found in the literature, while the low-energy parameterised models are not suitable for this application.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
NASA Astrophysics Data System (ADS)
Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei
2011-10-01
High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristic. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate the energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode closely resembled the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using ITS-mode agreed very well with the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work gives better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions measured with lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
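The "discrete peaks, iterative continuum" idea can be illustrated with a generic multiplicative unfolding step (Python; this is a textbook-style iteration on a small response matrix, not the original Univac 1108 FORTRAN algorithm, and the matrix values are invented):

```python
def forward(R, x):
    """Fold a trial spectrum x through response matrix R (lists of lists)."""
    return [sum(R[i][j] * x[j] for j in range(len(x))) for i in range(len(R))]

def unfold(R, measured, iters=50):
    """Generic multiplicative iterative unfolding: repeatedly scale the
    trial spectrum by the ratio of measured to refolded counts."""
    x = measured[:]                         # start from the measured shape
    for _ in range(iters):
        f = forward(R, x)
        x = [x[j] * (measured[j] / f[j]) if f[j] > 0 else 0.0
             for j in range(len(x))]
    return x
```

With a lower-triangular response (full-energy peak plus a Compton tail feeding lower channels), the iteration recovers the true spectrum from the measured one.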
Creating and Testing Simulation Software
NASA Technical Reports Server (NTRS)
Heinich, Christina M.
2013-01-01
The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded-modulation.
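The joint selection of constellation size and code rate can be sketched as a simple search (Python; the candidate sets and the "closest to capacity from below" criterion are illustrative assumptions, not the paper's exact adaptation rule):

```python
def pick_mode(capacity_bits, bits_per_symbol_options=(2, 3, 4, 6, 8),
              code_rates=(0.5, 0.6, 0.75, 0.8, 0.9)):
    """Pick the (bits/symbol, code rate) pair whose product (information
    bits per symbol) is closest to, but not above, the estimated channel
    capacity. Returns (efficiency, bits_per_symbol, rate) or None."""
    best = None
    for m in bits_per_symbol_options:
        for r in code_rates:
            eff = m * r
            if eff <= capacity_bits and (best is None or eff > best[0]):
                best = (eff, m, r)
    return best
```

For a channel estimated at 3.7 information bits per symbol, the search settles on 16-QAM-like signaling (4 bits/symbol) with a rate-0.9 code, 3.6 information bits per symbol.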
Fly Photoreceptors Demonstrate Energy-Information Trade-Offs in Neural Coding
Niven, Jeremy E; Anderson, John C; Laughlin, Simon B
2007-01-01
Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. Using intracellular recordings from the intact retinas of four flies, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1–6 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, ∼20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from ∼200 bits s−1 in D. melanogaster to ∼1,000 bits s−1 in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput. PMID:17373859
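The falling unit cost of information follows directly from a fixed baseline cost (maintaining the dark resting potential) plus a marginal signalling cost; the sketch below uses invented ATP numbers purely for illustration of that arithmetic:

```python
def cost_per_bit(info_rate, fixed_cost, marginal_cost):
    """Unit cost of information (ATP per bit) when a fixed metabolic
    cost must be paid regardless of the information rate.
    info_rate: bits/s; fixed_cost: ATP/s; marginal_cost: ATP per bit."""
    total = fixed_cost + marginal_cost * info_rate
    return total / info_rate
```

With the same fixed and marginal costs, a photoreceptor transmitting 1000 bits/s pays far less per bit than one transmitting 100 bits/s, which is the law-of-diminishing-returns effect the abstract describes.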
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
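The abstract does not give the three-parameter form itself; a commonly used shape for such distance fits is a power law with exponential attenuation, sketched here purely as an assumption about the functional family (the parameter values in the test are invented):

```python
import math

def lbrf_fit(d, a, b, c):
    """Assumed three-parameter distance fit R(d) = a * d**b * exp(-d/c),
    where d is the source-to-detector distance in metres and a, b, c
    are fitted empirically for each source energy and emission angle."""
    return a * d**b * math.exp(-d / c)
```

Whatever the exact published form, the fitted response must decay with distance over the stated 0 to 2,500 m range, which this family does.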
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients undergoing cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior-art transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions to correlate DNA and oxidative damage with non-targeted effects such as bystander signals; such capabilities are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, the shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material.
The GERM code makes the numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
Wang, Fangnian; Deeney, Jude T.; Denis, Gerald V.
2014-01-01
Disturbed body energy balance can lead to obesity and obesity-driven diseases such as Type 2 diabetes, which have reached epidemic levels. Evidence indicates that obesity-induced inflammation is a major cause of insulin resistance and Type 2 diabetes. Environmental factors, such as nutrients, affect body energy balance through epigenetic or chromatin-based mechanisms. As a bromodomain and extra-terminal (BET) family transcription regulator, Brd2 regulates the expression of many genes through interpretation of chromatin codes, and participates in the regulation of body energy balance and immune function. In the severely obese state, Brd2 knockdown in mice prevented obesity-induced inflammatory responses, protected animals from Type 2 diabetes, and thus uncoupled obesity from diabetes. Brd2 provides an important model for investigating the function of transcription regulators in the development of obesity and diabetes; it also provides a possible target for treating obesity and diabetes through modulation of the function of a chromatin code reader. PMID:23374712
Numerical modelling of biomass combustion: Solid conversion processes in a fixed bed furnace
NASA Astrophysics Data System (ADS)
Karim, Md. Rezwanul; Naser, Jamal
2017-06-01
Increasing demand for energy and rising concerns over global warming have urged the use of renewable energy sources to support the sustainable development of the world. Biomass is a renewable energy source which has become an important fuel for producing thermal energy or electricity. It is an eco-friendly source of energy, as it reduces carbon dioxide emissions. Combustion of solid biomass is a complex phenomenon due to its large variety of forms and physical structures. Among various systems, fixed bed combustion is the most commonly used technique for thermal conversion of solid biomass, but inadequate knowledge of the complex solid conversion processes has limited the development of such combustion systems. Numerical modelling of this combustion system has some advantages over experimental analysis: many important system parameters (e.g. temperature, density, solid fraction) can be estimated over the entire domain under different working conditions. In this work, a complete numerical model is used for the solid conversion processes of biomass combustion in a fixed bed furnace. The combustion system is divided into solid and gas phases. The model includes several sub-models to characterize the solid phase of the combustion with several variables. User-defined subroutines are used to introduce the solid phase variables into a commercial CFD code, while the gas phase of the combustion is resolved using the built-in modules of the CFD code. The heat transfer model is modified to predict the temperatures of the solid and gas phases, with a special radiation heat transfer solution to account for the high absorptivity of the medium. Considering all solid conversion processes, the solid phase variables are evaluated. The results obtained are discussed with reference to an experimental burner.
ARES: automated response function code. Users manual. [HPGAM and LSQVM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maung, T.; Reynolds, G.M.
This ARES user's manual provides detailed instructions for a general understanding of the Automated Response Function Code and gives step-by-step instructions for using the complete code package on an HP-1000 system. The code is designed to calculate response functions of NaI gamma-ray detectors with cylindrical or rectangular geometries.
Coded aperture imaging with self-supporting uniformly redundant arrays
Fenimore, Edward E.
1983-01-01
A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
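The balance correlation decoding step can be sketched in one dimension (Python; the 7-element cyclic-difference-set aperture in the test is a toy stand-in for a real URA, and support cells, which would map to 0, are marked `None`):

```python
def balance_decode(detector, aperture):
    """Cross-correlate the recorded detector pattern with a decoding
    array that assigns +1 to holes (1), -1 to opaque cells (0) and 0
    to self-supporting web cells (None), as in balance correlation."""
    n = len(aperture)
    decode = [{1: 1, 0: -1, None: 0}[a] for a in aperture]
    return [sum(detector[(k + i) % n] * decode[i] for i in range(n))
            for k in range(n)]
```

For a point source, the detector records a shifted copy of the aperture pattern, and the balance correlation reconstructs a single sharp peak over a flat background, which is the key imaging property of URA-type apertures.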
Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications
NASA Astrophysics Data System (ADS)
Blackburn, Megan Satterfield
2009-12-01
Radiation therapy has become a very important method for treating cancer patients. It is thus extremely important to accurately determine the location of energy deposition during these treatments, maximizing the dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and has been used very successfully for neutron transport in whole-core criticality analysis. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems, to determine the energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for evaluating the COMET code and determining its strengths and weaknesses for these medical physics applications. For the response function calculations, Legendre polynomial expansions are necessary in space, energy, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm × 1 cm and 0.5 cm × 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm × 1 cm, 0.5 cm × 0.5 cm, and 0.25 cm × 0.25 cm coarse-mesh cases.
Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo results with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case: better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology; heterogeneous results were obtained in a time comparable to that of the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order had been used for each incident photon beam energy so that better comparisons could be made; this second study found that it is optimal to use different expansion orders based on the incident beam energy. Recommendations for future work with this method include testing higher expansion orders, or possibly modifying the code to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with an associated energy and angular distribution.
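The "precomputed local responses plus linear superposition" idea behind COMET can be illustrated with a 1-D toy (Python; the constant reflection/transmission coefficients stand in for the real per-mesh response-function library, and all numbers are invented for illustration):

```python
def comet_sweep(n_mesh, refl=0.5, trans=0.4, source=1.0, iters=200):
    """Toy 1-D coarse-mesh transport: each mesh maps its incoming
    partial currents to outgoing ones via a precomputed response
    (here just reflection and transmission constants, with absorption
    since refl + trans < 1). The global solution is obtained by
    iterating the interface currents to convergence."""
    right = [0.0] * (n_mesh + 1)   # rightward current at each interface
    left = [0.0] * (n_mesh + 1)    # leftward current at each interface
    right[0] = source              # beam entering at the left boundary
    for _ in range(iters):
        for i in range(n_mesh):
            out_r = trans * right[i] + refl * left[i + 1]
            out_l = refl * right[i] + trans * left[i + 1]
            right[i + 1], left[i] = out_r, out_l
    return right, left
```

Once the interface currents converge, the flux (and hence the energy deposition) inside each coarse mesh follows from its local response functions alone, which is why the global solve is so much cheaper than full Monte Carlo.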
FLUKA simulation of TEPC response to cosmic radiation.
Beck, P; Ferrari, A; Pelliccioni, M; Rollet, S; Villari, R
2005-01-01
Aircrew exposure to cosmic radiation can be assessed by calculations with codes validated against measurements. However, the relationship between doses in the free atmosphere, as calculated by the codes, and the results of measurements performed within an aircraft is still unclear. The response of a tissue-equivalent proportional counter (TEPC) has already been simulated successfully with the Monte Carlo transport code FLUKA. Absorbed dose rate and ambient dose equivalent rate distributions as functions of lineal energy have been simulated for several reference sources and mixed radiation fields, and good agreement between simulation and measurements has been demonstrated. In order to evaluate the influence of aircraft structures on aircrew exposure assessment, the response of the TEPC in the free atmosphere and on board an aircraft is now simulated. The calculated results are discussed and compared with other calculations and measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A
2017-03-21
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
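The fixed-potential formulation described in this record amounts to replacing the canonical objective by a grand free energy. In generic notation (the symbols below are assumed for illustration and are not necessarily the paper's):

```latex
% Conventional DFT: minimize the free energy A[n] at fixed electron number N.
% Grand-canonical DFT: minimize the grand free energy at fixed electron
% chemical potential \mu, which is set by the electrode potential.
\Phi[n] = A[n] - \mu \, N[n],
\qquad
\frac{\delta \Phi[n]}{\delta n(\mathbf{r})} = 0
\quad \text{at fixed } \mu .
```

Here the electron number N[n] adjusts self-consistently during the minimization, with the compensating charge taken up by the continuum electrolyte.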
Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors
2017-01-01
Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. PMID:28381642
Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors.
Heras, Francisco J H; Anderson, John; Laughlin, Simon B; Niven, Jeremy E
2017-04-01
Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K + conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. © 2017 The Author(s).
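The bandwidth-versus-cost trade-off described above can be sketched for a passive RC membrane: both the corner frequency and the steady ionic current scale with the total conductance. This is an illustrative toy, not the authors' photoreceptor model; the capacitance, conductances, and driving voltage are assumed values.

```python
import math

# Toy RC-membrane sketch of the bandwidth/energy-cost trade-off:
# raising conductance g buys bandwidth but costs ionic current.
C = 300e-12          # membrane capacitance in farads (assumed value)

def cutoff_hz(g):
    """Corner frequency of a passive RC membrane with total conductance g."""
    return g / (2 * math.pi * C)

def ionic_cost(g, driving_v=70e-3):
    """Steady ionic current ~ g * driving voltage: the metabolic price."""
    return g * driving_v

g_dim, g_bright = 50e-9, 500e-9   # low vs high conductance states (assumed)
```

Because both quantities are linear in g, a tenfold rise in conductance buys tenfold bandwidth at tenfold cost, which is why investing in bandwidth only when light levels demand it saves energy.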
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergmann, Ryan M.; Rowland, Kelly L.
2017-04-12
WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in either criticality or fixed-source mode, but fixed-source mode is currently not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.
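The data-parallel style that maps well onto GPUs can be illustrated in NumPy: every particle is advanced simultaneously by array operations rather than one particle at a time in a task-parallel loop. This is a toy sketch, not WARP's actual CUDA code; the cross section, absorption probability, and step count are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data-parallel transport step: advance ALL particles at once with
# whole-array operations (the style GPUs execute efficiently).
n = 10_000
x = np.zeros(n)                    # particle positions
alive = np.ones(n, dtype=bool)     # active-particle mask
sigma_t = 0.5                      # total macroscopic cross section (toy)

for _ in range(20):
    # Sample a flight distance for every particle simultaneously.
    dist = rng.exponential(1.0 / sigma_t, size=n)
    x = np.where(alive, x + dist, x)
    # Absorb 10% of particles at each collision step (toy physics).
    alive &= rng.random(n) > 0.1

mean_depth = x[~alive].mean()      # mean absorption depth of killed particles
```

The contrast with a CPU-style history loop is the point: there is no per-particle branching in Python, only masked array updates that a GPU (or SIMD unit) can execute in lockstep.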
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro
2015-11-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N.; Iijima, Toshio
2015-01-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201
Mechanism on brain information processing: Energy coding
NASA Astrophysics Data System (ADS)
Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa
2006-09-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a new scientific theory that offers a unique mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, they can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, they estimate that the theory has very important consequences for quantitative research on cognitive function.
Energy coding in biological neural networks
Zhang, Zhikang
2007-01-01
According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new scientific theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model can reveal mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research on cognitive function. PMID:19003513
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward-Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight-window) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally of O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
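The core CADIS recipe can be sketched numerically: a cheap deterministic solve supplies an approximate adjoint (importance) flux, from which consistent source-biasing probabilities and weight-window centers are derived so that each sampled particle contributes roughly equally to the detector tally. The four-cell importance map and source below are invented for illustration.

```python
# Sketch of the CADIS recipe. Given an approximate adjoint flux
# (importance toward the detector) from a fast deterministic solve,
# derive source biasing and weight-window centers. Numbers are invented.

adjoint_flux = [0.01, 0.05, 0.2, 1.0]   # importance of each source cell
source = [1.0, 1.0, 0.0, 0.0]           # physical source distribution

# Detector response estimate: R = sum_i q_i * phi_dagger_i
R = sum(q * imp for q, imp in zip(source, adjoint_flux))

# Biased source: sample each cell proportionally to q * phi_dagger / R,
# so particles are born preferentially where they matter to the tally.
biased_source = [q * imp / R for q, imp in zip(source, adjoint_flux)]

# Weight-window centers: w = R / phi_dagger, so that weight * importance
# is constant and every particle contributes evenly to the detector.
weights = [R / imp for imp in adjoint_flux]
```

The consistency between the biased source and the weight windows (their product with the importance is the same constant R everywhere) is what distinguishes CADIS from ad hoc variance reduction.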
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes; these are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
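The fixed-input-block-size notion can be made concrete with a small arithmetic sketch: the information block length k stays constant while only the number of parity bits varies, producing a family of compatible rates. The block length and parity counts below are invented and this is not the protograph construction itself.

```python
# Rate compatibility with a fixed input block size: the information
# block length k is constant; only the parity-bit count p varies.
# (Illustrative arithmetic only, not a protograph LDPC construction.)

k = 1024                          # fixed input block size (assumed)

def code_rate(parity_bits):
    """Rate of a systematic code with k info bits and p parity bits."""
    return k / (k + parity_bits)

# One compatible family: more parity bits means a lower, more robust rate.
family = {p: code_rate(p) for p in (256, 512, 1024, 2048)}
```

A decoder built for the lowest rate in such a family can decode all the higher rates, which is the operational meaning of rate compatibility.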
MCNP Version 6.2 Release Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, Christopher John; Bull, Jeffrey S.; Solomon, C. J.
Monte Carlo N-Particle, or MCNP®, is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released in order to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools to facilitate ease of use of MCNP version 6.2 as well as the analysis of results. These release notes serve as a general guide for the new/improved physics, sources, data, tallies, unstructured mesh, code enhancements, and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual, which can be found at http://mcnp.lanl.gov. This release of MCNP version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues with which users should familiarize themselves (see Appendix).
NASA Astrophysics Data System (ADS)
Vassiliadis, D.
2008-11-01
The solar wind velocity is the primary driver of the electron flux variability in Earth's radiation belts. The response of the logarithmic flux ("log-flux") to this driver has been determined at geosynchronous orbit and at a fixed energy [Baker, D.N., McPherron, R.L., Cayton, T.E., Klebesadel, R.W., 1990. Linear prediction filter analysis of relativistic electron properties at 6.6 RE. Journal of Geophysical Research 95(A9), 15,133-15,140] and as a function of L shell at fixed energy [Vassiliadis, D., Klimas, A.J., Kanekal, S.G., Baker, D.N., Weigel, R.S., 2002. Long-term average, solar-cycle, and seasonal response of magnetospheric energetic electrons to the solar wind speed. Journal of Geophysical Research 107, doi:10.1029/2001JA000506]. In this paper we generalize the response model as a function of particle energy (0.8-6.4 MeV) using POLAR HIST measurements. All three response peaks identified earlier figure prominently in the high-altitude POLAR measurements. The positive response around geosynchronous orbit is peak P1 (τ = 2±1 d; L = 5.8±0.5; E = 0.8-6.4 MeV), associated with high-speed, low-density streams and the ULF wave activity they produce. Deeper in the magnetosphere, the response is dominated by a positive peak P0 (0±1 d; 2.9±0.5 RE; 0.8-1.1 MeV), of shorter duration and producing lower-energy electrons. The P0 response occurs during the passage of geoeffective structures containing high-IMF and high-density parts, such as ICMEs and other mass ejecta. Finally, the negative peak V1 (0±0.5 d; 5.7±0.5 RE; 0.8-6.4 MeV) is associated with the "Dst effect", the quasi-adiabatic transport produced by ring-current intensifications. As energies increase, the P1 and V1 peaks appear at lower L, while the Dst effect becomes more pronounced in the region L < 3. The P0 peak effectively disappears for E > 1.6 MeV because of low statistics, although it is evident in individual events.
The continuity of the response across radial and energy scales supports the earlier hypothesis that each of the three modes corresponds to a qualitatively different type of large-scale electron acceleration and transport.
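The linear prediction filters cited in this record model the log-flux as a convolution of the solar wind speed with an impulse response h(τ). A minimal sketch of that filter structure (the lag weights and speed series below are invented, not fitted values from the papers):

```python
# Sketch of a linear prediction filter: predicted log-flux is a discrete
# convolution of the solar wind speed series with an impulse response h.
# The filter taps (lags of 0..3 days) are invented for illustration.

h = [0.0, 0.3, 0.5, 0.2]

def predict_logflux(speed_series):
    """logflux(t) = sum_tau h[tau] * v[t - tau], causal convolution."""
    out = []
    for t in range(len(speed_series)):
        acc = 0.0
        for tau, w in enumerate(h):
            if t - tau >= 0:
                acc += w * speed_series[t - tau]
        out.append(acc)
    return out

pred = predict_logflux([400, 400, 600, 600, 400])
```

Fitting one such h(τ) per L shell and energy channel is what produces the response peaks (P1, P0, V1) as functions of lag, L, and E.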
Di Remigio, Roberto; Beerepoot, Maarten T P; Cornaton, Yann; Ringholm, Magnus; Steindal, Arnfinn Hykkerud; Ruud, Kenneth; Frediani, Luca
2016-12-21
The study of high-order absorption properties of molecules is a field of growing importance. Quantum-chemical studies can help design chromophores with desirable characteristics. Given that most experiments are performed in solution, it is important to devise a cost-effective strategy to include solvation effects in quantum-chemical studies of these properties. Here we present an open-ended formulation of self-consistent field (SCF) response theory for a molecular solute coupled to a polarizable continuum model (PCM) description of the solvent. Our formulation relies on the open-ended, density matrix-based quasienergy formulation of SCF response theory of Thorvaldsen et al. [J. Chem. Phys., 2008, 129, 214108] and the variational formulation of the PCM, as presented by Lipparini et al. [J. Chem. Phys., 2010, 133, 014106]. Within the PCM approach to solvation, the mutual solute-solvent polarization is represented by means of an apparent surface charge (ASC) spread over the molecular cavity defining the solute-solvent boundary. In the variational formulation, the ASC is an independent, variational degree of freedom. This allows us to formulate response theory for molecular solutes in the fixed-cavity approximation up to arbitrary order and with arbitrary perturbation operators. For electric dipole perturbations, pole and residue analyses of the response functions naturally lead to the identification of excitation energies and transition moments. We document the implementation of this approach in the Dalton program package using a recently developed open-ended response code and the PCMSolver libraries, and present results for one-, two-, three-, four- and five-photon absorption processes of three small molecules in solution.
Radiological characteristics of MRI-based VIP polymer gel under carbon beam irradiation
NASA Astrophysics Data System (ADS)
Maeyama, T.; Fukunishi, N.; Ishikawa, K. L.; Furuta, T.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Fukuda, S.
2015-02-01
We study the radiological characteristics of VIP polymer gel dosimeters under carbon beam irradiation at energies of 135 and 290 AMeV. To evaluate the dose response of VIP polymer gels, the transverse (spin-spin) relaxation rate R2 of the dosimeters, measured by magnetic resonance imaging (MRI), is analyzed as a function of linear energy transfer (LET) rather than of penetration depth, as was done in previous reports. LET is evaluated using the particle transport simulation code PHITS. Our results reveal that the dose response decreases with increasing dose-averaged LET and that the dose response-LET relation also varies with the incident carbon beam energy. The latter can be explained by taking into account the contribution from fragmentation products.
Overview of Particle and Heavy Ion Transport Code System PHITS
NASA Astrophysics Data System (ADS)
Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit
2014-06-01
A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS, such as its source, executable, and data-library files, are assembled in one package and then distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Chandler, G.A.; Biggs, F.
X-ray-producing hohlraums are being studied as indirect drives for Inertial Confinement Fusion targets. In a 1994 target series on the PBFA II accelerator, cylindrical hohlraum targets were heated by an intense Li⁺ ion beam and viewed by an array of 13 time-resolved, filtered x-ray detectors (XRDs). The UFO unfold code and its suite of auxiliary functions were used extensively in obtaining time-resolved x-ray spectra and radiation temperatures from this diagnostic. UFO was also used to obtain fitted response functions from calibration data, to simulate data from blackbody x-ray spectra of interest, to determine the suitability of various unfolding parameters (e.g., energy domain, energy partition, smoothing conditions, and basis functions), to interpolate the XRD signal traces, and to unfold experimental data. The simulation capabilities of the code were useful in understanding an anomalous feature in the unfolded spectra at low photon energies (≤ 100 eV). Uncertainties in the differential and energy-integrated unfolded spectra were estimated from uncertainties in the data. The time history of the radiation temperature agreed well with independent calculations of the wall temperature in the hohlraum.
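The unfolding problem described here is an inverse problem: each filtered detector signal is the spectrum weighted by that channel's response, and the spectrum must be recovered from the signal vector. A bare least-squares toy (a real unfold such as UFO adds smoothing, basis functions, and positivity constraints; the response matrix below is invented):

```python
import numpy as np

# Toy spectral unfold: signals d = R @ s; recover spectrum s from d
# given the channel response matrix R. Regularization is omitted.

R = np.array([[1.0, 0.5, 0.1],    # response of each filtered channel
              [0.2, 1.0, 0.4],    # to each photon-energy bin (invented)
              [0.0, 0.3, 1.0]])
s_true = np.array([2.0, 1.0, 0.5])   # "true" spectrum for the demo
d = R @ s_true                        # simulated detector signals

# With a well-conditioned square R, least squares recovers s exactly;
# real problems are ill-posed and need smoothing constraints.
s_unfolded, *_ = np.linalg.lstsq(R, d, rcond=None)
```

The ill-posedness in practice (many energy bins, few channels, noisy data) is exactly why UFO's smoothing conditions and choice of basis functions matter to the result.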
Fluorogenic RNA Mango aptamers for imaging small non-coding RNAs in mammalian cells.
Autour, Alexis; C Y Jeng, Sunny; D Cawte, Adam; Abdolahzadeh, Amir; Galli, Angela; Panchapakesan, Shanker S S; Rueda, David; Ryckelynck, Michael; Unrau, Peter J
2018-02-13
Despite having many key roles in cellular biology, directly imaging biologically important RNAs has been hindered by a lack of fluorescent tools equivalent to the fluorescent proteins available to study cellular proteins. Ideal RNA labelling systems must preserve biological function, have photophysical properties similar to existing fluorescent proteins, and be compatible with established live and fixed cell protein labelling strategies. Here, we report a microfluidics-based selection of three new high-affinity RNA Mango fluorogenic aptamers. Two of these are as bright or brighter than enhanced GFP when bound to TO1-Biotin. Furthermore, we show that the new Mangos can accurately image the subcellular localization of three small non-coding RNAs (5S, U6, and a box C/D scaRNA) in fixed and live mammalian cells. These new aptamers have many potential applications to study RNA function and dynamics both in vitro and in mammalian cells.
Mirzajani, N; Ciolini, R; Di Fulvio, A; Esposito, J; d'Errico, F
2014-06-01
Experimental activities are underway at the INFN Legnaro National Laboratories (LNL) (Padua, Italy) and at Pisa University, aimed at measuring the angular-dependent neutron energy spectra produced by the ⁹Be(p,xn) reaction under a 5 MeV proton beam. This work has been performed in the framework of the INFN TRASCO-BNCT project. A Bonner Sphere Spectrometer (BSS), based on a ⁶LiI(Eu) scintillator, was used with the shadow-cone technique. Appropriate unfolding codes, coupled to the BSS response function calculated by a Monte Carlo code, were then used. The main results are reported here. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbations on several response functions: the effective multiplication factor, reaction rate ratios, and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
NASA Astrophysics Data System (ADS)
Yang, Cen; Zhang, Yong-liang
2018-04-01
In this paper we propose a two-buoy wave energy converter composed of a heaving semi-submerged cylindrical buoy, a fixed submerged cylindrical buoy, and a power take-off (PTO) system, and investigate the effect of the fixed submerged buoy on the hydrodynamics of the heaving semi-submerged buoy based on three-dimensional potential theory. The dynamic response of the semi-submerged buoy and the wave energy conversion efficiency of the converter are analyzed. The difference in hydrodynamics and wave energy conversion efficiency between a semi-submerged buoy converter with and without a fixed submerged buoy is discussed. It is revealed that the influence of the fixed submerged buoy on the wave exciting force, the added mass, the radiation damping coefficient, and the wave energy conversion efficiency can be significant, depending on the vertical distance between the heaving semi-submerged buoy and the fixed submerged buoy, the ratio of the diameter of the fixed submerged buoy to that of the heaving semi-submerged buoy, and the water depth.
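A heaving buoy with a linear PTO damper is, in the frequency domain, a forced damped oscillator, and the mean absorbed power peaks near the heave natural frequency. The sketch below illustrates that structure only; the mass, stiffness, damping, and force amplitudes are assumed values, not the paper's 3-D potential-flow results.

```python
import math

# Toy frequency-domain model of a heaving buoy with a linear PTO damper.
m = 5.0e4      # mass plus added mass, kg (assumed)
k = 2.0e5      # hydrostatic restoring stiffness, N/m (assumed)
b_rad = 1.0e4  # radiation damping, N*s/m (assumed)
b_pto = 1.0e4  # PTO damping, N*s/m (assumed)
F = 1.0e4      # wave exciting force amplitude, N (assumed)

def absorbed_power(omega):
    """Mean PTO power: 0.5 * b_pto * |velocity amplitude|^2."""
    # Mechanical impedance Z = (b_rad + b_pto) + j*(omega*m - k/omega)
    Z = complex(b_rad + b_pto, omega * m - k / omega)
    v = F / abs(Z)                 # heave velocity amplitude
    return 0.5 * b_pto * v ** 2

omega_n = math.sqrt(k / m)         # heave natural frequency, rad/s
```

At resonance the reactive term vanishes and capture is maximal, which is why the added mass and radiation damping changes induced by the fixed submerged buoy shift the conversion efficiency.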
A predator-prey model with generic birth and death rates for the predator.
Terry, Alan J
2014-02-01
We propose and study a predator-prey model in which the predator has a Holling type II functional response and generic per capita birth and death rates. Given that prey consumption provides the energy for predator activity, and that the predator functional response represents the prey consumption rate per predator, we assume that the per capita birth and death rates for the predator are, respectively, increasing and decreasing functions of the predator functional response. These functions are monotonic, but not necessarily strictly monotonic, for all values of the argument. In particular, we allow the possibility that the predator birth rate is zero for all sufficiently small values of the predator functional response, reflecting the idea that a certain level of energy intake is needed before a predator can reproduce. Our analysis reveals that the model exhibits the behaviours typically found in predator-prey models - extinction of the predator population, convergence to a periodic orbit, or convergence to a co-existence fixed point. For a specific example, in which the predator birth and death rates are constant for all sufficiently small or large values of the predator functional response, we corroborate our analysis with numerical simulations. In the unlikely case where these birth and death rates equal the same constant for all sufficiently large values of the predator functional response, the model is capable of structurally unstable behaviour, with a small change in the initial conditions leading to a more pronounced change in the long-term dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
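A minimal numerical sketch of this model class may help fix ideas. All functional forms for the predator birth and death rates and all parameter values below are illustrative choices satisfying the paper's qualitative assumptions (birth increasing in intake and zero below a threshold, death decreasing in intake), not the authors' specific example:

```python
def holling_II(N, a=1.0, h=0.5):
    """Holling type II functional response: prey consumed per predator per unit time."""
    return a * N / (1.0 + a * h * N)

def simulate(N0=5.0, P0=1.0, r=1.0, K=10.0, dt=0.01, steps=20000,
             c=0.5, g_min=0.2, d0=0.4, d1=0.2):
    """Forward-Euler integration of a hypothetical instance of the model class:
    logistic prey growth, Holling type II consumption, predator birth rate
    clamped to zero below the intake threshold g_min, death rate decreasing
    in intake. All parameter values are illustrative."""
    N, P = N0, P0
    for _ in range(steps):
        g = holling_II(N)
        birth = c * max(g - g_min, 0.0)   # increasing in intake, zero for small intake
        death = d0 + d1 / (1.0 + g)       # decreasing in intake
        dN = r * N * (1.0 - N / K) - g * P
        dP = (birth - death) * P
        N = max(N + dt * dN, 0.0)         # clamp to keep populations non-negative
        P = max(P + dt * dP, 0.0)
    return N, P
```

Varying `g_min` and the asymptotic birth/death levels reproduces the qualitative regimes described in the abstract: predator extinction, cycling, or co-existence.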
Studying the response of a plastic scintillator to gamma rays using the Geant4 Monte Carlo code.
Ghadiri, Rasoul; Khorsandi, Jamshid
2015-05-01
To determine the gamma ray response function of an NE-102 scintillator and to investigate the gamma spectra due to the transport of optical photons, we simulated an NE-102 scintillator using Geant4 code. The results of the simulation were compared with experimental data. Good consistency between the simulation and data was observed. In addition, the time and spatial distributions, along with the energy distribution and surface treatments of scintillation detectors, were calculated. This simulation makes us capable of optimizing the photomultiplier tube (or photodiodes) position to yield the best coupling to the detector. Copyright © 2015 Elsevier Ltd. All rights reserved.
The free-energy self: a predictive coding account of self-recognition.
Apps, Matthew A J; Tsakiris, Manos
2014-04-01
Recognising and representing one's self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one's body is processed in a Bayesian manner as the most likely to be "me". Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up "surprise" signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest. Copyright © 2013 Elsevier Ltd. All rights reserved.
The free-energy self: A predictive coding account of self-recognition
Apps, Matthew A.J.; Tsakiris, Manos
2013-01-01
Recognising and representing one’s self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account one’s body is processed in a Bayesian manner as the most likely to be “me”. Such probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up “surprise” signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and particularly evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest. PMID:23416066
A Thermodynamically Consistent Damage Model for Advanced Composites
NASA Technical Reports Server (NTRS)
Maimi, Pere; Camanho, Pedro P.; Mayugo, Joan-Andreu; Davila, Carlos G.
2006-01-01
A continuum damage model for the prediction of damage onset and structural collapse of structures manufactured in fiber-reinforced plastic laminates is proposed. The principal damage mechanisms occurring in the longitudinal and transverse directions of a ply are represented by a damage tensor that is fixed in space. Crack closure under load reversal effects are taken into account using damage variables established as a function of the sign of the components of the stress tensor. Damage activation functions based on the LaRC04 failure criteria are used to predict the different damage mechanisms occurring at the ply level. The constitutive damage model is implemented in a finite element code. The objectivity of the numerical model is assured by regularizing the dissipated energy at a material point using Bazant's Crack Band Model. To verify the accuracy of the approach, analyses of coupon specimens were performed, and the numerical predictions were compared with experimental data.
The origin of neutron biological effectiveness as a function of energy.
Baiocco, G; Barbieri, S; Babini, G; Morini, J; Alloni, D; Friedland, W; Kundrát, P; Schmitt, E; Puchalska, M; Sihver, L; Ottolenghi, A
2016-09-22
The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.
The origin of neutron biological effectiveness as a function of energy
NASA Astrophysics Data System (ADS)
Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.
2016-09-01
The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data.
The origin of neutron biological effectiveness as a function of energy
Baiocco, G.; Barbieri, S.; Babini, G.; Morini, J.; Alloni, D.; Friedland, W.; Kundrát, P.; Schmitt, E.; Puchalska, M.; Sihver, L.; Ottolenghi, A.
2016-01-01
The understanding of the impact of radiation quality on early and late responses of biological targets to ionizing radiation exposure is necessarily grounded in the results of mechanistic studies starting from physical interactions. This is particularly true when, already at the physical stage, the radiation field is mixed, as is the case for neutron exposure. Neutron Relative Biological Effectiveness (RBE) is energy dependent, maximal for energies ~1 MeV, and varies significantly among different experiments. The aim of this work is to shed light on neutron biological effectiveness as a function of field characteristics, with a comprehensive modeling approach: this brings together transport calculations of neutrons through matter (with the code PHITS) and the predictive power of the biophysical track structure code PARTRAC in terms of DNA damage evaluation. Two different energy dependent neutron RBE models are proposed: the first is phenomenological and based only on the characterization of linear energy transfer on a microscopic scale; the second is purely ab initio and based on the induction of complex DNA damage. Results for the two models are compared and found in good qualitative agreement with current standards for radiation protection factors, which are agreed upon on the basis of RBE data. PMID:27654349
GAMSOR: Gamma Source Preparation and DIF3D Flux Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M. A.; Lee, C. H.; Hill, R. N.
2017-06-28
Nuclear reactors that rely upon the fission reaction have two modes of thermal energy deposition in the reactor system: neutron absorption and gamma absorption. The gamma rays are typically generated by neutron capture reactions or during the fission process, which means the primary driver of energy production is of course the neutron interaction. In conventional reactor physics methods, the gamma heating component is ignored such that the gamma absorption is forced to occur at the gamma emission site. For experimental reactor systems like EBR-II and FFTF, the placement of structural pins and assemblies internal to the core leads to problems with power heating predictions because there is no fission power source internal to the assembly to dictate a spatial distribution of the power. As part of the EBR-II support work in the 1980s, the GAMSOR code was developed to assist analysts in calculating the gamma heating. The GAMSOR code is a modified version of DIF3D and actually functions within a sequence of DIF3D calculations. The gamma flux in a conventional fission reactor system does not perturb the neutron flux, and thus the gamma flux calculation can be cast as a fixed source problem given a solution to the steady state neutron flux equation. This leads to a sequence of DIF3D calculations, called the GAMSOR sequence, which involves solving the neutron flux, then the gamma flux, and then combining the results to do a summary edit. In this manuscript, we go over the GAMSOR code and detail how it is put together and functions. We also discuss how to set up the GAMSOR sequence and input for each DIF3D calculation in the GAMSOR sequence.
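Because the coupling is one-way (neutron reactions emit gammas, but the gamma flux does not perturb the neutron flux), the gamma step reduces to a linear fixed-source solve once the neutron eigenproblem is done. A two-group toy sketch of the sequence's structure, with invented matrices standing in for the DIF3D operators (not real cross-section data):

```python
import numpy as np

# 1) Steady-state neutron eigenproblem  A_n phi_n = (1/k) F phi_n
A_n = np.array([[2.0, -0.5], [-0.5, 1.5]])   # neutron loss/transfer operator (invented)
F   = np.array([[1.2,  0.8], [0.3,  0.9]])   # fission production operator (invented)
vals, vecs = np.linalg.eig(np.linalg.solve(A_n, F))
k = vals.real.max()                          # multiplication factor
phi_n = np.abs(vecs[:, vals.real.argmax()].real)

# 2) Gamma emission from neutron reactions defines a fixed source
G = np.array([[0.4, 0.1], [0.2, 0.5]])       # gammas emitted per neutron reaction (invented)
S_gamma = G @ phi_n

# 3) Gamma transport is a fixed-source problem: no feedback on the neutrons
A_g = np.array([[1.8, -0.2], [-0.3, 2.1]])   # gamma loss/transfer operator (invented)
phi_gamma = np.linalg.solve(A_g, S_gamma)
# The summary edit would then combine neutron and gamma heating from phi_n and phi_gamma.
```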
A new family of distribution functions for spherical galaxies
NASA Astrophysics Data System (ADS)
Gerhard, Ortwin E.
1991-06-01
The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h-alpha(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
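The validation step lends itself to a toy illustration: the consumer never recomputes the analysis, it only re-applies the equations once and checks that the shipped solution maps to itself. A minimal sketch with an invented two-variable monotone system (the real certificates range over grammar-based string abstractions, not sets of tokens):

```python
def transfer(env):
    """Toy monotone equation system standing in for the abstract-parsing
    equations; 'env' maps variables to sets of abstract facts."""
    return {
        "a": env["a"] | {"init"},
        "b": env["a"] | env["b"],
    }

def is_fixed_point(env):
    """Consumer-side validation: one application of the equations suffices
    to check a claimed fixed point; no iteration is required."""
    return transfer(env) == env

# The producer ships the analysis result as the certificate.
certificate = {"a": {"init"}, "b": {"init"}}
```

A non-solution such as the empty environment fails the same single-pass check, which is what makes a forged certificate detectable.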
Study of fusion product effects in field-reversed mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driemeyer, D.E.
1980-01-01
The effect of fusion products (fps) on Field-Reversed Mirror (FRM) reactor concepts has been evaluated through the development of two new computer models. The first code (MCFRM) treats fps as test particles in a fixed background plasma, which is represented as a fluid. MCFRM includes a Monte Carlo treatment of Coulomb scattering and thus provides an accurate treatment of fp behavior even at lower energies where pitch-angle scattering becomes important. The second code (FRMOD) is a steady-state, globally averaged, two-fluid (ion and electron), point model of the FRM plasma that incorporates fp heating and ash buildup values which are consistent with the MCFRM calculations. These codes have been used extensively in the development of an advanced-fuel FRM reactor design (SAFFIRE). A Catalyzed-D version of the plant is also discussed along with an investigation of the steady-state energy distribution of fps in the FRM. User guides for the two computer codes are also included.
NASA Astrophysics Data System (ADS)
Vella, A.; Munoz, Andre; Healy, Matthew J. F.; Lane, David; Lockley, D.
2017-08-01
The PENELOPE Monte Carlo simulation code was used to determine the optimum thickness and aperture diameter of a pinhole mask for X-ray backscatter imaging in a security application. The mask material needs to be thick enough to absorb most X-rays, and the pinhole must be wide enough for sufficient field of view whilst narrow enough for sufficient image spatial resolution. The model consisted of a fixed geometry test object, various masks with and without pinholes, and a 1040 × 1340 pixel area detector inside a lead-lined camera housing. The photon energy distribution incident upon the masks was flat up to selected energy limits; this artificial source was used to avoid the optimisation being specific to any particular X-ray source technology. The pixelated detector was modelled by digitising the surface area represented by the PENELOPE phase space file and integrating the energies of the photons impacting within each pixel; a MATLAB code was written for this. The image contrast, signal to background ratio, spatial resolution, and collimation effect were calculated at the simulated detector as a function of pinhole diameter and various thicknesses of mask made of tungsten, tungsten/epoxy composite or bismuth alloy. A process of elimination was applied to identify suitable masks for a viable X-ray backscattering security application.
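The pixelation step, digitising the detector surface and integrating photon energies per pixel, was done in MATLAB in the paper; a minimal Python analogue with randomly generated stand-in photon records (the real phase-space file carries more fields per photon) might look like:

```python
import numpy as np

# Hypothetical photon impact records (x, y in cm; energy in keV) standing in
# for a PENELOPE phase-space file.
rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 10.4, n)
y = rng.uniform(0.0, 13.4, n)
e = rng.uniform(10.0, 150.0, n)

# Integrate photon energy per pixel on a 1040 x 1340 grid (0.01 cm pixels):
# each pixel accumulates the summed energy of the photons landing in it.
image, _, _ = np.histogram2d(x, y, bins=(1040, 1340),
                             range=[[0.0, 10.4], [0.0, 13.4]], weights=e)
# Energy is conserved: the pixel sums equal the total incident energy.
```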
Componentry for lower extremity prostheses.
Friel, Karen
2005-09-01
Prosthetic components for both transtibial and transfemoral amputations are available for patients of every level of ambulation. Most current suspension systems, knees, foot/ankle assemblies, and shock absorbers use endoskeletal construction that emphasizes total contact and weight distribution between bony structures and soft tissues. Different components offer varying benefits to energy expenditure, activity level, balance, and proprioception. Less dynamic ambulators may use fixed-cadence knees and non-dynamic response feet; higher functioning walkers benefit from dynamic response feet and variable-cadence knees. In addition, specific considerations must be kept in mind when fitting a patient with peripheral vascular disease or diabetes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Chandler, G.A.; Biggs, F.
X-ray-producing hohlraums are being studied as indirect drives for inertial confinement fusion targets. In a 1994 target series on the PBFAII accelerator, cylindrical hohlraum targets were heated by an intense Li⁺ ion beam and viewed by an array of 13 time-resolved, filtered x-ray detectors (XRDs). The unfold operator (UFO) code and its suite of auxiliary functions were used extensively in obtaining time-resolved x-ray spectra and radiation temperatures from this diagnostic. The UFO was also used to obtain fitted response functions from calibration data, to simulate data from blackbody x-ray spectra of interest, to determine the suitability of various unfolding parameters (e.g., energy domain, energy partition, smoothing conditions, and basis functions), to interpolate the XRD signal traces, and to unfold experimental data. The simulation capabilities of the code were useful in understanding an anomalous feature in the unfolded spectra at low photon energies (≤100 eV). Uncertainties in the differential and energy-integrated unfolded spectra were estimated from uncertainties in the data. The time-history of the radiation temperature agreed well with independent calculations of the wall temperature in the hohlraum. © 1997 American Institute of Physics.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
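The interpolation scheme described, continuous in source energy and emission angle via double linear interpolation over the tabulated grid, can be sketched as follows (grid values below are invented placeholders, not the MCNP-generated responses):

```python
import numpy as np

def bilinear(E, theta, E_grid, th_grid, table):
    """Double linear interpolation of a tabulated response R(E, theta).
    Grids must be ascending; the query point must lie inside the table."""
    i = int(np.clip(np.searchsorted(E_grid, E) - 1, 0, len(E_grid) - 2))
    j = int(np.clip(np.searchsorted(th_grid, theta) - 1, 0, len(th_grid) - 2))
    tE = (E - E_grid[i]) / (E_grid[i + 1] - E_grid[i])
    tT = (theta - th_grid[j]) / (th_grid[j + 1] - th_grid[j])
    return ((1 - tE) * (1 - tT) * table[i, j]
            + tE * (1 - tT) * table[i + 1, j]
            + (1 - tE) * tT * table[i, j + 1]
            + tE * tT * table[i + 1, j + 1])
```

In the skyshine analyses the tabulated values would be the three-parameter LBRF fits at the 14 source energies and 18 emission angles; the interpolation then yields a response that is continuous in both variables.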
Applications of surface acoustic and shallow bulk acoustic wave devices
NASA Astrophysics Data System (ADS)
Campbell, Colin K.
1989-10-01
Surface acoustic wave (SAW) device coverage includes delay lines and filters operating at selected frequencies in the range from about 10 MHz to 11 GHz; modeling with single-crystal piezoelectrics and layered structures; resonators and low-loss filters; comb filters and multiplexers; antenna duplexers; harmonic devices; chirp filters for pulse compression; coding with fixed and programmable transversal filters; Barker and quadraphase coding; adaptive filters; acoustic and acoustoelectric convolvers and correlators for radar, spread spectrum, and packet radio; acoustooptic processors for Bragg modulation and spectrum analysis; real-time Fourier-transform and cepstrum processors for radar and sonar; compressive receivers; Nyquist filters for microwave digital radio; clock-recovery filters for fiber communications; fixed-, tunable-, and multimode oscillators and frequency synthesizers; acoustic charge transport; and other SAW devices for signal processing on gallium arsenide. Shallow bulk acoustic wave device applications include gigahertz delay lines, surface-transverse-wave resonators employing energy-trapping gratings, and oscillators with enhanced performance and capability.
NASA Astrophysics Data System (ADS)
Davidson, Asher; Tableman, Adam; Yu, Peicheng; An, Weiming; Tsung, Frank; Mori, Warren; Lu, Wei; Fonseca, Ricardo
2017-10-01
We examine scaling laws for LWFA in the nonlinear, self-guided regime in detail using the quasi-3D version of the particle-in-cell code OSIRIS. We find that the scaling laws continue to work well when we fix the normalized laser amplitude while reducing the plasma density. It is further found that the energy gain for fixed laser energy can be improved by shortening the pulse length until self-guiding almost no longer occurs, and that the energy gain can be optimized by using lasers with asymmetric longitudinal profiles. We find that when optimized, a 15 J laser may yield particle energies as high as 5.3 GeV without the need of any external guiding. Detailed studies for optimizing energy gains from 30 J and 100 J lasers will also be presented, which indicate that energies in excess of 10 GeV may be possible in the near term without the need for external guiding. This work is supported by the NSF and DOE.
GAMSOR: Gamma Source Preparation and DIF3D Flux Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M. A.; Lee, C. H.; Hill, R. N.
2016-12-15
Nuclear reactors that rely upon the fission reaction have two modes of thermal energy deposition in the reactor system: neutron absorption and gamma absorption. The gamma rays are typically generated by neutron absorption reactions or during the fission process, which means the primary driver of energy production is of course the neutron interaction. In conventional reactor physics methods, the gamma heating component is ignored such that the gamma absorption is forced to occur at the gamma emission site. For experimental reactor systems like EBR-II and FFTF, the placement of structural pins and assemblies internal to the core leads to problems with power heating predictions because there is no fission power source internal to the assembly to dictate a spatial distribution of the power. As part of the EBR-II support work in the 1980s, the GAMSOR code was developed to assist analysts in calculating the gamma heating. The GAMSOR code is a modified version of DIF3D and actually functions within a sequence of DIF3D calculations. The gamma flux in a conventional fission reactor system does not perturb the neutron flux, and thus the gamma flux calculation can be cast as a fixed source problem given a solution to the steady state neutron flux equation. This leads to a sequence of DIF3D calculations, called the GAMSOR sequence, which involves solving the neutron flux, then the gamma flux, then combining the results to do a summary edit. In this manuscript, we go over the GAMSOR code and detail how it is put together and functions. We also discuss how to set up the GAMSOR sequence and input for each DIF3D calculation in the GAMSOR sequence. With the GAMSOR capability, users can take any valid steady state DIF3D calculation and compute the power distribution due to neutron and gamma heating. The MC2-3 code is the preferable companion code to use for generating neutron and gamma cross section data, but the GAMSOR code can accept cross section data from other sources. To further this aspect, an additional utility code was created which demonstrates how to merge the neutron and gamma cross section data together to carry out a simultaneous solve of the two systems.
Lopez, F; Pereira, C
1985-03-01
Two experiments used response-restriction procedures in order to test the independence of the factors determining response rate and the factors determining the size of the postreinforcement pause on interval schedules. Responding was restricted by response-produced blackout or by retracting the lever. In Experiment 1 with a Conjunctive FR 1 FT schedule, the blackout procedure reduced the postreinforcement pause more than the lever-retraction procedure did, and both procedures produced shorter pauses than did the schedule without response restriction. In Experiment 2 the interreinforcement interval was also manipulated, and the size of the pause was an increasing function of the interreinforcement interval, but the rate of increase was lower than that produced by fixed interval schedules of comparable interval durations. The assumption of functional independence of the postreinforcement pause and terminal rate in fixed interval schedules is questioned since data suggest that pause reductions resulted from constraining variation in response number compared to equivalent periodic schedules in which response number was allowed to vary. Copyright © 1985. Published by Elsevier B.V.
Morphine Tolerance as a Function of Ratio Schedule: Response Requirement or Unit Price?
ERIC Educational Resources Information Center
Hughes, Christine; Sigmon, Stacey C.; Pitts, Raymond C.; Dykstra, Linda A.
2005-01-01
Key pecking by 3 pigeons was maintained by a multiple fixed-ratio 10, fixed-ratio 30, fixed-ratio 90 schedule of food presentation. Components differed with respect to amount of reinforcement, such that the unit price was 10 responses per 1-s access to food. Acute administration of morphine, "l"-methadone, and cocaine dose-dependently decreased…
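The unit-price arithmetic stated in this abstract can be checked directly: the ratio requirement and the reinforcer magnitude scale together across components, so the cost per unit of food access is constant.

```python
# Multiple schedule components as described: FR size -> seconds of food access.
# Access durations scale with the ratio requirement, holding unit price fixed.
components = {10: 1.0, 30: 3.0, 90: 9.0}

# Unit price = response requirement / reinforcer magnitude
unit_prices = {fr: fr / access for fr, access in components.items()}
# Every component costs 10 responses per second of food access.
```

This is the distinction the study exploits: the components differ in response requirement but not in unit price, so differential drug effects across components implicate the requirement rather than the price.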
18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...
18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...
18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...
18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...
18 CFR Table 1 to Part 301 - Functionalization and Escalation Codes
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Functionalization and Escalation Codes 1 Table 1 to Part 301 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE SYSTEM COST...
Locations of serial reach targets are coded in multiple reference frames.
Thompson, Aidan A; Henriques, Denise Y P
2010-12-01
Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. 
We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. Copyright © 2010 Elsevier Ltd. All rights reserved.
Itoga, Toshiro; Asano, Yoshihiro; Tanimura, Yoshihiko
2011-07-01
Superheated drop detectors are currently used for personal and environmental dosimetry and their characteristics such as response to neutrons and temperature dependency are well known. A new bubble counter based on the superheated drop technology has been developed by Framework Scientific. However, the response of this detector with the lead shell is not clear especially above several tens of MeV. In this study, the response has been measured with quasi-monoenergetic and monoenergetic neutron sources with and without a lead shell. The experimental results were compared with the results of the Monte Carlo calculations using the 'Event Generator Mode' in the PHITS code with the JENDL-HE/2007 data library to clarify the response of this detector with a lead shell in the entire energy range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²–10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
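The CADIS biasing described above can be illustrated with a small sketch. This is a hypothetical three-cell toy problem, not ADVANTG/MAVRIC output: the adjoint ("importance") function is used both to bias the source and to set weight-window centers so the tally estimate stays unbiased.

```python
import numpy as np

# Illustrative sketch of the CADIS principle (hypothetical 3-cell problem):
# the adjoint flux phi_adj plays the role of an importance function.
def cadis_parameters(q, phi_adj):
    """q: source strength per cell; phi_adj: adjoint flux per cell."""
    R = float(np.sum(q * phi_adj))   # deterministic estimate of the response
    q_biased = q * phi_adj / R       # biased source PDF (sums to 1)
    w_center = R / phi_adj           # statistical weight at birth per cell
    return R, q_biased, w_center

q = np.array([1.0, 2.0, 1.0])        # physical source
phi_adj = np.array([0.1, 0.5, 2.0])  # importance toward the detector
R, q_biased, w_center = cadis_parameters(q, phi_adj)
# Fair-game check: sampling from q_biased with birth weight w_center
# recovers the physical source q in expectation.
```

Cells with high importance are sampled more often but with proportionally lower weight, which is what makes the scheme consistent.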
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport
NASA Technical Reports Server (NTRS)
Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.
2008-01-01
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.
SU-F-303-15: Ion Chamber Dose Response in Magnetic Fields as a Function of Incident Photon Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malkov, V. N.; Rogers, D. W. O.
2015-06-15
Purpose: In considering the continued development of synergetic MRI-radiation therapy machines, we seek to quantify the variability of ion chamber response per unit dose in the presence of magnetic fields of varying strength as a function of incident photon beam quality and geometric configuration. Methods: To account for the effect of magnetic fields on the trajectory of charged particles, a new algorithm was introduced into the EGSnrc Monte Carlo code. In the egs-chamber user code the dose to the cavity of an NE2571 ion chamber is calculated in two configurations, in 0 to 2 T magnetic fields, with an incoming parallel 10×10 cm² photon beam with energies ranging between 0.5 MeV and 8 MeV. In the first, the photon beam is incident on the long axis of the ion chamber (config-1), and in the second the beam is parallel to the long axis and incident from the conical end of the chamber (config-2). For both, the magnetic field is perpendicular to the direction of the beam and the long axis of the chamber. Results: The ion chamber response per unit dose to water at the same point is determined as a function of magnetic field and is normalized to the 0 T case for each of the incoming photon energies. For both configurations, accurate modeling of the ion chamber yielded closer agreement with the experimental results obtained by Meijsing et al. (2009). Config-1 yields a gradual increase in response with increasing field strength to a maximum of 13.4% and 1.4% for 1 MeV and 8 MeV photon beams, respectively. Config-2 produced a decrease in response of up to 6% and 13% for 0.5 MeV and 8 MeV beams, respectively. Conclusion: These results provide further support for ion chamber calibration in MRI-radiotherapy coupled systems and demonstrate noticeable energy dependence for clinically relevant fields.
Supervised dictionary learning for inferring concurrent brain networks.
Zhao, Shijie; Han, Junwei; Lv, Jinglei; Jiang, Xi; Hu, Xintao; Zhao, Yu; Ge, Bao; Guo, Lei; Liu, Tianming
2015-10-01
Task-based fMRI (tfMRI) has been widely used to explore functional brain networks via a predefined stimulus paradigm in the fMRI scan. Traditionally, the general linear model (GLM) has been a dominant approach to detect task-evoked networks. However, GLM focuses on task-evoked or event-evoked brain responses and possibly ignores the intrinsic brain functions. In comparison, dictionary learning and sparse coding methods have attracted much attention recently, and these methods have shown the promise of automatically and systematically decomposing fMRI signals into meaningful task-evoked and intrinsic concurrent networks. Nevertheless, two notable limitations of current data-driven dictionary learning methods are that the prior knowledge of the task paradigm is not sufficiently utilized and that the establishment of correspondences among dictionary atoms in different brains has been challenging. In this paper, we propose a novel supervised dictionary learning and sparse coding method for inferring functional networks from tfMRI data, which combines the advantages of model-driven and data-driven methods. The basic idea is to fix the task stimulus curves as predefined model-driven dictionary atoms and only optimize the other portion of data-driven dictionary atoms. Application of this novel methodology on the publicly available human connectome project (HCP) tfMRI datasets has achieved promising results.
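The fixed-plus-learned dictionary idea above can be sketched in a few lines. This is an assumed minimal setup (plain least-squares coding stands in for the paper's sparse coding): the first atoms are pinned to task-stimulus curves and only the remaining atoms are updated from the data.

```python
import numpy as np

# Toy supervised dictionary learning: fixed task atoms + learned atoms.
rng = np.random.default_rng(0)
T, n_vox, k_fix, k_learn = 80, 40, 2, 3
D_fix = rng.standard_normal((T, k_fix))      # task paradigm regressors (given)
D_learn = rng.standard_normal((T, k_learn))  # data-driven atoms (optimised)
X = rng.standard_normal((T, n_vox))          # tfMRI signals, one column per voxel

def recon_error(D_learn):
    D = np.hstack([D_fix, D_learn])
    A = np.linalg.lstsq(D, X, rcond=None)[0]  # coding step (least squares)
    return np.linalg.norm(X - D @ A), A

err_before, _ = recon_error(D_learn)
for _ in range(10):                           # alternate coding / atom update
    _, A = recon_error(D_learn)
    resid = X - D_fix @ A[:k_fix]             # what the fixed atoms cannot explain
    D_learn = np.linalg.lstsq(A[k_fix:].T, resid.T, rcond=None)[0].T
err_after, _ = recon_error(D_learn)
```

Because each alternating step solves its subproblem exactly, the reconstruction error is non-increasing while the task atoms remain untouched.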
System for measuring film thickness
Batishko, Charles R.; Kirihara, Leslie J.; Peters, Timothy J.; Rasmussen, Donald E.
1990-01-01
A system for determining the thicknesses of thin films of materials exhibiting fluorescence in response to exposure to excitation energy from a suitable source of such energy. A section of film is illuminated with a fixed level of excitation energy from a source such as an argon ion laser emitting blue-green light. The amount of fluorescent light produced by the film over a limited area within the section so illuminated is then measured using a detector such as a photomultiplier tube. Since the amount of fluorescent light produced is a function of the thicknesses of thin films, the thickness of a specific film can be determined by comparing the intensity of fluorescent light produced by this film with the intensity of light produced by similar films of known thicknesses in response to the same amount of excitation energy. The preferred embodiment of the invention uses fiber optic probes in measuring the thicknesses of oil films on the operational components of machinery which are ordinarily obscured from view.
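The comparison step in the abstract above, inverting a measured fluorescence intensity through a calibration curve built from films of known thickness, can be sketched as follows. The calibration numbers are illustrative, not from the patent.

```python
import numpy as np

# Hypothetical calibration data: known film thicknesses and the fluorescence
# intensity each produces under a fixed level of excitation energy.
calib_thickness_um = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # known films
calib_intensity = np.array([0.0, 0.9, 1.7, 3.1, 5.2])      # measured response

def thickness_from_intensity(i_meas):
    # the calibration curve is monotone, so invert it by interpolation
    return float(np.interp(i_meas, calib_intensity, calib_thickness_um))
```

A measured intensity of 1.7 (matching the 2 µm calibration film) would then be read back as a 2 µm thickness.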
Review of the inverse scattering problem at fixed energy in quantum mechanics
NASA Technical Reports Server (NTRS)
Sabatier, P. C.
1972-01-01
Methods of solution of the inverse scattering problem at fixed energy in quantum mechanics are presented. Scattering experiments of a beam of particles at a nonrelativisitic energy by a target made up of particles are analyzed. The Schroedinger equation is used to develop the quantum mechanical description of the system and one of several functions depending on the relative distance of the particles. The inverse problem is the construction of the potentials from experimental measurements.
Theis, C; Forkel-Wirth, D; Perrin, D; Roesler, S; Vincke, H
2005-01-01
Monitoring of the radiation environment is one of the key tasks in operating a high-energy accelerator such as the Large Hadron Collider (LHC). The radiation fields consist of neutrons, charged hadrons as well as photons and electrons with energy spectra extending from those of thermal neutrons up to several hundreds of GeV. The requirements for measuring the dose equivalent in such a field are different from standard uses and it is thus necessary to investigate the response of monitoring devices thoroughly before the implementation of a monitoring system can be conducted. For the LHC, it is currently foreseen to install argon- and hydrogen-filled high-pressure ionisation chambers as radiation monitors of mixed fields. So far their response to these fields was poorly understood and, therefore, further investigation was necessary to prove that they can serve their function well enough. In this study, ionisation chambers of type IG5 (Centronic Ltd) were characterised by simulating their response functions by means of detailed FLUKA calculations as well as by calibration measurements for photons and neutrons at fixed energies. The latter results were used to obtain a better understanding and validation of the FLUKA simulations. Tests were also conducted at the CERF facility at CERN in order to compare the results with simulations of the response in a mixed radiation field. It is demonstrated that these detectors can be characterised sufficiently well to serve their function as radiation monitors for the LHC.
A long-term, integrated impact assessment of alternative building energy code scenarios in China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Eom, Jiyong; Evans, Meredydd
2014-04-01
China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption. Beginning in the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions by using a detailed building energy model that represents four distinct climate zones, each with three building types, nested in the long-term integrated assessment framework GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of an economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13%-22% depending on the building code scenario, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement.
Wong, Del Pui-Lam; Chung, Joanne Wai-Yee; Chan, Albert Ping-Chuen; Wong, Francis Kwan-Wah; Yi, Wen
2014-11-01
This study aimed to (1) quantify the respective physical workloads of bar bending and fixing; and (2) compare the physiological and perceptual responses between bar benders and bar fixers. Field studies were conducted during the summer in Hong Kong from July 2011 to August 2011 over six construction sites. Synchronized physiological, perceptual, and environmental parameters were measured from construction rebar workers. The average duration of the 39 field measurements was 151.1 ± 22.4 min in a hot environment (WBGT = 31.4 ± 2.2 °C), during which physiological, perceptual and environmental parameters were synchronized. Energy expenditure of overall rebar work, bar bending, and bar fixing was 2.57, 2.26 and 2.67 kcal/min (179, 158 and 186 W), respectively. Bar fixing induced significantly higher physiological responses in heart rate (113.6 vs. 102.3 beats/min, p < 0.05), oxygen consumption (9.53 vs. 7.14 ml/min/kg, p < 0.05), and energy expenditure (2.67 vs. 2.26 kcal/min, p < 0.05) (186 vs. 158 W, p < 0.05) as compared to bar bending. Perceptual response was higher in bar fixing, but the difference was not statistically significant. Findings of this study enable the calculation of the daily energy expenditure of rebar work. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
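The abstract above quotes each energy expenditure both in kcal/min and in watts; the two are related by 1 kcal = 4184 J and 1 min = 60 s, which reproduces the paired figures in the text.

```python
# Unit conversion relating the two energy-expenditure figures in the abstract.
def kcal_per_min_to_watts(e_kcal_per_min):
    return e_kcal_per_min * 4184.0 / 60.0   # kcal/min -> J/s (= W)

rates = {"overall": 2.57, "bar bending": 2.26, "bar fixing": 2.67}
watts = {task: round(kcal_per_min_to_watts(e)) for task, e in rates.items()}
# → {'overall': 179, 'bar bending': 158, 'bar fixing': 186}, matching the text
```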
mRNA changes in nucleus accumbens related to methamphetamine addiction in mice
NASA Astrophysics Data System (ADS)
Zhu, Li; Li, Jiaqi; Dong, Nan; Guan, Fanglin; Liu, Yufeng; Ma, Dongliang; Goh, Eyleen L. K.; Chen, Teng
2016-11-01
Methamphetamine (METH) is a highly addictive psychostimulant that elicits aberrant changes in the expression of microRNAs (miRNAs) and long non-coding RNAs (lncRNAs) in the nucleus accumbens of mice, indicating a potential role of METH in post-transcriptional regulation. To decipher the potential consequences of these post-transcriptional regulations in response to METH, we performed strand-specific RNA sequencing (ssRNA-Seq) to identify alterations in mRNA expression and their alternative splicing in the nucleus accumbens of mice following exposure to METH. METH-mediated changes in mRNAs were analyzed and correlated with previously reported changes in non-coding RNAs (miRNAs and lncRNAs) to determine the potential functions of the mRNA changes observed here and how non-coding RNAs are involved. A total of 2171 mRNAs were differentially expressed in response to METH, with functions involved in synaptic plasticity, mitochondrial energy metabolism and immune response. 309 and 589 of these mRNAs are potential targets of miRNAs and lncRNAs, respectively. In addition, METH treatment decreases mRNA alternative splicing, and there are 818 METH-specific events not observed in saline-treated mice. Our results suggest that METH-mediated addiction could be attributed to changes in miRNAs and lncRNAs and, consequently, changes in mRNA alternative splicing and expression. In conclusion, our study reported a methamphetamine-modified nucleus accumbens transcriptome and provided non-coding RNA-mRNA interaction networks possibly involved in METH addiction.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
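The core quantisation step in any floating-to-fixed-point flow can be sketched as below. This is a generic Qm.n format illustration, an assumption for exposition rather than the paper's methodology: values are scaled by 2**n_frac, rounded to integers, and saturated to the word length.

```python
# Generic Qm.n fixed-point quantisation with saturation (illustrative).
def to_fixed(x, n_frac, n_bits=16):
    q = int(round(x * (1 << n_frac)))          # scale and round to integer
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    return max(lo, min(hi, q))                 # saturate instead of wrapping

def to_float(q, n_frac):
    return q / (1 << n_frac)

# More fractional bits -> smaller quantisation error but smaller range:
q = to_fixed(0.708, n_frac=13)                 # Q2.13: range [-4, 4), step 2**-13
err = abs(to_float(q, 13) - 0.708)             # bounded by half an LSB
```

Choosing n_frac per variable (and placing the scaling shifts) is precisely the trade-off the paper's methodology optimises against execution time.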
On the importance of commodity and energy price shocks for the macroeconomy
NASA Astrophysics Data System (ADS)
Edelstein, Paul S.
Although higher commodity prices are commonly thought to presage higher rates of inflation, the existing literature suggests that the predictive power of commodity prices for inflation has waned since the 1980s. In the first chapter, I show that this result can be overturned using state-of-the-art forecast combination methods. Moreover, commodity prices are shown to contain predictive information not contained in the leading principal components of a broad set of macroeconomic and financial variables. These improved inflation forecasts are of little value, however, for predicting actual Fed policy decisions. The remaining two chapters study the effect of energy price shocks on U.S. consumer and business expenditures. In the second chapter, I show that there is no statistical support for the presence of asymmetries in the response of real consumption to energy price increases and decreases. This finding has important implications for empirical and theoretical models of the transmission of energy price shocks. I then quantify the direct effect on real consumption of (1) unanticipated changes in discretionary income, (2) shifts in precautionary savings, and (3) changes in the operating cost of energy-using durables. Finally, I trace the declining importance of energy price shocks relative to the 1970s to changes in the composition of U.S. automobile production and the declining overall importance of the U.S. automobile sector. An alternative source of asymmetry is the response of nonresidential fixed investment to energy price shocks. In the third chapter, I show that the apparent asymmetry in the estimated responses of business fixed investment in equipment and structures is largely an artifact (1) of the aggregation of mining-related expenditures by the oil, natural gas, and coal mining industry and all other expenditures, and (2) of ignoring an exogenous shift in investment caused by the 1986 Tax Reform Act. 
Once symmetry is imposed and mining-related expenditures are excluded, the estimated response of business fixed investment in equipment and structures tends to be small and mostly statistically insignificant. Historical decompositions show that energy price shocks have played a minor role in driving fluctuations in nonresidential fixed investment other than investment in mining.
Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji
2012-01-01
Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
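The microdosimetric bookkeeping behind the abstract above can be illustrated with a short sketch. The deposition events are invented numbers, not PHITS output: lineal energy y is the energy imparted per event divided by the mean chord length of the site (for a spherical site of diameter d, the Cauchy formula 4V/S gives l_bar = 2d/3).

```python
import numpy as np

# Toy lineal-energy calculation for a spherical site.
d_um = 1.0                                   # site diameter, micrometres
l_bar = 2.0 * d_um / 3.0                     # mean chord length (4V/S for a sphere)
eps_keV = np.array([0.3, 1.2, 0.8, 2.0])     # energy imparted per event (assumed)
y = eps_keV / l_bar                          # lineal energy, keV/um

y_F = y.mean()                               # frequency-mean lineal energy
y_D = np.sum(y * eps_keV) / np.sum(eps_keV)  # dose-mean, used in quality factors
```

The dose-mean value weights large events more heavily, which is why it exceeds the frequency mean and is the quantity typically fed into RBE or quality-factor models.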
NASA Astrophysics Data System (ADS)
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
Calculation of the Frequency Distribution of the Energy Deposition in DNA Volumes by Heavy Ions
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2012-01-01
Radiation quality effects are largely determined by energy deposition in small volumes of characteristic sizes less than 10 nm, representative of short segments of DNA, the DNA nucleosome, or molecules initiating oxidative stress in the nucleus, mitochondria, or extra-cellular matrix. On this scale, qualitatively distinct types of molecular damage are possible for high linear energy transfer (LET) radiation such as heavy ions compared to low LET radiation. Unique types of DNA lesions or oxidative damages are the likely outcome of the energy deposition. The frequency distribution for energy imparted to 1-20 nm targets per unit dose or particle fluence is a useful descriptor and can be evaluated as a function of impact parameter from an ion's track. In this work, the simulation of 1-Gy irradiation of a cubic volume of 5 μm side by: 1) 450 ¹H⁺ ions, 300 MeV; 2) 10 ¹²C⁶⁺ ions, 290 MeV/amu; and 3) ⁵⁶Fe²⁶⁺ ions, 1000 MeV/amu was done with the Monte-Carlo simulation code RITRACKS. Cylindrical targets are generated in the irradiated volume, with random orientation. The frequency distribution curves of the energy deposited in the targets are obtained. For small targets (i.e. <25 nm size), the probability of an ion hitting a target is very small; therefore a large number of tracks and targets as well as a large number of histories are necessary to obtain statistically significant results. This simulation is very time-consuming and is difficult to perform using the original version of RITRACKS. Consequently, the code RITRACKS was adapted to use multiple CPUs on a workstation or on a computer cluster. To validate the simulation results, similar calculations were performed using targets with fixed position and orientation, for which experimental data are available [5].
Since the probability of single- and double-strand breaks in DNA as a function of energy deposited is well known, the results obtained can be used to estimate the yield of double-strand breaks (DSBs), and can be extended to include other targeted or non-targeted effects.
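The multi-CPU strategy described above, splitting independent histories across workers and merging their tallies, can be sketched with a toy surrogate (this is not RITRACKS). The key correctness point is that each worker needs its own independent random stream, which NumPy's SeedSequence.spawn provides.

```python
import numpy as np

# Toy map/reduce over Monte Carlo histories with independent random streams.
def run_histories(seed_seq, n):
    rng = np.random.default_rng(seed_seq)
    hit = rng.random(n) < 0.01               # small targets are rarely hit
    return np.where(hit, rng.exponential(1.0, n), 0.0)  # toy energy deposit

children = np.random.SeedSequence(42).spawn(4)          # one stream per worker
chunks = [run_histories(s, 25_000) for s in children]   # map (serial stand-in)
deposits = np.concatenate(chunks)                       # reduce: merge tallies
freq, edges = np.histogram(deposits[deposits > 0], bins=20)
```

In a real cluster run the list comprehension would be replaced by a process pool or MPI scatter, but the seeding and merge logic stay the same.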
Verification of unfold error estimates in the UFO code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
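The Monte Carlo error estimate described above can be sketched as follows. A plain least-squares unfold stands in for UFO's actual algorithm, and the response matrix and spectrum are invented for illustration: perturb the data with Gaussian deviates, unfold each random data set, and take the spread of the results.

```python
import numpy as np

# Monte Carlo propagation of 5% data imprecision through a toy unfold.
rng = np.random.default_rng(1)
R = np.array([[1.0, 0.5, 0.1],
              [0.2, 1.0, 0.5],
              [0.1, 0.3, 1.0]])          # tabulated response functions (assumed)
x_true = np.array([3.0, 2.0, 1.0])       # assumed spectrum
d0 = R @ x_true                          # noise-free measurements

samples = []
for _ in range(100):                     # 100 random data sets, as in the text
    d = d0 * (1.0 + 0.05 * rng.standard_normal(d0.size))  # 5% imprecision
    samples.append(np.linalg.lstsq(R, d, rcond=None)[0])  # unfold step
samples = np.array(samples)
sigma = samples.std(axis=0)              # Monte Carlo unfold uncertainty
```

The per-bin spread sigma is the Monte Carlo analogue of the error-matrix estimate, and, unlike the latter, the recipe still works when the system is underdetermined.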
Thyroid-adrenergic interactions: physiological and clinical implications.
Silva, J Enrique; Bianco, Suzy D C
2008-02-01
The sympathoadrenal system, including the sympathetic nervous system and the adrenal medulla, interacts with thyroid hormone (TH) at various levels. Both systems are evolutionary old and regulate independent functions, playing probably independent roles in poikilothermic species. With the advent of homeothermy, TH acquired a new role, which is to stimulate thermogenic mechanisms and synergize with the sympathoadrenal system to produce heat and maintain body temperature. An important part of this new function is mediated through coordinated and, most of the time, synergistic interactions with the sympathoadrenal system. Catecholamines can in turn activate TH in a tissue-specific manner, most notably in brown adipose tissue. Such interactions are of great adaptive value in cold adaptation and in states needing high-energy output. Conversely, in states of emergency where energy demand should be reduced, such as disease and starvation, both systems are turned down. In pathological states, where one of the systems is fixed at a high or a low level, coordination is lost with disruption of the physiology and development of symptoms. Exaggerated responses to catecholamines dominate the manifestations of thyrotoxicosis, while hypothyroidism is characterized by a narrowing of adaptive responses (e.g., thermogenic, cardiovascular, and lipolytic). Finally, emerging results suggest the possibility that disrupted interactions between the two systems contribute to explain metabolic variability, for example, fuel efficiency, energy expenditure, and lipolytic responses.
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments of model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
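The procedure above can be illustrated with a one-dimensional sketch. The stand-in function and numbers are assumptions, not the slope-stability model: fit a quadratic response surface to four "expensive" code calculations, then run the repetitive statistical sampling on the cheap surrogate.

```python
import numpy as np

def expensive_code(x):                        # stand-in for the long-running code
    return np.sin(x) + 0.5 * x

x_runs = np.array([0.0, 1.0, 2.0, 3.0])       # only four code calculations
y_runs = expensive_code(x_runs)
coeffs = np.polyfit(x_runs, y_runs, deg=2)    # the quadratic response surface

rng = np.random.default_rng(0)
x_mc = rng.normal(1.5, 0.3, size=10_000)      # random input parameter
y_mc = np.polyval(coeffs, x_mc)               # 10,000 "code runs" on the surrogate
```

Evaluating the polynomial is essentially free, so the Monte Carlo statistics (mean, tail probabilities, etc.) of y_mc come at the cost of only the four original code calculations.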
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Amy N.; Wendt, Fabian; Jonkman, Jason M.
This paper summarizes the findings from Phase Ib of the Offshore Code Comparison, Collaboration, Continued with Correlation (OC5) project. OC5 is a project run under the International Energy Agency (IEA) Wind Research Task 30, and is focused on validating the tools used for modelling offshore wind systems through the comparison of simulated responses of select offshore wind systems (and components) to physical test data. For Phase Ib of the project, simulated hydrodynamic loads on a flexible cylinder fixed to a sloped bed were validated against test measurements made in the shallow water basin at the Danish Hydraulic Institute (DHI) with support from the Technical University of Denmark (DTU). The first phase of OC5 examined two simple cylinder structures (Phase Ia and Ib) to focus on validation of hydrodynamic models used in the various tools before moving on to more complex offshore wind systems and the associated coupled physics. As a result, verification and validation activities such as these lead to improvement of offshore wind modelling tools, which will enable the development of more innovative and cost-effective offshore wind designs.
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
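The bandwidth-constrained tradeoff this abstract describes can be illustrated with a minimal sketch. The Gaussian-source distortion model, the residual-error penalty, and the candidate RCPC-style code rates below are illustrative assumptions, not the paper's actual scheme:

```python
import math

def end_to_end_distortion(source_rate, code_rate, snr_db):
    """Illustrative end-to-end distortion: Gaussian-source model
    D(R) = 2^(-2R) plus a crude residual-error penalty that switches
    on when the channel code rate exceeds AWGN capacity."""
    capacity = 0.5 * math.log2(1.0 + 10.0 ** (snr_db / 10.0))  # bits/use
    d_source = 2.0 ** (-2.0 * source_rate)
    excess = max(0.0, code_rate - capacity)
    d_channel = 1.0 - math.exp(-20.0 * excess) if excess > 0.0 else 0.0
    return d_source + d_channel

def best_split(total_bandwidth, snr_db,
               code_rates=(1/3, 1/2, 2/3, 3/4, 8/9)):
    """Pick the channel code rate minimizing end-to-end distortion at a
    fixed transmission bandwidth: source rate = code rate * bandwidth."""
    return min((end_to_end_distortion(r * total_bandwidth, r, snr_db), r)
               for r in code_rates)
```

Under this toy model, a noisy channel favors strong error protection (a low code rate), while a clean channel favors spending the fixed bandwidth on source bits, which is the qualitative tradeoff the operational rate-distortion analysis formalizes.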
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model the performance of wave energy converters in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion in 6 degrees of freedom using the Cummins time-domain impulse response formulation. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off system which can be used to generate or absorb wave energy.
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
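The Cummins impulse-response formulation that WEC-Sim solves can be sketched in one degree of freedom. The retardation kernel, coefficients, and forcing below are illustrative assumptions, not WEC-Sim's actual hydrodynamic data:

```python
import math

def simulate_heave(mass, added_mass_inf, stiffness, kernel, f_exc, dt, n_steps):
    """Explicit-Euler stepping of a 1-DOF Cummins equation:
    (M + A_inf) a(t) + int_0^t K(t - tau) v(tau) dtau + C x(t) = F(t).
    `kernel` is the radiation impulse response K(t) and `f_exc` the wave
    excitation force; both are callables, and all values are illustrative."""
    x, v = 0.0, 0.0
    vel_hist, xs = [], []
    for n in range(n_steps):
        t = n * dt
        vel_hist.append(v)
        # Radiation "memory" term: discrete convolution of K with velocity.
        # O(n^2) here; production codes use state-space approximations.
        memory = dt * sum(kernel((n - m) * dt) * vel_hist[m]
                          for m in range(n + 1))
        a = (f_exc(t) - memory - stiffness * x) / (mass + added_mass_inf)
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs
```

A production code derives K(t) from BEM radiation-damping coefficients (e.g., via BEMIO) and uses a more accurate integrator; this sketch only shows the structure of the convolution memory term in the equations of motion.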
NASA Astrophysics Data System (ADS)
Meigo, S.
1997-02-01
The response functions and detection efficiencies of an NE213 liquid scintillator were measured for neutrons of 25, 30 and 65 MeV. Quasi-monoenergetic neutrons produced by the 7Li(p,n0,1) reaction were employed for the measurement, and the absolute flux of incident neutrons was determined within 4% accuracy using a proton recoil telescope. Response functions and detection efficiencies calculated with the Monte Carlo codes CECIL and SCINFUL were compared with the measured data. It was found that response functions calculated with SCINFUL agreed better with the experimental ones than those calculated with CECIL; however, the deuteron light output used in SCINFUL was too low. The response functions calculated with a revised SCINFUL agreed with the experimental ones quite well, even for the deuteron bump and the peak due to the C(n,d0) reaction. It was confirmed that the detection efficiencies calculated with the original and revised SCINFUL agreed with the experimental data within the experimental error, while those calculated with CECIL were about 20% higher in the energy region above 30 MeV.
A model for the accurate computation of the lateral scattering of protons in water
NASA Astrophysics Data System (ADS)
Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.
2016-02-01
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
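The core-plus-tail structure of such a pencil beam model can be sketched as follows. The exponential tail shape and all parameter values are illustrative assumptions, not the paper's Molière-based parametrization:

```python
import math

def lateral_profile(r, sigma, tail_amp, tail_range):
    """Illustrative pencil-beam lateral dose model: a Gaussian
    multiple-scattering core plus a two-parameter exponential tail
    standing in for the nuclear-interaction halo. r and tail_range
    are in cm; tail_amp is the relative tail weight. This is NOT the
    paper's exact Moliere-based parametrization."""
    core = math.exp(-r * r / (2.0 * sigma * sigma)) / (2.0 * math.pi * sigma ** 2)
    tail = tail_amp * math.exp(-r / tail_range) / (2.0 * math.pi * tail_range ** 2)
    return core + tail
```

The key design point mirrored here is that the electromagnetic core is computed from theory with no free parameters, while only the nuclear-halo tail carries adjustable parameters fitted to Monte Carlo data.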
Description of a Generalized Analytical Model for the Micro-dosimeter Response
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Xapsos, Michael A.; Shinn, Judy L.; Wilson, John W.; Hunter, Abigail
2007-01-01
An analytical prediction capability for space radiation in Low Earth Orbit (LEO), correlated with the Space Transportation System (STS) Shuttle Tissue Equivalent Proportional Counter (TEPC) measurements, is presented. The model takes into consideration the energy loss straggling and chord length distribution of the TEPC detector, and is capable of predicting energy deposition fluctuations in a micro-volume by incoming ions through both direct and indirect ionic events. The charged particle transport calculations correlated with STS 56, 51, 110 and 114 flights are accomplished by utilizing the most recent version (2005) of the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which has been extensively validated with laboratory beam measurements and available space flight data. The agreement between the TEPC model prediction (response function) and the TEPC measured differential and integral spectra in the lineal energy (y) domain is promising.
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
...reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge of applying both MIMO and ECC in wireless systems is... illustrates the performance of coded LR-aided detectors.
Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT
NASA Technical Reports Server (NTRS)
Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.
2015-01-01
This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and a flight experiment. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.
32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas
Code of Federal Regulations, 2011 CFR
2011-07-01
...) intermediate/direct/general maintenance performed by fixed activities that are not designed for deployment to combat areas and that provide direct support of organizations performing or designed to perform combat... commercial activities that are especially designed and constructed for the low-cost and efficient storage and...
32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas
Code of Federal Regulations, 2010 CFR
2010-07-01
...) intermediate/direct/general maintenance performed by fixed activities that are not designed for deployment to combat areas and that provide direct support of organizations performing or designed to perform combat... commercial activities that are especially designed and constructed for the low-cost and efficient storage and...
32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas
Code of Federal Regulations, 2012 CFR
2012-07-01
...) intermediate/direct/general maintenance performed by fixed activities that are not designed for deployment to combat areas and that provide direct support of organizations performing or designed to perform combat... commercial activities that are especially designed and constructed for the low-cost and efficient storage and...
32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas
Code of Federal Regulations, 2014 CFR
2014-07-01
...) intermediate/direct/general maintenance performed by fixed activities that are not designed for deployment to combat areas and that provide direct support of organizations performing or designed to perform combat... commercial activities that are especially designed and constructed for the low-cost and efficient storage and...
32 CFR Appendix A to Part 169a - Codes and Definitions of Functional Areas
Code of Federal Regulations, 2013 CFR
2013-07-01
...) intermediate/direct/general maintenance performed by fixed activities that are not designed for deployment to combat areas and that provide direct support of organizations performing or designed to perform combat... commercial activities that are especially designed and constructed for the low-cost and efficient storage and...
Surface micromachined counter-meshing gears discrimination device
Polosky, Marc A.; Garcia, Ernest J.; Allen, James J.
2000-12-12
A surface micromachined Counter-Meshing Gears (CMG) discrimination device which functions as a mechanically coded lock. Each of the two CMGs has a first portion of its perimeter devoted to continuous driving teeth that mesh with respective pinion gears. Each CMG also has a second portion of its perimeter devoted to regularly spaced discrimination gear teeth that extend outwardly on at least one of three levels of the CMG. The discrimination gear teeth are designed so as to pass each other without interference only if the correct sequence of partial rotations of the CMGs occurs in response to a coded series of rotations from the pinion gears. A 24-bit code is normally input to unlock the device. Once unlocked, the device provides a path for an energy or information signal to pass through the device. The device is designed to immediately lock up if any portion of the 24-bit code is incorrect.
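The locking logic can be sketched as a simple state machine. This software analogue is hypothetical and only mirrors the behavior described in the abstract: the mechanism advances on each correct bit and locks up permanently on the first wrong one:

```python
class CodedLock:
    """Hypothetical software analogue of the counter-meshing-gears lock:
    it advances only while each input bit matches the stored 24-bit
    code, and locks up permanently on the first wrong bit (no retries)."""

    def __init__(self, code_bits):
        assert len(code_bits) == 24
        self._code = list(code_bits)
        self._pos = 0
        self.jammed = False

    def input_bit(self, bit):
        if self.jammed or self._pos >= 24:
            return
        if bit == self._code[self._pos]:
            self._pos += 1      # gear teeth pass without interference
        else:
            self.jammed = True  # mechanical lock-up, by design irreversible

    @property
    def unlocked(self):
        # the signal path opens only after all 24 correct bits
        return (not self.jammed) and self._pos == 24
```

The immediate, irreversible jam on a wrong bit is the discrimination property: an attacker cannot search the code space incrementally.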
Assessing the mechanism of response in the retrosplenial cortex of good and poor navigators
Auger, Stephen D.; Maguire, Eleanor A.
2013-01-01
The retrosplenial cortex (RSC) is consistently engaged by a range of tasks that examine episodic memory, imagining the future, spatial navigation, and scene processing. Despite this, an account of its exact contribution to these cognitive functions remains elusive. Here, using functional MRI (fMRI) and multi-voxel pattern analysis (MVPA) we found that the RSC coded for the specific number of permanent outdoor items that were in view, that is, items which are fixed and never change their location. Moreover, this effect was selective, and was not apparent for other item features such as size and visual salience. This detailed detection of the number of permanent items in view was echoed in the parahippocampal cortex (PHC), although the two brain structures diverged when participants were divided into good and poor navigators. There was no difference in the responsivity of the PHC between the two groups, while significantly better decoding of the number of permanent items in view was possible from patterns of activity in the RSC of good compared to poor navigators. Within good navigators, the RSC also facilitated significantly better prediction of item permanence than the PHC. Overall, these findings suggest that the RSC in particular is concerned with coding the presence of every permanent item that is in view. This mechanism may represent a key building block for spatial and scene representations that are central to episodic memories and imagining the future, and could also be a prerequisite for successful navigation. PMID:24012136
Quantum Monte Carlo for atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy-eigenstates for 1--4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90--100% of the correlation energy is obtained) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
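The branching random walk underlying diffusion QMC can be illustrated on a one-dimensional harmonic oscillator (exact ground-state energy 0.5 in natural units). This sketch omits importance sampling, fixed nodes, and time-step extrapolation, all of which a production calculation such as the one described here requires:

```python
import math
import random

def diffusion_mc(n_walkers=400, dt=0.05, n_steps=1500, seed=1):
    """Toy diffusion Monte Carlo for the 1-D harmonic oscillator,
    V(x) = x^2 / 2. Walkers diffuse and branch with weight
    exp(-dt * (V - E_T)); the trial energy E_T is steered to hold the
    population steady, and its average estimates the ground state."""
    random.seed(seed)
    walkers = [random.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_trial = 0.0  # rough starting guess; the feedback loop corrects it
    energies = []
    for step in range(n_steps):
        new_walkers = []
        for x in walkers:
            x_new = x + random.gauss(0.0, math.sqrt(dt))     # diffusion
            w = math.exp(-dt * (0.5 * x_new * x_new - e_trial))
            n_copies = int(w + random.random())              # stochastic rounding
            new_walkers.extend([x_new] * min(n_copies, 3))   # cap the branching
        walkers = new_walkers or [0.0]
        # population control: nudge E_T toward the target walker count
        e_trial += 0.1 * math.log(n_walkers / len(walkers))
        if step >= n_steps // 2:
            energies.append(e_trial)
    return sum(energies) / len(energies)
```

With a fixed seed the averaged trial energy lands near the exact value 0.5, up to statistical noise and the finite time-step bias that real calculations extrapolate away.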
Isochronous (CW) Non-Scaling FFAGs: Design and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstone, C.; Berz, M.; Makino, K.
2010-11-04
The drive for higher beam power, high duty cycle, and reliable beams at reasonable cost has focused international attention and design effort on fixed field accelerators, notably Fixed-Field Alternating Gradient accelerators (FFAGs). High-intensity GeV proton drivers encounter duty cycle and space-charge limits in the synchrotron and machine size concerns in the weaker-focusing cyclotrons. A 10-20 MW proton driver is challenging, if even technically feasible, with conventional accelerators--with the possible exception of a SRF linac, which has a large associated cost and footprint. Recently, the concept of isochronous orbits has been explored and developed for nonscaling FFAGs using powerful new methodologies in FFAG accelerator design and simulation. The property of isochronous orbits enables the simplicity of fixed RF and, by tailoring a nonlinear radial field profile, the FFAG can remain isochronous beyond the energy reach of cyclotrons, well into the relativistic regime. With isochronous orbits, the machine proposed here has the high average current advantage and duty cycle of the cyclotron in combination with the strong focusing, smaller losses, and energy variability that are more typical of the synchrotron. This paper reports on these new advances in FFAG accelerator technology and presents advanced modeling tools for fixed-field accelerators unique to the code COSY INFINITY.
Rollet, S; Autischer, M; Beck, P; Latocha, M
2007-01-01
The response of a tissue equivalent proportional counter (TEPC) in a mixed radiation field with a neutron energy distribution similar to the radiation field at commercial flight altitudes has been studied. The measurements were done at the CERN-EU High-Energy Reference Field (CERF) facility, where a well-characterised radiation field is available for intercomparison. The TEPC instrument used by ARC Seibersdorf Research is filled with pure propane gas at low pressure and can be used to determine the lineal energy distribution of the energy deposition in a mass of gas equivalent to a 2 micrometre diameter volume of unit density tissue, of similar size to the nuclei of biological cells. The linearity of the detector response was checked in terms of both dose and dose rate, and dead-time effects were corrected for. The influence of the detector exposure location and orientation in the radiation field on the dose distribution was also studied as a function of the total dose. The microdosimetric distribution of the absorbed dose as a function of the lineal energy has been obtained and compared with the same distribution simulated with the FLUKA Monte Carlo transport code. The dose equivalent was calculated by folding this distribution with the quality factor as a function of linear energy transfer. The measured and simulated distributions are in good agreement. As a result of this study the detector is well characterised and, thanks also to the numerical simulations, its response is well understood; it is currently being used on board aircraft to evaluate the dose to aircrew caused by cosmic radiation.
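The final folding step can be sketched directly. Here the ICRP Publication 60 quality factor Q(L) is used, with LET standing in for the measured lineal energy (a common simplification; the paper folds the measured y-distribution), and the dose bins are illustrative:

```python
import math

def icrp60_quality_factor(L):
    """ICRP Publication 60 quality factor Q as a function of
    unrestricted LET, L, in keV/micrometre."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_equivalent(dose_bins):
    """Fold a binned absorbed-dose distribution [(L, dose_Gy), ...]
    (bins illustrative) with Q(L) to estimate dose equivalent in Sv."""
    return sum(dose * icrp60_quality_factor(L) for L, dose in dose_bins)
```

Low-LET (photon/electron) dose passes through with Q = 1, while the densely ionizing components that dominate at flight altitudes are weighted up, which is why the dose equivalent exceeds the absorbed dose in such mixed fields.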
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
Validation of a multi-layer Green's function code for ion beam transport
NASA Astrophysics Data System (ADS)
Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence
To meet the challenge of future deep space programs, an accurate and efficient engineering code is needed for analyzing the shielding requirements against high-energy galactic heavy-ion radiation. Consequently, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and, like its predecessor, is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and down shift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings, and also provides verification of the HZETRN propagator.
Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo
2018-01-01
Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio by which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic current. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost.
The generality of neuronal models and the recurrent network configuration used here suggest that the existence of an optimal E/I cell ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding. PMID:29773979
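The trade-off between mutual information and spiking cost can be illustrated with a toy binary-coding model, not the paper's Hodgkin-Huxley network; the firing probabilities, cost terms, and gain sweep below are all assumptions:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def efficiency(gain, noise=0.1, energy_per_spike=1.0, fixed_cost=0.2):
    """Toy neuron coding an equiprobable binary stimulus: it fires with
    probability p0 = noise when the stimulus is absent and
    p1 = min(1, noise + gain) when present. Returns
    (mutual information, energy cost, bits per unit energy)."""
    p0 = noise
    p1 = min(1.0, noise + gain)
    p_fire = 0.5 * (p0 + p1)
    mutual_info = h2(p_fire) - 0.5 * (h2(p0) + h2(p1))  # bits/symbol
    energy = fixed_cost + energy_per_spike * p_fire     # rest + spiking cost
    return mutual_info, energy, mutual_info / energy

# Sweep the response gain: information stops growing once p1 saturates,
# so bits per unit energy is maximized at the saturation point.
best_gain = max((g / 100.0 for g in range(1, 100)),
                key=lambda g: efficiency(g)[2])
```

In this toy model the efficiency peaks once the response saturates; the paper's recurrent network exhibits the richer result of a genuine interior optimum in the E/I synaptic ratio, but the same quantity, mutual information divided by energy cost, is what is being maximized.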
Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo
2018-01-01
Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission of neural networks. To understand this issue deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio by which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic current. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost.
The generality of neuronal models and the recurrent network configuration used here suggest that the existence of an optimal E/I cell ratio for highly efficient energy costs and information maximization is a potential principle for cortical circuit networks. We conducted numerical simulations and mathematical analysis to examine the energy efficiency of neural information transmission in a recurrent network as a function of the ratio of excitatory and inhibitory synaptic connections. We obtained a general solution showing that there exists an optimal E/I synaptic ratio in a recurrent network at which the information transmission as well as the energy efficiency of this network achieves a global maximum. These results reflect general mechanisms for sensory coding processes, which may give insight into the energy efficiency of neural communication and coding.
Pitchiaya, Sethuramasundaram; Krishnan, Vishalakshi; Custer, Thomas C.; Walter, Nils G.
2013-01-01
Non-coding RNAs (ncRNAs) recently were discovered to outnumber their protein-coding counterparts, yet their diverse functions are still poorly understood. Here we report on a method for the intracellular Single-molecule High Resolution Localization and Counting (iSHiRLoC) of microRNAs (miRNAs), a conserved, ubiquitous class of regulatory ncRNAs that controls the expression of over 60% of all mammalian protein coding genes post-transcriptionally, by a mechanism shrouded by seemingly contradictory observations. We present protocols to execute single particle tracking (SPT) and single-molecule counting of functional microinjected, fluorophore-labeled miRNAs and thereby extract diffusion coefficients and molecular stoichiometries of micro-ribonucleoprotein (miRNP) complexes from living and fixed cells, respectively. This probing of miRNAs at the single molecule level sheds new light on the intracellular assembly/disassembly of miRNPs, thus beginning to unravel the dynamic nature of this important gene regulatory pathway and facilitating the development of a parsimonious model for their obscured mechanism of action. PMID:23820309
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
Uniaxial magnetic anisotropy energy of Fe wires embedded in carbon nanotubes.
Muñoz, Francisco; Mejía-López, Jose; Pérez-Acle, Tomas; Romero, Aldo H
2010-05-25
In this work, we analyze the magnetic anisotropy energy (MAE) of Fe cylinders embedded within zigzag carbon nanotubes, by means of ab initio calculations. To assess the influence of confinement, we fix the Fe cylinder diameter and follow the changes of the MAE as a function of the diameter of the nanotube that contains the Fe cylinder. We find that the easy axis changes from parallel to perpendicular with respect to the cylinder axis. The orientation change depends quite strongly on the confinement, which indicates a nontrivial dependence of the magnetization direction as a function of the nanotube diameter. We also find that the MAE is affected by where the Fe cylinder sits with respect to the carbon nanotube, and the coupling between these two structures could also dominate the magnetic response. We analyze the thermal stability of the magnetization orientation of the Fe cylinder close to room temperature.
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space-energy neutronics problem, and the end-of-period nuclide concentrations are determined from the necessary input information. A steady-state, continuous fueling model is treated in addition to the usual fixed-fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also gives the user the option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine-scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
Reinforcer magnitude and demand under fixed-ratio schedules with domestic hens.
Grant, Amber A; Foster, T Mary; Temple, William; Jackson, Surrey; Kinloch, Jennifer; Poling, Alan
2014-03-01
This study compared three methods of normalizing demand functions to allow comparison of demand for different commodities and examined how varying reinforcer magnitudes affected these analyses. Hens responded under fixed-ratio schedules in 40-min sessions with response requirement doubling each session and with 2-s, 8-s, and 12-s access to wheat. Over the smaller fixed ratios overall response rates generally increased and were higher the shorter the magazine duration. The logarithms of the number of reinforcers obtained (consumption) and the fixed ratio (price) were well fitted by curvilinear demand functions (Hursh et al., 1988. Journal of the Experimental Analysis of Behavior 50, 419-440) that were inelastic (b negative) over small fixed-ratios. The fixed ratio with maximal response rate (Pmax) increased, and the rate of change of elasticity (a) and initial consumption (L) decreased with increased magazine duration. Normalizing consumption using measures of preference for various magazine durations (3-s vs. 3-s, 2-s vs. 8-s, and 2-s vs. 12-s), obtained using concurrent schedules, gave useful results as it removed the differences in L. Normalizing consumption and price (Hursh and Winger, 1995. Journal of the Experimental Analysis of Behavior 64, 373-384) unified the data functions as intended by that analysis. The exponential function (Hursh and Silberberg, 2008. Psychological Review, 115, 186-198) gave an essential value that increased (i.e., α decreased significantly) as magazine duration decreased. This was not as predicted, since α should be constant over variations in magazine duration, but is similar to previous findings using a similar procedure with different food qualities (hens) and food quantities (rats). Copyright © 2014 Elsevier B.V. All rights reserved.
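The exponential demand equation of Hursh and Silberberg (2008), log₁₀Q = log₁₀Q₀ + k(e^(−αQ₀C) − 1), can be fitted to consumption data in a few lines. The sketch below uses synthetic data and a brute-force grid fit purely for illustration; the parameter values and fitting method are assumptions, not the authors' analysis.

```python
import numpy as np

K = 4.0  # range constant k, held fixed across conditions as in Hursh & Silberberg

def model(C, logQ0, alpha):
    # log10 consumption as a function of price C (here, the fixed-ratio value)
    return logQ0 + K * (np.exp(-alpha * (10.0**logQ0) * C) - 1.0)

# doubling fixed-ratio "prices", as in the session design described above
price = np.array([1, 2, 4, 8, 16, 32, 64, 128.0])
rng = np.random.default_rng(1)
logQ = model(price, np.log10(120.0), 0.0015) + rng.normal(0, 0.02, price.size)

# brute-force least squares over a parameter grid (an illustration, not production fitting)
logQ0_grid = np.linspace(1.5, 2.5, 201)
alpha_grid = np.linspace(1e-4, 5e-3, 201)
L0, A = np.meshgrid(logQ0_grid, alpha_grid)
sse = ((model(price[:, None, None], L0, A) - logQ[:, None, None])**2).sum(axis=0)
i, j = np.unravel_index(sse.argmin(), sse.shape)
best_logQ0, best_alpha = L0[i, j], A[i, j]
print(round(10**best_logQ0), round(best_alpha, 5))   # recovered Q0 and alpha
```

A smaller fitted α corresponds to a greater "essential value" of the reinforcer, which is the quantity the abstract reports changing with magazine duration.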
MURI: Adaptive Waveform Design for Full Spectral Dominance
2011-03-11
a three-dimensional urban tracking model, based on the nonlinear measurement model (that uses the urban multipath geometry with different types of ... the time evolution of the scattering function with a high-dimensional dynamic system; a multiple particle filter technique is used to sequentially ... integration of space-time coding with a fixed set of beams. It complements the
Hu, Yu; Zylberberg, Joel; Shea-Brown, Eric
2014-01-01
Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all. PMID:24586128
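A minimal numerical illustration of the sign rule uses the linear Fisher information I = f′ᵀΣ⁻¹f′ for two neurons with identical tuning derivatives (hence positive signal correlation); the specific covariance values below are illustrative only, not taken from the paper.

```python
import numpy as np

def linear_fisher_info(df, Sigma):
    # I = f'(s)^T Sigma^{-1} f'(s): accuracy of the optimal linear stimulus decoder
    return float(df @ np.linalg.solve(Sigma, df))

df = np.array([1.0, 1.0])          # similar tuning derivatives -> positive signal correlation

def cov(rho):
    # unit-variance noise with pairwise correlation rho
    return np.array([[1.0, rho], [rho, 1.0]])

I_indep = linear_fisher_info(df, cov(0.0))    # independent noise
I_same  = linear_fisher_info(df, cov(+0.5))   # noise corr. same sign as signal corr.
I_opp   = linear_fisher_info(df, cov(-0.5))   # opposite sign: the sign-rule case
print(I_indep, I_same, I_opp)                 # → 2.0, ~1.333, 4.0
```

Noise correlation of the opposite sign to the signal correlation doubles the information relative to the independent case, while same-sign correlation reduces it, which is exactly the sign rule stated in the abstract.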
del Val, Coral; Rivas, Elena; Torres-Quesada, Omar; Toro, Nicolás; Jiménez-Zurdo, José I
2007-01-01
Bacterial small non-coding RNAs (sRNAs) are being recognized as novel widespread regulators of gene expression in response to environmental signals. Here, we present the first search for sRNA-encoding genes in the nitrogen-fixing endosymbiont Sinorhizobium meliloti, performed by a genome-wide computational analysis of its intergenic regions. Comparative sequence data from eight related α-proteobacteria were obtained, and the interspecies pairwise alignments were scored with the programs eQRNA and RNAz as complementary predictive tools to identify conserved and stable secondary structures corresponding to putative non-coding RNAs. Northern experiments confirmed that eight of the predicted loci, selected among the original 32 candidates as most probable sRNA genes, expressed small transcripts. This result supports the combined use of eQRNA and RNAz as a robust strategy to identify novel sRNAs in bacteria. Furthermore, seven of the transcripts accumulated differentially in free-living and symbiotic conditions. Experimental mapping of the 5′-ends of the detected transcripts revealed that their encoding genes are organized in autonomous transcription units with recognizable promoter and, in most cases, termination signatures. These findings suggest novel regulatory functions for sRNAs related to the interactions of α-proteobacteria with their eukaryotic hosts. PMID:17971083
New adjustable gains for second-order sliding mode control of a saturated DFIG-based wind turbine
NASA Astrophysics Data System (ADS)
Bounadja, E.; Djahbar, A.; Taleb, R.; Boudjema, Z.
2017-02-01
The control of the doubly-fed induction generator (DFIG) used in wind energy conversion has received a great deal of interest. Frequently, this control has been designed while ignoring the magnetic saturation effect in the DFIG model. The aim of the present work is twofold: first, the magnetic saturation effect is accounted for in the control design model; second, a new second-order sliding mode control scheme using adjustable gains (AG-SOSMC) is proposed to control the DFIG via its rotor-side converter. This scheme allows the independent control of the generated active and reactive power. Conventionally, second-order sliding mode control (SOSMC) applied to the DFIG utilizes the super-twisting algorithm with fixed gains. In the proposed AG-SOSMC, a simple means by which the controller can adjust its behavior is used: a linear function represents the variation in gain as a function of the absolute value of the discrepancy between the reference rotor current and its measured value. The transient DFIG speed response obtained with this scheme is compared with that of the conventional SOSMC controller with fixed gains. Simulation results show that accurate dynamic performance, a quicker transient response, and more accurate control are achieved for different operating conditions.
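The super-twisting structure with gains that grow linearly with the error magnitude can be sketched on a toy first-order sliding variable; this simulation is an assumption-laden illustration of the idea, not the paper's DFIG model (the gain values and disturbance are made up).

```python
import numpy as np

def super_twisting_ag(T=5.0, dt=1e-3, k1_0=3.0, k2_0=3.0, c=2.0):
    """Sliding variable s with matched disturbance d(t), driven to zero by a
    super-twisting controller whose gains grow linearly with |s| (adjustable gains):
        u = -k1(s) |s|^(1/2) sign(s) + v,   v' = -k2(s) sign(s)."""
    n = int(T / dt)
    s, v = 1.0, 0.0
    hist = np.empty(n)
    for i in range(n):
        t = i * dt
        d = 0.5 * np.sin(5.0 * t)              # bounded disturbance, |d'| <= 2.5
        k1 = k1_0 + c * abs(s)                 # linear gain adjustment with error size
        k2 = k2_0 + c * abs(s)
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -k2 * np.sign(s) * dt
        s += (u + d) * dt                      # simple Euler step of s' = u + d
        hist[i] = s
    return hist

hist = super_twisting_ag()
print(abs(hist[-1000:]).max())   # residual |s| over the last simulated second
```

The base gains are chosen to satisfy the usual super-twisting sufficient conditions for the assumed disturbance bound; the adjustable term merely speeds up the reaching phase when the error is large.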
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is described. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
NASA Astrophysics Data System (ADS)
Lis, M.; Gómez-Ros, J. M.; Bedogni, R.; Delgado, A.
2008-01-01
The design of a neutron detector with spectrometric capability, based on thermoluminescent (TL) 6LiF:Ti,Mg (TLD-600) dosimeters located along three perpendicular axes within a single polyethylene (PE) sphere, has been analyzed. The neutron response functions have been calculated in the energy range from 10^-8 to 100 MeV with the Monte Carlo (MC) code MCNPX 2.5, and their shape and behaviour have been used to discuss a suitable configuration for an actual instrument. The feasibility of such a device has been preliminarily evaluated by simulating exposure to 241Am-Be, bare 252Cf, and Fe-PE moderated 252Cf sources. The expected accuracy in the evaluation of energy quantities has been assessed using the unfolding code FRUIT. The obtained results, together with additional calculations performed using the MAXED and GRAVEL codes, show the spectrometric capability of the proposed design for radiation protection applications, especially in the range 1 keV-20 MeV.
Physiologic and perceptual responses during treadmill running with ankle weights.
Bhambhani, Y N; Gomes, P S; Wheeler, G
1990-03-01
This study examined the effects of ankle weighting on physiologic and perceptual responses during treadmill running in seven healthy, female recreational runners with a mean maximal aerobic power of 48.4 ± 4.0 ml/kg/min. Each subject completed four experimental one-mile runs at individually selected treadmill running speeds with 0, 1.6, 3.2 and 4.8 kg weights on their ankles. The subjects selected a speed at which they would run (train) if their objectives were to significantly improve cardiovascular function and induce weight loss. Metabolic and cardiovascular responses were continuously monitored, and ratings of perceived exertion were recorded near the end of the activity. During the unweighted run, the subjects selected a running speed of 6.87 ± 0.63 mph, which resulted in a net energy expenditure of 0.153 kcal/kg/min or 1.34 ± 0.16 kcal/kg/mile. This corresponded to a training intensity of 76.3% ± 5.1% of maximum oxygen consumption or 88.1% ± 9.7% of maximum heart rate. Addition of weight to the ankles caused a significant decrease (p < .05) in the selected running speed and, therefore, did not result in any significant changes (p > .05) in the rate of oxygen consumption, heart rate, or ratings of perceived exertion when compared to the unweighted condition. These observations are in contrast to previous studies on ankle weighting, which were conducted at fixed treadmill running speeds. However, the use of ankle weights did tend to increase the gross and net energy expenditure of running when values were expressed in kcal/mile, because of the slower self-selected running speeds under these conditions. This increase in energy expenditure could be of physiologic significance if running with ankle weights were performed on a regular basis over a fixed distance.
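The per-mile figure follows from the per-minute cost and the self-selected speed; a quick check reproduces the reported 1.34 kcal/kg/mile and shows why a slower weighted speed raises the per-mile cost (the 6.0 mph weighted speed below is an illustrative value, not one reported in the study).

```python
def kcal_per_kg_per_mile(kcal_per_kg_per_min, speed_mph):
    # minutes needed to cover one mile, times the per-minute net energy cost
    return kcal_per_kg_per_min * 60.0 / speed_mph

print(round(kcal_per_kg_per_mile(0.153, 6.87), 2))   # → 1.34, matching the reported value
# the same per-minute cost at a slower self-selected speed costs more per mile:
print(round(kcal_per_kg_per_mile(0.153, 6.0), 2))    # → 1.53
```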
The Nuclear Energy Knowledge and Validation Center Summary of Activities Conducted in FY16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans David
The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy (DOE) and Idaho National Laboratory (INL) to coordinate and focus the resources and expertise that exist within the DOE toward solving issues in modern nuclear code validation and knowledge management. In time, code owners, users, and developers will view the NEKVaC as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, providing guidance in planning and executing experiments, facilitating access to and maximizing the usefulness of existing data, and preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the NEKVaC covers many interrelated activities that will need to be cultivated carefully in the near term and managed properly once the NEKVaC is fully functional. Three areas comprise the principal mission: (1) identify and prioritize projects that extend the field of validation science and its application to modern codes, (2) develop and disseminate best practices and guidelines for high-fidelity multiphysics/multiscale analysis code development and associated experiment design, and (3) define protocols for data acquisition and knowledge preservation and provide a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are interdependent and complementary. Likewise, all activities supported by the NEKVaC, both near term and long term, must possess elements supporting all three areas. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become “stove piped” (i.e., focused on a specific function such that the activity itself becomes the objective rather than achieving the larger vision).
This report begins with a description of the mission areas; specifically, the role played by each major committee and the types of activities for which they are responsible. It then lists and describes the proposed near-term tasks upon which future efforts can build.
10 CFR 73.46 - Fixed site physical protection systems, subsystems, components, and procedures.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., components, and procedures. 73.46 Section 73.46 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) PHYSICAL... Energy couriers engaged in the transport of special nuclear material. The search function for detection... of Energy vehicles engaged in transporting special nuclear material and emergency vehicles under...
10 CFR 73.46 - Fixed site physical protection systems, subsystems, components, and procedures.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., components, and procedures. 73.46 Section 73.46 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) PHYSICAL... Energy couriers engaged in the transport of special nuclear material. The search function for detection... of Energy vehicles engaged in transporting special nuclear material and emergency vehicles under...
Effects from the Reduction of Air Leakage on Energy and Durability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hun, Diana E.; Childs, Phillip W.; Atchley, Jerald Allen
2014-01-01
Buildings are responsible for approximately 40% of the energy used in the US. Codes have been increasing building envelope requirements, and in particular those related to improving airtightness, in order to reduce energy consumption. The main goal of this research was to evaluate the effects from reductions in air leakage on energy loads and material durability. To this end, we focused on the airtightness and thermal resistance criteria set by the 2012 International Energy Conservation Code (IECC).
ERIC Educational Resources Information Center
Gordon, Wanda; Sork, Thomas J.
2001-01-01
Replicating an Indiana study, 261 responses from British Columbia adult educators revealed a high degree of support for codes of ethics and identified ethical dilemmas in practice. Half currently operated under a code. Responses to whether codes should have a regulatory function were mixed. (Contains 44 references.) (SK)
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
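The comparison described above can be reproduced in miniature for a purely linear unfold: propagate the 5% Gaussian data imprecision analytically through an error (covariance) matrix and, independently, through repeated Monte Carlo unfolds of perturbed data sets. The response functions and spectrum below are invented toy stand-ins, not UFO's.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy overlapping response functions (channels x energy bins) and a smooth source
E = np.linspace(0.0, 1.0, 8)
R = np.array([np.exp(-((E - c) / 0.25)**2) for c in np.linspace(0.1, 0.9, 12)])
s_true = np.exp(-E)                       # arbitrary smooth "spectrum"
d0 = R @ s_true                           # noise-free channel data
sigma = 0.05 * d0                         # 5% (1 s.d.) data imprecision, as in the paper

pinv = np.linalg.pinv(R)                  # least-squares unfold operator
# error-matrix estimate: Cov(s_hat) = pinv @ diag(sigma^2) @ pinv.T
err_analytic = np.sqrt(np.diag(pinv @ np.diag(sigma**2) @ pinv.T))

# Monte Carlo estimate: unfold many data sets perturbed by Gaussian deviates
unfolds = np.array([pinv @ (d0 + rng.normal(0.0, sigma)) for _ in range(2000)])
err_mc = unfolds.std(axis=0, ddof=1)
print(np.max(np.abs(err_mc / err_analytic - 1.0)))  # largest relative disagreement
```

Because the unfold here is linear, the two estimates agree to within Monte Carlo statistics; nonlinearity or an underdetermined system is what makes the Monte Carlo route informative beyond the error matrix.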
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet studies that measure the savings from energy codes assume that the requirements are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned, and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned, and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
NASA Astrophysics Data System (ADS)
Gilchrist, S. A.; Braun, D. C.; Barnes, G.
2016-12-01
Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin ( Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the method in general and our implementation in code in particular.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.
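The fission source iteration used to solve a fixed-source problem can be sketched in matrix form: with a subcritical fission operator F and a fixed (e.g., delayed-neutron) source q, iterate s ← q + F s until convergence. The operator and source below are toy values for illustration, not the PCQS implementation.

```python
import numpy as np

# fixed-source problem s = q + F s, solved by fission-source iteration
F = np.array([[0.3, 0.2, 0.0],
              [0.1, 0.3, 0.2],
              [0.0, 0.2, 0.3]])   # subcritical "fission" operator (spectral radius < 1)
q = np.array([1.0, 0.5, 0.25])    # external / delayed-neutron source (illustrative)

s = np.zeros(3)
for _ in range(200):
    s_new = q + F @ s
    converged = np.abs(s_new - s).max() < 1e-12
    s = s_new
    if converged:
        break

# the iteration is the Neumann series for (I - F)^{-1} q, so it must match a direct solve
print(s, np.allclose(s, np.linalg.solve(np.eye(3) - F, q)))
```

Convergence is geometric at a rate set by the spectral radius of F, which is why subcriticality of the transient fixed-source problem matters.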
Hutsell, Blake A.; Newland, M. Christopher
2013-01-01
Previous studies of inbred mouse strains have shown reinforcer-strain interactions that may potentially mask differences among strains in memory performance. The present research examined the effects of two qualitatively different reinforcers (a heterogeneous mix of flavored pellets and sweetened condensed milk) on responding maintained by fixed-ratio schedules of reinforcement in three inbred strains of mice (BALB/c, C57BL/6, and DBA/2). Response rates for all strains were a bitonic (inverted-U) function of the size of the fixed-ratio schedule and were generally higher when responding was maintained by milk. For the DBA/2 and C57BL/6, and to a lesser extent the BALB/c, milk primarily increased response rates at moderate fixed ratios, but not at the largest fixed ratios tested. A formal model of ratio-schedule performance, Mathematical Principles of Reinforcement (MPR), was applied to the response rate functions of individual mice. According to MPR, the differences in response rates maintained by pellets and milk were mostly due to changes in motoric processes, as indicated by changes in the minimum response time (δ) produced by each reinforcer type, and not specific activation (a), a model term that represents value and is correlated with reinforcer magnitude and the break point obtained under progressive-ratio schedules. In addition, MPR also revealed that a parameter interpreted as the rate of saturation of working memory (λ), although affected by reinforcer type, differed among the strains. PMID:23357283
Resurrection of DNA Function In Vivo from an Extinct Genome
Pask, Andrew J.; Behringer, Richard R.; Renfree, Marilyn B.
2008-01-01
There is a burgeoning repository of information available from ancient DNA that can be used to understand how genomes have evolved and to determine the genetic features that defined a particular species. To assess the functional consequences of changes to a genome, a variety of methods are needed to examine extinct DNA function. We isolated a transcriptional enhancer element from the genome of an extinct marsupial, the Tasmanian tiger (Thylacinus cynocephalus or thylacine), obtained from 100-year-old ethanol-fixed tissues from museum collections. We then examined the function of the enhancer in vivo. Using a transgenic approach, it was possible to resurrect DNA function in transgenic mice. The results demonstrate that the thylacine Col2A1 enhancer directed chondrocyte-specific expression in this extinct mammalian species in the same way as its orthologue does in mice. While other studies have examined extinct coding DNA function in vitro, this is the first example of the restoration of extinct non-coding DNA and examination of its function in vivo. Our method using transgenesis can be used to explore the function of regulatory and protein-coding sequences obtained from any extinct species in an in vivo model system, providing important insights into gene evolution and diversity. PMID:18493600
Analytical response function for planar Ge detectors
NASA Astrophysics Data System (ADS)
García-Alvarez, Juan A.; Maidana, Nora L.; Vanin, Vito R.; Fernández-Varea, José M.
2016-04-01
We model the response function (RF) of planar HPGe x-ray spectrometers for photon energies between around 10 keV and 100 keV. The RF is based on the proposal of Seltzer [1981. Nucl. Instrum. Methods 188, 133-151] and takes into account the full-energy absorption in the Ge active volume, the escape of Ge Kα and Kβ x-rays and the escape of photons after one Compton interaction. The relativistic impulse approximation is employed instead of the Klein-Nishina formula to describe incoherent photon scattering in the Ge crystal. We also incorporate a simple model for the continuous component of the spectrum produced by the escape of photo-electrons from the active volume. In our calculations we include external interaction contributions to the RF: (i) the incoherent scattering effects caused by the detector's Be window and (ii) the spectrum produced by photo-electrons emitted in the Ge dead layer that reach the active volume. The analytical RF model is compared with pulse-height spectra simulated using the PENELOPE Monte Carlo code.
Understanding Coronal Heating through Time-Series Analysis and Nanoflare Modeling
NASA Astrophysics Data System (ADS)
Romich, Kristine; Viall, Nicholeen
2018-01-01
Periodic intensity fluctuations in coronal loops, a signature of temperature evolution, have been observed using the Atmospheric Imaging Assembly (AIA) aboard NASA’s Solar Dynamics Observatory (SDO) spacecraft. We examine the proposal that nanoflares, or impulsive bursts of energy release in the solar atmosphere, are responsible for the intensity fluctuations as well as the megakelvin-scale temperatures observed in the corona. Drawing on the work of Cargill (2014) and Bradshaw & Viall (2016), we develop a computer model of the energy released by a sequence of nanoflare events in a single magnetic flux tube. We then use EBTEL (Enthalpy-Based Thermal Evolution of Loops), a hydrodynamic model of plasma response to energy input, to simulate intensity as a function of time across the coronal AIA channels. We test the EBTEL output for periodicities using a spectral code based on Mann and Lees’ (1996) multitaper method and present preliminary results here. Our ultimate goal is to establish whether quasi-continuous or impulsive energy bursts better approximate the original SDO data.
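A periodicity test of this kind can be sketched with a plain FFT periodogram on a synthetic light curve. Note the paper uses the multitaper method of Mann and Lees; the cadence, period, and noise level below are assumptions chosen only to make the example concrete.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 12.0                                  # assumed AIA-like 12 s cadence
t = np.arange(0, 6 * 3600, dt)             # six hours of synthetic intensity samples
period = 1800.0                            # injected 30-minute intensity fluctuation
signal = 1.0 + 0.1 * np.sin(2 * np.pi * t / period) + 0.05 * rng.normal(size=t.size)

x = signal - signal.mean()                 # remove the DC level before transforming
power = np.abs(np.fft.rfft(x))**2          # simple (non-multitaper) periodogram
freq = np.fft.rfftfreq(t.size, d=dt)
peak = freq[1:][power[1:].argmax()]        # skip the zero-frequency bin
print(round(1.0 / peak))                   # recovered period in seconds → 1800
```

A multitaper estimate would trade some frequency resolution for reduced variance and spectral leakage, which matters for the red-noise-dominated spectra typical of coronal intensity series.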
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, S.; Kroposki, B.; Kramer, W.
Integrating renewable energy and distributed generation into the Smart Grid architecture requires power electronics (PE) for energy conversion. The key to reaching successful Smart Grid implementation is to develop interoperable, intelligent, and advanced PE technology that improves and accelerates the use of distributed energy resource systems. This report describes the simulation, design, and testing of a single-phase DC-to-AC inverter developed to operate in both islanded and utility-connected mode. It provides results on both the simulations and the experiments conducted, demonstrating the ability of the inverter to provide advanced control functions such as power flow and VAR/voltage regulation. This report also analyzes two different techniques used for digital signal processor (DSP) code generation. Initially, the DSP code was written in the C programming language using Texas Instrument's Code Composer Studio. In a later stage of the research, the Simulink DSP toolbox was used to self-generate code for the DSP. The successful tests using Simulink self-generated DSP codes show promise for fast prototyping of PE controls.
Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA
de Souza, Alisson C. D.; Fernandes, Marcelo A. C.
2014-01-01
This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918
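The effect of fixed-point word length on the precision of an RBF neuron can be illustrated by rounding values to a chosen number of fractional bits and comparing against floating point. This software emulation is an illustration, not the System Generator implementation; which quantities are quantized (inputs, center, and output, with the width kept in floating point) is a simplifying assumption.

```python
import numpy as np

def to_fixed(x, frac_bits=12):
    # round to the nearest value representable with frac_bits fractional bits
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

def rbf_neuron(x, center, width, frac_bits):
    # Gaussian basis function with quantized input, center, and output
    x, center = to_fixed(x, frac_bits), to_fixed(center, frac_bits)
    return to_fixed(np.exp(-((x - center)**2) / (2 * width**2)), frac_bits)

x = np.linspace(-1, 1, 101)
exact = np.exp(-(x - 0.25)**2 / (2 * 0.4**2))
for bits in (8, 12, 16):
    approx = rbf_neuron(x, 0.25, 0.4, bits)
    print(bits, float(np.abs(approx - exact).max()))   # max error shrinks with word length
```

This mirrors the trade-off analyzed in the paper: fewer fractional bits shrink the FPGA area and latency but degrade the precision of the ANN response.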
Discrete geometric analysis of message passing algorithm on graphs
NASA Astrophysics Data System (ADS)
Watanabe, Yusuke
2010-04-01
We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, these computations are intractable in general because their cost grows exponentially with the problem size. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e., has no cycles, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and can exhibit oscillatory and non-convergent behaviors. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy, and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems, including the (non)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
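The sum-product (LBP) updates and their inexactness on a graph with cycles can be demonstrated on the smallest loopy example, a binary three-node cycle, where exact marginals are available by brute-force enumeration; the pairwise potentials below are random illustrative values.

```python
import numpy as np
from itertools import product

# 3-node binary cycle with random pairwise potentials phi_ij(x_i, x_j)
edges = [(0, 1), (1, 2), (2, 0)]
rng = np.random.default_rng(3)
phi = {e: np.exp(0.5 * rng.normal(size=(2, 2))) for e in edges}

def pot(i, j, xi, xj):
    # look up the potential regardless of the stored edge orientation
    return phi[(i, j)][xi, xj] if (i, j) in phi else phi[(j, i)][xj, xi]

# exact marginals p[i, x_i] by enumerating all 2^3 configurations
p, Z = np.zeros((3, 2)), 0.0
for x in product([0, 1], repeat=3):
    w = np.prod([pot(i, j, x[i], x[j]) for i, j in edges])
    Z += w
    for i in range(3):
        p[i, x[i]] += w
p /= Z

# loopy belief propagation: normalized messages m[(i, j)] from node i to node j
m = {(i, j): np.ones(2) for i, j in edges + [(j, i) for i, j in edges]}
for _ in range(100):
    new = {}
    for (i, j) in m:
        neigh = [k for k in range(3) if k != i and k != j]   # neighbors of i except j
        msg = np.array([sum(pot(i, j, xi, xj)
                            * np.prod([m[(k, i)][xi] for k in neigh])
                            for xi in (0, 1)) for xj in (0, 1)])
        new[(i, j)] = msg / msg.sum()
    m = new

# beliefs: product of incoming messages, renormalized per node
belief = np.array([[np.prod([m[(k, i)][xi] for k in range(3) if k != i])
                    for xi in (0, 1)] for i in range(3)])
belief /= belief.sum(axis=1, keepdims=True)
print(np.abs(belief - p).max())   # small, but generally nonzero on a loopy graph
```

On a single cycle with weak couplings the fixed point is unique and the iteration converges, yet the beliefs differ from the exact marginals; this residual error is exactly what the graph zeta function analysis in the thesis characterizes.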
Gaupels, Frank; Sarioglu, Hakan; Beckmann, Manfred; Hause, Bettina; Spannagl, Manuel; Draper, John; Lindermayr, Christian; Durner, Jörg
2012-01-01
In cucurbits, phloem latex exudes from cut sieve tubes of the extrafascicular phloem (EFP), serving in defense against herbivores. We analyzed inducible defense mechanisms in the EFP of pumpkin (Cucurbita maxima) after leaf damage. As an early systemic response, wounding elicited transient accumulation of jasmonates and a decrease in exudation probably due to partial sieve tube occlusion by callose. The energy status of the EFP was enhanced as indicated by increased levels of ATP, phosphate, and intermediates of the citric acid cycle. Gas chromatography coupled to mass spectrometry also revealed that sucrose transport, gluconeogenesis/glycolysis, and amino acid metabolism were up-regulated after wounding. Combining ProteoMiner technology for the enrichment of low-abundance proteins with stable isotope-coded protein labeling, we identified 51 wound-regulated phloem proteins. Two Sucrose-Nonfermenting1-related protein kinases and a 32-kD 14-3-3 protein are candidate central regulators of stress metabolism in the EFP. Other proteins, such as the Silverleaf Whitefly-Induced Protein1, Mitogen Activated Protein Kinase6, and Heat Shock Protein81, have known defensive functions. Isotope-coded protein labeling and western-blot analyses indicated that Cyclophilin18 is a reliable marker for stress responses of the EFP. As a hint toward the induction of redox signaling, we have observed delayed oxidation-triggered polymerization of the major Phloem Protein1 (PP1) and PP2, which correlated with a decline in carbonylation of PP2. In sum, wounding triggered transient sieve tube occlusion, enhanced energy metabolism, and accumulation of defense-related proteins in the pumpkin EFP. The systemic wound response was mediated by jasmonate and redox signaling. PMID:23085839
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, R. Navarro; Schunck, N.; Lasseri, R.
2017-03-09
HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of the nuclear energy Density Functional Theory (DFT), where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated in the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the third release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under the GPL 3 Open Source License and was given release code LLNL-CODE-573953.
Universality of fast quenches from the conformal perturbation theory
NASA Astrophysics Data System (ADS)
Dymarsky, Anatoly; Smolkin, Michael
2018-01-01
We consider global quantum quenches, a protocol in which a continuous field-theoretic system in the ground state is driven by a homogeneous time-dependent external interaction. When the typical inverse time scale of the interaction is much larger than all relevant scales except for the UV cutoff, the system's response exhibits universal scaling behavior. We provide both qualitative and quantitative explanations of this universality and argue that the physics of the response during and shortly after the quench is governed by conformal perturbation theory around the UV fixed point. We proceed to calculate the response of one- and two-point correlation functions, confirming and generalizing universal scalings found previously. Finally, we discuss late-time behavior after the quench and argue that all local quantities will equilibrate to their thermal values specified by the excess energy acquired by the system during the quench.
NASA Technical Reports Server (NTRS)
Chutjian, A.
1979-01-01
Geometries and focal properties are given for two types of electron-lens system commonly needed in electron scattering. One is an electron gun that focuses electrons from a thermionic emitter onto a fixed point (target) over a wide range of final energies. The other is an electron analyzer system that focuses scattered electrons of variable energy onto a fixed position (e.g., the entrance plane of an analyzer) at fixed energy with a zero final beam angle. Analyzer-system focusing properties are given for superelastically, elastically, and inelastically scattered electrons. Computer calculations incorporating recent accurate tube-lens focal properties are used to compute lens voltages, locations and diameters of all pupils and windows, filling factors, and asymptotic rays throughout each lens system. Focus voltages as a function of electron energy and energy change are given, and limits of operation of each system discussed. Both lens systems have been in routine use for several years, and good agreement has been consistently found between calculated and operating lens voltages.
Nested polynomial trends for the improvement of Gaussian process-based predictors
NASA Astrophysics Data System (ADS)
Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.
2017-10-01
The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches based on the computer code alone are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. For deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
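As a minimal sketch of the idea of combining a polynomial trend with a GP residual (plain universal kriging; the paper's nested-polynomial parametrization is not reproduced here), one can estimate the trend by generalized least squares and interpolate its residual with a squared-exponential kernel. All data, kernel, and parameter choices below are illustrative:

```python
import numpy as np

X = np.linspace(0.0, 1.0, 8)              # a small, fixed set of code evaluations
y = X**2 + 0.3 * np.sin(6.0 * X)          # toy deterministic code output

def k(a, b, ell=0.15):
    # Squared-exponential covariance between two sets of 1-D points.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

H = np.vander(X, 3, increasing=True)      # polynomial trend basis: 1, x, x^2
K = k(X, X) + 1e-8 * np.eye(len(X))       # small nugget for conditioning
Ki = np.linalg.inv(K)
# Generalized least-squares estimate of the trend coefficients.
beta = np.linalg.solve(H.T @ Ki @ H, H.T @ Ki @ y)

def predict(xs):
    # Kriging predictor: trend plus GP interpolation of the trend residual.
    Hs = np.vander(xs, 3, increasing=True)
    return Hs @ beta + k(xs, X) @ Ki @ (y - H @ beta)
```

The predictor reproduces the training data (up to the nugget) and interpolates between them; the paper's contribution is a richer, composed-polynomial choice of the mean function in place of the plain Vandermonde trend used here.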
State estimation for networked control systems using fixed data rates
NASA Astrophysics Data System (ADS)
Liu, Qing-Quan; Jin, Fang
2017-07-01
This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with a fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that disturbances play in the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
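The classical data-rate theorem, which this line of work refines, bounds the required channel rate by the unstable plant dynamics: the rate must exceed the sum of log2|λ| over eigenvalues with |λ| ≥ 1. A sketch with an illustrative plant (not one of the paper's examples):

```python
import numpy as np

# Illustrative LTI plant: one unstable mode (eigenvalue 2) and one stable mode.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])

# Classical lower bound on the data rate (bits per sample) for observability
# over a rate-limited channel: sum of log2|eigenvalue| over the unstable
# eigenvalues. The paper derives a less conservative refinement of this bound.
eigs = np.linalg.eigvals(A)
R_min = float(sum(np.log2(abs(l)) for l in eigs if abs(l) >= 1.0))
print(R_min)   # 1.0 bit per sample for this plant
```

Intuitively, each sample must carry enough bits to outrun the expansion of the estimation-error set along the unstable directions.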
Calibration of imaging plate detectors to mono-energetic protons in the range 1-200 MeV
NASA Astrophysics Data System (ADS)
Rabhi, N.; Batani, D.; Boutoux, G.; Ducret, J.-E.; Jakubowska, K.; Lantuejoul-Thfoin, I.; Nauraye, C.; Patriarca, A.; Saïd, A.; Semsoum, A.; Serani, L.; Thomas, B.; Vauzour, B.
2017-11-01
Responses of Fuji Imaging Plates (IPs) to protons have been measured in the range 1-200 MeV. Mono-energetic protons were produced with the 15 MV ALTO-Tandem accelerator of the Institute of Nuclear Physics (Orsay, France) and, at higher energies, with the 200-MeV isochronous cyclotron of the Institut Curie—Centre de Protonthérapie d'Orsay (Orsay, France). The experimental setups are described and the measured photo-stimulated luminescence responses for MS, SR, and TR IPs are presented and compared to existing data. For the interpretation of the results, a sensitivity model based on the Monte Carlo GEANT4 code has been developed. It enables the calculation of the response functions in a large energy range, from 0.1 to 200 MeV. Finally, we show that our model reproduces accurately the response of more complex detectors, i.e., stacks of high-Z filters and IPs, which could be of great interest for diagnostics of Petawatt laser accelerated particles.
77 FR 11517 - Rapid Response Team for Transmission
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
...: Office of Electricity Delivery and Energy Reliability, Department of Energy, DoE. ACTION: Request for information. SUMMARY: The Department of Energy's Office of Electricity Delivery and Energy Reliability is... Electricity Delivery and Energy Reliability, Mail Code: OE-20, U.S. Department of Energy, 1000 Independence...
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding was adopted for interplanetary space missions, employing a convolutional inner code and a Reed-Solomon (RS) outer code for spacecraft telemetry. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is performed bit-serially and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
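The multiplications in question are over a nonbinary finite field, typically GF(2^8) for RS telemetry codes. A bit-serial multiply, one shift and conditional XOR per bit, is easy to sketch; the field polynomial 0x11d below is a common illustrative choice, not necessarily the one used in the flight encoders:

```python
def gf_mul(a, b, poly=0x11d):
    # Bit-serial multiplication in GF(2^8): process one bit of b per step,
    # reducing modulo the field polynomial whenever a overflows 8 bits.
    # This per-bit structure is what lets an encoder complete each
    # multiplication within one bit-serial symbol shift.
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

print(gf_mul(2, 3))          # 6: x * (x + 1) = x^2 + x
print(hex(gf_mul(0x80, 2)))  # 0x1d: x^8 reduced by x^8 + x^4 + x^3 + x^2 + 1
```

Because the field has characteristic 2, addition is XOR, so the whole multiplier reduces to shift registers and XOR gates in hardware.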
Three dimensional δf simulations of beams in the SSC
NASA Astrophysics Data System (ADS)
Koga, J.; Tajima, T.; Machida, S.
1993-12-01
A three dimensional δf strong-strong algorithm has been developed to study such effects as space charge and beam-beam interaction phenomena in the Superconducting Super Collider (SSC). The algorithm is obtained by merging the particle tracking code Simpsons, used for 3 dimensional space charge effects, with a δf code. The δf method is used to follow the evolution of the non-gaussian part of the beam distribution. The advantages of this method are twofold. First, the Simpsons code utilizes a realistic accelerator model including synchrotron oscillations and energy ramping in 6 dimensional phase space, with the electromagnetic fields of the beams calculated using a realistic 3 dimensional field solver. Second, the beams evolve in the fully self-consistent strong-strong sense, while finite-particle fluctuation noise is greatly reduced, as opposed to weak-strong models where one beam is held fixed.
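The noise advantage of a δf scheme can be seen in a one-dimensional toy (not the Simpsons/SSC setup): when the distribution is a known analytic part f0 plus a small perturbation, sampling only the perturbation multiplies the Monte Carlo noise by the perturbation amplitude:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.01                          # perturbation amplitude: |delta f| << f0
n = 10_000

# Markers sampled from f0 (standard normal, moments known analytically), with
# weights representing g(x) = f0(x) * (1 + eps * x), a toy perturbed
# distribution whose true mean is exactly eps.
x = rng.standard_normal(n)
w = 1.0 + eps * x

# "Full-f" estimate of <x>: the whole distribution is sampled, so the
# statistical error is ~ 1/sqrt(n), independent of eps.
full_f = np.sum(w * x) / np.sum(w)
se_full = np.std(w * x) / np.sqrt(n)

# "delta-f" estimate: the f0 contribution (zero) is taken analytically and
# only the O(eps) correction is sampled, so the noise is scaled by eps.
delta_f = eps * np.mean(x * x)
se_delta = eps * np.std(x * x) / np.sqrt(n)
```

For the same number of markers the δf estimate carries roughly eps times the statistical error of the full-f estimate, which is the reason for following only the non-gaussian deviation in the beam simulations.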
A regularity result for fixed points, with applications to linear response
NASA Astrophysics Data System (ADS)
Sedro, Julien
2018-04-01
In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor development taking into account a loss of regularity phenomenon, typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher order differentiability, through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.
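For a uniformly contracting map, differentiability of the fixed point in the parameter, and the resulting linear-response formula, can be checked numerically. A sketch on a scalar contraction x → λ cos x (purely illustrative; the paper's setting is composition operators on spaces of finite regularity):

```python
import math

def fixed_point(lam, x0=0.5, iters=200):
    # Iterate the contraction x -> lam*cos(x); for this lam a unique
    # attracting fixed point x*(lam) exists and the iteration converges.
    x = x0
    for _ in range(iters):
        x = lam * math.cos(x)
    return x

lam = 0.8
x_star = fixed_point(lam)

# Linear response: differentiate x* = lam*cos(x*) implicitly in lam:
#   dx*/dlam = cos(x*) / (1 + lam*sin(x*)).
analytic = math.cos(x_star) / (1.0 + lam * math.sin(x_star))

# Finite-difference check of the derivative of the fixed point.
h = 1e-6
numeric = (fixed_point(lam + h) - fixed_point(lam - h)) / (2.0 * h)
```

The implicit-function formula is the scalar shadow of the loss-of-regularity issue in the paper: in function spaces the analogue of the denominator involves the transfer operator, and inverting it costs derivatives.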
Track structure: time evolution from physics to chemistry.
Dingfelder, M
2006-01-01
This review discusses interaction cross sections of charged particles (electrons, protons, light ions) with atoms and molecules. The focus is on biologically relevant targets such as liquid water, which serves as a substitute for soft tissue in most Monte Carlo codes. The spatial distribution of energy deposition patterns produced by different radiation qualities, and their importance to the time evolution from the physical to the chemical stage of radiation response, are discussed. The determination of inelastic interaction cross sections for charged particles in condensed matter is discussed within the relativistic plane-wave Born approximation and semi-empirical models. The dielectric response function of liquid water is also discussed.
Nonperturbative methods in HZE ion transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.
Few-cycle attosecond pulse chirp effects on asymmetries in ionized electron momentum distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng Liangyou; Tan Fang; Gong Qihuang
2009-07-15
The momentum distributions of electrons ionized from H atoms by chirped few-cycle attosecond pulses are investigated by numerically solving the time-dependent Schroedinger equation. The central carrier frequency of the pulse is chosen to be 25 eV, which is well above the ionization threshold. The asymmetry (or difference) in the yield of electrons ionized along and opposite to the direction of linear laser polarization is found to be very sensitive to the pulse chirp (for pulses with fixed carrier-envelope phase), both for a fixed electron energy and for the energy-integrated yield. In particular, the larger the pulse chirp, the larger the number of times the asymmetry changes sign as a function of ionized electron energy. For a fixed chirp, the ionized electron asymmetry is found to be sensitive also to the carrier-envelope phase of the few-cycle pulse.
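The mechanism can be pictured with a toy chirped field (parameters illustrative, in atomic units; not the paper's computation): a linear chirp makes the instantaneous frequency sweep across the pulse, so the two halves of the pulse ionize with different photon energies:

```python
import numpy as np

# Few-cycle XUV pulse, atomic units: 25 eV carrier ~= 0.9185 a.u.;
# chirp rate xi and duration tau are illustrative values.
w0 = 25.0 / 27.2114
xi, tau = 0.02, 20.0

t = np.linspace(-60.0, 60.0, 4001)
phase = w0 * t + xi * t**2                        # linearly chirped phase
field = np.exp(-t**2 / (2.0 * tau**2)) * np.cos(phase)

# Instantaneous frequency d(phase)/dt = w0 + 2*xi*t: the leading half of the
# pulse is red-shifted and the trailing half blue-shifted, which is what makes
# the directional electron asymmetry chirp-sensitive.
w_inst = np.gradient(phase, t)
```

Reversing the sign of xi mirrors the frequency sweep in time, so positive and negative chirps of equal magnitude produce different left/right electron yields.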
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Sunniva J.; Zeiser, Fabio; Wilson, J. N.
Prompt-fission γ rays are responsible for approximately 5% of the total energy released in fission, and are therefore important to understand when modeling nuclear reactors. In this work we present prompt γ-ray emission characteristics in fission as a function of the nuclear excitation energy of the fissioning system. Emitted γ-ray spectra were measured, and γ-ray multiplicities and average and total γ energies per fission were determined for the 233U(d,pf) reaction for excitation energies between 4.8 and 10 MeV, and for the 239Pu(d,pf) reaction between 4.5 and 9 MeV. The spectral characteristics show no significant change as a function of excitation energy above the fission barrier, despite the fact that an extra ~5 MeV of energy is potentially available in the excited fragments for γ decay. The measured results are compared with model calculations made for prompt γ-ray emission with the fission model code gef. In conclusion, further comparison with previously obtained results from thermal neutron induced fission is made to characterize possible differences arising from using the surrogate (d,p) reaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akyol, Bora A.; Allwardt, Craig H.; Beech, Zachary W.
VOLTTRON is a flexible, reliable, and scalable platform for distributed control and sensing. VOLTTRON serves in four primary roles: •A reference platform for researchers to quickly develop control applications for transactive energy. •A reference platform with flexible data store support for energy analytics applications, either in academia or in commercial enterprise. •A platform from which commercial enterprise can develop products without license issues and easily integrate into their product line. •An accelerator to drive industry adoption of transactive energy and advanced building energy analytics. Pacific Northwest National Laboratory, with funding from the U.S. Department of Energy's Building Technologies Office, developed and maintains VOLTTRON as an open-source community project. VOLTTRON source code includes agent execution software; agents that perform critical services that enable and enhance VOLTTRON functionality; and numerous agents that utilize the platform to perform a specific function (fault detection, demand response, etc.). The platform supports energy, operational, and financial transactions between networked entities (equipment, organizations, buildings, grid, etc.) and enhances the control infrastructure of existing buildings through the use of open-source device communication, control protocols, and integrated analytics.
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, but the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics
NASA Astrophysics Data System (ADS)
Wiebusch, Martin
2015-10-01
This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
GSE, data management system programmers/User' manual
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Dolerhie, B. D., Jr.; Ghiglieri, F. J.
1974-01-01
The GSE data management system is a computerized program which provides for a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes that are printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.
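The sort/subsort scheme described above amounts to stable multi-key ordering of item records. A sketch in Python with invented field names and records (the original attribute set is listed above; nothing here is taken from the actual GSE data bank):

```python
# Hedged sketch: sort-mode-with-subsort over GSE-style end-item records.
# Field names and values are illustrative, not the original schema.
records = [
    {"item": "G-103", "class_code": "B", "functional_code": 2, "weight": 120.0},
    {"item": "G-101", "class_code": "A", "functional_code": 2, "weight": 45.5},
    {"item": "G-102", "class_code": "A", "functional_code": 1, "weight": 300.0},
]

# Primary sort on class code with a subsort on functional code: Python's
# sorted() is stable, so a tuple key reproduces the sort/subsort hierarchy.
report = sorted(records, key=lambda r: (r["class_code"], r["functional_code"]))
print([r["item"] for r in report])   # ['G-102', 'G-101', 'G-103']
```

Adding further subsort attributes (fluid media, use location, design responsibility) just extends the key tuple.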
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zehtabian, M; Zaker, N; Sina, S
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with those of the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. It is therefore recommended not to use this code for low energy sources unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross section libraries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkar, Arijit; Koch, Donald L., E-mail: dlk15@cornell.edu
2015-11-15
The soft glassy rheology (SGR) model has successfully described the time dependent simple shear rheology of a broad class of complex fluids including foams, concentrated emulsions, colloidal glasses, and solvent-free nanoparticle-organic hybrid materials (NOHMs). The model considers a distribution of mesoscopic fluid elements that hop from trap to trap at a rate which is enhanced by the work done to strain the fluid element. While an SGR fluid has a broad exponential distribution of trap energies, the rheology of NOHMs is better described by a narrower energy distribution, and we consider both types of trap energy distributions in this study. We introduce a tensorial version of these models with a hopping rate that depends on the orientation of the element relative to the mean stress field, allowing a range of relative strengths of the extensional and simple shear responses of the fluid. As an application of these models we consider the flow of a soft glassy material through a dilute fixed bed of fibers. The dilute fixed bed exhibits a range of local linear flows which alternate in a chaotic manner with time in a Lagrangian reference frame. It is amenable to an analytical treatment and has been used to characterize the strong flow response of many complex fluids including fiber suspensions, dilute polymer solutions and emulsions. We show that the accumulated strain in the fluid elements has an abrupt nonlinear growth at a Deborah number of order one in a manner similar to that observed for polymer solutions. The exponential dependence of the hopping rate on strain leads to a fluid element deformation that grows logarithmically with Deborah number at high Deborah numbers. SGR fluids having a broad range of trap energies flowing through fixed beds can exhibit a range of rheological behaviors at small Deborah numbers, ranging from a yield stress, to a power law response, and finally to Newtonian behavior.
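The core of the SGR picture, trap hopping at a rate where stored elastic energy lowers the activation barrier, fits in a few lines. A scalar toy sketch (illustrative parameters; not the tensorial, orientation-dependent model of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

x_noise = 1.5                           # SGR noise temperature (x > 1: fluid side)
E = rng.exponential(1.0, 100_000)       # broad exponential trap-depth distribution

def mean_hop_rate(strain, k=1.0, rate0=1.0):
    # SGR ansatz: the elastic energy (1/2)*k*strain^2 stored in an element
    # reduces its effective barrier, enhancing the activated hop rate.
    barrier = np.maximum(E - 0.5 * k * strain**2, 0.0)
    return rate0 * np.mean(np.exp(-barrier / x_noise))

print(mean_hop_rate(0.0) < mean_hop_rate(1.0))   # True: strain fluidizes
```

Whether this averaged rate, relative to the flow time scale set by the Deborah number, is large or small is what separates the yield-stress, power-law, and Newtonian regimes mentioned above.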
Infant differential behavioral responding to discrete emotions.
Walle, Eric A; Reschke, Peter J; Camras, Linda A; Campos, Joseph J
2017-10-01
Emotional communication regulates the behaviors of social partners. Research on individuals' responding to others' emotions typically compares responses to a single negative emotion with responses to a neutral or positive emotion. Furthermore, coding of such responses routinely measures surface-level features of the behavior (e.g., approach vs. avoidance) rather than its underlying function (e.g., the goal of the approach or avoidant behavior). This investigation examined infants' responding to others' emotional displays across 5 discrete emotions: joy, sadness, fear, anger, and disgust. Specifically, 16-, 19-, and 24-month-old infants observed an adult communicate a discrete emotion toward a stimulus during a naturalistic interaction. Infants' responses were coded to capture the function of their behaviors (e.g., exploration, prosocial behavior, and security seeking). The results revealed a number of instances indicating that infants use different functional behaviors in response to discrete emotions. Differences in behaviors across emotions were clearest in the 24-month-old infants, though younger infants also demonstrated some differential use of behaviors in response to discrete emotions. This is the first comprehensive study to identify differences in how infants respond with goal-directed behaviors to discrete emotions. Additionally, the inclusion of a function-based coding scheme and interpersonal paradigms may be informative for future emotion research with children and adults. Possible developmental accounts for the observed behaviors and the benefits of coding techniques emphasizing the function of social behavior over their form are discussed.
Extensions to the integral line-beam method for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.
1995-08-01
A computationally simple method for estimating gamma-ray skyshine dose rates has been developed on the basis of the line-beam response function. Both Monte Carlo and point-kernel calculations that account for both annihilation and bremsstrahlung were used in the generation of line-beam response functions (LBRF) for gamma-ray energies between 10 and 100 MeV. The LBRF is approximated by a three-parameter formula. By combining results with those obtained in an earlier study for gamma energies below 10 MeV, LBRF values are readily and accurately evaluated for source energies between 0.02 and 100 MeV, for source-to-detector distances between 1 and 3000 m, and for beam angles as great as 180 degrees. Tables of the parameters for the approximate LBRF are presented. The new response functions are then applied to three simple skyshine geometries: an open silo geometry, an infinite wall, and a rectangular four-wall building. Results are compared to those of previous calculations and to benchmark measurements. A new approach is introduced to account for overhead shielding of the skyshine source and compared to the simplistic exponential-attenuation method used in earlier studies. The effect of the air-ground interface, usually neglected in gamma skyshine studies, is also examined and an empirical correction factor is introduced. Finally, a revised code based on the improved LBRF approximations and the treatment of the overhead shielding is presented, and results are shown for several benchmark problems.
GRAYSKY-A new gamma-ray skyshine code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witts, D.J.; Twardowski, T.; Watmough, M.H.
1993-01-01
This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
47 CFR 15.214 - Cordless telephones.
Code of Federal Regulations, 2010 CFR
2010-10-01
... discrete digital codes. Factory-set codes must be continuously varied over at least 256 possible codes as... readily select from among at least 256 possible discrete digital codes. The cordless telephone shall be... fixed code that is continuously varied among at least 256 discrete digital codes as each telephone is...
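The point of requiring at least 256 continuously varied discrete codes can be quantified: with codes assigned at random, the chance that two given handsets share a code is 1/256, and the chance of any shared code among n nearby handsets follows the birthday problem. A quick sketch:

```python
from math import prod

def collision_prob(n, codes=256):
    # Probability that at least two of n handsets, each assigned one of
    # `codes` equally likely discrete security codes, share a code.
    return 1.0 - prod((codes - i) / codes for i in range(n))

print(collision_prob(2))             # 0.00390625 == 1/256
print(round(collision_prob(20), 2))  # roughly even odds among 20 handsets
```

This is why the rule also requires the code to be varied as each telephone is used, rather than fixed at the factory.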
Numerical optimization of perturbative coils for tokamaks
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Park, Jong-Kyu; Logan, Nikolas; Boozer, Allen; NSTX-U Research Team
2014-10-01
Numerical optimization of coils which apply three dimensional (3D) perturbative fields to tokamaks is presented. The application of perturbative 3D magnetic fields in tokamaks is now commonplace for control of error fields, resistive wall modes, resonant field drive, and neoclassical toroidal viscosity (NTV) torques. The design of such systems has focused on control of toroidal mode number, with coil shapes based on simple window-pane designs. In this work, a numerical optimization suite based on the STELLOPT 3D equilibrium optimization code is presented. The new code, IPECOPT, replaces the VMEC equilibrium code with the IPEC perturbed equilibrium code, and targets NTV torque by coupling to the PENT code. Fixed boundary optimizations of the 3D fields for the NSTX-U experiment are underway. Initial results suggest NTV torques can be driven by normal field spectrums which are not pitch-resonant with the magnetic field lines. Work has focused on driving core torque with n = 1 and edge torques with n = 3 fields. Optimizations of the coil currents for the planned NSTX-U NCC coils highlight the code's free boundary capability. This manuscript has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the U.S. Department of Energy.
Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun
2016-06-01
Microparticles carrying quick response (QR) barcodes are fabricated by J. Wang and co-workers on page 3259, using a massive coding of dissociated elements (MiCODE) technology. Each microparticle can bear a special custom-designed QR code that enables encryption or tagging with unlimited multiplexity, and the QR code can be easily read by cellphone applications. The utility of MiCODE particles in multiplexed DNA detection and microtagging for anti-counterfeiting is explored. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pitteri, Marco; Marchetti, Mauro; Priftis, Konstantinos; Grassi, Massimo
2017-01-01
Pitch-height is often labeled spatially (i.e., low or high) as a function of the fundamental frequency of the tone. This correspondence is highlighted by the so-called Spatial-Musical Association of Response Codes (SMARC) effect. However, the literature suggests that the brightness of the tone's timbre might contribute to this spatial association. We investigated the SMARC effect in a group of non-musicians by disentangling the role of pitch-height and the role of tone-brightness. In three experimental conditions, participants were asked to judge whether the tone they were listening to was (or was not) modulated in amplitude (i.e., vibrato). Participants were required to make their response in both the horizontal and the vertical axes. In a first condition, tones varied coherently in pitch (i.e., manipulation of the tone's F0) and brightness (i.e., manipulation of the tone's spectral centroid); in a second condition, pitch-height varied whereas brightness was fixed; in a third condition, pitch-height was fixed whereas brightness varied. We found the SMARC effect only in the first condition and only in the vertical axis. In contrast, we did not observe the effect in any of the remaining conditions. The present results suggest that, in non-musicians, the SMARC effect is not due to the manipulation of the pitch-height alone, but arises because of a coherent change of pitch-height and brightness; this effect emerges along the vertical axis only.
Manzhos, Sergei; Carrington, Tucker
2016-12-14
We demonstrate that it is possible to use basis functions that depend on curvilinear internal coordinates to compute vibrational energy levels without deriving a kinetic energy operator (KEO) and without numerically computing coefficients of a KEO. This is done by using a space-fixed KEO and computing KEO matrix elements numerically. Whenever one has an excellent basis, more accurate solutions to the Schrödinger equation can be obtained by computing the KEO, potential, and overlap matrix elements numerically. Using a Gaussian basis and bond coordinates, we compute vibrational energy levels of formaldehyde. We show, for the first time, that it is possible with a Gaussian basis to solve a six-dimensional vibrational Schrödinger equation. For the zero-point energy (ZPE) and the lowest 50 vibrational transitions of H2CO, we obtain a mean absolute error of less than 1 cm-1; with 200 000 collocation points and 40 000 basis functions, most errors are less than 0.4 cm-1.
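The full six-dimensional formaldehyde calculation is far beyond a sketch, but the core idea above — computing overlap, kinetic, and potential matrix elements numerically on a grid of points rather than deriving a KEO analytically — can be illustrated with a distributed Gaussian basis for a 1D harmonic oscillator. All parameters below (basis spacing, width, grid) are invented for the illustration:

```python
import numpy as np

# Distributed Gaussian basis: matrix elements evaluated numerically on
# a quadrature grid, then a generalized eigenproblem H c = E S c.
# Harmonic-oscillator units: hbar = m = omega = 1, so exact levels are
# 0.5, 1.5, 2.5, ...
hbar = m = omega = 1.0
centers = np.arange(-7.0, 7.5, 1.0)        # Gaussian centers (15 functions)
alpha = 1.0                                 # common width parameter

x = np.linspace(-10.0, 10.0, 2001)          # quadrature grid
dx = x[1] - x[0]
G = np.exp(-alpha * (x[None, :] - centers[:, None])**2)   # basis values
dG = -2.0 * alpha * (x[None, :] - centers[:, None]) * G   # derivatives
V = 0.5 * m * omega**2 * x**2               # harmonic potential on the grid

S = G @ G.T * dx                            # overlap, numerically
T = (hbar**2 / (2 * m)) * dG @ dG.T * dx    # kinetic (integrated by parts)
U = (G * V) @ G.T * dx                      # potential matrix
H = T + U

# Solve H c = E S c by symmetric orthogonalization with S^(-1/2)
s, Q = np.linalg.eigh(S)
X = Q / np.sqrt(s)                          # columns of S^(-1/2)
E = np.linalg.eigvalsh(X.T @ H @ X)
print(E[:3])                                # close to [0.5, 1.5, 2.5]
```

The same machinery scales to curvilinear coordinates because nothing here requires analytic integrals; only basis values and derivatives on points are needed.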
NASA Technical Reports Server (NTRS)
Wu, R. W. H.; Stagliano, T. R.; Witmer, E. A.; Spilker, R. L.
1978-01-01
These structural ring deflections lie essentially in one plane and, hence, are called two-dimensional (2-d). The structural rings may be complete or partial; the former may be regarded as representing a fragment containment ring while the latter may be viewed as a 2-d fragment-deflector structure. These two types of rings may be either free or supported in various ways (pinned-fixed, locally clamped, elastic-foundation supported, mounting-bracket supported, etc.). The initial geometry of each ring may be circular or arbitrarily curved; uniform-thickness or variable-thickness rings may be analyzed. Strain-hardening and strain-rate effects of initially-isotropic material are taken into account. An approximate analysis utilizing kinetic energy and momentum conservation relations is used to predict the after-impact velocities of each fragment and of the impact-affected region of the ring; this procedure is termed the collision-imparted velocity method (CIVM) and is used in the CIVM-JET 5B program. This imparted-velocity information is used in conjunction with a finite-element structural response computation code to predict the transient, large-deflection, elastic-plastic responses of the ring. Similarly, the equations of motion of each fragment are solved in small steps in time. Provisions are made in the CIVM-JET 5B code to analyze structural ring response to impact attack by from 1 to 3 fragments, each with its own size, mass, translational velocity components, and rotational velocity. The effects of friction between each fragment and the impacted ring are included.
Phonological coding during reading.
Leinenger, Mallorie
2014-11-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Pritoni, Marco; Ford, Rebecca; Karlin, Beth; Sanguinetti, Angela
2018-02-01
Policymakers worldwide are currently discussing whether to include home energy management (HEM) products in their portfolio of technologies to reduce carbon emissions and improve grid reliability. However, very little data is available about these products. Here we present the results of an extensive review including 308 HEM products available on the US market in 2015-2016. We gathered these data from publicly available sources such as vendor websites, online marketplaces and other vendor documents. A coding guide was developed iteratively during the data collection and utilized to classify the devices. Each product was coded based on 96 distinct attributes, grouped into 11 categories: Identifying information, Product components, Hardware, Communication, Software, Information - feedback, Information - feedforward, Control, Utility interaction, Additional benefits and Usability. The codes describe product features and functionalities, user interaction and interoperability with other devices. A mix of binary attributes and more descriptive codes allows sorting and grouping the data without losing important qualitative information. The information is stored in a large spreadsheet included with this article, along with an explanatory coding guide. This dataset is analyzed and described in a research article entitled "Categories and functionality of smart home technology for energy management" (Ford et al., 2017) [1].
NASA Astrophysics Data System (ADS)
Dugave, Maxime; Göhmann, Frank; Kozlowski, Karol Kajetan
2014-04-01
We establish several properties of the solutions to the linear integral equations describing the infinite volume properties of the XXZ spin-1/2 chain in the disordered regime. In particular, we obtain lower and upper bounds for the dressed energy, dressed charge and density of Bethe roots. Furthermore, we establish that given a fixed external magnetic field (or a fixed magnetization) there exists a unique value of the boundary of the Fermi zone.
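The dressed-energy and dressed-charge functions studied above solve linear Fredholm integral equations of the second kind. A generic equation of that type can be solved by discretizing the integral and iterating to a fixed point; the kernel and driving term below are smooth illustrative placeholders, not the actual XXZ ones:

```python
import numpy as np

# Solve eps(x) = eps0(x) - ∫ K(x - y) eps(y) dy on [-1, 1] by
# fixed-point (Neumann) iteration. Kernel and bare energy are
# invented stand-ins; the integral operator has norm < 1, so the
# iteration is a contraction and converges.
n = 401
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
eps0 = x**2 - 0.5                                        # bare energy
K = 0.3 / (np.pi * (1.0 + (x[:, None] - x[None, :])**2)) # smooth kernel

eps = eps0.copy()
for _ in range(200):
    new = eps0 - (K * dx) @ eps
    if np.max(np.abs(new - eps)) < 1e-12:
        eps = new
        break
    eps = new
print(eps[n // 2])        # dressed energy at x = 0, shifted from eps0(0)
```

Bounds of the kind proved in the paper (e.g., on the dressed energy) translate numerically into checks that the iterated solution stays within the bracketing functions.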
Domonkos, Ágota; Kovács, Szilárd; Gombár, Anikó; Kiss, Ernő; Horváth, Beatrix; Kováts, Gyöngyi Z.; Farkas, Attila; Tóth, Mónika T.; Ayaydin, Ferhan; Bóka, Károly; Fodor, Lili; Endre, Gabriella; Kaló, Péter
2017-01-01
Legumes form an endosymbiotic interaction with host compatible rhizobia, resulting in the development of nitrogen-fixing root nodules. Within symbiotic nodules, rhizobia are intracellularly accommodated in plant-derived membrane compartments, termed symbiosomes. In mature nodules, the massively colonized cells tolerate the existence of rhizobia without manifestation of visible defense responses, indicating the suppression of plant immunity in the nodule in favor of the symbiotic partner. Medicago truncatula DNF2 (defective in nitrogen fixation 2) and NAD1 (nodules with activated defense 1) genes are essential for the control of plant defense during the colonization of the nitrogen-fixing nodule and are required for bacteroid persistence. The previously identified nodule-specific NAD1 gene encodes a protein of unknown function. Herein, we present the analysis of novel NAD1 mutant alleles to better understand the function of NAD1 in the repression of immune responses in symbiotic nodules. By exploiting the advantage of plant double and rhizobial mutants defective in establishing nitrogen-fixing symbiotic interaction, we show that NAD1 functions following the release of rhizobia from the infection threads and colonization of nodule cells. The suppression of plant defense is independent of the differentiation status of the rhizobia. The corresponding phenotype of nad1 and dnf2 mutants and the similarity in the induction of defense-associated genes in both mutants suggest that NAD1 and DNF2 operate close together in the same pathway controlling defense responses in symbiotic nodules. PMID:29240711
Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given charge (Z), mass number (A), and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments.
The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and the estimates of basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as training applications.
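One of the biophysical quantities named above, the Poisson distribution of ion hits for a specified cellular area, is simple enough to sketch directly. The fluence and sensitive area below are illustrative numbers, not NSRL beam values:

```python
import math

# Poisson statistics of ion traversals through a cell nucleus:
# the mean number of hits is fluence times sensitive area.
fluence = 0.05             # ions per square micron (hypothetical)
area = 100.0               # sensitive cross-sectional area, square microns
lam = fluence * area       # mean number of traversals per cell = 5.0

def p_hits(k):
    """Probability of exactly k ion traversals (Poisson)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

print(p_hits(0))                           # fraction of cells never hit
print(sum(p_hits(k) for k in range(2)))    # fraction hit at most once
```

For a broad beam this hit statistic, combined with a per-hit response model, is what feeds the cell-survival and mutation-probability estimates mentioned in the abstract.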
NASA Astrophysics Data System (ADS)
Horst, Felix; Fehrenbacher, Georg; Radon, Torsten; Kozlova, Ekaterina; Rosmej, Olga; Czarnecki, Damian; Schrenk, Oliver; Breckow, Joachim; Zink, Klemens
2015-05-01
This work presents a thermoluminescence dosimetry based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV, resolved in ten different energy intervals and for the photon ambient dosimetry in ultrashort pulsed radiation fields as e.g. generated during operation of the PHELIX laser at the GSI Helmholtzzentrum für Schwerionenforschung. The method is a routine-oriented development by application of a multi-filter technique. The data analysis takes around 1 h. The spectral information is obtained by the unfolding of the response of ten thermoluminescence dosimeters with absorbers of different materials and thicknesses arranged as a stack each with a different response function to photon radiation. These response functions were simulated by the use of the Monte Carlo code FLUKA. An algorithm was developed to unfold bremsstrahlung spectra from the readings of the ten dosimeters. The method has been validated by measurements at a clinical electron linear accelerator (6 MV and 18 MV bremsstrahlung). First measurements at the PHELIX laser system were carried out in December 2013 and January 2014. Spectra with photon energies up to 10 MeV and mean energies up to 420 keV were observed at laser intensities around 10^19 W/cm^2 on a titanium foil target. The measurement results imply that the steel walls of the target chamber might be an additional bright x-ray source.
Robledo, Marta; Peregrina, Alexandra; Millán, Vicenta; García-Tomsig, Natalia I; Torres-Quesada, Omar; Mateos, Pedro F; Becker, Anke; Jiménez-Zurdo, José I
2017-07-01
Small non-coding RNAs (sRNAs) are expected to have pivotal roles in the adaptive responses underlying symbiosis of nitrogen-fixing rhizobia with legumes. Here, we provide primary insights into the function and activity mechanism of the Sinorhizobium meliloti trans-sRNA NfeR1 (Nodule Formation Efficiency RNA). Northern blot probing and transcription tracking with fluorescent promoter-reporter fusions unveiled high nfeR1 expression in response to salt stress and throughout the symbiotic interaction. The strength and differential regulation of nfeR1 transcription are conferred by a motif, which is conserved in nfeR1 promoter regions in α-proteobacteria. NfeR1 loss-of-function compromised osmoadaptation of free-living bacteria, whilst causing misregulation of salt-responsive genes related to stress adaptation, osmolytes catabolism and membrane trafficking. Nodulation tests revealed that lack of NfeR1 affected competitiveness, infectivity, nodule development and symbiotic efficiency of S. meliloti on alfalfa roots. Comparative computer predictions and a genetic reporter assay evidenced a redundant role of three identical unpaired NfeR1 anti-Shine-Dalgarno motifs for targeting and downregulation of translation of multiple mRNAs from transporter genes. Our data provide genetic evidence of the hyperosmotic conditions of the endosymbiotic compartments. NfeR1-mediated gene regulation in response to this cue could contribute to coordinate nutrient uptake with the metabolic reprogramming concomitant to symbiotic transitions. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bdzil, John Bohdan
The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local "customer," the only description of the explosives' boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our "fast-tube," narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-set function, phi, which did not rely on the gradient of the level-set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than 10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band, the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.
Holographic non-Fermi-liquid fixed points.
Faulkner, Tom; Iqbal, Nabil; Liu, Hong; McGreevy, John; Vegh, David
2011-04-28
Techniques arising from string theory can be used to study assemblies of strongly interacting fermions. Via this 'holographic duality', various strongly coupled many-body systems are solved using an auxiliary theory of gravity. Simple holographic realizations of finite density exhibit single-particle spectral functions with sharp Fermi surfaces, of a form distinct from those of the Landau theory. The self-energy is given by a correlation function in an infrared (IR) fixed-point theory that is represented by a two-dimensional anti de Sitter space (AdS(2)) region in the dual gravitational description. Here, we describe in detail the gravity calculation of this IR correlation function.
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
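The spreading-factor trade-off described above is plain arithmetic: at fixed chip rate and chip energy, a longer code lowers the bit rate but raises the energy per bit. A sketch for a hypothetical two-layer configuration (the chip rate and spreading factors are illustrative, not taken from the paper's experiments):

```python
# Bit rate and energy per bit as functions of spreading factor,
# at fixed chip rate and fixed energy per chip.
chip_rate = 3.84e6            # chips per second (illustrative)
e_chip = 1.0e-9               # joules per chip (illustrative)

def layer(spreading_factor):
    """Bit rate and energy per bit for one multirate CDMA channel."""
    bit_rate = chip_rate / spreading_factor   # fewer bits when code is long
    e_bit = e_chip * spreading_factor         # more chip energy per bit
    return bit_rate, e_bit

base_rate, base_eb = layer(64)   # base layer: long code, well protected
enh_rate, enh_eb = layer(16)     # enhancement layer: short code, faster
print(base_rate, enh_rate)       # 60000.0 240000.0
print(base_eb / enh_eb)          # 4.0: base layer gets 4x the bit energy
```

This is why the base layer, whose loss is most damaging to the scalable stream, is naturally assigned the longer spreading code.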
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
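With the source strength and importance function both available as tabulated data, the double integral above reduces to a simple quadrature. The functions below are smooth made-up placeholders, not the DOT/GRTUNCL importance data from the report:

```python
import numpy as np

# Skyshine dose as a double integral over source energy and polar-angle
# cosine: D(d) = ∫∫ S(E, mu) I(E, mu, d) dE dmu, here by Riemann sum.
E = np.linspace(0.1, 400.0, 400)          # neutron energy grid, MeV
mu = np.linspace(0.0, 1.0, 100)           # cosine of polar angle
dE, dmu = E[1] - E[0], mu[1] - mu[0]

S = np.outer(np.exp(-E / 50.0), mu)       # invented source strength S(E, mu)

def dose(d):
    """Quadrature of the double integral at source-to-field distance d (m)."""
    importance = np.outer(np.sqrt(E), 1.0 + mu) * np.exp(-d / 500.0)
    return np.sum(S * importance) * dE * dmu

print(dose(100.0) > dose(400.0))          # dose falls off with distance
```

In practice the importance function is tabulated per energy group and cosine interval, so the inner sums run over those discrete bands rather than a smooth grid.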
On the effect of updated MCNP photon cross section data on the simulated response of the HPA TLD.
Eakins, Jonathan
2009-02-01
The relative response of the new Health Protection Agency thermoluminescence dosimeter (TLD) has been calculated for Narrow Series X-ray distribution and (137)Cs photon sources using the Monte Carlo code MCNP5, and the results compared with those obtained during its design stage using the predecessor code, MCNP4c2. The results agreed at intermediate energies (approximately 0.1 MeV to (137)Cs), but differed at low energies (<0.1 MeV) by up to approximately 10%. This disparity has been ascribed to differences in the default photon interaction data used by the two codes, and derives ultimately from the effect on absorbed dose of the recent updates to the photoelectric cross sections. The sources of these data have been reviewed.
One-dimensional MHD simulations of MTF systems with compact toroid targets and spherical liners
NASA Astrophysics Data System (ADS)
Khalzov, Ivan; Zindler, Ryan; Barsky, Sandra; Delage, Michael; Laberge, Michel
2017-10-01
A one-dimensional (1D) MHD code has been developed at General Fusion (GF) for coupled plasma-liner simulations in magnetized target fusion (MTF) systems. The main goal of these simulations is to search for the optimal parameters of an MTF reactor, in which a spherical liquid metal liner compresses a compact toroid plasma. The code uses a Lagrangian description for both liner and plasma. The liner is represented as a set of spherical shells with fixed masses, while the plasma is discretized as a set of nested tori with circular cross sections and a fixed number of particles between them. All physical fields are 1D functions of either the spherical (liner) or the small toroidal (plasma) radius. The motion of the liner and plasma shells is calculated self-consistently based on the applied forces and equations of state. The magnetic field is determined by 1D profiles of the poloidal and toroidal fluxes, which are advected with the shells and diffuse according to the local resistivity; this also accounts for flux leakage into the liner. Different plasma transport models are implemented, which allows for comparison with ongoing GF experiments. A fusion power calculation is included in the code. We performed a series of parameter scans in order to establish the underlying dependencies of the MTF system and find the optimal reactor design point.
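A zero-dimensional caricature of the Lagrangian liner/plasma coupling conveys the basic dynamics: a shell of fixed mass implodes onto an adiabatic plasma, stagnates at a minimum radius, and rebounds. All parameters below are invented, not General Fusion's design values:

```python
import math

# Thin spherical shell of fixed mass compressing an adiabatic plasma
# (gamma = 5/3). Newton's law for the shell, explicit time stepping.
M = 100.0                  # liner shell mass, kg (illustrative)
R, v = 1.0, -100.0         # inner radius (m) and implosion velocity (m/s)
p0, R0 = 1.0e5, 1.0        # initial plasma pressure (Pa) and radius (m)
gamma = 5.0 / 3.0
dt = 1.0e-5                # time step, s

R_min = R
for _ in range(200000):
    p = p0 * (R0 / R)**(3 * gamma)        # adiabatic back-pressure
    v += 4 * math.pi * R**2 * p / M * dt  # plasma pushes the shell outward
    R += v * dt
    R_min = min(R_min, R)
    if v > 0 and R > R0:                  # shell has rebounded
        break
print(R_min)               # deepest compression, ~0.75 m for these numbers
```

Energy balance fixes the turning point: the shell's kinetic energy converts into plasma internal energy, giving R_min ≈ 0.746 m here. The 1D code generalizes this by resolving radial profiles and adding magnetic flux evolution and transport.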
Spectral densities for Frenkel exciton dynamics in molecular crystals: A TD-DFTB approach
NASA Astrophysics Data System (ADS)
Plötz, Per-Arno; Megow, Jörg; Niehaus, Thomas; Kühn, Oliver
2017-02-01
Effects of thermal fluctuations on the electronic excitation energies and intermonomeric Coulomb couplings are investigated for a perylene-tetracarboxylic-diimide crystal. To this end, time-dependent density-functional theory based tight binding (TD-DFTB) in the linear response formulation is used in combination with electronic ground state classical molecular dynamics. As a result, a parametrized Frenkel exciton Hamiltonian is obtained, with the effect of exciton-vibrational coupling being described by spectral densities. Employing dynamically defined normal modes, these spectral densities are analyzed in great detail, thus providing insight into the effect of specific intramolecular motions on excitation energies and Coulomb couplings. This distinguishes the present method from approaches using fixed transition densities. The efficiency by which intramolecular contributions to the spectral density can be calculated is a clear advantage of this method as compared with standard TD-DFT.
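The route from a trajectory of fluctuating excitation energies to a spectral density passes through the frequency content of the gap fluctuations. A minimal sketch with a synthetic trajectory (a real application would use TD-DFTB gaps computed along the MD run; the damped 167 fs oscillation and noise level below are invented):

```python
import numpy as np

# Periodogram of an excitation-energy-gap trajectory: its peaks locate
# the intramolecular modes that dominate the spectral density.
rng = np.random.default_rng(0)
dt = 1.0                                   # fs between snapshots
n = 4096
t = np.arange(n) * dt
trace = np.cos(2 * np.pi * t / 167.0) * np.exp(-t / 800.0)  # one damped mode
trace += 0.1 * rng.standard_normal(n)      # thermal-noise stand-in
trace -= trace.mean()                      # remove the average gap

power = np.abs(np.fft.rfft(trace))**2 / n  # periodogram
freq = np.fft.rfftfreq(n, dt)              # frequency grid, cycles per fs
peak = freq[np.argmax(power[1:]) + 1]      # skip the DC bin
print(1.0 / peak)                          # dominant period, near 167 fs
```

Projecting the trajectory onto dynamically defined normal modes, as in the abstract, then attributes each such peak to a specific intramolecular motion.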
Rose, Sunniva J.; Zeiser, Fabio; Wilson, J. N.; ...
2017-07-05
Prompt-fission γ rays are responsible for approximately 5% of the total energy released in fission, and therefore important to understand when modeling nuclear reactors. In this work we present prompt γ-ray emission characteristics in fission as a function of the nuclear excitation energy of the fissioning system. Emitted γ-ray spectra were measured, and γ-ray multiplicities and average and total γ energies per fission were determined for the 233U(d,pf) reaction for excitation energies between 4.8 and 10 MeV, and for the 239Pu(d,pf) reaction between 4.5 and 9 MeV. The spectral characteristics show no significant change as a function of excitation energy above the fission barrier, despite the fact that an extra ~5 MeV of energy is potentially available in the excited fragments for γ decay. The measured results are compared with model calculations made for prompt γ-ray emission with the fission model code GEF. In conclusion, further comparison with previously obtained results from thermal neutron induced fission is made to characterize possible differences arising from using the surrogate (d,p) reaction.
Neutron spectroscopy with scintillation detectors using wavelets
NASA Astrophysics Data System (ADS)
Hartman, Jessica
The purpose of this research was to study neutron spectroscopy using the EJ-299-33A plastic scintillator. This scintillator material provides a novel means of detecting fast neutrons without the disadvantages of traditional liquid scintillation materials. EJ-299-33A is a more durable option than these materials, making it less likely to be damaged during handling. Unlike liquid scintillators, this plastic scintillator is manufactured from a non-toxic material, making it safer to use and simplifying detector design. The material is also manufactured with inherent pulse shape discrimination abilities, making it suitable for use in neutron detection. The neutron spectral unfolding technique was developed in two stages. Initial detector response function modeling was carried out using the MCNPX Monte Carlo code. The response functions were developed for a monoenergetic neutron flux. Wavelets were then applied to smooth the response functions. The spectral unfolding technique was applied through polynomial fitting and optimization techniques in MATLAB. Verification of the unfolding technique was carried out using experimentally determined response functions. These were measured with the neutron source based on the Van de Graaff accelerator at the University of Kentucky. This machine provides a range of monoenergetic neutron beams between 0.1 MeV and 24 MeV, making it possible to measure the set of response functions of the EJ-299-33A plastic scintillator detector to neutrons of specific energies. The response to a plutonium-beryllium (PuBe) source was measured using the source available at the University of Nevada, Las Vegas. The neutron spectrum reconstruction was carried out using the experimentally measured response functions. Experimental data were collected in the list mode of the waveform digitizer.
Post processing of this data focused on the pulse shape discrimination analysis of the recorded response functions to remove the effects of photons and allow for source characterization based solely on the neutron response. The unfolding technique was performed through polynomial fitting and optimization techniques in MATLAB, and provided an energy spectrum for the PuBe source.
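The wavelet-smoothing step can be illustrated with a minimal Haar-wavelet denoiser. This is a generic sketch under stated assumptions (a synthetic Gaussian full-energy peak, soft thresholding of detail coefficients), not the actual MCNPX/MATLAB pipeline:

```python
import math, random

def haar_dwt(x):
    """One level of the orthonormal Haar transform (averages and differences)."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def wavelet_smooth(x, thresh, levels):
    """Soft-threshold the detail coefficients at each decomposition level."""
    if levels == 0 or len(x) < 2:
        return list(x)
    a, d = haar_dwt(x)
    a = wavelet_smooth(a, thresh, levels - 1)
    d = [math.copysign(max(abs(di) - thresh, 0.0), di) for di in d]
    return haar_idwt(a, d)

# Synthetic detector response: a smooth full-energy peak plus counting noise.
random.seed(1)
clean = [math.exp(-((i - 128) / 20.0) ** 2) for i in range(256)]
noisy = [c + random.gauss(0.0, 0.05) for c in clean]
smoothed = wavelet_smooth(noisy, thresh=0.15, levels=2)

mse = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)
```

On this synthetic peak the denoised response is substantially closer to the clean one than the noisy input is; a production code would instead use a dedicated wavelet library and a noise-calibrated threshold.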
Description and availability of the SMARTS spectral model for photovoltaic applications
NASA Astrophysics Data System (ADS)
Myers, Daryl R.; Gueymard, Christian A.
2004-11-01
The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
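The stated resolution bands translate directly into a wavelength grid. A minimal sketch, assuming a 280 nm lower bound (the abstract specifies only the step sizes, not the starting wavelength):

```python
def smarts_like_grid(start=280.0):
    """Wavelength grid with SMARTS-style resolution bands:
    0.5 nm below 400 nm, 1.0 nm up to 1700 nm, 5 nm up to 4000 nm.
    The 280 nm lower bound is an assumption for illustration."""
    grid, w = [], start
    while w < 400.0:
        grid.append(w)
        w += 0.5
    while w < 1700.0:
        grid.append(w)
        w += 1.0
    while w <= 4000.0:
        grid.append(w)
        w += 5.0
    return grid

grid = smarts_like_grid()
```

With these assumptions the grid runs from 280 nm to 4000 nm with the step size changing at the two band boundaries.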
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision, rooted in Signal Processing, consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide a taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories.
A Study of the Errors of the Fixed-Node Approximation in Diffusion Monte Carlo
NASA Astrophysics Data System (ADS)
Rasch, Kevin M.
Quantum Monte Carlo techniques stochastically evaluate integrals to solve the many-body Schrodinger equation. QMC algorithms scale favorably in the number of particles simulated and enjoy applicability to a wide range of quantum systems. Advances in the core algorithms of the method and their implementations paired with the steady development of computational assets have carried the applicability of QMC beyond analytically treatable systems, such as the Homogeneous Electron Gas, and have extended QMC's domain to treat atoms, molecules, and solids containing as many as several hundred electrons. FN-DMC projects out the ground state of a wave function subject to constraints imposed by our ansatz to the problem. The constraints imposed by the fixed-node approximation are poorly understood. One key step in developing any scientific theory or method is to qualify where the theory is inaccurate and to quantify how erroneous it is under these circumstances. I investigate the fixed-node errors as they evolve over changing charge density, system size, and effective core potentials. I begin by studying a simple system for which the nodes of the trial wave function can be solved almost exactly. By comparing two trial wave functions, a single determinant wave function flawed in a known way and a nearly exact wave function, I show that the fixed-node error increases when the charge density is increased. Next, I investigate a sequence of Lithium systems increasing in size from a single atom, to small molecules, up to the bulk metal form. Over these systems, FN-DMC calculations consistently recover 95% or more of the correlation energy of the system. Given this accuracy, I make a prediction for the binding energy of the Li4 molecule. Last, I turn to analyzing the fixed-node error in first and second row atoms and their molecules. With the appropriate pseudo-potentials, these systems are iso-electronic and show similar geometries and states.
One would expect that, with an identical number of particles involved in the calculation, errors in the respective total energies of the two iso-electronic species would be quite similar. I observe, instead, that the first-row atoms and their molecules have errors twice or more as large. I identify a cause for this difference in iso-electronic species. The fixed-node errors in all of these cases are calculated by careful comparison to experimental results, showing FN-DMC to be a robust tool for understanding quantum systems and also a method for new investigations into the nature of many-body effects.
Aeroelastic stability of wind turbine blade/aileron systems
NASA Technical Reports Server (NTRS)
Strain, J. C.; Mirandy, L.
1995-01-01
Aeroelastic stability analyses have been performed for the MOD-5A blade/aileron system. Various configurations having different aileron torsional stiffness, mass unbalance, and control system damping have been investigated. The analysis was conducted using a code recently developed by the General Electric Company - AILSTAB. The code extracts eigenvalues for a three degree of freedom system, consisting of: (1) a blade flapwise mode; (2) a blade torsional mode; and (3) an aileron torsional mode. Mode shapes are supplied as input and the aileron can be specified over an arbitrary length of the blade span. Quasi-steady aerodynamic strip theory is used to compute aerodynamic derivatives of the wing-aileron combination as a function of spanwise position. Equations of motion are summarized herein. The program provides rotating blade stability boundaries for torsional divergence, classical flutter (bending/torsion) and wing/aileron flutter. It has been checked out against fixed-wing results published by Theodorsen and Garrick. The MOD-5A system is stable with respect to divergence and classical flutter for all practical rotor speeds. Aileron torsional stiffness must exceed a minimum critical value to prevent aileron flutter. The nominal control system stiffness greatly exceeds this minimum during normal operation. The basic system, however, is unstable for the case of a free (or floating) aileron. The instability can be removed either by the addition of torsional damping or mass-balancing the ailerons. The MOD-5A design was performed by the General Electric Company, Advanced Energy Program Department under Contract DEN3-153 with NASA Lewis Research Center and sponsored by the Department of Energy.
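The eigenvalue extraction AILSTAB performs for a three-degree-of-freedom system can be sketched generically: cast M q'' + C q' + K q = 0 in first-order form and inspect the real parts of the eigenvalues. The matrices below are illustrative placeholders, not the code's quasi-steady aerodynamic derivatives; the negative damping entry mimics the destabilizing aerodynamic coupling of a free aileron.

```python
import numpy as np

def eigenvalues_3dof(M, C, K):
    """Eigenvalues of x' = A x for M q'' + C q' + K q = 0 (flap, blade torsion,
    aileron torsion). Re(lambda) < 0 for every mode means the system is stable."""
    Minv = np.linalg.inv(M)
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

M = np.eye(3)
K = np.diag([4.0, 9.0, 25.0])           # illustrative modal stiffnesses
C_damped = np.diag([0.2, 0.2, 0.2])     # positive damping on every mode
C_flutter = np.diag([0.2, 0.2, -0.05])  # negative net damping on the aileron mode

stable = all(ev.real < 0 for ev in eigenvalues_3dof(M, C_damped, K))
unstable = any(ev.real > 0 for ev in eigenvalues_3dof(M, C_flutter, K))
```

This reproduces the qualitative conclusion of the abstract: adding torsional damping (or, equivalently, removing the negative aerodynamic damping by mass balancing) moves the aileron mode back into the stable half-plane.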
Validation of the SINDA/FLUINT code using several analytical solutions
NASA Technical Reports Server (NTRS)
Keller, John R.
1995-01-01
The Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) code has often been used to determine the transient and steady-state response of various thermal and fluid flow networks. While this code is an often used design and analysis tool, the validation of this program has been limited to a few simple studies. For the current study, the SINDA/FLUINT code was compared to four different analytical solutions. The thermal analyzer portion of the code (conduction and radiative heat transfer, SINDA portion) was first compared to two separate solutions. The first comparison examined a semi-infinite slab with a periodic surface temperature boundary condition. Next, a small, uniform temperature object (lumped capacitance) was allowed to radiate to a fixed temperature sink. The fluid portion of the code (FLUINT) was also compared to two different analytical solutions. The first study examined a tank filling process by an ideal gas in which there is both control volume work and heat transfer. The final comparison considered the flow in a pipe joining two infinite reservoirs of pressure. The results of all these studies showed that for the situations examined here, the SINDA/FLUINT code was able to match the results of the analytical solutions.
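The first analytical benchmark, a semi-infinite slab with a periodic surface temperature, has the classical solution T(x,t) = T_mean + A e^(-kx) cos(wt - kx) with k = sqrt(w/(2*alpha)). A short sketch that evaluates this solution and verifies by finite differences that it satisfies the heat equation (the parameter values are arbitrary, chosen only for illustration):

```python
import math

def slab_temperature(x, t, A=10.0, T_mean=300.0, alpha=1e-5,
                     omega=2 * math.pi / 86400):
    """Classical solution for a semi-infinite slab driven by the periodic
    surface temperature T(0, t) = T_mean + A*cos(omega*t)."""
    k = math.sqrt(omega / (2.0 * alpha))
    return T_mean + A * math.exp(-k * x) * math.cos(omega * t - k * x)

# Verify dT/dt = alpha * d2T/dx2 at a sample point by central differences.
alpha = 1e-5
x0, t0, hx, ht = 0.1, 20000.0, 1e-3, 1.0
dTdt = (slab_temperature(x0, t0 + ht) - slab_temperature(x0, t0 - ht)) / (2 * ht)
d2Tdx2 = (slab_temperature(x0 + hx, t0) - 2 * slab_temperature(x0, t0)
          + slab_temperature(x0 - hx, t0)) / hx ** 2
residual = dTdt - alpha * d2Tdx2
```

The residual vanishes to within finite-difference truncation error, which is the same property a SINDA solution of this problem would be checked against.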
NASA Astrophysics Data System (ADS)
White, Justin; Olson, Britton; Morgan, Brandon; McFarland, Jacob; Lawrence Livermore National Laboratory Team; University of Missouri-Columbia Team
2015-11-01
This work presents results from a large eddy simulation of a high Reynolds number Rayleigh-Taylor instability and Richtmyer-Meshkov instability. A tenth-order compact differencing scheme on a fixed Eulerian mesh is utilized within the Ares code developed at Lawrence Livermore National Laboratory (LLNL). We explore the self-similar limit of the mixing layer growth in order to evaluate the k-L-a Reynolds Averaged Navier Stokes (RANS) model (Morgan and Wickett, Phys. Rev. E, 2015). Furthermore, profiles of turbulent kinetic energy, turbulent length scale, mass flux velocity, and density-specific-volume correlation are extracted in order to aid the creation of a high-fidelity LES data set for RANS modeling. Prepared by LLNL under Contract DE-AC52-07NA27344.
An Energy Model of Place Cell Network in Three Dimensional Space.
Wang, Yihong; Xu, Xuying; Wang, Rubin
2018-01-01
Place cells are important elements in the spatial representation system of the brain. A considerable amount of experimental data and classical models are achieved in this area. However, an important question has not been addressed, which is how the three dimensional space is represented by the place cells. This question is preliminarily surveyed by energy coding method in this research. Energy coding method argues that neural information can be expressed by neural energy and it is convenient to model and compute for neural systems due to the global and linearly addable properties of neural energy. Nevertheless, the models of functional neural networks based on energy coding method have not been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. Then we define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells. The individual place cell obtains unique spatial selectivity. The place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level and the simulated place field agrees to the experimental results. In conclusion, this is an effective model to represent three dimensional space by energy method. The research verifies the energy efficiency principle of the brain during the neural coding for three dimensional spatial information. It is the first step to complete the three dimensional spatial representing system of the brain, and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning function of the brain.
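A minimal sketch of the kind of model described, with assumed Gaussian energy responses over three-dimensional place-field centers and a simple energy-weighted centroid decoder (the paper's actual network equations are not reproduced):

```python
import math

def place_energy(pos, center, sigma=1.0):
    """Energy-like response of one place cell: Gaussian in 3-D distance
    from its place-field center (an illustrative assumption)."""
    d2 = sum((p - c) ** 2 for p, c in zip(pos, center))
    return math.exp(-d2 / (2.0 * sigma ** 2))

# A 5x5x5 grid of place-field centers tiling a cubic arena.
centers = [(i, j, k) for i in range(5) for j in range(5) for k in range(5)]

def decode(pos):
    """Population decoding: energy-weighted centroid of the field centers."""
    w = [place_energy(pos, c) for c in centers]
    s = sum(w)
    return tuple(sum(wi * c[a] for wi, c in zip(w, centers)) / s
                 for a in range(3))

true_pos = (2.0, 2.1, 1.9)
est = decode(true_pos)
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, true_pos)))
```

With overlapping fields the decoding error stays well below the field width, illustrating the "locating error is limited to a certain level" property on a toy population.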
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
Efficiency as a function of MEQ-CWT for large area germanium detectors using LLNL phantom.
Rajaram, S; Brindha, J Thulasi; Sreedevi, K R; Hegde, A G
2012-01-01
The lung counting system at Kalpakkam, India, used for the estimation of transuranics deposited in the lungs of occupational workers, consists of an array of three large area germanium detectors fixed in a single assembly. The efficiency calibration for low energy photons was carried out using ²⁴¹Am and ²³²Th lung sets of the Lawrence Livermore National Laboratory phantom. The muscle equivalent chest wall thickness (MEQ-CWT) was derived for the three energies 59.5, 75.95 (average energy of ²³²Th) and 238.9 keV for the series of overlay plates made of different adipose mass ratios. Efficiency as a function of MEQ-CWT was calculated for individual detectors at the three energies. Variation of MEQ-CWT from 16 to 40 mm resulted in an efficiency variation of around 40 % at all three energies. The array efficiency for different MEQ-CWT ranged from 1.4×10⁻³ to 3.2×10⁻³, 1.5×10⁻³ to 3.3×10⁻³ and 1.1×10⁻³ to 2.3×10⁻³ for 59.5, 75.95 and 238.9 keV, respectively. In the energy response, the efficiency was observed to be maximum at 75.95 keV compared with 59.5 and 238.9 keV.
Nikezic, D; Shahmohammadi Beni, Mehrdad; Krstic, D; Yu, K N
2016-01-01
The Monte Carlo method has been used to determine the efficiency of proton production and to study the energy and angular distributions of the generated protons. The ENDF library of cross sections is used to simulate the interactions between the neutrons and the atoms in a polyethylene (PE) layer, while the ranges of protons with different energies in PE are determined using the Stopping and Range of Ions in Matter (SRIM) computer code. The efficiency of proton production increases with the PE layer thickness. However, the escape of protons from a given polyethylene volume is highly dependent on the neutron energy and target thickness, except for a very thin PE layer. The energy and angular distributions of protons are also estimated in the present paper, showing that, for the range of energies and thicknesses considered, the escaping proton flux depends on the PE layer thickness, with an optimal thickness existing for a fixed primary neutron energy.
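The existence of an optimal converter thickness can be seen in a toy slab model, assuming exponential neutron attenuation and forward-directed protons of a fixed range (a caricature of the full Monte Carlo, with invented parameter values, not a reproduction of it):

```python
import math

def proton_yield(t, proton_range=0.5, mfp=1.0):
    """Toy slab model: the neutron converts with probability 1 - exp(-t/mfp),
    and a forward-directed proton born uniformly in depth escapes the far
    face with probability min(range, t)/t. Units are arbitrary."""
    conversion = 1.0 - math.exp(-t / mfp)
    escape = min(proton_range, t) / t
    return conversion * escape

# Scan the PE thickness: production grows with t, escape falls once t > range.
thicknesses = [0.1 * k for k in range(1, 21)]
yields = [proton_yield(t) for t in thicknesses]
t_opt = thicknesses[max(range(len(yields)), key=lambda i: yields[i])]
```

In this simplified geometry the optimum sits at a thickness comparable to the proton range, which is the qualitative trade-off the abstract describes.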
Characteristic evaluation of a Lithium-6 loaded neutron coincidence spectrometer.
Hayashi, M; Kaku, D; Watanabe, Y; Sagara, K
2007-01-01
Characteristics of a (6)Li-loaded neutron coincidence spectrometer were investigated through both measurements and Monte Carlo simulations. The spectrometer consists of three (6)Li-glass scintillators embedded in a liquid organic scintillator BC-501A, which can selectively detect neutrons that deposit their total energy in the BC-501A using a coincidence signal generated from the capture of thermalised neutrons in the (6)Li-glass scintillators. The relative efficiency and the energy response were measured using 4.7, 7.2 and 9.0 MeV monoenergetic neutrons. The measurements were compared with Monte Carlo calculations performed by combining the neutron transport code PHITS and the scintillator response calculation code SCINFUL. The experimental light output spectra agreed well in shape with the calculated ones. The energy dependence of the detection efficiency was reproduced by the calculation. The response matrices for 1-10 MeV neutrons were finally obtained.
Effect of Surface Nonequilibrium Thermochemistry in Simulation of Carbon Based Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kang; Gokcen, Tahir
2012-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver using finite-rate gas/surface interaction model provides time-accurate solutions for multidimensional ablation of carbon based charring ablators. The material thermal response code used in this study is the Two-dimensional Implicit Thermal Response and Ablation Program (TITAN), which predicts charring material thermal response and shape change on hypersonic space vehicles. Its governing equations include total energy balance, pyrolysis gas momentum conservation, and a three-component decomposition model. The flow code solves the reacting Navier-Stokes equations using Data Parallel Line Relaxation (DPLR) method. Loose coupling between material response and flow codes is performed by solving the surface mass balance in DPLR and the surface energy balance in TITAN. Thus, the material surface recession is predicted by finite-rate gas/surface interaction boundary conditions implemented in DPLR, and the surface temperature and pyrolysis gas injection rate are computed in TITAN. Two sets of gas/surface interaction chemistry between air and carbon surface developed by Park and Zhluktov, respectively, are studied. Coupled fluid-material response analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities are considered. The ablating material used in these arc-jet tests was a Phenolic Impregnated Carbon Ablator (PICA). Computational predictions of in-depth material thermal response and surface recession are compared with the experimental measurements for stagnation cold wall heat flux ranging from 107 to 1100 Watts per square centimeter.
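The surface energy balance at the heart of the TITAN/DPLR coupling can be caricatured by its radiative-equilibrium limit, in which the incoming convective flux is balanced by reradiation alone (in-depth conduction, pyrolysis blowing, and hot-wall corrections neglected; the numbers are illustrative, not from the arc-jet tests):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def seb_residual(Tw, q_in, emissivity=0.9):
    """Radiative-equilibrium surface energy balance: incoming heat flux
    minus reradiation. A positive residual means the wall must run hotter."""
    return q_in - emissivity * SIGMA * Tw ** 4

def solve_wall_temperature(q_in, lo=300.0, hi=6000.0, tol=1e-6):
    """Bisection on the residual, standing in for the iterative
    surface-energy-balance solve of a coupled fluid/material analysis."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if seb_residual(mid, q_in) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A 200 W/cm^2 cold-wall heat flux corresponds to 2.0e6 W/m^2.
Tw = solve_wall_temperature(2.0e6)
```

Even this stripped-down balance lands in the few-thousand-kelvin range typical of arc-jet stagnation tests, which is why the full coupled treatment with finite-rate surface chemistry is needed for quantitative recession predictions.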
Effect of Non-Equilibrium Surface Thermochemistry in Simulation of Carbon Based Ablators
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Gokcen, Tahir
2012-01-01
This study demonstrates that coupling of a material thermal response code and a flow solver using non-equilibrium gas/surface interaction model provides time-accurate solutions for the multidimensional ablation of carbon based charring ablators. The material thermal response code used in this study is the Two-dimensional Implicit Thermal-response and AblatioN Program (TITAN), which predicts charring material thermal response and shape change on hypersonic space vehicles. Its governing equations include total energy balance, pyrolysis gas mass conservation, and a three-component decomposition model. The flow code solves the reacting Navier-Stokes equations using Data Parallel Line Relaxation (DPLR) method. Loose coupling between the material response and flow codes is performed by solving the surface mass balance in DPLR and the surface energy balance in TITAN. Thus, the material surface recession is predicted by finite-rate gas/surface interaction boundary conditions implemented in DPLR, and the surface temperature and pyrolysis gas injection rate are computed in TITAN. Two sets of nonequilibrium gas/surface interaction chemistry between air and the carbon surface developed by Park and Zhluktov, respectively, are studied. Coupled fluid-material response analyses of stagnation tests conducted in NASA Ames Research Center arc-jet facilities are considered. The ablating material used in these arc-jet tests was Phenolic Impregnated Carbon Ablator (PICA). Computational predictions of in-depth material thermal response and surface recession are compared with the experimental measurements for stagnation cold wall heat flux ranging from 107 to 1100 Watts per square centimeter.
Delamuta, Jakeline Renata Marçon; Ribeiro, Renan Augusto; Gomes, Douglas Fabiano; Souza, Renata Carolina; Chueire, Ligia Maria Oliveira
2015-01-01
Bradyrhizobium pachyrhizi PAC48T has been isolated from a jicama nodule in Costa Rica. The draft genome indicates high similarity with that of Bradyrhizobium elkanii. Several coding sequences (CDSs) of the stress response might help in survival in the tropics. PAC48T carries nodD1 and nodK, similar to Bradyrhizobium (Parasponia) ANU 289 and a particular nodD2 gene. PMID:26383651
NASA Astrophysics Data System (ADS)
Pintado, O. I.; Santillán, L.; Marquetti, M. E.
All images obtained with a telescope are distorted by the instrument. This distortion is known as the instrumental profile or instrumental broadening. The deformations in the spectra can introduce large errors in the determination of different parameters, especially those dependent on the spectral line shapes, such as chemical abundances, winds, microturbulence, etc. To correct this distortion, in some cases the spectral lines are convolved with a Gaussian function, and in others the lines are widened by a fixed value. Some codes used to calculate synthetic spectra, such as SYNTHE, include these corrections. We present results obtained for the REOSC and EBASIM spectrographs at CASLEO.
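A minimal sketch of the Gaussian-convolution correction mentioned above, applied to a synthetic line (the widths are illustrative; for Gaussian profiles the intrinsic and instrumental widths add in quadrature, and a unit-area kernel conserves the total flux):

```python
import math

def gaussian(n, center, sigma):
    """Synthetic intrinsic line profile sampled on n pixels."""
    return [math.exp(-0.5 * ((i - center) / sigma) ** 2) for i in range(n)]

def gaussian_kernel(sigma, halfwidth):
    """Discrete instrumental profile, normalized to unit area."""
    k = [math.exp(-0.5 * (j / sigma) ** 2)
         for j in range(-halfwidth, halfwidth + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_same(x, kernel):
    """Direct convolution, same-size output, edges truncated."""
    h = len(kernel) // 2
    n = len(x)
    return [sum(kernel[j + h] * x[i - j]
                for j in range(-h, h + 1) if 0 <= i - j < n)
            for i in range(n)]

def rms_width(x):
    """Second-moment width of a profile."""
    tot = sum(x)
    mu = sum(i * v for i, v in enumerate(x)) / tot
    return math.sqrt(sum((i - mu) ** 2 * v for i, v in enumerate(x)) / tot)

line = gaussian(201, center=100, sigma=2.0)         # intrinsic stellar line
profile = gaussian_kernel(sigma=3.0, halfwidth=12)  # instrumental profile
observed = convolve_same(line, profile)
```

The observed width is sqrt(2^2 + 3^2) pixels, so any abundance or microturbulence analysis that ignores the instrumental term will overestimate the intrinsic broadening.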
Dynamic quality of service differentiation using fixed code weight in optical CDMA networks
NASA Astrophysics Data System (ADS)
Kakaee, Majid H.; Essa, Shawnim I.; Abd, Thanaa H.; Seyedzadeh, Saleh
2015-11-01
The emergence of network-driven applications, such as internet, video conferencing, and online gaming, brings the need for network environments capable of providing diverse Quality of Service (QoS). In this paper, a new code family of novel spreading sequences, called Multi-Service (MS) codes, has been constructed to support multiple services in Optical Code Division Multiple Access (CDMA) systems. The proposed method uses a fixed weight for all services, while reducing the number of interfering codewords for the users requiring higher QoS. The performance of the proposed code is demonstrated using mathematical analysis. It is shown that the total number of served users with a satisfactory BER of 10⁻⁹ using NB=2 is 82, whereas only 36 and 10 users can be served with NB=3 and 4, respectively. The developed MS code is compared with variable-weight codes such as Variable Weight-Khazani Syed (VW-KS) and Multi-Weight-Random Diagonal (MW-RD). Different numbers of basic users (NB) are used to support triple-play services (audio, data and video) with different QoS requirements. Furthermore, with reference to BERs of 10⁻¹², 10⁻⁹, and 10⁻³ for video, data and audio, respectively, the system can support up to 45 users in total. Hence, the results show that the technique provides clear relative QoS differentiation, in which a lower number of basic users supports a larger number of subscribers, as well as better performance in terms of an acceptable BER of 10⁻⁹ at fixed code weight.
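For context, the way multiple-access interference drives the BER in fixed-weight optical CDMA can be sketched with the classical binomial hit model for optical orthogonal codes with unit cross-correlation. This generic estimate (weight, code length, and user counts invented for illustration) is a stand-in, not the MS-code analysis of the paper:

```python
import math

def ooc_ber(num_users, weight, code_length):
    """Chip-level binomial hit model for optical orthogonal codes with
    maximum cross-correlation 1, decision threshold set to the code weight.
    p is the probability that one interferer produces a hit."""
    p = weight ** 2 / (2.0 * code_length)
    k = num_users - 1  # number of interferers
    return 0.5 * sum(math.comb(k, i) * p ** i * (1.0 - p) ** (k - i)
                     for i in range(weight, k + 1))

ber_10 = ooc_ber(10, weight=4, code_length=500)
ber_50 = ooc_ber(50, weight=4, code_length=500)
```

The BER grows steeply with the number of simultaneous users at fixed code weight, which is why limiting the interfering codewords seen by high-QoS users, as the MS code does, buys a lower error floor for those services.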
A meta-analysis of context-dependency in plant response to inoculation with mycorrhizal fungi.
Hoeksema, Jason D; Chaudhary, V Bala; Gehring, Catherine A; Johnson, Nancy Collins; Karst, Justine; Koide, Roger T; Pringle, Anne; Zabinski, Catherine; Bever, James D; Moore, John C; Wilson, Gail W T; Klironomos, John N; Umbanhowar, James
2010-03-01
Ecology Letters (2010) 13: 394-407. Mycorrhizal fungi influence plant growth, local biodiversity and ecosystem function. Effects of the symbiosis on plants span the continuum from mutualism to parasitism. We sought to understand this variation in symbiotic function using meta-analysis with information theory-based model selection to assess the relative importance of factors in five categories: (1) identity of the host plant and its functional characteristics, (2) identity and type of mycorrhizal fungi (arbuscular mycorrhizal vs. ectomycorrhizal), (3) soil fertility, (4) biotic complexity of the soil and (5) experimental location (laboratory vs. field). Across most subsets of the data, host plant functional group and N-fertilization were surprisingly much more important in predicting plant responses to mycorrhizal inoculation ('plant response') than other factors. Non-N-fixing forbs and woody plants and C4 grasses responded more positively to mycorrhizal inoculation than plants with N-fixing bacterial symbionts and C3 grasses. In laboratory studies of the arbuscular mycorrhizal symbiosis, plant response was more positive when the soil community was more complex. Univariate analyses supported the hypothesis that plant response is most positive when plants are P-limited rather than N-limited. These results emphasize that mycorrhizal function depends on both abiotic and biotic context, and have implications for plant community theory and restoration ecology.
NASA Technical Reports Server (NTRS)
Teske, M. E.
1984-01-01
This is a user manual for the computer code "AGDISP" (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed- and rotary-wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.
A parallel and modular deformable cell Car-Parrinello code
NASA Astrophysics Data System (ADS)
Cavazzoni, Carlo; Chiarotti, Guido L.
1999-12-01
We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm, including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90 and makes use of some newer programming concepts such as encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependences among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up over a wide range of processor counts, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing, for a fixed problem, with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ~50 ps.
Teaching an Old Dog an Old Trick: FREE-FIX and Free-Boundary Axisymmetric MHD Equilibrium
NASA Astrophysics Data System (ADS)
Guazzotto, Luca
2015-11-01
A common task in plasma physics research is the calculation of an axisymmetric equilibrium for tokamak modeling. The main unknown of the problem is the magnetic poloidal flux ψ. The easiest approach is to assign the shape of the plasma and only solve the equilibrium problem in the plasma / closed-field-lines region (the ``fixed-boundary approach''). Often, one may also need the vacuum fields, i.e. the equilibrium in the open-field-lines region, requiring either coil currents or ψ on some closed curve outside the plasma to be assigned (the ``free-boundary approach''). Going from one approach to the other is a textbook problem, involving the calculation of Green's functions and surface integrals in the plasma. However, no tools are readily available to perform this task. Here we present a code (FREE-FIX) to compute a boundary condition for a free-boundary equilibrium given only the corresponding fixed-boundary equilibrium. An improvement to the standard solution method, allowing for much faster calculations, is presented. Applications are discussed. PPPL fund 245139 and DOE grant G00009102.
Measurements of charge distributions of the fragments in the low energy fission reaction
NASA Astrophysics Data System (ADS)
Wang, Taofeng; Han, Hongyin; Meng, Qinghua; Wang, Liming; Zhu, Liping; Xia, Haihong
2013-01-01
The measurement of charge distributions of fragments in the spontaneous fission of 252Cf has been performed using a unique detector setup consisting of a typical grid ionization chamber and a ΔE-E particle telescope, in which a thin grid ionization chamber served as the ΔE-section and the E-section was an Au-Si surface barrier detector. The typical physical quantities of the fragments, such as mass number and kinetic energies, as well as the energy deposition in the gas ΔE detector and the E detector, were derived from the coincident measurement data. The charge distributions of the light fragments for fixed mass number A2* and total kinetic energy (TKE) were obtained by least-squares fits of the response functions of the ΔE detector with multi-Gaussian functions representing the different elements. The results of the charge distributions for some typical fragments are shown in this article, indicating that this detection setup has a charge resolution capability of Z:ΔZ>40:1. The experimental method developed in this work for determining the charge distributions of fragments is expected to be employed in the neutron-induced fissions of 232Th and 238U and other low energy fission reactions.
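When the peak centers and widths are held fixed (one Gaussian per element), the multi-Gaussian fit is linear in the amplitudes and reduces to a single least-squares solve. A minimal sketch on synthetic, noiseless data; the peak positions and yields below are invented for illustration, not taken from the ΔE spectra of the paper:

```python
import numpy as np

def gaussian_basis(x, centers, sigma):
    """Design matrix: one fixed-center, fixed-width Gaussian per element."""
    return np.exp(-0.5 * ((x[:, None] - np.asarray(centers)[None, :])
                          / sigma) ** 2)

# Synthetic Delta-E response: three charge species with known yields.
x = np.arange(0.0, 81.0)
centers = [30.0, 40.0, 50.0]            # illustrative peak positions
sigma = 3.0
true_amps = np.array([120.0, 300.0, 75.0])
spectrum = gaussian_basis(x, centers, sigma) @ true_amps

# Amplitudes enter linearly, so the fit is one linear least-squares solve.
A = gaussian_basis(x, centers, sigma)
fit_amps, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

In the real analysis the centers and widths are themselves calibrated, and the relative amplitudes of neighboring Gaussians then give the charge distribution at fixed mass number and TKE.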
Rubel, Elisa Terumi; Raittz, Roberto Tadeu; Coimbra, Nilson Antonio da Rocha; Gehlen, Michelly Alves Coutinho; Pedrosa, Fábio de Oliveira
2016-12-15
Azospirillum brasilense is a plant-growth-promoting nitrogen-fixing bacterium that is used as a bio-fertilizer in agriculture. Since nitrogen fixation has a high energy demand, the reduction of N2 to NH4+ by nitrogenase occurs only under limiting conditions of NH4+ and O2. Moreover, the synthesis and activity of nitrogenase is highly regulated to prevent energy waste. In A. brasilense, nitrogenase activity is regulated by the products of draG and draT. The product of the draB gene, located downstream in the draTGB operon, may be involved in the regulation of nitrogenase activity by an as-yet-unknown mechanism. A deep in silico analysis of the product of draB was undertaken, aiming to suggest its possible function and involvement with DraT and DraG in the regulation of nitrogenase activity in A. brasilense. In this work, we present a new artificial intelligence strategy for protein classification, named ProClaT. The features used by the pattern recognition model were derived from the primary structure of the DraB homologous proteins, calculated by a ProClaT internal algorithm. ProClaT was applied to this case study and the results revealed that the A. brasilense draB gene codes for a protein highly similar to the nitrogenase-associated NifO protein of Azotobacter vinelandii. This tool allowed the reclassification as NifO-like of DraB/NifO homologous proteins previously annotated as hypothetical, conserved hypothetical, or putative arsenate reductase (ArsC). An analysis of co-occurrence of draB, draT, draG and of other nif genes was performed, suggesting the involvement of draB (nifO) in nitrogen fixation, although a specific function could not be assigned.
Dynamic Forms. Part 1: Functions
NASA Technical Reports Server (NTRS)
Meyer, George; Smith, G. Allan
1993-01-01
The formalism of dynamic forms is developed as a means for organizing and systematizing the design of control systems. The formalism allows the designer to easily compute derivatives to various orders of large composite functions that occur in flight-control design. Such functions involve many function-of-a-function calls that may be nested to many levels. The component functions may be multiaxis, nonlinear, and they may include rotation transformations. A dynamic form is defined as a variable together with its time derivatives up to some fixed but arbitrary order. The variable may be a scalar, a vector, a matrix, a direction cosine matrix, Euler angles, or Euler parameters. Algorithms for standard elementary functions and operations of scalar dynamic forms are developed first. Then vector and matrix operations and transformations between parameterizations of rotations are developed at the next level of the hierarchy. Commonly occurring algorithms in control-system design, including inversion of pure feedback systems, are developed at the third level. A large-angle, three-axis attitude servo and other examples are included to illustrate the effectiveness of the developed formalism. All algorithms were implemented in FORTRAN code. Practical experience shows that the proposed formalism may significantly improve the productivity of the design and coding process.
SoyNet: a database of co-functional networks for soybean Glycine max.
Kim, Eiru; Hwang, Sohyun; Lee, Insuk
2017-01-04
Soybean (Glycine max) is a legume crop with substantial economic value, providing a source of oil and protein for humans and livestock. More than 50% of edible oils consumed globally are derived from this crop. Soybean plants are also important for soil fertility, as they fix atmospheric nitrogen by symbiosis with microorganisms. The latest soybean genome annotation (version 2.0) lists 56 044 coding genes, yet their functional contributions to crop traits remain mostly unknown. Co-functional networks have proven useful for identifying genes that are involved in a particular pathway or phenotype with various network algorithms. Here, we present SoyNet (available at www.inetbio.org/soynet), a database of co-functional networks for G. max and a companion web server for network-based functional predictions. SoyNet maps 1 940 284 co-functional links between 40 812 soybean genes (72.8% of the coding genome), which were inferred from 21 distinct types of genomics data including 734 microarrays and 290 RNA-seq samples from soybean. SoyNet provides a new route to functional investigation of the soybean genome, elucidating genes and pathways of agricultural importance. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
On the Profitability of Variable Speed Pump-Storage-Power in Frequency Restoration Reserve
NASA Astrophysics Data System (ADS)
Filipe, Jorge; Bessa, Ricardo; Moreira, Carlos; Silva, Bernardo
2017-04-01
The increased penetration of renewable energy sources (RES) into the European power system has introduced a significant amount of variability and uncertainty in the generation profiles, raising the need for ancillary services as well as other tools like demand response, improved generation forecasting techniques and changes to the market design. While RES can replace energy produced by traditional centralized generation, it cannot displace its capacity in terms of ancillary services provided. Therefore, centralized generation capacity must be retained to perform this function, leading to over-capacity issues and underutilisation of the assets. Large-scale reversible hydro power plants represent the majority of the storage capacity installed in the power system. This technology comes with high investment costs, hence the constant search for methods to increase and diversify the sources of revenue. Traditional fixed speed pump storage units typically operate in the day-ahead market to perform price arbitrage and, in some specific cases, provide downward replacement reserve (RR). Variable speed pump storage can not only participate in RR but also contribute to frequency restoration reserve (FRR), given the ability to control its operating point in pumping mode. This work presents an extended analysis of a complete bidding strategy for pumped storage power, highlighting the economic advantages of variable speed pump units in comparison with fixed speed ones.
Linear fixed-field multipass arcs for recirculating linear accelerators
Morozov, V. S.; Bogacz, S. A.; Roblin, Y. R.; ...
2012-06-14
Recirculating Linear Accelerators (RLA's) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. Finally, we present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.
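The linear optics of a combined-function magnet can be sketched with its 3×3 horizontal transfer matrix over (x, x′, δ), where δ is the relative momentum deviation; the curvature, gradient, and length below are illustrative, not the paper's arc design. The dispersion column (last column) is what the dipole/quadrupole adjustment acts on to zero the reference-orbit offsets.

```python
import math

def combined_function_matrix(h, k, L):
    """Horizontal 3x3 transfer matrix over (x, x', delta) for a sector
    combined-function magnet: curvature h = 1/rho, quadrupole gradient k,
    length L, assuming a horizontally focusing body (K = h^2 + k > 0)."""
    K = h * h + k
    w = math.sqrt(K)
    C = math.cos(w * L)
    S = math.sin(w * L) / w              # sine-like principal trajectory
    return [[C,      S,   h * (1.0 - C) / K],   # last column: dispersion
            [-K * S, C,   h * S],
            [0.0,    0.0, 1.0]]

def matmul(A, B):
    """Compose transfer matrices (B first, then A)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Because the (x, x′) block is symplectic (unit determinant) for every magnet, any string of such magnets composed with `matmul` preserves it, while the dispersion terms accumulate and can be steered to zero by the design.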
The Nuclear Energy Knowledge and Validation Center – Summary of Activities Conducted in FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans David; Hong, Bonnie Colleen
2016-05-01
The Nuclear Energy Knowledge and Validation Center (NEKVaC) is a new initiative by the Department of Energy and the Idaho National Laboratory to coordinate and focus the resources and expertise that exist within the DOE Complex toward solving issues in modern nuclear code validation. In time, code owners, users, and developers will view the Center as a partner and essential resource for acquiring the best practices and latest techniques for validating codes, for guidance in planning and executing experiments, for facilitating access to, and maximizing the usefulness of, existing data, and for preserving knowledge for continual use by nuclear professionals and organizations for their own validation needs. The scope of the center covers many inter-related activities which will need to be cultivated carefully in the near-term and managed properly once the Center is fully functional. Three areas comprise the principal mission: 1) identification and prioritization of projects that extend the field of validation science and its application to modern codes, 2) adapt or develop best practices and guidelines for high fidelity multiphysics/multiscale analysis code development and associated experiment design, and 3) define protocols for data acquisition and knowledge preservation and provide a portal for access to databases currently scattered among numerous organizations. These mission areas, while each having a unique focus, are inter-dependent and complementary. Likewise, all activities supported by the NEKVaC (both near-term and long-term) must possess elements supporting all three. This cross-cutting nature is essential to ensuring that activities and supporting personnel do not become ‘stove-piped’, i.e. focused so much on a specific function that the activity itself becomes the objective rather than achieving the larger vision. Achieving the broader vision will require a healthy and accountable level of activity in each of the areas.
This will take time and significant DOE support. Growing too fast (budget-wise) will not allow ideas to mature, lessons to be learned, and taxpayer money to be spent responsibly. The process should be initiated with a small set of tasks, executed over a short but reasonable term, that will exercise most if not all aspects of the Center’s potential operation. The initial activities described in this report have a high potential for near-term success in demonstrating Center objectives but also to work out some of the issues in task execution, communication between functional elements, and the ability to raise awareness of the Center and cement stakeholder buy-in. This report begins with a description of the Mission areas; specifically the role played by each and the types of activities for which they are responsible. It then lists and describes the proposed near-term tasks upon which future efforts can build.
Calculation of wakefields in 2D rectangular structures
Zagorodnov, I.; Bane, K. L. F.; Stupakov, G.
2015-10-19
We consider the calculation of electromagnetic fields generated by an electron bunch passing through a vacuum chamber structure that, in general, consists of an entry pipe, followed by some kind of transition or cavity, and ending in an exit pipe. We limit our study to structures having rectangular cross section, where the height can vary as a function of longitudinal coordinate but the width and side walls remain fixed. For such structures, we derive a Fourier representation of the wake potentials through one-dimensional functions. A new numerical approach for calculating the wakes in such structures is proposed and implemented in the computer code echo(2d). The computation resource requirements for this approach are moderate and comparable to those for finding the wakes in 2D rotationally symmetric structures. Finally, we present numerical examples obtained with the new numerical code.
Electro-actuated hydrogel walkers with dual responsive legs.
Morales, Daniel; Palleau, Etienne; Dickey, Michael D; Velev, Orlin D
2014-03-07
Stimuli responsive polyelectrolyte hydrogels may be useful for soft robotics because of their ability to transform chemical energy into mechanical motion without the use of external mechanical input. Composed of soft and biocompatible materials, gel robots can easily bend and fold, interface and manipulate biological components and transport cargo in aqueous solutions. Electrical fields in aqueous solutions offer repeatable and controllable stimuli, which induce actuation by the re-distribution of ions in the system. Electrical fields applied to polyelectrolyte-doped gels submerged in ionic solution distribute the mobile ions asymmetrically to create osmotic pressure differences that swell and deform the gels. The sign of the fixed charges on the polyelectrolyte network determines the direction of bending, which we harness to control the motion of the gel legs in opposing directions as a response to electrical fields. We present and analyze a walking gel actuator comprised of cationic and anionic gel legs made of copolymer networks of acrylamide (AAm)/sodium acrylate (NaAc) and acrylamide/quaternized dimethylaminoethyl methacrylate (DMAEMA Q), respectively. The anionic and cationic legs were attached by electric field-promoted polyion complexation. We characterize the electro-actuated response of the sodium acrylate hydrogel as a function of charge density and external salt concentration. We demonstrate that "osmotically passive" fixed charges play an important role in controlling the bending magnitude of the gel networks. The gel walkers achieve unidirectional motion on flat elastomer substrates and exemplify a simple way to move and manipulate soft matter devices and robots in aqueous solutions.
Modification of codes NUALGAM and BREMRAD, Volume 1
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Huang, R.; Firstenberg, H.
1971-01-01
The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.
Construction, classification and parametrization of complex Hadamard matrices
NASA Astrophysics Data System (ADS)
Szöllősi, Ferenc
To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work.
They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
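Rayleigh quotient iteration, the eigenvalue solver described above, can be illustrated on a small symmetric matrix; the matrix is illustrative only. Each sweep updates the shift with the Rayleigh quotient and solves the shifted system, and the shifted matrix becoming nearly singular near convergence is exactly the ill-conditioning that motivates the preconditioner in the text.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rayleigh_quotient_iteration(A, x, tol=1e-9, max_iter=50):
    """RQI for a symmetric matrix: converges cubically to the eigenpair
    nearest the starting vector."""
    n = len(A)
    lam = 0.0
    for _ in range(max_iter):
        nrm = dot(x, x) ** 0.5
        x = [xi / nrm for xi in x]
        Ax = matvec(A, x)
        lam = dot(x, Ax)                  # Rayleigh quotient shift
        resid = [ai - lam * xi for ai, xi in zip(Ax, x)]
        if dot(resid, resid) ** 0.5 < tol:
            break
        # (A - lam I) is nearly singular close to convergence -- the
        # poor conditioning the abstract refers to.
        shifted = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)]
                   for i in range(n)]
        x = solve(shifted, x)
    return lam, x
```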
Advances in Nonlinear Non-Scaling FFAGs
NASA Astrophysics Data System (ADS)
Johnstone, C.; Berz, M.; Makino, K.; Koscielniak, S.; Snopok, P.
Accelerators are playing increasingly important roles in basic science, technology, and medicine. Ultra high-intensity and high-energy (GeV) proton drivers are a critical technology for accelerator-driven sub-critical reactors (ADS) and many HEP programs (Muon Collider) but remain particularly challenging, encountering duty cycle and space-charge limits in the synchrotron and machine size concerns in the weaker-focusing cyclotrons; a 10-20 MW proton driver is not presently considered technically achievable with conventional re-circulating accelerators. One as-yet-unexplored re-circulating accelerator, the Fixed-field Alternating Gradient or FFAG, is an attractive alternative to the other approaches to a high-power beam source. Its strong focusing optics can mitigate space charge effects and achieve higher bunch charges than are possible in a cyclotron, and a recent innovation in design has coupled stable tunes with isochronous orbits, making the FFAG capable of fixed-frequency, CW acceleration, as in the classical cyclotron but beyond their energy reach, well into the relativistic regime. This new concept has been advanced in non-scaling nonlinear FFAGs using powerful new methodologies developed for FFAG accelerator design and simulation. The machine described here has the high average current advantage and duty cycle of the cyclotron (without using broadband RF frequencies) in combination with the strong focusing, smaller losses, and energy variability that are more typical of the synchrotron. The current industrial and medical standard is a cyclotron, but a competing CW FFAG could promote a shift in this baseline. This paper reports on these new advances in FFAG accelerator technology and presents advanced modeling tools for fixed-field accelerators unique to the code COSY INFINITY.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
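The Huffman construction mentioned above can be sketched in a few lines: repeatedly merge the two lightest subtrees, prefixing "0" and "1" to the codewords on each side. This is the generic algorithm, not the FVLF system's actual code table.

```python
import heapq

def huffman_code(freqs):
    """Build an optimal prefix-free (Huffman) code from symbol weights."""
    # Each heap entry: [total weight, tie-breaker, {symbol: code-so-far}]
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, tick, merged])
        tick += 1
    return heap[0][2]
```

For weights {a: 5, b: 2, c: 1, d: 1} the resulting code lengths are 1, 2, 3, 3 bits, an average of 15/9 ≈ 1.67 bits per symbol versus 2 bits for a fixed-length code; in a channel with errors, however, a single flipped bit can desynchronize such variable-length codes, which is the trade-off the study examines.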
Predictors of Eating Behavior in Middle Childhood: A Hybrid Fixed Effects Model
ERIC Educational Resources Information Center
Bjørklund, Oda; Belsky, Jay; Wichstrøm, Lars; Steinsbekk, Silje
2018-01-01
Children's eating behavior influences energy intake and thus weight through choices of type and amount of food. One type of eating behavior, food responsiveness, defined as eating in response to external cues such as the sight and smell of food, is particularly related to increased caloric intake and weight. Because little is known about the…
The square lattice Ising model on the rectangle II: finite-size scaling limit
NASA Astrophysics Data System (ADS)
Hucht, Alfred
2017-06-01
Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.
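The scaling setup can be summarized in generic notation (ours, not necessarily the paper's): after subtracting bulk and surface terms, the residual part of the free energy approaches a universal function of the two scaling variables, and the Casimir force follows by differentiation at fixed M.

```latex
\begin{aligned}
x &\propto (T/T_c - 1)\,M, \qquad \rho \propto L/M,\\[2pt]
F(L,M) &= F_{\text{bulk}} + F_{\text{surf}} + \Theta(x,\rho) + \dots,\\[2pt]
\beta F_C &\equiv -\left.\frac{\partial \Theta}{\partial L}\right|_{T,M}
          = -\frac{1}{M}\left.\frac{\partial \Theta}{\partial \rho}\right|_{x},
\end{aligned}
```

where Θ is the Casimir potential scaling function; the corner contributions discussed in the abstract enter Θ and produce its logarithmic divergence at criticality.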
Light transport feature for SCINFUL.
Etaati, G R; Ghal-Eh, N
2008-03-01
An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.
Fixed forced detection for fast SPECT Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Cajgfinger, T.; Rit, S.; Létang, J. M.; Halty, A.; Sarrut, D.
2018-03-01
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and detected in the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. We obtained differences with analog Monte Carlo lower than the statistical uncertainty. The overall computing time gain can reach up to five orders of magnitude. Source code and examples are available in the Gate V8.0 release.
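The weighting idea behind forced detection can be illustrated with a toy geometry: an isotropic point source, a detector pixel subtending a fraction p of the emission sphere, and an attenuating medium of optical thickness μd. Analog Monte Carlo waits for rare hits, while forced detection deposits the exact detection weight for every history. The numbers below are illustrative, not GATE's detector model.

```python
import math
import random

def analog_estimate(n, p_pixel, mu_d, rng):
    """Analog MC: a photon counts only if it is emitted toward the pixel
    AND survives attenuation -- a rare event, hence slow convergence."""
    p_detect = p_pixel * math.exp(-mu_d)
    return sum(1 for _ in range(n) if rng.random() < p_detect) / n

def ffd_estimate(n, p_pixel, mu_d):
    """Forced detection: every history deposits its detection probability
    (emission weight times transmission) as a weight in the pixel."""
    weight = p_pixel * math.exp(-mu_d)
    return sum(weight for _ in range(n)) / n
```

With p = 1e-3 and μd = 2 the detection probability is about 1.35e-4, so analog simulation needs thousands of histories per scored photon; in this zero-scatter toy the forced estimator is exact after a single history, which is the source of the large computing-time gains reported above.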
Elliott, R; Agnew, Z; Deakin, J F W
2008-05-01
Functional imaging studies in recent years have confirmed the involvement of orbitofrontal cortex (OFC) in human reward processing and have suggested that OFC responses are context-dependent. A seminal electrophysiological experiment in primates taught animals to associate abstract visual stimuli with differently valuable food rewards. Subsequently, pairs of these learned abstract stimuli were presented and firing of OFC neurons to the medium-value stimulus was measured. OFC firing was shown to depend on the relative value context. In this study, we developed a human analogue of this paradigm and scanned subjects using functional magnetic resonance imaging. The analysis compared neuronal responses to two superficially identical events, which differed only in terms of the preceding context. Medial OFC response to the same perceptual stimulus was greater when the stimulus predicted the more valuable of two rewards than when it predicted the less valuable. Additional responses were observed in other components of reward circuitry, the amygdala and ventral striatum. The central finding is consistent with the primate results and suggests that OFC neurons code relative rather than absolute reward value. Amygdala and striatal involvement in coding reward value is also consistent with recent functional imaging data. By using a simpler and less confounded paradigm than many functional imaging studies, we are able to demonstrate that relative financial reward value per se is coded in distinct subregions of an extended reward and decision-making network.
A deep learning-based reconstruction of cosmic ray-induced air showers
NASA Astrophysics Data System (ADS)
Erdmann, M.; Glombitza, J.; Walz, D.
2018-01-01
We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detector's responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
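The analytic reconstruction used as the benchmark above is typically a plane-wave fit to arrival times: for a shower with direction cosines (u, v), the front reaches detector i at t_i = t0 + (u·x_i + v·y_i)/c, which is linear in (u, v, t0) and solvable by least squares. The grid spacing and direction below are illustrative.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_plane_wave(xs, ys, ts, c=299.792458):
    """Least-squares fit of t_i = t0 + (u x_i + v y_i)/c.
    Units: metres and microseconds (c in m/us)."""
    rows = [[x / c, y / c, 1.0] for x, y in zip(xs, ys)]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * t for r, t in zip(rows, ts)) for i in range(3)]
    u, v, t0 = solve(AtA, Atb)
    return u, v, t0
```

The zenith angle follows from the fitted direction cosines as θ = arcsin(√(u² + v²)); the deep-learning approach above replaces this rigid plane-front model with features learned from the full time traces.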
Coral-zooxanthellae meta-transcriptomics reveals integrated response to pollutant stress.
Gust, Kurt A; Najar, Fares Z; Habib, Tanwir; Lotufo, Guilherme R; Piggot, Alan M; Fouke, Bruce W; Laird, Jennifer G; Wilbanks, Mitchell S; Rawat, Arun; Indest, Karl J; Roe, Bruce A; Perkins, Edward J
2014-07-12
Corals represent symbiotic meta-organisms that require harmonization among the coral animal, photosynthetic zooxanthellae and associated microbes to survive environmental stresses. We investigated integrated-responses among coral and zooxanthellae in the scleractinian coral Acropora formosa in response to an emerging marine pollutant, the munitions constituent, hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX; 5-day exposures to 0 (control), 0.5, 0.9, 1.8, 3.7, and 7.2 mg/L, measured in seawater). RDX accumulated readily in coral soft tissues with bioconcentration factors ranging from 1.1 to 1.5. Next-generation sequencing of a normalized meta-transcriptomic library developed for the eukaryotic components of the A. formosa coral holobiont was leveraged to conduct microarray-based global transcript expression analysis of integrated coral/zooxanthellae responses to the RDX exposure. Total differentially expressed transcripts (DET) increased with increasing RDX exposure concentrations as did the proportion of zooxanthellae DET relative to the coral animal. Transcriptional responses in the coral demonstrated higher sensitivity to RDX compared to zooxanthellae where increased expression of gene transcripts coding xenobiotic detoxification mechanisms (i.e. cytochrome P450 and UDP glucuronosyltransferase 2 family) were initiated at the lowest exposure concentration. Increased expression of these detoxification mechanisms was sustained at higher RDX concentrations as well as production of a physical barrier to exposure through a 40% increase in mucocyte density at the maximum RDX exposure. At and above the 1.8 mg/L exposure concentration, DET coding for genes involved in central energy metabolism, including photosynthesis, glycolysis and electron-transport functions, were decreased in zooxanthellae although preliminary data indicated that zooxanthellae densities were not affected.
In contrast, significantly increased transcript expression for genes involved in cellular energy production including glycolysis and electron-transport pathways was observed in the coral animal. Transcriptional network analysis for central energy metabolism demonstrated highly correlated responses to RDX among the coral animal and zooxanthellae indicative of potential compensatory responses to lost photosynthetic potential within the holobiont. These observations underscore the potential for complex integrated responses to RDX exposure among species comprising the coral holobiont and highlight the need to understand holobiont-species interactions to accurately assess pollutant impacts.
Massof, Robert W
2014-10-01
A simple theoretical framework explains patient responses to items in rating scale questionnaires. Fixed latent variables position each patient and each item on the same linear scale. Item responses are governed by a set of fixed category thresholds, one for each ordinal response category. A patient's item responses are magnitude estimates of the difference between the patient variable and the patient's estimate of the item variable, relative to his/her personally defined response category thresholds. Differences between patients in their personal estimates of the item variable and in their personal choices of category thresholds are represented by random variables added to the corresponding fixed variables. Effects of intervention correspond to changes in the patient variable, the patient's response bias, and/or latent item variables for a subset of items. Intervention effects on patients' item responses were simulated by assuming the random variables are normally distributed with a constant scalar covariance matrix. Rasch analysis was used to estimate latent variables from the simulated responses. The simulations demonstrate that changes in the patient variable and changes in response bias produce indistinguishable effects on item responses and manifest as changes only in the estimated patient variable. Changes in a subset of item variables manifest as intervention-specific differential item functioning and as changes in the estimated person variable that equals the average of changes in the item variables. Simulations demonstrate that intervention-specific differential item functioning produces inefficiencies and inaccuracies in computer adaptive testing. © The Author(s) 2013.
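The central finding, that a shift in the patient variable and an equal shift in response bias are indistinguishable, can be made concrete with the dichotomous Rasch model (a simplification of the paper's rating-scale setup): the response probability depends only on the sum of the patient variable and the bias, minus the item location.

```python
import math

def rasch_prob(theta, bias, item_b):
    """P(endorse) in a dichotomous Rasch model with an additive
    response-bias term: depends only on (theta + bias) - item_b."""
    return 1.0 / (1.0 + math.exp(-((theta + bias) - item_b)))
```

Because only the sum θ + bias enters, no pattern of item responses can separate a change in the patient variable from an equal change in response bias, which is why the simulations see both manifest solely in the estimated patient variable.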
Error suppression for Hamiltonian quantum computing in Markovian environments
NASA Astrophysics Data System (ADS)
Marvian, Milad; Lidar, Daniel A.
2017-03-01
Hamiltonian quantum computing, such as the adiabatic and holonomic models, can be protected against decoherence using an encoding into stabilizer subspace codes for error detection and the addition of energy penalty terms. This method has been widely studied since it was first introduced by Jordan, Farhi, and Shor (JFS) in the context of adiabatic quantum computing. Here, we extend the original result to general Markovian environments, not necessarily in Lindblad form. We show that the main conclusion of the original JFS study holds under these general circumstances: Assuming a physically reasonable bath model, it is possible to suppress the initial decay out of the encoded ground state with an energy penalty strength that grows only logarithmically in the system size, at a fixed temperature.
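In generic stabilizer notation (ours, not the paper's), the penalty construction adds to the encoded Hamiltonian a term that vanishes on the codespace and costs at least E_P outside it:

```latex
% S_i are stabilizer generators; codespace states satisfy S_i|\psi\rangle = +|\psi\rangle,
% so H_P annihilates the codespace and penalizes leakage by at least E_P.
H \;=\; H_{\mathrm{enc}} \;+\; E_P\, H_P,
\qquad
H_P \;=\; \sum_i \frac{I - S_i}{2},
```

and the result above states that, for a physically reasonable Markovian bath at fixed temperature, suppressing the initial decay out of the encoded ground state requires E_P to grow only logarithmically in the system size.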
Delamuta, Jakeline Renata Marçon; Ribeiro, Renan Augusto; Gomes, Douglas Fabiano; Souza, Renata Carolina; Chueire, Ligia Maria Oliveira; Hungria, Mariangela
2015-09-17
Bradyrhizobium pachyrhizi PAC48(T) was isolated from a jicama nodule in Costa Rica. The draft genome indicates high similarity with that of Bradyrhizobium elkanii. Several coding sequences (CDSs) related to the stress response may aid survival in the tropics. PAC48(T) carries nodD1 and nodK, similar to Bradyrhizobium (Parasponia) ANU 289, and a distinctive nodD2 gene. Copyright © 2015 Delamuta et al.
NASA Technical Reports Server (NTRS)
Youngblut, C.
1984-01-01
Orography and geographically fixed heat sources which force a zonally asymmetric motion field are examined. An extensive space-time spectral analysis of the GLAS climate model (D130) response and observations are compared. An updated version of the model (D150) showed a remarkable improvement in the simulation of the standing waves. The main differences in the model code are an improved boundary layer flux computation and a more realistic specification of the global boundary conditions.
Taheri-Garavand, Amin; Karimi, Fatemeh; Karimi, Mahmoud; Lotfi, Valiullah; Khoobbakht, Golmohammad
2018-06-01
The aim of this study was to fit predictive models, using both response surface methodology and an artificial neural network, and to determine the operating conditions that maximize acceptability via the desirability function approach for a hot air drying process of banana slices. The drying air temperature, air velocity, and drying time were chosen as independent factors; moisture content, drying rate, energy efficiency, and exergy efficiency were the dependent variables (responses). A rotatable central composite design was used to develop models for the responses in the response surface methodology, and isoresponse contour plots made it possible to predict results from only a limited set of experiments. The optimum operating conditions obtained from the artificial neural network models were moisture content 0.14 g/g, drying rate 1.03 g water/g h, energy efficiency 0.61, and exergy efficiency 0.91, when the air temperature, air velocity, and drying time were equal to -0.42 (74.2 °C), 1.00 (1.50 m/s), and -0.17 (2.50 h) in coded units, respectively.
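The "coded units" of a central composite design map each factor linearly onto a dimensionless scale centered on the design midpoint. A minimal sketch of the decoding step (the center and half-range values below are illustrative assumptions, not the study's actual design levels):

```python
def decode(coded, center, half_range):
    """Map a coded design value (e.g. -1..+1 plus axial points in a
    central composite design) back to actual units:
    actual = center + coded * half_range."""
    return center + coded * half_range

# Hypothetical factor ranges, for illustration only:
temp = decode(-0.42, center=80.0, half_range=14.0)   # degrees C
vel = decode(1.00, center=1.0, half_range=0.5)       # m/s
```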
26 CFR 1.6151-1 - Time and place for paying tax shown on returns.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the internal revenue officer with whom the return is filed at the time fixed for filing the return... later than the date fixed for filing the return. (c) Date fixed for payment of tax. In any case in which... within a certain period, any reference in subtitle A or F of the Code to the date fixed for payment of...
PEST reduces bias in forced choice psychophysics.
Taylor, M M; Forbes, S M; Creelman, C D
1983-11-01
Observers performed several different detection tasks using both the PEST adaptive psychophysical procedure and a fixed-level (method of constant stimuli) psychophysical procedure. In two experiments, PEST runs targeted at P(C) = 0.80 were immediately followed by fixed-level detection runs presented at the difficulty level resulting from the PEST run. The fixed-level runs yielded P(C) of about 0.75. During the fixed-level runs, the probability of a correct response was greater when the preceding response was correct than when it was wrong. Observers, even highly trained ones, perform in a nonstationary manner. The sequential dependency data can be used to determine a lower bound for the observer's "true" capability when performing optimally; this lower bound is close to the PEST target, and well above the forced choice P(C). The observer's "true" capability is the measure used by most theories of detection performance. A further experiment compared psychometric functions obtained from a set of PEST runs using different targets with those obtained from blocks of fixed-level trials at different levels. PEST results were more stable across observers, performance at all but the highest signal levels was better with PEST, and the PEST psychometric functions had shallower slopes. We hypothesize that PEST permits the observer to keep track of what he is trying to detect, whereas in the fixed-level method performance is disrupted by memory failure. Some recently suggested "more virulent" versions of PEST may be subject to biases similar to those of the fixed-level procedures.
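The idea of an adaptive track that converges on a target proportion correct can be sketched with a weighted up-down staircase, a simpler relative of PEST (PEST proper uses Wald sequential-test step rules, which are omitted here; the observer model and all parameter values are illustrative assumptions):

```python
import math
import random

def run_staircase(p_correct_at, level, step=0.05, trials=400, target=0.80):
    # Weighted up-down rule: after a correct response the task is made
    # harder by step*(1-target); after an error it is made easier by
    # step*target, so the track equilibrates where P(correct) ~= target.
    for _ in range(trials):
        correct = random.random() < p_correct_at(level)
        level += -step * (1 - target) if correct else step * target
    return level

# Hypothetical observer: logistic psychometric function with 2AFC guessing
def observer(level):
    return 0.5 + 0.5 / (1 + math.exp(-(level - 1.0) / 0.2))

random.seed(0)
final = run_staircase(observer, level=2.0)
```

At equilibrium the expected downward and upward movements balance, `p * (1-target) = (1-p) * target`, giving `p = target`.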
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Lu, J.; Hovorka, S. D.; Freifeld, B. M.; Islam, A.
2015-12-01
Monitoring techniques capable of deep subsurface detection are desirable for early warning and leakage pathway identification in geologic carbon storage formations. This work investigates the feasibility of a leakage detection technique based on pulse testing, a traditional hydrogeological characterization tool. In pulse testing, the monitoring reservoir is stimulated at a fixed frequency and the acquired pressure perturbation signals are analyzed in the frequency domain to detect potential deviations in the reservoir's frequency-domain response function. Unlike traditional time-domain analyses, the frequency-domain analysis aims to minimize the interference of reservoir noise by imposing coded injection patterns such that the reservoir responses to injection can be uniquely determined. We established the theoretical basis of the approach in previous work. Recently, field validation of this pressure-based leakage detection technique was conducted at a CO2-EOR site located in Mississippi, USA. During the demonstration, two sets of experiments were performed using 90-min and 150-min pulsing periods, for scenarios both with and without a leak. Because no pre-existing leakage pathways were available, a leak was simulated by rate-controlled CO2 venting from one of the monitoring wells. Our results show that leakage events caused a significant deviation in the amplitude of the frequency response function, indicating that pulse testing may be used as a cost-effective monitoring technique with a strong potential for automation.
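The core frequency-domain step can be sketched as a ratio of Fourier components at the pulsing frequency; the synthetic "reservoir" below, with its 0.5 attenuation and noise level, is our own illustrative construction, not the field data:

```python
import numpy as np

def response_amplitude(injection, pressure, dt, f0):
    # Amplitude of the frequency-response function at the pulsing
    # frequency f0: ratio of the pressure and injection Fourier
    # components at the bin nearest f0.
    freqs = np.fft.rfftfreq(len(injection), dt)
    k = np.argmin(np.abs(freqs - f0))
    return np.abs(np.fft.rfft(pressure)[k]) / np.abs(np.fft.rfft(injection)[k])

# Synthetic check: a linear "reservoir" that attenuates the pulse by 0.5,
# plus measurement noise. A leak would appear as a drop in this amplitude.
t = np.arange(0.0, 600.0, 1.0)                 # e.g. minutes
inj = np.sin(2 * np.pi * t / 100.0)            # 100-minute pulsing period
pres = 0.5 * inj + 0.01 * np.random.default_rng(1).normal(size=t.size)
amp = response_amplitude(inj, pres, 1.0, 1.0 / 100.0)
```

Because the injection is periodic and narrowband, broadband reservoir noise contributes little to the single analyzed bin, which is the noise-rejection argument made above.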
Deformed supersymmetric quantum mechanics with spin variables
NASA Astrophysics Data System (ADS)
Fedoruk, Sergey; Ivanov, Evgeny; Sidorov, Stepan
2018-01-01
We quantize the one-particle model of the SU(2|1) supersymmetric multiparticle mechanics with the additional semi-dynamical spin degrees of freedom. We find the relevant energy spectrum and the full set of physical states as functions of the mass-dimension deformation parameter m and SU(2) spin q ∈ (Z_{>0}, 1/2 + Z_{≥0}). It is found that the states at a fixed energy level form irreducible multiplets of the supergroup SU(2|1). Also, the hidden superconformal symmetry OSp(4|2) of the model is revealed in the classical and quantum cases. We calculate the OSp(4|2) Casimir operators and demonstrate that the full set of the physical states belonging to different energy levels at fixed q are unified into an irreducible OSp(4|2) multiplet.
Reaction path of energetic materials using THOR code
NASA Astrophysics Data System (ADS)
Duraes, L.; Campos, J.; Portugal, A.
1997-07-01
The THOR thermochemical computer code predicts reaction paths, allowing calculation of the composition and thermodynamic properties of the reaction products of energetic materials for isobaric and isochoric adiabatic combustion and CJ detonation regimes. THOR assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using a thermal equation of state (EoS). The HL EoS employed here was developed in previous work; it is based on a Boltzmann EoS, taking α = 13.5 as the exponent of the intermolecular potential and θ = 1.4 as the dimensionless temperature. The code can now estimate successive sets of reaction products, obtained by decomposition of the original reacting compound, as a function of the released energy. Two case studies of thermal decomposition were selected, described, calculated, and discussed, ammonium nitrate based explosives and nitromethane, because they are well-known explosives and their equivalence ratios are, respectively, near and greater than stoichiometric. Predictions of the detonation properties of other condensed explosives, as a function of energy release, correlate well with experimental values.
NASA Technical Reports Server (NTRS)
Nola, F. J. (Inventor)
1977-01-01
A tachometer in which sine and cosine signals responsive to the angular position of a rotating shaft are each multiplied by the like (sine or cosine) function of a carrier signal; the products are summed, and the resulting frequency signal is converted to fixed-height, fixed-width pulses of the same frequency. These pulses are then integrated, and the resulting dc output indicates shaft speed.
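The multiply-and-sum step relies on the product-to-sum identity; with shaft angle θ(t) and carrier angular frequency ω (symbols here are ours, not the patent's):

```latex
\cos\theta(t)\,\cos\omega t \;+\; \sin\theta(t)\,\sin\omega t \;=\; \cos\bigl(\omega t - \theta(t)\bigr)
```

For a shaft turning at constant rate Ω, so that θ(t) = Ωt, the sum is a single tone at frequency ω − Ω, which shifts linearly with shaft speed. Converting this tone to fixed-height, fixed-width pulses and integrating yields a dc level proportional to the tone frequency, hence linear in shaft speed.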
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^{ν+1} and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^{ν-1}), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
Safety and health in the construction of fixed offshore installations in the petroleum industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1981-01-01
A meeting convened by the ILO (International Labor Office) on safety problems in the offshore petroleum industry recommended the preparation of a code of practice setting out standards for safety and health during the construction of fixed offshore installations. Such a code, to be prepared by the ILO in co-operation with other bodies, including the Inter-Governmental Maritime Consultative Organisation (IMCO), was to take into consideration existing standards applicable to offshore construction activities and to supplement the ILO codes of practice on safety and health in building and civil engineering work, shipbuilding and ship repairing. (Copyright (c) International Labour Organisation 1981.)
Alternative Line Coding Scheme with Fixed Dimming for Visible Light Communication
NASA Astrophysics Data System (ADS)
Niaz, M. T.; Imdad, F.; Kim, H. S.
2017-01-01
An alternative line coding scheme called fixed-dimming on/off keying (FD-OOK) is proposed for visible light communication (VLC). FD-OOK reduces the flickering caused by a VLC transmitter and maintains a 50% dimming level. A simple encoder and decoder are proposed that generate codewords in which the number of bits representing one equals the number of bits representing zero. By keeping the numbers of ones and zeros equal, the change in the brightness of the lighting is minimized and kept constant at 50%, thereby reducing flickering in VLC. The performance of FD-OOK is analysed in terms of two parameters: spectral efficiency and power requirement.
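The abstract does not give FD-OOK's exact mapping; as an illustration of the balanced-codeword idea only, a Manchester-style mapping is the simplest code in which every codeword is exactly half ones (at the cost of halving spectral efficiency to 0.5 bit/chip):

```python
def encode_balanced(bits):
    # Each data bit becomes a 2-chip symbol containing exactly one '1',
    # so any codeword is 50% ones -> constant 50% dimming, no flicker
    # dependence on the data.
    table = {0: (0, 1), 1: (1, 0)}
    out = []
    for b in bits:
        out.extend(table[b])
    return out

def decode_balanced(chips):
    # The first chip of each 2-chip symbol carries the data bit.
    return [chips[i] for i in range(0, len(chips), 2)]

cw = encode_balanced([1, 0, 1, 1])
recovered = decode_balanced(cw)
```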
Short-term memory for responses: the "choose-small" effect.
Fetterman, J G; MacEwen, D
1989-01-01
Pigeons' short-term memory for fixed-ratio requirements was assessed using a delayed symbolic matching-to-sample procedure. Different choices were reinforced after fixed-ratio 10 and fixed-ratio 40 requirements, and delays of 0, 5, or 20 s were sometimes placed between sample ratios and choice. All birds made disproportionate numbers of responses to the small-ratio choice alternative when delays were interposed between ratios and choice, and this bias increased as a function of delay. Preference for the small fixed-ratio alternative was also observed on "no-sample" trials, during which the choice alternatives were presented without a prior sample ratio. This "choose-small" bias is analogous to results obtained by Spetch and Wilkie (1983) with event duration as the discriminative stimulus. The choose-small bias was attenuated when the houselight was turned on during delays, but overall accuracy was not influenced systematically by the houselight manipulation. PMID:2584917
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted using only the intercept. This study evaluated the proposed method by comparing it with two available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
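The fixed-slope idea can be sketched as a one-parameter maximum-likelihood fit: the slope is inherited from the group function and only the individual intercept (threshold) is estimated. The logistic form, 3-AFC guess rate, and all parameter values below are illustrative assumptions, not the study's:

```python
import numpy as np

def p_correct(conc, alpha, beta, guess=1/3):
    # 3-AFC-style psychometric function: chance level 1/3 rising toward 1
    # around the threshold alpha, with slope beta.
    return guess + (1 - guess) / (1 + np.exp(-beta * (conc - alpha)))

def fit_threshold_fixed_slope(conc, correct, beta, grid):
    # Fixed-slope fit: beta comes from the group function; only the
    # intercept alpha is estimated, here by a simple likelihood grid search.
    lls = [np.sum(np.where(correct,
                           np.log(p_correct(conc, a, beta)),
                           np.log(1 - p_correct(conc, a, beta))))
           for a in grid]
    return grid[int(np.argmax(lls))]

# Synthetic judge with true threshold 2.0 and group slope 2.0
rng = np.random.default_rng(0)
conc = np.repeat(np.linspace(0.0, 4.0, 9), 60)
correct = rng.random(conc.size) < p_correct(conc, 2.0, 2.0)
alpha_hat = fit_threshold_fixed_slope(conc, correct, 2.0,
                                      np.linspace(0.0, 4.0, 161))
```

With only one free parameter, far fewer replications are needed for a stable estimate, which is the motivation stated above.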
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Sha; Tan, Qing; Evans, Meredydd
India is expected to add 40 billion m2 of new buildings by 2050. Buildings are responsible for one third of India's total energy consumption today, and building energy use is expected to continue growing, driven by rapid income and population growth. The implementation of the Energy Conservation Building Code (ECBC) is one measure to improve building energy efficiency. Using the Global Change Assessment Model, this study assesses growth in the buildings sector and the impacts of building energy policies in Gujarat, which would help the state adopt ECBC and expand building energy efficiency programs. Without building energy policies, building energy use in Gujarat would grow by 15 times in commercial buildings and 4 times in urban residential buildings between 2010 and 2050. ECBC improves energy efficiency in commercial buildings and could reduce building electricity use in Gujarat by 20% in 2050, compared to the no-policy scenario. Having energy codes for both commercial and residential buildings could result in an additional 10% savings in electricity use. To achieve these intended savings, it is critical to build capacity and institutions for robust code implementation.
Schulz, Volker; Guenther, Margarita; Gerlach, Gerald; Magda, Jules J.; Tathireddy, Prashant; Rieth, Loren; Solzbacher, Florian
2010-01-01
Environmentally responsive, or smart, hydrogels show a volume phase transition in response to changes of external stimuli such as the pH or ionic strength of an ambient solution. Thus, they can reversibly convert chemical energy into mechanical energy, making them suitable as sensitive materials for integration in biochemical microsensors and MEMS devices. In this work, micro-fabricated silicon pressure sensor chips with integrated piezoresistors were used as transducers for the conversion of mechanical work into an appropriate electrical output signal via the deflection of a thin silicon bending plate. Two different sensor designs have been studied. The biocompatible poly(hydroxypropyl methacrylate-N,N-dimethylaminoethyl methacrylate-tetra-ethyleneglycol dimethacrylate) (HPMA-DMA-TEGDMA) was used as an environmentally sensitive element in piezoresistive biochemical sensors. This polyelectrolytic hydrogel shows a very sharp volume phase transition at pH values below about 7.4, which is in the range of the physiological pH. The sensor's characteristic response was measured in vitro for changes in the pH of PBS buffer solution at fixed ionic strength. The experimental data were fitted to the Hill equation, and the sensor sensitivity as a function of pH was calculated from the fit. The time-dependent sensor response was measured for small changes in pH, and different time constants were observed. The same sensor principle was used for sensing of ionic strength. The time-dependent electrical sensor signal of both sensors was measured for variations in ionic strength at fixed pH value using PBS buffer solution. Both sensor types showed an asymmetric swelling behavior between the swelling and deswelling cycles as well as different time constants, which was attributed to the different nature of mechanical hydrogel confinement inside the sensor. PMID:21152365
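The abstract does not give the exact parametrization used; one common pH form of the Hill equation, with generic symbols (y for sensor output, pK for the transition midpoint, n for the Hill coefficient), is:

```latex
y(\mathrm{pH}) \;=\; \frac{y_{\max}}{1 + 10^{\,n\,(\mathrm{pH} - \mathrm{p}K)}},
\qquad
\frac{dy}{d\mathrm{pH}} \;=\; -\,y_{\max}\,
\frac{n \ln 10 \; 10^{\,n(\mathrm{pH}-\mathrm{p}K)}}
     {\bigl(1 + 10^{\,n(\mathrm{pH}-\mathrm{p}K)}\bigr)^{2}}
```

The derivative is the pH sensitivity computed from the fit; it peaks at pH = pK, consistent with the sharp transition reported near physiological pH.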
Suzuki, T; George, F R; Meisch, R A
1988-04-01
Oral ethanol self-administration was investigated systematically in two inbred strains of rats, Fischer 344 CDF (F-344)/CRLBR (F344) and Lewis LEW/CRLBR (LEW). For both strains ethanol maintained higher response rates and was consumed in larger volumes than the water vehicle. In addition, blood ethanol levels increased with increases in ethanol concentration. However, LEW rats drank substantially more ethanol than F344 rats. The typical inverted U-shaped function between ethanol concentration and number of deliveries was observed for the LEW rats, whereas for the F344 rats much smaller differences were seen between ethanol and water maintained responding. For the LEW strain, as the fixed-ratio size was increased, the number of responses increased almost in direct proportion to the fixed-ratio size increase, so that at least at the lower fixed-ratio values the rats were obtaining similar numbers of deliveries at different fixed-ratio sizes. However, a decrease in ethanol deliveries and blood ethanol levels was observed at higher fixed-ratio sizes. Similar results were obtained in F344 rats, but the amount of responding was lower and less consistent. LEW rats showed significantly higher response rates, numbers of ethanol deliveries and blood ethanol levels. Ethanol-induced behavioral activation also was observed in LEW rats, but not in F344 rats. These results support the conclusion that ethanol serves as a strong positive reinforcer for LEW rats and as a weak positive reinforcer for F344 rats, and that genotype is a determinant of the degree to which ethanol functions as a reinforcer.
Facile and High-Throughput Synthesis of Functional Microparticles with Quick Response Codes.
Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun
2016-06-01
Encoded microparticles are in high demand for multiplexed assays and labeling. However, current methods for the synthesis and coding of microparticles either lack robustness and reliability or possess limited coding capacity. Here, a massive coding of dissociated elements (MiCODE) technology is introduced, based on a chemically reactive off-stoichiometry thiol-allyl photocurable polymer and standard lithography, to produce large numbers of quick response (QR) code microparticles. The coding process is performed by photobleaching the QR code patterns on microparticles when fluorophores are incorporated into the prepolymer formulation. The fabricated encoded microparticles can be released from a substrate without changing their features. Excess thiol functionality on the microparticle surface allows grafting of amine groups and further DNA probes. A multiplexed assay is demonstrated using the DNA-grafted QR code microparticles. The MiCODE technology is further characterized by showing the incorporation of BODIPY-maleimide (BDP-M) and Nile Red fluorophores for coding and the use of microcontact printing for immobilizing DNA probes on microparticle surfaces. This versatile technology leverages mature lithography facilities for fabrication and thus is amenable to future scale-up, with potential applications in bioassays and in labeling consumer products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Titarenko, Yu. E.; Batyaev, V. F.; Pavlov, K. V.; Titarenko, A. Yu.; Zhivun, V. M.; Chauzova, M. V.; Balyuk, S. A.; Bebenin, P. V.; Ignatyuk, A. V.; Mashnik, S. G.; Leray, S.; Boudard, A.; David, J. C.; Mancusi, D.; Cugnon, J.; Yariv, Y.; Nishihara, K.; Matsuda, N.; Kumawat, H.; Stankovskiy, A. Yu.
2016-06-01
The paper presents the measured cumulative yields of 44Ti for natCr, 56Fe, natNi and 93Nb samples irradiated by protons at the energy range 0.04-2.6 GeV. The obtained excitation functions are compared with calculations of the well-known codes: ISABEL, Bertini, INCL4.2+ABLA, INCL4.5+ABLA07, PHITS, CASCADE07 and CEM03.02. The predictive power of these codes regarding the studied nuclides is analyzed.
Responsive Image Inline Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Ian
2016-10-20
RIIF is a contributed module for the Drupal PHP web application framework (drupal.org). It is written as a helper or sub-module of other code that is part of version 8 "core Drupal" and is intended to extend its functionality. It allows Drupal to resize images uploaded through the user-facing text editor within the Drupal GUI (a.k.a. "inline images") for various browser widths. This resizing is already done for other images through the parent "Responsive Image" core module; this code extends that functionality to inline images.
NASA Astrophysics Data System (ADS)
Gómez-Ros, J. M.; Bedogni, R.; Moraleda, M.; Delgado, A.; Romero, A.; Esposito, A.
2010-01-01
This communication describes an improved design for a neutron spectrometer consisting of 6Li thermoluminescent dosemeters located at selected positions within a single moderating polyethylene sphere. The spatial arrangement of the dosemeters has been designed using the MCNPX Monte Carlo code to calculate the response matrix for 56 log-equidistant energies from 10^-9 to 100 MeV, seeking a configuration that yields a nearly isotropic response for neutrons in the energy range from thermal to 20 MeV. The feasibility of the proposed spectrometer and the isotropy of its response have been evaluated by simulating exposures to different reference and workplace neutron fields. The FRUIT code has been used for unfolding purposes. The results of the simulations as well as the experimental tests confirm the suitability of the prototype for environmental and workplace monitoring applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Robert Cameron; Steiner, Don
2004-06-15
The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the QUICKEST third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code with analytical solutions shows that the code with the QUICKEST method is better than 99.9% accurate. Benchmarking of the EPQ code system and QTTN against experiments showed that QTTN's erosion tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. QTTN and EPQ are thus verified and validated as able to calculate the temperature distribution, phase change, and surface erosion successfully.
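A much-simplified stand-in for the thermal solver conveys the explicit finite-difference idea: QTTN solves the 2D cylindrical equation with the third-order QUICKEST scheme, whereas this sketch is a first-order 1D heat equation with illustrative parameter values.

```python
import numpy as np

def step_heat_1d(T, alpha, dx, dt):
    # One explicit finite-difference step of dT/dt = alpha * d2T/dx2.
    # Stability of the explicit scheme requires alpha*dt/dx**2 <= 0.5.
    Tn = T.copy()
    Tn[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

T = np.zeros(101)
T[50] = 1000.0                      # localized runaway-electron heat spike
for _ in range(100):
    T = step_heat_1d(T, alpha=1e-5, dx=1e-3, dt=0.02)   # number = 0.2
```

The spike spreads and its peak decays while total heat is conserved until it reaches the fixed boundaries, which is the qualitative behavior a volumetric-heating thermal response code must reproduce.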
Ultraviolet optical absorptions of semiconducting copper phosphate glasses
NASA Technical Reports Server (NTRS)
Bae, Byeong-Soo; Weinberg, Michael C.
1993-01-01
Results are presented of a quantitative investigation of the change in UV optical absorption in semiconducting copper phosphate glasses with batch compositions of 40, 50, and 55 percent CuO, as a function of the Cu(2+)/Cu(total) ratio for each glass composition. It was found that the optical energy gap, E(opt), of copper phosphate glass is a function of both glass composition and the Cu(2+)/Cu(total) ratio in the glass. E(opt) increases as either the CuO content (at fixed Cu(2+)/Cu(total) ratio) or the Cu(2+)/Cu(total) ratio (at fixed glass composition) is reduced.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony
2014-02-01
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
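A minimal forward-mode automatic-differentiation sketch conveys the idea behind approach (b): derivatives of a CV with respect to atomic coordinates propagate through the arithmetic without hand-written derivative code. The real implementation uses the Stan Math C++ library; the `Dual` class and the distance CV below are our own illustrative constructions, not PLUMED API.

```python
import math

class Dual:
    """Forward-mode AD value: a number paired with its derivative
    with respect to one chosen input coordinate."""
    def __init__(self, val, grad):
        self.val, self.grad = val, grad
    def __add__(self, o):
        return Dual(self.val + o.val, self.grad + o.grad)
    def __mul__(self, o):
        # Product rule applied automatically
        return Dual(self.val * o.val, self.val * o.grad + o.val * self.grad)
    def sqrt(self):
        r = math.sqrt(self.val)
        return Dual(r, self.grad / (2.0 * r))

def distance_cv(p1, p2, wrt):
    # CV = |p1 - p2|; `wrt` picks which coordinate of p1 to differentiate
    # against. No analytical derivative of the CV is ever written out.
    comps = [Dual(a - b, 1.0 if i == wrt else 0.0)
             for i, (a, b) in enumerate(zip(p1, p2))]
    s = comps[0] * comps[0]
    for c in comps[1:]:
        s = s + c * c
    return s.sqrt()

d = distance_cv((3.0, 0.0, 0.0), (0.0, 4.0, 0.0), wrt=0)
# d.val is the CV value; d.grad is dCV/dx1 = (x1 - x2)/|p1 - p2|
```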
Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space
NASA Astrophysics Data System (ADS)
Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus
2018-05-01
We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.
Rise time of proton cut-off energy in 2D and 3D PIC simulations
NASA Astrophysics Data System (ADS)
Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.
2017-04-01
The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two and three dimensional (2D, 3D) PIC simulations so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a contaminant layer that is hydrogen-rich. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy, obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration, are comparable. This suggests that parametric scans can be performed with 2D simulations since 3D ones are computationally very expensive, delegating their role only to a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboredo, Fernando A.
The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B 79, 195117 (2009); Reboredo, ibid. 80, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density. The mixed probability density is given by an ensemble of electronic configurations (walkers) with complex weight. This complex weight allows the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. 71, 2777 (1993)]. When used recursively it improves the node and the phase simultaneously. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low-energy excitations with magnetic field or periodic boundary conditions. The potential applications of this new method to study periodic, magnetic, and complex Hamiltonians are discussed.
Trellis Coding of Non-coherent Multiple Symbol Full Response M-ary CPFSK with Modulation Index 1/M
NASA Technical Reports Server (NTRS)
Lee, H.; Divsalar, D.; Weber, C.
1994-01-01
This paper introduces a trellis coded modulation (TCM) scheme for non-coherent multiple symbol full response M-ary CPFSK with modulation index 1/M. A proper branch metric for the trellis decoder is obtained by employing a simple approximation of the modified Bessel function at large signal-to-noise ratio (SNR). The pairwise error probability of coded sequences is evaluated by applying a linear approximation to the Rician random variable.
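The large-SNR Bessel approximation mentioned above can be checked numerically. The sketch below (an illustration, not the paper's derivation) compares `ln I0(x)` against its large-argument asymptote, which is what justifies replacing a metric of the form `ln I0(|z|/σ²)` by the envelope `|z|` at high SNR:

```python
import numpy as np

# For large x, I0(x) ~ exp(x) / sqrt(2*pi*x), so
# ln I0(x) ≈ x - 0.5*ln(2*pi*x), i.e. ≈ x up to slowly varying terms.
x = np.array([10.0, 50.0, 200.0])              # large-SNR arguments
exact = np.log(np.i0(x))                       # numpy's modified Bessel I0
approx = x - 0.5 * np.log(2.0 * np.pi * x)     # asymptotic expansion

rel_err = np.abs(exact - approx) / exact       # shrinks as x grows
```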
Development of a diverse epiphyte community in response to phosphorus fertilization.
Benner, Jon W; Vitousek, Peter M
2007-07-01
The role of terrestrial soil nutrient supply in determining the composition and productivity of epiphyte communities has been little investigated. In a montane Hawaiian rainforest, we documented dramatic increases in the abundance and species richness of canopy epiphytes in a forest that had been fertilized annually with phosphorus (P) for 15 years; there was no response in forest that had been fertilized with nitrogen (N) or other nutrients. The response of N-fixing lichens to P fertilization was particularly strong, although mosses and non-N-fixing lichens also increased in abundance and diversity. We show that enhancement of canopy P availability is the most likely factor driving the bloom in epiphytes. These results provide strong evidence that terrestrial soil fertility may structure epiphyte communities, and in particular that the abundance of N-fixing lichens--a functionally important epiphyte group--may be particularly sensitive to ecosystem P availability.
Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Fu, Bina; Zhang, Dong H.
2014-04-01
We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on the Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed on an accurate potential energy surface recently constructed by neural network fitting to density functional theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site, compared to other fixed sites, is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially, and the helicopter orientation yields a higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.
Decorrelation dynamics and spectra in drift-Alfven turbulence
NASA Astrophysics Data System (ADS)
Fernandez Garcia, Eduardo
Motivated by the inability of one-fluid magnetohydrodynamics (MHD) to explain key turbulence characteristics in systems ranging from the solar wind and interstellar medium to fusion devices like the reversed field pinch, this thesis studies magnetic turbulence using a drift-Alfven model that extends MHD by including electron density dynamics. Electron effects play a significant role in the dynamics by changing the structure of turbulent decorrelation in the Alfvenic regime (where fast Alfvenic propagation provides the fastest decorrelation of the system): besides the familiar counter-propagating Alfvenic branches of MHD, an additional branch tied to the diamagnetic and eddy-turn-over rates enters in the turbulent response. This kinematic branch gives hydrodynamic features to turbulence that is otherwise strongly magnetic. Magnetic features are observed in the RMS frequency, energy partitions, cross-field energy transfer and in the turbulent response, whereas hydrodynamic features appear in the average frequency, self-field transfer, turbulent response and finally the wavenumber spectrum. These features are studied via renormalized closure theory and numerical simulation. The closure calculation naturally incorporates the eigenmode structure of the turbulent response in specifying spectral energy balance equations for the magnetic, kinetic and internal (density) energies. Alfvenic terms proportional to cross correlations and involved in cross-field transfer compete with eddy-turn-over, self transfer, auto-correlation terms. In the steady state, the kinematic terms dominate the energy balances and yield a 5/3 Kolmogorov spectrum (as observed in the interstellar medium) for the three field energies in the strong turbulence, long wavelength limit. Alfvenic terms establish equipartition of kinetic and magnetic energies.
In the limit where wavelengths are short compared to the gyroradius, the Alfvenic terms equipartition the internal and magnetic energies, resulting in a steep (-2) spectral fall-off for those energies, while the largely uncoupled kinetic modes still obey a 5/3 law. From the numerical simulations, the response function of drift-Alfven turbulence is measured. Here, a statistical ensemble is constructed from small perturbations of the turbulent amplitudes at fixed wavenumber. The decorrelation structure borne out of the eigenmode calculation is verified in the numerical measurement.
Neural-like computing with populations of superparamagnetic basis functions.
Mizrahi, Alice; Hirtzlin, Tifenn; Fukushima, Akio; Kubota, Hitoshi; Yuasa, Shinji; Grollier, Julie; Querlioz, Damien
2018-04-18
In neuroscience, population coding theory demonstrates that neural assemblies can achieve fault-tolerant information processing. Mapped to nanoelectronics, this strategy could allow for reliable computing with scaled-down, noisy, imperfect devices. Doing so requires that the population components form a set of basis functions in terms of their response functions to inputs, offering a physical substrate for computing. Such a population can be implemented with CMOS technology, but the corresponding circuits have high area or energy requirements. Here, we show that nanoscale magnetic tunnel junctions can instead be assembled to meet these requirements. We demonstrate experimentally that a population of nine junctions can implement a basis set of functions, providing the data to achieve, for example, the generation of cursive letters. We design hybrid magnetic-CMOS systems based on interlinked populations of junctions and show that they can learn to realize non-linear variability-resilient transformations with a low imprint area and low power.
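The basis-function idea can be caricatured in a few lines: the sketch below (hypothetical parameters, not the junctions' measured response curves) least-squares-fits a target function with nine fixed sigmoidal responses, standing in for the nine-device population:

```python
import numpy as np

# Nine fixed sigmoids with staggered centers play the role of the devices'
# response functions; a linear readout combines them to approximate a target.
x = np.linspace(-1.0, 1.0, 200)
centers = np.linspace(-1.0, 1.0, 9)
basis = 1.0 / (1.0 + np.exp(-8.0 * (x[:, None] - centers[None, :])))

target = np.sin(np.pi * x)                       # arbitrary target function
w, *_ = np.linalg.lstsq(basis, target, rcond=None)
err = np.max(np.abs(basis @ w - target))         # worst-case fit error
```

The point of the population-coding argument is that the readout weights, not the individual devices, carry the computation, so device-to-device variability can be absorbed by refitting `w`.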
Fluctuating observation time ensembles in the thermodynamics of trajectories
NASA Astrophysics Data System (ADS)
Budini, Adrián A.; Turner, Robert M.; Garrahan, Juan P.
2014-03-01
The dynamics of stochastic systems, both classical and quantum, can be studied by analysing the statistical properties of dynamical trajectories. The properties of ensembles of such trajectories for long, but fixed, times are described by large-deviation (LD) rate functions. These LD functions play the role of dynamical free energies: they are cumulant generating functions for time-integrated observables, and their analytic structure encodes dynamical phase behaviour. This ‘thermodynamics of trajectories’ approach is to trajectories and dynamics what the equilibrium ensemble method of statistical mechanics is to configurations and statics. Here we show that, just like in the static case, there are a variety of alternative ensembles of trajectories, each defined by their global constraints, with that of trajectories of fixed total time being just one of these. We show how the LD functions that describe an ensemble of trajectories where some time-extensive quantity is constant (and large) but where total observation time fluctuates can be mapped to those of the fixed-time ensemble. We discuss how the correspondence between generalized ensembles can be exploited in path sampling schemes for generating rare dynamical trajectories.
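A minimal concrete example of such a dynamical free energy: for a two-state Markov jump process, the cumulant generating function θ(s) of the time-integrated jump count is the largest eigenvalue of a "tilted" generator whose jump rates are weighted by exp(-s). The rates below are hypothetical:

```python
import numpy as np

def theta(s, k01=1.0, k10=2.0):
    # Tilted generator for counting jumps: off-diagonal (jump) entries
    # carry the bias exp(-s); diagonals keep the escape rates.
    W = np.array([[-k01, k10 * np.exp(-s)],
                  [k01 * np.exp(-s), -k10]])
    return np.max(np.linalg.eigvals(W).real)

# At s = 0 the tilted generator is the true generator, so theta(0) = 0;
# for a counting observable theta(s) is decreasing in s.
```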
Chemo/mechanical energy conversion via supramolecular self-assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynn, David G.; Conticello, Vincent
With the assembly codes for protein/peptide self-assembly sufficiently developed to control these phases, we are positioned to address critical requirements for generating unique self-propagating functional assemblies such as chemical batteries and engines that can be used to extend the capability of living cells. These integrative functional assemblies can then be used within cells to create new functions that will address the world’s energy challenges.
Nearly ideal binary communication in squeezed channels
NASA Astrophysics Data System (ADS)
Paris, Matteo G.
2001-07-01
We analyze the effect of squeezing the channel in binary communication based on Gaussian states. We show that for coding on pure states, squeezing increases the detection probability at fixed size of the strategy, saturating the optimal bound already at moderate signal energy. Using the Neyman-Pearson lemma for fuzzy hypothesis testing, we also analyze the case of mixed states and find the optimal amount of squeezing that can be effectively employed. The result is that optimally squeezed channels are robust against signal mixing and largely improve the strategy power in comparison with coherent ones.
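For context, a standard baseline (not the paper's Neyman-Pearson analysis) is the Helstrom minimum error probability for binary coherent-state signaling, which already shows how quickly the two states become distinguishable with signal energy:

```python
import numpy as np

def helstrom_error(n_photons):
    # Textbook result: |<alpha|-alpha>|^2 = exp(-4 n) with n = |alpha|^2,
    # and P_e = (1 - sqrt(1 - |overlap|^2)) / 2.
    return 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * n_photons)))
```

At zero energy the states coincide (P_e = 1/2); by one mean photon the error is already below a percent, which is the regime in which squeezing can push performance to the optimal bound.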
Predicting materials for sustainable energy sources: The key role of density functional theory
NASA Astrophysics Data System (ADS)
Galli, Giulia
Climate change and the related need for sustainable energy sources replacing fossil fuels are pressing societal problems. The development of advanced materials is widely recognized as one of the key elements for new technologies that are required to achieve a sustainable environment and provide clean and adequate energy for our planet. We discuss the key role played by Density Functional Theory, and its implementations in high performance computer codes, in understanding, predicting and designing materials for energy applications.
Energy Response Function of CALET Gamma Ray Burst Monitor
NASA Astrophysics Data System (ADS)
Yamada, Y.; Sakamoto, T.; Yoshida, A.; Calet Collaboration
2016-10-01
We will explain the development of the CGBM energy response function. We will also show the spectral analysis results of CGBM, using our developed energy response function, for bright GRBs simultaneously detected by other GRB detectors.
Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes
NASA Technical Reports Server (NTRS)
Suresh, A.; Cole, G. L.
2000-01-01
It is well known that the dynamic response of a mixed compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor and the coupled simulation used to study the unsteady response of the inlet. The main problem with this approach is that the high fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lesser fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and wave form (shape). When the compressor is modeled by stage characteristics the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (so-called 'lossy' duct) to simulate a compressor gives good pulse shapes but the strength is about 30 percent low.
Use of a computer code for dose distribution studies in a 60Co industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique, build-up factors are fitted by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to source simulation, using point sources instead of pencils, and an energy spectrum and anisotropic emission were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
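The point-kernel technique the code is based on can be sketched in a few lines; the build-up model, attenuation coefficient and geometry below are hypothetical stand-ins, not the QAD-CGGP implementation:

```python
import numpy as np

def dose(points, strengths, target, mu=0.06):
    # Point-kernel estimator: each point source contributes
    #   S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)
    # with a crude linear build-up B = 1 + mu*r (assumption) and
    # attenuation taken over the full ray length (assumption).
    d = 0.0
    target = np.asarray(target, dtype=float)
    for p, s in zip(points, strengths):
        r = np.linalg.norm(target - np.asarray(p, dtype=float))
        mut = mu * r
        d += s * (1.0 + mut) * np.exp(-mut) / (4.0 * np.pi * r**2)
    return d
```

With `mu = 0` this reduces to the bare inverse-square kernel, which is a quick sanity check on any implementation.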
CHARACTERIZATION OF A THIN SILICON SENSOR FOR ACTIVE NEUTRON PERSONAL DOSEMETERS.
Takada, M; Nunomiya, T; Nakamura, T; Matsumoto, T; Masuda, A
2016-09-01
A thin silicon sensor has been developed for active neutron personal dosemeters for use by aircrews and first responders. This thin silicon sensor is not affected by the funneling effect, which causes detection of cosmic protons and over-response to cosmic neutrons. There are several advantages to the thin silicon sensor: a decrease in sensitivity to gamma rays, an improvement of the energy detection limit for neutrons down to 0.8 MeV and an increase in the sensitivity to fast neutrons. Neutron response functions were experimentally obtained using 2.5 and 5 MeV monoenergetic neutron beams and a 252Cf neutron source. Simulation results using the Monte Carlo N-Particle transport code agree quite well with the experimental ones when an energy deposition region shaped like a circular truncated cone is used in place of a cylindrical region.
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
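A toy version of such turn-by-turn tracking with time as the independent variable (all parameters hypothetical, not SSC booster values) can be written as a one-turn longitudinal map:

```python
import numpy as np

def track(phi0, dE0, turns=2000, V=1e5, phi_s=0.0, slip=-1e-12):
    # One-turn longitudinal map: rf kick on the energy offset, then a phase
    # slip proportional to it. slip < 0 (below transition) gives a stable
    # bucket at phi_s = 0. Because the loop runs turn by turn, V or slip
    # could be made functions of time, as in a ramped machine cycle.
    phi, dE = phi0, dE0
    out = np.empty((turns, 2))
    for n in range(turns):
        dE += V * (np.sin(phi) - np.sin(phi_s))   # rf cavity kick
        phi += 2.0 * np.pi * slip * dE            # phase slip per turn
        out[n] = (phi, dE)
    return out

orbit = track(phi0=0.2, dE0=0.0)   # small synchrotron oscillation
```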
Parton distributions with small-x resummation: evidence for BFKL dynamics in HERA data
NASA Astrophysics Data System (ADS)
Ball, Richard D.; Bertone, Valerio; Bonvini, Marco; Marzani, Simone; Rojo, Juan; Rottoli, Luca
2018-04-01
We present a determination of the parton distribution functions of the proton in which NLO and NNLO fixed-order calculations are supplemented by NLLx small-x resummation. Deep-inelastic structure functions are computed consistently at NLO+NLLx or NNLO+NLLx, while for hadronic processes small-x resummation is included only in the PDF evolution, with kinematic cuts introduced to ensure the fitted data lie in a region where the fixed-order calculation of the hard cross-sections is reliable. In all other respects, the fits use the same methodology and are based on the same global dataset as the recent NNPDF3.1 analysis. We demonstrate that the inclusion of small-x resummation leads to a quantitative improvement in the perturbative description of the HERA inclusive and charm-production reduced cross-sections in the small-x region. The impact of the resummation in our fits is greater at NNLO than at NLO, because fixed-order calculations have a perturbative instability at small x due to large logarithms that can be cured by resummation. We explore the phenomenological implications of PDF sets with small-x resummation for the longitudinal structure function F_L at HERA, for parton luminosities and LHC benchmark cross-sections, for ultra-high-energy neutrino-nucleus cross-sections, and for future high-energy lepton-proton colliders such as the LHeC.
Observation of Droplet Size Oscillations in a Two Phase Fluid under Shear Flow
NASA Astrophysics Data System (ADS)
Courbin, Laurent; Panizza, Pascal
2004-11-01
It is well known that complex fluids exhibit strong couplings between their microstructure and the flow field. Such couplings may lead to unusual nonlinear rheological behavior. Because energy is constantly brought to the system, richer dynamic behavior such as nonlinear oscillatory or chaotic response is expected. We report on the observation of droplet size oscillations at fixed shear rate. At low shear rates, we observe two steady states for which the droplet size results from a balance between capillary and viscous stresses. For intermediate shear rates, the droplet size becomes a periodic function of time. We propose a phenomenological model to account for the observed phenomenon and compare numerical results to experimental data.
NASA Technical Reports Server (NTRS)
Mcalister, K. W.
1981-01-01
A procedure is described for visualizing nonsteady fluid flow patterns over a wide velocity range using discrete nonluminous particles. The paramount element responsible for this capability is a pulse-forming network with variable inductance that is used to modulate the discharge of a fixed amount of electrical energy through a xenon flashtube. The selectable duration of the resultant light emission functions as a variable shutter so that particle path images of constant length can be recorded. The particles employed as flow markers are hydrogen bubbles that are generated by electrolysis in a water tunnel. Data are presented which document the characteristics of the electrical circuit and establish the relation of particle velocity to both section inductance and film exposure.
Djordjevic, Ivan B
2011-08-15
In addition to capacity, future high-speed optical transport networks will also be constrained by energy consumption. In order to solve the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4), employing all available degrees of freedom for conveyance of information over a single carrier, including amplitude, phase, polarization and orbital angular momentum (OAM). Given that the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since the information capacity is a linear function of the number of dimensions, through D-dimensional signal constellations we can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved by properly designing the D-dimensional signal constellation such that the mutual information is maximized while taking the energy constraint into account. We demonstrate the high potential of the proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations.
Wang, Leimin; Zeng, Zhigang; Hu, Junhao; Wang, Xiaoping
2017-03-01
This paper addresses the controller design problem for global fixed-time synchronization of delayed neural networks (DNNs) with discontinuous activations. To solve this problem, adaptive control and state feedback control laws are designed. Then, based on the two controllers and two lemmas, the error system is proved to be globally asymptotically stable and even fixed-time stable. Moreover, some sufficient and easily checked conditions are derived to guarantee the global synchronization of drive and response systems in fixed time. It is noted that the settling-time function for fixed-time synchronization is independent of initial conditions. Our fixed-time synchronization results contain the finite-time results as special cases, obtained by choosing different values for the two controllers. Finally, the theoretical results are supported by numerical simulations.
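The fixed-time property itself is easy to demonstrate on a scalar toy system (an illustrative controller, not the paper's DNN design): with feedback terms of power below and above one, the settling time is bounded independently of the initial condition:

```python
import numpy as np

def settle(x0, a=2.0, b=2.0, p=0.5, q=2.0, dt=1e-4, t_max=3.0):
    # Euler simulation of x' = -a*|x|^p*sign(x) - b*|x|^q*sign(x), 0 < p < 1 < q.
    # The standard bound is T <= 1/(a*(1-p)) + 1/(b*(q-1)), here 1.5,
    # for every initial condition: the high-power term kills large x fast,
    # the low-power term finishes the job near the origin in finite time.
    x, t = float(x0), 0.0
    while abs(x) > 1e-6 and t < t_max:
        x += dt * (-a * abs(x)**p * np.sign(x) - b * abs(x)**q * np.sign(x))
        t += dt
    return t
```

By contrast, a finite-time controller (only the `p` term) has a settling time that grows without bound as `x0` grows.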
Fuzzy Energy and Reserve Co-optimization With High Penetration of Renewable Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Botterud, Audun; Zhou, Zhi
In this study, we propose a fuzzy-based energy and reserve co-optimization model with consideration of high penetration of renewable energy. Under the assumption of a fixed uncertainty set of renewables, a two-stage robust model is proposed for clearing energy and reserves in the first stage and checking the feasibility and robustness of re-dispatches in the second stage. Fuzzy sets and their membership functions are introduced into the optimization model to represent the satisfaction degree of the variable uncertainty sets. The lower bound of the uncertainty set is expressed as fuzzy membership functions. The solutions are obtained by transforming the fuzzy mathematical programming formulation into traditional mixed integer linear programming problems.
ERIC Educational Resources Information Center
Buckley, Scott D.; Newchok, Debra K.
2005-01-01
We investigated the effects of response effort on the use of mands during functional communication training (FCT) in a participant with autism. The number of links in a picture exchange response chain determined two levels of response effort. Each level was paired with a fixed ratio (FR3) schedule of reinforcement for aggression in a reversal…
López-Tarifa, P; Liguori, Nicoletta; van den Heuvel, Naudin; Croce, Roberta; Visscher, Lucas
2017-07-19
The light harvesting complex II (LHCII) is a pigment-protein complex responsible for most of the light harvesting in plants. LHCII harvests sunlight and transfers excitation energy to the reaction centre of the photosystem, where the water oxidation process takes place. The energetics of LHCII can be modulated by means of conformational changes allowing a switch from a harvesting to a quenched state. In this state, the excitation energy is no longer transferred but converted into thermal energy to prevent photooxidation. Based on molecular dynamics simulations at the microsecond time scale, we have recently proposed that the switch between different fluorescent states can be probed by correlating shifts in the chromophore-chromophore Coulomb interactions to particular protein movements. However, these findings are based upon calculations in the ideal point dipole approximation (IDA), where the Coulomb couplings are simplified as first-order dipole-dipole interactions, also assuming that the chromophore transition dipole moments lie in particular directions of space with constant moduli (FIX-IDA). In this work, we challenge this approximation using time-dependent density functional theory (TDDFT) combined with the frozen density embedding (FDE) approach. Our aim is to establish to what extent FIX-IDA can be applied and which chromophore types are better described under this approximation. For that purpose, we use the classical trajectories of solubilised light harvesting complex II (LHCII) we have recently reported [Liguori et al., Sci. Rep., 2015, 5, 15661] and selected three pairs of chromophores containing chlorophyll and carotenoids (Chl and Car): Chla611-Chla612, Chlb606-Chlb607 and Chla612-Lut620. Using FDE in the Tamm-Dancoff approximation (FDEc-TDA), we show that IDA is accurate enough for predicting Chl-Chl Coulomb couplings.
However, FIX-IDA largely overestimates Chl-Car interactions, mainly because the transition dipoles of the Cars are not trivially oriented along the polyene chain.
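The point-dipole (IDA) coupling discussed above is the textbook formula; a minimal sketch (hypothetical dipole values, prefactors absorbed into the units) makes its orientation dependence explicit:

```python
import numpy as np

def ida_coupling(mu1, mu2, r_vec):
    # V = (mu1 . mu2 - 3 (mu1 . n)(mu2 . n)) / r^3, n = r_vec / |r_vec|
    r_vec = np.asarray(r_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    n = r_vec / r
    return (np.dot(mu1, mu2) - 3.0 * np.dot(mu1, n) * np.dot(mu2, n)) / r**3
```

Side-by-side parallel dipoles and head-to-tail dipoles at the same distance differ by a factor of -2, which is why an assumed (FIX-IDA) dipole orientation can badly misestimate a coupling when the true transition dipole points elsewhere.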
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware by designing a [[4, 2, 2]] code.
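The error-detecting structure of the [[4, 2, 2]] code can be sketched in the standard stabilizer formalism (the textbook picture, not the CPC circuit construction itself): its stabilizers are XXXX and ZZZZ, and every single-qubit Pauli error anticommutes with at least one of them, so it flips a parity check:

```python
def anticommutes(p1, p2):
    # Single-qubit Paulis anticommute iff both are non-identity and different.
    return p1 != 'I' and p2 != 'I' and p1 != p2

def detected(error):
    # error is a 4-character Pauli string such as 'XIII'.
    for stab in ('XXXX', 'ZZZZ'):
        if sum(anticommutes(e, s) for e, s in zip(error, stab)) % 2 == 1:
            return True   # an odd number of anticommuting sites flips the check
    return False

single_qubit_errors = ['I' * i + p + 'I' * (3 - i)
                       for i in range(4) for p in 'XYZ']
```

All twelve weight-1 errors are detected, while some weight-2 errors (e.g. 'XXII') commute with both stabilizers, consistent with the code distance of 2: errors are detected, not corrected.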
Identification of positive selection in disease response genes within members of the Poaceae.
Rech, Gabriel E; Vargas, Walter A; Sukno, Serenella A; Thon, Michael R
2012-12-01
Millions of years of coevolution between plants and pathogens can leave footprints on their genomes, and genes involved in this interaction are expected to show patterns of positive selection, in which novel, beneficial alleles are rapidly fixed within the population. Using information about genes upregulated in maize during Colletotrichum graminicola infection and resources available in the Phytozome database, we looked for evidence of positive selection in the Poaceae lineage acting on protein-coding sequences related to plant defense. We found six genes with evidence of positive selection and another eight with sites showing episodic selection. Some of them have already been described as evolving under positive selection, but others are reported here for the first time, including genes encoding isocitrate lyase, dehydrogenases, a multidrug transporter, a protein containing a putative leucine-rich repeat and other proteins with unknown functions. Mapping positively selected residues onto the predicted 3-D structures of the proteins showed that most of them are located on the surface, where proteins are in contact with other molecules. We present here a set of Poaceae genes that are likely to be involved in plant defense mechanisms and show evidence of positive selection. These genes are excellent candidates for future functional validation.
Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas
2017-04-01
We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
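The refine-where-needed idea can be caricatured in one dimension (a toy sketch, not the tree-grid algorithm of the AMR code): cells whose local solution jump exceeds a threshold are split, so resolution concentrates in sharp layers:

```python
import numpy as np

def refine(x_edges, f, threshold=0.5, passes=4):
    # Repeatedly split any cell over which |f(b) - f(a)| exceeds the
    # threshold; smooth regions keep their coarse cells.
    for _ in range(passes):
        new = [x_edges[0]]
        for a, b in zip(x_edges[:-1], x_edges[1:]):
            if abs(f(b) - f(a)) > threshold:
                new.append(0.5 * (a + b))   # split this cell in two
            new.append(b)
        x_edges = new
    return np.array(x_edges)

# Sharp-layer profile: refined cells cluster around x = 0.
grid = refine(list(np.linspace(-1.0, 1.0, 11)), f=lambda x: np.tanh(20.0 * x))
```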
Johari, Karim; Behroozmand, Roozbeh
2017-05-01
The predictive coding model suggests that neural processing of sensory information is facilitated for temporally predictable stimuli. This study investigated how temporal processing of visually presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually cued to prepare to produce a steady vocalization of a vowel sound or press a button in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between the visual cue and the go signal was temporally predictable (fixed delay at 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally predictable stimuli in both speech and hand modalities. We identified premotor ERP activities with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech that were significantly suppressed in response to temporally predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.
Moreno, Andrea; Jego, Pierrick; de la Cruz, Feliberto; Canals, Santiago
2013-01-01
Complete understanding of the mechanisms that coordinate work and energy supply of the brain, the so-called neurovascular coupling, is fundamental to interpreting brain energetics and their influence on neuronal coding strategies, but also to interpreting signals obtained from brain imaging techniques such as functional magnetic resonance imaging. Interactions between neuronal activity and cerebral blood flow regulation are largely compartmentalized. First, there exists a functional compartmentalization in which glutamatergic peri-synaptic activity and its electrophysiological events occur in close proximity to vascular responses. Second, the metabolic processes that fuel peri-synaptic activity are partially segregated between glycolytic and oxidative compartments. Finally, there is cellular segregation between astrocytic and neuronal compartments, which has potentially important implications for neurovascular coupling. Experimental data are increasingly showing a tight interaction between the products of energy consumption and neurotransmission-driven signaling molecules that regulate blood flow. Here, we review some of these issues in light of recent findings with special attention to the neuron-glia interplay on the generation of neuroimaging signals. PMID:23543907
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (thereby reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques.
The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
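The tallied quantities are Legendre scattering moments, averages of Legendre polynomials over the sampled scattering cosine. The inefficient analog baseline that the work improves on can be sketched as follows; the anisotropic scattering law below is invented purely for illustration and is not from NDPP or OpenMC.

```python
# Illustrative sketch (not NDPP/OpenMC code): estimating Legendre
# scattering moments  sigma_l ~ E[P_l(mu)]  by analog tallying of
# sampled scattering cosines mu. The sampling law is an assumption.
import random

def legendre(l, x):
    """P_l(x) via the Bonnet recursion."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

random.seed(0)
# Toy forward-peaked scattering law: mu = 1 - 2*u**2 with u uniform in (0, 1),
# so the exact first moment is E[mu] = 1 - 2/3 = 1/3.
samples = [1.0 - 2.0 * random.random() ** 2 for _ in range(100_000)]

moments = [sum(legendre(l, mu) for mu in samples) / len(samples) for l in range(3)]
print(moments)  # moment 0 is exactly 1; moment 1 is near 1/3 for this law
```

The paper's point is that tallying the full, analytically known outgoing distribution at each collision feeds every moment simultaneously, giving far lower variance than the one-sample-per-collision estimate above.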
Solving free-plasma-boundary problems with the SIESTA MHD code
NASA Astrophysics Data System (ADS)
Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.
2017-10-01
SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.
Development and preliminary verification of the 3D core neutronic code: COCO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, H.; Mo, K.; Li, W.
With its recent booming economic growth and the environmental concerns that follow, China is proactively pushing forward nuclear power development and encouraging the tapping of clean energy. Under this situation, CGNPC, as one of the largest energy enterprises in China, is planning to develop its own nuclear-related technology in order to support the growing number of nuclear plants either under construction or in operation. This paper introduces the recent progress in software development for CGNPC. The focus is placed on the physical models and preliminary verification results from the recent development of the 3D core neutronic code COCO. In the COCO code, the non-linear Green's function method is employed to calculate the neutron flux. In order to use the discontinuity factor, the Neumann (second kind) boundary condition is utilized in the Green's function nodal method. Additionally, the COCO code also includes the necessary physical models, e.g. a single-channel thermal-hydraulic module, a burnup module, a pin power reconstruction module and a cross-section interpolation module. The preliminary verification results show that the COCO code is sufficient for reactor core design and analysis for pressurized water reactors (PWR). (authors)
Baiocco, G; Alloni, D; Babini, G; Mariotti, L; Ottolenghi, A
2015-09-01
Neutron relative biological effectiveness (RBE) is found to be energy dependent, being maximal for energies ∼1 MeV. This is reflected in the choice of radiation weighting factors wR for radiation protection purposes. In order to trace back the physical origin of this behaviour, a detailed study of energy deposition processes with their full dependences is necessary. In this work, the Monte Carlo transport code PHITS was used to characterise main secondary products responsible for energy deposition in a 'human-sized' soft tissue spherical phantom, irradiated by monoenergetic neutrons with energies around the maximal RBE/wR. Thereafter, results on the microdosimetric characterisation of secondary protons were used as an input to track structure calculations performed with PARTRAC, thus evaluating the corresponding DNA damage induction. Within the proposed simplified approach, evidence is suggested for a relevant role of secondary protons in inducing the maximal biological effectiveness for 1 MeV neutrons. © The Author 2015. Published by Oxford University Press. All rights reserved.
A post-processing method to simulate the generalized RF sheath boundary condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myra, James R.; Kohno, Haruhiko
For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.
Geothermal energy conversion system
NASA Astrophysics Data System (ADS)
Goldstein, David
1991-04-01
A generator having a tubular gear made of shape-memory alloy in sheet form, floatingly supported for rotation about an axis fixedly spaced from the rotational axis of a roller gear, is presented. The tubular gear is sequentially deformed by exposure to a geothermal heat source and by meshing engagement with the roller gear. This sequential deformation of the tubular gear is controlled by a temperature differential to induce and sustain rotation of the gears, in response to which the heat energy is converted into electrical energy.
UltraPse: A Universal and Extensible Software Platform for Representing Biological Sequences.
Du, Pu-Feng; Zhao, Wei; Miao, Yang-Yang; Wei, Le-Yi; Wang, Likun
2017-11-14
With the avalanche of biological sequences in public databases, one of the most challenging problems in computational biology is to predict their biological functions and cellular attributes. Most of the existing prediction algorithms can only handle fixed-length numerical vectors. Therefore, it is important to be able to represent biological sequences with various lengths using fixed-length numerical vectors. Although several algorithms, as well as software implementations, have been developed to address this problem, these existing programs can only provide a fixed number of representation modes. Every time a new sequence representation mode is developed, a new program will be needed. In this paper, we propose the UltraPse as a universal software platform for this problem. The function of the UltraPse is not only to generate various existing sequence representation modes, but also to simplify all future programming works in developing novel representation modes. The extensibility of UltraPse is particularly enhanced. It allows the users to define their own representation mode, their own physicochemical properties, or even their own types of biological sequences. Moreover, UltraPse is also the fastest software of its kind. The source code package, as well as the executables for both Linux and Windows platforms, can be downloaded from the GitHub repository.
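One of the simplest representation modes of the kind UltraPse generates, mapping a variable-length sequence to a fixed-length vector, is k-mer composition. The sketch below is an illustration of that general idea only, not UltraPse's implementation or API.

```python
# Minimal sketch of a fixed-length sequence representation (k-mer
# composition). Illustrative only; not UltraPse code or its API.
from itertools import product

def kmer_composition(seq, k=2, alphabet="ACGT"):
    """Map a variable-length sequence to a fixed-length frequency vector."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:          # skip windows with unknown symbols
            counts[window] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

vec = kmer_composition("ACGTACGT", k=2)
print(len(vec))  # 16 entries, regardless of the input sequence length
```

The output length is fixed by the mode (here, |alphabet|^k), not by the sequence, which is exactly the property downstream prediction algorithms require. UltraPse additionally supports physicochemical-property-based and user-defined modes.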
Variable Camber Morphing Wings
2016-02-02
[Fragmented extract: the report discusses vibration of fixed wings at high frequencies during flight, morphing as a way to delay dynamic stall, control periodic vortex generation, and improve the performance of rotorcraft and wind turbines (McCroskey, 1982); it cites the Sandia National Laboratories energy report SAND80-2114 on vertical-axis wind turbines, and a responsibility notice stating the authors are solely responsible for the content.]
78 FR 29063 - Survey of Urban Rates for Fixed Voice and Fixed Broadband Residential Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-17
... in alternative formats (computer diskette, large print, audio record, and Braille). Persons with... Company Name: Provider FRN (used on MONTH DAY, YEAR Form 477): Provider Study Area Code (if current USF...
DiMauro, Salvatore
2006-11-01
Our understanding of mitochondrial diseases (defined restrictively as defects of the mitochondrial respiratory chain) is expanding rapidly. In this review, I will give the latest information on disorders affecting predominantly or exclusively skeletal muscle. The most recently described mitochondrial myopathies are due to defects in nuclear DNA, including coenzyme Q10 deficiency and mutations in genes controlling mitochondrial DNA abundance and structure, such as POLG, TK2, and MPV17. Barth syndrome, an X-linked recessive mitochondrial myopathy/cardiopathy, is associated with decreased amount and altered structure of cardiolipin, the main phospholipid of the inner mitochondrial membrane, but a secondary impairment of respiratory chain function is plausible. The role of mutations in protein-coding genes of mitochondrial DNA in causing isolated myopathies has been confirmed. Mutations in tRNA genes of mitochondrial DNA can also cause predominantly myopathic syndromes and--contrary to conventional wisdom--these mutations can be homoplasmic. Defects in the mitochondrial respiratory chain impair energy production and almost invariably involve skeletal muscle, causing exercise intolerance, cramps, recurrent myoglobinuria, or fixed weakness, which often affects extraocular muscles and results in droopy eyelids (ptosis) and progressive external ophthalmoplegia.
Horn, Paul R; Head-Gordon, Martin
2016-02-28
In energy decomposition analysis (EDA) of intermolecular interactions calculated via density functional theory, the initial supersystem wavefunction defines the so-called "frozen energy" including contributions such as permanent electrostatics, steric repulsions, and dispersion. This work explores the consequences of the choices that must be made to define the frozen energy. The critical choice is whether the energy should be minimized subject to the constraint of fixed density. Numerical results for Ne2, (H2O)2, BH3-NH3, and ethane dissociation show that there can be a large energy lowering associated with constant density orbital relaxation. By far the most important contribution is constant density inter-fragment relaxation, corresponding to charge transfer (CT). This is unwanted in an EDA that attempts to separate CT effects, but it may be useful in other contexts such as force field development. An algorithm is presented for minimizing single determinant energies at constant density both with and without CT by employing a penalty function that approximately enforces the density constraint.
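The closing idea, enforcing a constraint approximately through a penalty term added to the objective, is a generic numerical technique that can be sketched on a toy scalar problem. The objective and constraint below are invented for illustration and are not the EDA working equations.

```python
# Generic quadratic-penalty sketch: minimize f(x) + mu * c(x)**2 so that
# the constraint c(x) = 0 is enforced approximately, the same numerical
# idea as approximately enforcing a density constraint. Toy problem only.

def minimize_penalized(f, c, x0, mu=100.0, lr=1e-3, steps=20000):
    """Gradient descent on f(x) + mu*c(x)**2 using central differences."""
    x, h = x0, 1e-6
    obj = lambda y: f(y) + mu * c(y) ** 2
    for _ in range(steps):
        g = (obj(x + h) - obj(x - h)) / (2 * h)
        x -= lr * g
    return x

# Minimize x**2 subject to x = 1 (constraint c(x) = x - 1).
x = minimize_penalized(lambda y: y * y, lambda y: y - 1.0, x0=0.0)
print(round(x, 3))  # → 0.99; the exact penalized minimum is 100/101 ≈ 0.990
```

Increasing `mu` drives the solution toward exact constraint satisfaction at the cost of a stiffer optimization problem, which is why such constraints are described as "approximately" enforced.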
NASA Astrophysics Data System (ADS)
Dickens, J. K.
1991-04-01
The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d(sigma)/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
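The reported scaling, ADC bits logarithmically proportional to the RF chain's SNR raised to the 1/3 power, can be sketched as a simple allocation rule. This is a hedged illustration of the scaling only, not the paper's exact BA algorithm; the SNR values and the offset heuristic are assumptions, and integer rounding means the budget is met only approximately.

```python
# Hedged sketch of the reported scaling: bits_i ~ log2(SNR_i**(1/3)),
# shifted by a common offset so the pre-rounding allocation meets a
# total ADC bit budget. Not the paper's exact bit-allocation algorithm.
import math

def allocate_bits(snrs, total_bits):
    raw = [math.log2(s ** (1.0 / 3.0)) for s in snrs]
    offset = (total_bits - sum(raw)) / len(raw)   # meets budget before rounding
    return [max(1, math.floor(r + offset + 0.5)) for r in raw]

snrs = [1.0, 8.0, 64.0, 512.0]   # hypothetical per-RF-chain SNRs (linear scale)
print(allocate_bits(snrs, total_bits=16))  # stronger chains receive more bits
```

The key qualitative behaviour matches the abstract: resolution grows only logarithmically with SNR, so weak chains can be quantized coarsely with little rate loss, which is where the energy savings over fixed-resolution ADCs come from.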
Early Evolution of Conserved Regulatory Sequences Associated with Development in Vertebrates
McEwen, Gayle K.; Goode, Debbie K.; Parker, Hugo J.; Woolfe, Adam; Callaway, Heather; Elgar, Greg
2009-01-01
Comparisons between diverse vertebrate genomes have uncovered thousands of highly conserved non-coding sequences, an increasing number of which have been shown to function as enhancers during early development. Despite their extreme conservation over 500 million years from humans to cartilaginous fish, these elements appear to be largely absent in invertebrates, and, to date, there has been little understanding of their mode of action or the evolutionary processes that have modelled them. We have now exploited emerging genomic sequence data for the sea lamprey, Petromyzon marinus, to explore the depth of conservation of this type of element in the earliest diverging extant vertebrate lineage, the jawless fish (agnathans). We searched for conserved non-coding elements (CNEs) at 13 human gene loci and identified lamprey elements associated with all but two of these gene regions. Although markedly shorter and less well conserved than within jawed vertebrates, identified lamprey CNEs are able to drive specific patterns of expression in zebrafish embryos, which are almost identical to those driven by the equivalent human elements. These CNEs are therefore a unique and defining characteristic of all vertebrates. Furthermore, alignment of lamprey and other vertebrate CNEs should permit the identification of persistent sequence signatures that are responsible for common patterns of expression and contribute to the elucidation of the regulatory language in CNEs. Identifying the core regulatory code for development, common to all vertebrates, provides a foundation upon which regulatory networks can be constructed and might also illuminate how large conserved regulatory sequence blocks evolve and become fixed in genomic DNA. PMID:20011110
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-01-01
This paper mainly studies the globally fixed-time synchronization of a class of coupled neutral-type neural networks with mixed time-varying delays via discontinuous feedback controllers. Compared with the traditional neutral-type neural network model, the model in this paper is more general. A class of general discontinuous feedback controllers are designed. With the help of the definition of fixed-time synchronization, the upper right-hand derivative and a defined simple Lyapunov function, some easily verifiable and extensible synchronization criteria are derived to guarantee the fixed-time synchronization between the drive and response systems. Finally, two numerical simulations are given to verify the correctness of the results.
Interactions between moist heating and dynamics in atmospheric predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straus, D.M.; Huntley, M.A.
1994-02-01
The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.
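The quoted doubling times directly imply the behaviour of the error-energy ratio during the exponential growth stage. The check below assumes equal initial error energy and pure exponential growth at the quoted rates, an idealization of the GCM results.

```python
# Arithmetic check of the quoted doubling times (3.2 d fixed heating,
# 2.4 d control), assuming equal initial error energy and pure
# exponential growth: E(t) = E0 * 2**(t / T_double).

def error_ratio(t_days, td_fixed=3.2, td_control=2.4):
    """Fixed-heating / control error-energy ratio after t_days."""
    return 2.0 ** (t_days / td_fixed) / 2.0 ** (t_days / td_control)

print(round(error_ratio(8.0), 2))  # → 0.56
```

Under this idealization the ratio is near 0.5 by day 8; the abstract's value is slightly lower because real error growth is not purely exponential, and the ratio later rises again as growth saturates in the control run.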
2014-01-01
Background The genome is pervasively transcribed but most transcripts do not code for proteins, constituting non-protein-coding RNAs. Despite increasing numbers of functional reports of individual long non-coding RNAs (lncRNAs), assessing the extent of functionality among the non-coding transcriptional output of mammalian cells remains intricate. In the protein-coding world, transcripts differentially expressed in the context of processes essential for the survival of multicellular organisms have been instrumental in the discovery of functionally relevant proteins and their deregulation is frequently associated with diseases. We therefore systematically identified lncRNAs expressed differentially in response to oncologically relevant processes and cell-cycle, p53 and STAT3 pathways, using tiling arrays. Results We found that up to 80% of the pathway-triggered transcriptional responses are non-coding. Among these we identified very large macroRNAs with pathway-specific expression patterns and demonstrated that these are likely continuous transcripts. MacroRNAs contain elements conserved in mammals and sauropsids, which in part exhibit conserved RNA secondary structure. Comparing evolutionary rates of a macroRNA to adjacent protein-coding genes suggests a local action of the transcript. Finally, in different grades of astrocytoma, a tumor disease unrelated to the initially used cell lines, macroRNAs are differentially expressed. Conclusions It has been shown previously that the majority of expressed non-ribosomal transcripts are non-coding. We now conclude that differential expression triggered by signaling pathways gives rise to a similar abundance of non-coding content. It is thus unlikely that the prevalence of non-coding transcripts in the cell is a trivial consequence of leaky or random transcription events. PMID:24594072
NASA Astrophysics Data System (ADS)
Semenov, Alexander; Babikov, Dmitri
2013-11-01
We formulated a mixed quantum/classical theory for rotationally and vibrationally inelastic scattering in the diatomic-molecule + atom system. Two versions of the theory are presented: first in the space-fixed and second in the body-fixed reference frame. The first version is easy to derive and the resultant equations of motion are transparent, but the state-to-state transition matrix is complex-valued and dense. Such calculations may be computationally demanding for heavier molecules and/or higher temperatures, when the number of accessible channels becomes large. In contrast, the second version of the theory requires some tedious derivations and the final equations of motion are rather complicated (not particularly intuitive). However, the state-to-state transitions are driven by real-valued sparse matrices of much smaller size. Thus, this formulation is the method of choice from the computational point of view, while the space-fixed formulation can serve as a test of the body-fixed equations of motion and of the code. Rigorous numerical tests were carried out for a model system to ensure that all equations, matrices, and computer codes in both formulations are correct.
Generalized fluid impulse functions for oscillating marine structures
NASA Astrophysics Data System (ADS)
Janardhanan, K.; Price, W. G.; Wu, Y.
1992-03-01
A selection of generalized impulse response functions is presented for a variety of rigid and flexible marine structures (i.e. mono-hull, SWATH, floating drydock and twin dock, fixed flexible pile). These functions are determined from calculated and experimental frequency-dependent hydrodynamic data, and the characteristics of these data depend on the type of structure considered. This information is reflected in the shape and duration of the generalized impulse response functions, which are prerequisites for a generalized integro-differential mathematical model describing the dynamic behaviour of the structures under seaway excitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakao, N.; /SLAC; Taniguchi, S.
Neutron energy spectra were measured behind the lateral shield of the CERF (CERN-EU High Energy Reference Field) facility at CERN with a 120 GeV/c positive hadron beam (a mixture of mainly protons and pions) on a cylindrical copper target (7-cm diameter by 50-cm long). An NE213 organic liquid scintillator (12.7-cm diameter by 12.7-cm long) was located at various longitudinal positions behind shields of 80- and 160-cm thick concrete and 40-cm thick iron. The measurement locations cover an angular range with respect to the beam axis between 13° and 133°. Neutron energy spectra in the energy range between 32 MeV and 380 MeV were obtained by unfolding the measured pulse height spectra with the detector response functions, which have been verified in the neutron energy range up to 380 MeV in separate experiments. Since the source term and experimental geometry in this experiment are well characterized and simple, and the results are given in the form of energy spectra, these experimental results are very useful as benchmark data to check the accuracy of simulation codes and nuclear data. Monte Carlo simulations of the experimental setup were performed with the FLUKA, MARS and PHITS codes. Simulated spectra for the 80-cm thick concrete often agree within the experimental uncertainties. On the other hand, for the 160-cm thick concrete and the iron shield, differences are generally larger than the experimental uncertainties, yet within a factor of 2. Based on source term simulations, the observed discrepancies among simulated spectra outside the shield can be partially explained by differences in the high-energy hadron production in the copper target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jan; Ferrada, Juan J; Curd, Warren
During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition, a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) is classified as safety-important components. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated-temperature baking of the first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system.
U.S. ITER is responsible for design, engineering, and procurement of the TCWS, with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman and OneCIS). The ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes, with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.
Reaction path of energetic materials using THOR code
NASA Astrophysics Data System (ADS)
Durães, L.; Campos, J.; Portugal, A.
1998-07-01
The method of predicting reaction paths using the THOR code allows, for isobaric and isochoric adiabatic combustion and Chapman-Jouguet (CJ) detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products at the minimum Gibbs free energy, using the HL equation of state (EoS). The code allows various sets of reaction products to be estimated, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated and discussed: pure ammonium nitrate and the explosive ANFO based on it, and nitromethane, chosen because their equivalence ratios are respectively lower than, near, and greater than stoichiometry. Predictions of the reaction path correlate well with experimental values, supporting the validity of the proposed method.
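The core idea, finding the product composition that minimizes the Gibbs free energy, can be illustrated on a toy ideal-gas dissociation equilibrium. This sketch uses an invented two-species reaction and assumed standard chemical potentials; it does not use THOR's HL equation of state or its species set.

```python
# Toy illustration of the Gibbs-minimization idea behind equilibrium
# product codes: equilibrium extent of a model dissociation A <-> 2B
# (ideal gases, 1 atm) found by direct minimization of G. The values
# of gA, gB and T are assumptions chosen for illustration.
import math

R, T = 8.314, 2000.0           # J/(mol K), K
gA, gB = 0.0, 40_000.0         # assumed standard chemical potentials, J/mol

def gibbs(x):                  # x = extent of reaction, 0 < x < 1, start: 1 mol A
    nA, nB = 1.0 - x, 2.0 * x
    ntot = nA + nB
    return (nA * (gA + R * T * math.log(nA / ntot))
            + nB * (gB + R * T * math.log(nB / ntot)))

xs = [i / 10_000 for i in range(1, 10_000)]
x_eq = min(xs, key=gibbs)      # brute-force minimum over a fine grid
print(f"equilibrium extent ~ {x_eq:.3f}")
```

The grid-search minimum reproduces the value obtained from the equilibrium constant K = exp(-ΔG°/RT) for this toy system; production codes instead minimize G over many species under element-balance constraints with a real equation of state.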
Neural coding in barrel cortex during whisker-guided locomotion
Sofroniew, Nicholas James; Vlasov, Yurii A; Hires, Samuel Andrew; Freeman, Jeremy; Svoboda, Karel
2015-01-01
Animals seek out relevant information by moving through a dynamic world, but sensory systems are usually studied under highly constrained and passive conditions that may not probe important dimensions of the neural code. Here, we explored neural coding in the barrel cortex of head-fixed mice that tracked walls with their whiskers in tactile virtual reality. Optogenetic manipulations revealed that barrel cortex plays a role in wall-tracking. Closed-loop optogenetic control of layer 4 neurons can substitute for whisker-object contact to guide behavior resembling wall tracking. We measured neural activity using two-photon calcium imaging and extracellular recordings. Neurons were tuned to the distance between the animal's snout and the contralateral wall, with monotonic, unimodal, and multimodal tuning curves. This rich representation of object location in the barrel cortex could not be predicted based on simple stimulus-response relationships involving individual whiskers and likely emerges within cortical circuits. DOI: http://dx.doi.org/10.7554/eLife.12559.001 PMID:26701910
NASA Astrophysics Data System (ADS)
Mazza, Mirko
2015-12-01
Reinforced concrete (r.c.) framed buildings designed in compliance with inadequate seismic classifications and code provisions in many cases exhibit a high vulnerability and need to be retrofitted. To this end, the insertion of a base isolation system allows a considerable reduction of the seismic loads transmitted to the superstructure. However, strong near-fault ground motions, which are characterised by long-duration horizontal pulses, may amplify the inelastic response of the superstructure and induce a failure of the isolation system. The above considerations point out the importance of checking the effectiveness of different isolation systems for retrofitting a r.c. framed structure. For this purpose, a numerical investigation is carried out with reference to a six-storey r.c. framed building which, originally designed as a fixed-base structure in compliance with the previous Italian code (DM96) for a medium-risk seismic zone, has to be retrofitted by insertion of a base isolation system to attain the performance levels imposed by the current Italian code (NTC08) in a high-risk seismic zone. Besides the (fixed-base) original structure, three cases of base isolation are studied: elastomeric bearings acting alone (high-damping laminated rubber bearings, HDLRBs); an in-parallel combination of elastomeric and friction bearings (HDLRBs and steel-PTFE sliding bearings, SBs); and friction bearings acting alone (friction pendulum bearings, FPBs). The nonlinear analysis of the fixed-base and base-isolated structures subjected to horizontal components of near-fault ground motions is performed to check plastic conditions at the potential critical (end) sections of the girders and columns as well as critical conditions of the isolation systems.
Unexpectedly high values of ductility demand are highlighted at the lower floors of all the base-isolated structures, while re-centring problems of the base isolation systems under near-fault earthquakes are expected when friction bearings act alone (i.e. FPBs) or in combination with HDLRBs (i.e. SBs).
Thermohydrodynamic analysis of cryogenic liquid turbulent flow fluid film bearings
NASA Technical Reports Server (NTRS)
Andres, Luis San
1993-01-01
A thermohydrodynamic analysis is presented and a computer code developed for prediction of the static and dynamic force response of hydrostatic journal bearings (HJB's), annular seals or damper bearing seals, and fixed arc pad bearings for cryogenic liquid applications. The study includes the most important flow characteristics found in cryogenic fluid film bearings such as flow turbulence, fluid inertia, liquid compressibility and thermal effects. The analysis and computational model devised allow the determination of the flow field in cryogenic fluid film bearings along with the dynamic force coefficients for rotor-bearing stability analysis.
NASA Astrophysics Data System (ADS)
Mahmood, Asif; Ramay, Shahid M.; Rafique, Hafiz Muhammad; Al-Zaghayer, Yousef; Khan, Salah Ud-Din
2014-05-01
In this paper, first-principles calculations of the structural, electronic, optical and thermoelectric properties of AgMO3 (M = V, Nb and Ta) have been carried out using the full-potential linearized augmented plane wave plus local orbitals method (FP-LAPW+lo) and the BoltzTraP code within the framework of density functional theory (DFT). The calculated structural parameters are found to agree well with the experimental data, while the electronic band structure indicates that AgNbO3 and AgTaO3 are semiconductors with indirect bandgaps of 1.60 eV and 1.64 eV, respectively, between the occupied O 2p and unoccupied d states of Nb and Ta. On the other hand, AgVO3 is found to be metallic due to the overlapping of states across the Fermi level. Furthermore, optical properties, such as the dielectric function, absorption coefficient, optical reflectivity, refractive index and extinction coefficient of AgNbO3 and AgTaO3, are calculated for incident photon energies up to 50 eV. Finally, we calculate the thermopower of AgNbO3 and AgTaO3 at a fixed doping of 10^19 cm^-3. The electron-doped thermopower of AgNbO3 shows a significant increase over that of AgTaO3 with temperature.
Track structure in radiation biology: theory and applications.
Nikjoo, H; Uehara, S; Wilson, W E; Hoshi, M; Goodhead, D T
1998-04-01
A brief review is presented of the basic concepts in track structure, and the relative merits of various theoretical approaches adopted in Monte-Carlo track-structure codes are examined. In the second part of the paper, a formal cluster analysis is introduced to calculate cluster-distance distributions. Total experimental ionization cross-sections were least-squares fitted and compared with calculations by various theoretical methods. The Monte-Carlo track-structure code Kurbuc was used to examine and compare the spectra of the secondary electrons generated by using functions given by the Born-Bethe, Jain-Khare, Gryzinsky, Kim-Rudd, Mott and Vriens theories. The cluster analysis in track structure was carried out using the k-means method and the Hartigan algorithm. Data are presented on experimental and calculated total ionization cross-sections: the inverse mean free path (IMFP) as a function of electron energy used in Monte-Carlo track-structure codes; the spectrum of secondary electrons generated by different functions for 500 eV primary electrons; cluster analysis for 4 MeV and 20 MeV alpha-particles in terms of the frequency of total cluster energy to the root-mean-square (rms) radius of the cluster and differential distance distributions for a pair of clusters; and finally relative frequency distributions for energy deposited in DNA, single-strand breaks and double-strand breaks for 10 MeV/u protons, alpha-particles and carbon ions. There are a number of Monte-Carlo track-structure codes that have been developed independently, and the bench-marking presented in this paper allows a better choice of the theoretical method adopted in a track-structure code to be made. A systematic bench-marking of cross-sections and spectra of the secondary electrons shows differences between the codes at the atomic level, but such differences are not significant in biophysical modelling at the macromolecular level.
Clustered-damage evaluation shows that: a substantial proportion of dose (approximately 30%) is deposited by low-energy electrons; the majority of DNA damage lesions are of the simple type; the complexity of damage increases with increased LET, while the total yield of strand breaks remains constant; and at high LET values nearly 70% of all double-strand breaks are of the complex type.
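The k-means clustering step used above is standard; a minimal sketch with synthetic 2-D "energy-deposition sites" (hypothetical coordinates, not Kurbuc output) and a deterministic initialization:

```python
import numpy as np

def kmeans(points, k=2, iters=50):
    """Plain k-means: alternate nearest-center assignment and centroid
    update, the same clustering style used to group ionization events."""
    # Deterministic initialization: first and last points as seeds.
    centers = points[[0, -1]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic clusters of deposition sites (nm coordinates).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
labels, centers = kmeans(pts, k=2)
```

The Hartigan algorithm mentioned in the abstract refines this by moving single points between clusters when doing so lowers the within-cluster sum of squares.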
DOE Office of Scientific and Technical Information (OSTI.GOV)
Styron, Jedediah D.
2016-11-01
This work will focus on the characterization of NTOF detectors fielded on ICF experiments conducted at the Z experimental facility, with emphasis on the MagLIF and gas puff campaigns. Three experiments have been proposed. The first experiment will characterize the response of the PMT with respect to the amplitude and width of signals produced by single neutron events. A second experiment will characterize the neutron transit time through the scintillator, and the third will characterize the pulse amplitude for a very specific range of neutron-induced charged-particle interactions within the scintillator. These experiments will cover incident neutron energies relevant to D-D and D-T fusion reactions. These measurements will be taken as a function of detector bias to cover the entire dynamic range of the detector. Throughout the characterization process, the development of a predictive capability is desired. A new post-processing code has been proposed that will calculate a neutron time-of-flight spectrum in units of MeVee. This code will couple the experimentally obtained values and the results obtained with the Monte Carlo code MCNP6. The motivation of this code is to correct for geometry issues when transferring the calibration results from a light-lab setting to the Z environment. This capability will be used to develop a hypothetical design of LOS270 such that more favorable neutron measurements, requiring less correction, can be made in the future.
Directed Energy Non-lethal Weapons
2010-06-16
technologies that alter skeletal muscle contraction and/or neural functioning (i.e., neurosecretion) via radiofrequency (RF)/microwave (MW)... chromaffin cells and 2) completion of studies on the effect of 0.75 to 1 GHz RF fields on skeletal muscle contraction, using in each study fixed
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is of importance for several purposes, such as ex-vessel reactor monitoring, neutrino oscillation studies, etc. The inverse beta decay (IBD) interaction is responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented dealing with the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from the inherent capabilities of the Monte Carlo simulation codes.
Characterization of the orf1glnKamtB operon of Herbaspirillum seropedicae.
Noindorf, Lilian; Rego, Fabiane G M; Baura, Valter A; Monteiro, Rose A; Wassem, Roseli; Cruz, Leonardo M; Rigo, Liu U; Souza, Emanuel M; Steffens, Maria B R; Pedrosa, Fabio O; Chubatsu, Leda S
2006-03-01
Herbaspirillum seropedicae is an endophytic nitrogen-fixing bacterium that colonizes economically important grasses. In this organism, the amtB gene is co-transcribed with two other genes: glnK, which codes for a PII-like protein, and orf1, which codes for a probable periplasmic protein of unknown function. The expression of the orf1glnKamtB operon is increased under nitrogen-limiting conditions and is dependent on NtrC. An amtB mutant failed to transport methylammonium. Post-translational control of nitrogenase was also partially impaired in this mutant, since a complete switch-off of nitrogenase after ammonium addition was not observed. This result suggests that the AmtB protein is involved in the signaling pathway for the reversible inactivation of nitrogenase in H. seropedicae.
Anatomical and functional organization of the human substantia nigra and its connections
Zhang, Yu; Larcher, Kevin Michel-Herve; Misic, Bratislav
2017-01-01
We investigated the anatomical and functional organization of the human substantia nigra (SN) using diffusion and functional MRI data from the Human Connectome Project. We identified a tripartite connectivity-based parcellation of SN with a limbic, cognitive, motor arrangement. The medial SN connects with limbic striatal and cortical regions and encodes value (greater response to monetary wins than losses during fMRI), while the ventral SN connects with associative regions of cortex and striatum and encodes salience (equal response to wins and losses). The lateral SN connects with somatomotor regions of striatum and cortex and also encodes salience. Behavioral measures from delay discounting and flanker tasks supported a role for the value-coding medial SN network in decisional impulsivity, while the salience-coding ventral SN network was associated with motor impulsivity. In sum, there is anatomical and functional heterogeneity of human SN, which underpins value versus salience coding, and impulsive choice versus impulsive action. PMID:28826495
Propagation of Computational Uncertainty Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2007-01-01
This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
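The core idea of replacing an expensive code with a low-order polynomial graduating function and then propagating input-level uncertainty through the cheap surrogate can be sketched as follows. The "expensive code" here is a stand-in analytic function and all numbers are illustrative, not values from the Shuttle debris-impact study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive computational code (illustrative only).
def expensive_code(x):
    return np.exp(0.3 * x) + 0.1 * x**2

# A small designed set of case runs spanning the input range.
x_design = np.linspace(-2.0, 2.0, 7)
y_design = expensive_code(x_design)

# Low-order polynomial graduating function (truncated-Taylor-like surrogate)
# estimated from the minimal set of case runs.
surrogate = np.poly1d(np.polyfit(x_design, y_design, deg=2))

# Propagate imperfect knowledge of the input level: sample the input from
# its assumed distribution and evaluate the cheap surrogate, not the code.
x_samples = rng.normal(loc=0.5, scale=0.1, size=100_000)
y_samples = surrogate(x_samples)
mean, std = y_samples.mean(), y_samples.std()
```

The std of `y_samples` estimates the component of prediction uncertainty attributable to uncertainty in the independent variable, obtained at the cost of only seven code runs.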
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
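The fixed-rate property that prevents error propagation can be seen in even the simplest scalar quantizer: every sample consumes exactly n_bits, so a corrupted code word cannot desynchronize the rest of the stream, unlike variable-length codes. This is a generic uniform quantizer sketch, not the SVQ algorithm itself.

```python
import numpy as np

def quantize(block, n_bits=4):
    """Fixed-rate uniform scalar quantizer: every sample maps to an n_bits
    index, so the bitstream length is known in advance and a single bit
    error corrupts only one sample."""
    levels = 2 ** n_bits
    lo, hi = block.min(), block.max()
    step = (hi - lo) / levels
    idx = np.clip(((block - lo) / step).astype(int), 0, levels - 1)
    return idx, lo, step

def dequantize(idx, lo, step):
    # Reconstruct each sample at the midpoint of its quantization cell.
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(1)
img = rng.random((8, 8))        # toy "image" block
idx, lo, step = quantize(img)
rec = dequantize(idx, lo, step)
```

SVQ improves on this by jointly constraining a vector of such scalar indices to approach entropy-coded performance while keeping the rate fixed.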
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprised of a set of FORTRAN 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the FORTRAN 95 language is extensively used to define the differentiation rules. Proper (standard-complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in FORTRAN 2003).
New version program summary
Program title: AUTO_DERIV
Catalogue identifier: ADLS_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2963
No. of bytes in distributed program, including test data, etc.: 10 314
Distribution format: tar.gz
Programming language: Fortran 95 + (optionally) TR-15580 (floating-point exception handling)
Computer: all platforms with a Fortran 95 compiler
Operating system: Linux, Windows, MacOS
Classification: 4.12, 6.2
Catalogue identifier of previous version: ADLS_v1_0
Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343
Does the new version supersede the previous version?: Yes
Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluating them by computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of this approach, designed to evaluate the first and second derivatives of a function of many variables.
Solution method: The mathematical rules for differentiation of sums, products, quotients and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules.
Reasons for new version: The new version supports Fortran 95, handles floating-point exceptions properly, and is faster due to internal reorganization. All discovered bugs are fixed.
Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed; the code did not handle correctly the overloading of the power operator in a**λ when a = 0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively). If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (through certain flags, probably) to detect them.
Restrictions: None imposed by the program. There are certain limitations that may appear, mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common restrictions of available memory and the capabilities of the compiler are the same as in the original version.
Additional comments: The program has been tested using the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95.
Running time: The typical running time depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than the evaluation of the analytical ('by hand') function value and derivatives (if they are available).
References: S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
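The derived-type and operator-overloading mechanism AUTO_DERIV uses in Fortran 95 has a compact analogue in dual numbers. This toy forward-mode sketch implements only +, * and sin, whereas AUTO_DERIV covers the full operator set and second derivatives:

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """A value together with its first derivative. Overloaded operators
    apply the differentiation rules, mirroring how AUTO_DERIV overloads
    Fortran operators on a derived type."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):  # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):  # chain rule for an elementary function
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate f(x) = x*sin(x) + 2x at x = 1 without any symbolic algebra.
x = Dual(1.0, 1.0)   # seed: derivative of the independent variable is 1
f = x * sin(x) + 2 * x
# f.der holds f'(1) = sin(1) + cos(1) + 2, exact to machine precision
```

The user's function is written in ordinary expressions; only the variable type changes, which is exactly the usage pattern AUTO_DERIV offers to Fortran code.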
Ice Load Project Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Timothy J.; Brown, Thomas; Byrne, Alex
As interest and investment in offshore wind projects increase worldwide, some turbines will be installed in locations where ice of significant thickness forms on the water surface. This ice moves under the driving forces of wind, current, and thermal effects and may result in substantial forces on bottom-fixed support structures. The North and Baltic Seas in Europe have begun to see significant wind energy development and the Great Lakes of the United States and Canada may host wind energy development in the near future. Design of the support structures for these projects is best performed through the use of an integrated tool that can calculate the cumulative effects of forces due to turbine operations, wind, waves, and floating ice. The dynamic nature of ice forces requires that these forces be included in the design simulations, rather than added as static forces to simulation results. The International Electrotechnical Commission (IEC) standard[2] for offshore wind turbine design and the International Organization for Standardization (ISO) standard[3] for offshore structures provide requirements and algorithms for the calculation of forces induced by surface ice; however, currently none of the major wind turbine dynamic simulation codes provides the ability to model ice loads. The scope of work of the project described in this report includes the development of a suite of subroutines, collectively named IceFloe, that meet the requirements of the IEC and ISO standards and couples with four of the major wind turbine dynamic simulation codes. The mechanisms by which ice forces impinge on offshore structures generally include the forces required for crushing of the ice against vertical-sided structures and the forces required to fracture the ice as it rides up on conical-sided structures.
Within these two broad categories, the dynamic character of the forces with respect to time is also dependent on other factors such as the velocity and thickness of the moving ice and the response of the structure. In some cases, the dynamic effects are random and in other cases they are deterministic, such as the effect of structural resonance and coupling of the ice forces with the deflection of the support structure. The initial versions of the IceFloe routines incorporate modules that address these varied force and dynamic phenomena with seven alternative algorithms that can be specified by the user. The IceFloe routines have been linked and tested with four major wind turbine aeroelastic simulation codes: FAST, a tool developed under the management of the National Renewable Energy Laboratory (NREL) and available free of charge from its web site; Bladed[4], a widely-used commercial package available from DNV GL; ADAMS[5], a general purpose multi-body simulation code used in the wind industry and available from MSC Software; and HAWC2[6], a code developed by and available for purchase from Danmarks Tekniske Universitet (DTU). Interface routines have been developed and tested with full wind turbine simulations for each of these codes and the source code and example inputs and outputs are available from the NREL website.
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code
NASA Astrophysics Data System (ADS)
Panettieri, Vanessa; Amor Duch, Maria; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat
2007-01-01
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson&Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm2 and a thickness of 0.5 µm which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water™ build-up caps, together with the orientation of the detector have been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water™ cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. 
All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.
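At the core of any such Monte Carlo photon-transport code is sampling interaction free paths from the exponential attenuation law. A toy sketch (illustrative attenuation coefficient, no secondary-particle physics or detector geometry, far simpler than PENELOPE):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 0.2          # illustrative linear attenuation coefficient, 1/cm
thickness = 5.0   # slab thickness, cm
n = 100_000       # number of photon histories

# Distance to first interaction is exponentially distributed with mean 1/mu.
paths = rng.exponential(scale=1.0 / mu, size=n)

# Fraction of photons that cross the slab without interacting.
transmitted = np.mean(paths > thickness)
# Analytic expectation: exp(-mu * thickness) = exp(-1) ≈ 0.368
```

Full codes like PENELOPE then sample the interaction type (photoelectric, Compton, pair production) at each point and follow the secondaries, which is what lets them reproduce effects such as the positron production in high-Z build-up caps described above.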
Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code.
Panettieri, Vanessa; Duch, Maria Amor; Jornet, Núria; Ginjaume, Mercè; Carrasco, Pablo; Badal, Andreu; Ortega, Xavier; Ribas, Montserrat
2007-01-07
The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson&Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm(2) and a thickness of 0.5 microm which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water build-up caps, together with the orientation of the detector have been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. 
All the results have shown that the PENELOPE code system can successfully reproduce the response of a detector with such a small active area.
King, Andrew W; Baskerville, Adam L; Cox, Hazel
2018-03-13
An implementation of the Hartree-Fock (HF) method using a Laguerre-based wave function is described and used to accurately study the ground state of two-electron atoms in the fixed-nucleus approximation, and, by comparison with fully correlated (FC) energies, to determine accurate electron correlation energies. A variational parameter A is included in the wave function and is shown to rapidly increase the convergence of the energy. The one-electron integrals are solved by series solution and an analytical form is found for the two-electron integrals. This methodology is used to produce accurate wave functions, energies and expectation values for the helium isoelectronic sequence, including at low nuclear charge just prior to electron detachment. Additionally, the critical nuclear charge for binding two electrons within the HF approach is calculated and determined to be Z_C^HF = 1.031 177 528. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
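How a single variational parameter improves an energy estimate can be made concrete with the textbook one-parameter (effective-charge) trial function for a two-electron atom; this is a far cruder ansatz than the paper's Laguerre basis and is shown only to illustrate the variational step.

```python
from scipy.optimize import minimize_scalar

Z = 2.0  # helium

def energy(a):
    # Textbook variational energy (hartree) for a product of 1s orbitals
    # with effective charge a: E(a) = a^2 - 2Za + (5/8)a.
    return a**2 - 2.0 * Z * a + (5.0 / 8.0) * a

res = minimize_scalar(energy, bounds=(0.5, 3.0), method="bounded")
# optimum at a = Z - 5/16 = 1.6875, giving E = -2.84765625 hartree
```

The optimal effective charge is smaller than Z because each electron partially screens the nucleus from the other; richer bases (such as the Laguerre expansion in the paper) push the energy further down toward the HF limit.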
Modeling the effects of pH and ionic strength on swelling of anionic polyelectrolyte gels
NASA Astrophysics Data System (ADS)
Drozdov, A. D.; deClaville Christiansen, J.
2015-07-01
A constitutive model is developed for the elastic response of an anionic polyelectrolyte gel under swelling in water with an arbitrary pH and an arbitrary molar fraction of dissolved monovalent salt. A gel is treated as a three-phase medium consisting of a solid phase (polymer network), solvent (water), and solute (mobile ions). Transport of solvent and solute is thought of as their diffusion through the polymer network accelerated by an electric field formed by mobile and fixed ions and accompanied by chemical reactions (dissociation of functional groups attached to polymer chains and formation of ion pairs between bound charges and mobile counter-ions). Constitutive equations are derived by means of the free energy imbalance inequality for an arbitrary three-dimensional deformation with finite strains. These relations are applied to analyze equilibrium swelling diagrams on poly(acrylic acid) gel, poly(methacrylic acid) gel, and three composite hydrogels under water uptake in a bath (i) with a fixed molar fraction of salt and varied pH, and (ii) with a fixed pH and varied molar fraction of salt. To validate the ability of the model to predict observations quantitatively, material constants are found by matching swelling curves under one type of experimental conditions and results of simulation are compared with experimental data in the other type of tests.
Detector response function of an energy-resolved CdTe single photon counting detector.
Liu, Xin; Lee, Hyoung Koo
2014-01-01
While spectral CT using a single photon counting detector has shown a number of advantages in diagnostic imaging, knowledge of the detector response function of an energy-resolved detector is needed to correct the signal bias and reconstruct the image more accurately. The objective of this paper is to study the photon counting detector response function using laboratory sources, and to investigate the signal bias correction method. Our approach is to model the detector response function over the entire diagnostic energy range (20 keV
Numerical studies of the deposition of material released from fixed and rotary wing aircraft
NASA Technical Reports Server (NTRS)
Bilanin, A. J.; Teske, M. E.
1984-01-01
The computer code AGDISP (AGricultural DISPersal) has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern. In this report, the equations governing the motion of aerially released particles are developed, including a description of the evaporation model used. A series of case studies, using AGDISP, are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.
2016-02-16
Appendix G, the Performance Rating Method in ASHRAE Standard 90.1, has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond-code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond-code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond-code program can use this methodology and merely set the appropriate PCI target for its needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond-code programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.
2016-03-01
Appendix G, the Performance Rating Method in ASHRAE Standard 90.1, has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond-code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond-code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond-code program can use this methodology and merely set the appropriate PCI target for its needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond-code programs.
Efficient self-consistency for magnetic tight binding
NASA Astrophysics Data System (ADS)
Soin, Preetma; Horsfield, A. P.; Nguyen-Manh, D.
2011-06-01
Tight binding can be extended to magnetic systems by including an exchange interaction on an atomic site that favours net spin polarisation. We have used a published model, extended to include long-ranged Coulomb interactions, to study defects in iron. We have found that achieving self-consistency using conventional techniques was either unstable or very slow. By formulating the problem of achieving charge and spin self-consistency as a search for stationary points of a Harris-Foulkes functional, extended to include spin, we have derived a much more efficient scheme based on a Newton-Raphson procedure. We demonstrate the capabilities of our method by looking at vacancies and self-interstitials in iron. Self-consistency can indeed be achieved in a more efficient and stable manner, but care needs to be taken to manage this. The algorithm is implemented in the code PLATO. Program summary: Program title: PLATO. Catalogue identifier: AEFC_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 228 747. No. of bytes in distributed program, including test data, etc.: 1 880 369. Distribution format: tar.gz. Programming language: C and PERL. Computer: Apple Macintosh, PC, Unix machines. Operating system: Unix, Linux, Mac OS X, Windows XP. Has the code been vectorised or parallelised?: Yes, up to 256 processors tested. RAM: Up to 2 Gbytes per processor. Classification: 7.3. External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW. Catalogue identifier of previous version: AEFC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2616. Does the new version supersede the previous version?: Yes. Nature of problem: Achieving charge and spin self-consistency in magnetic tight binding can be very difficult. Our existing schemes failed altogether, or were very slow. Solution method: A new scheme for achieving self-consistency in orthogonal tight binding has been introduced that explicitly evaluates the first and second derivatives of the energy with respect to input charge and spin, and then uses these to search for stationary values of the energy. Reasons for new version: Bug fixes and new functionality. Summary of revisions: New charge and spin mixing scheme for orthogonal tight binding; numerous small bug fixes. Restrictions: The new mixing scheme scales poorly with system size; in particular, the memory usage scales as the number of atoms to the power 4. It is restricted to systems with about 200 atoms or less. Running time: Test cases will run in a few minutes; large calculations may run for several days.
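The solution method described above, a Newton-Raphson search for stationary values of an energy using its explicit first and second derivatives, can be sketched generically. The toy quadratic energy below is an assumption for illustration; this is not the PLATO implementation:

```python
import numpy as np

def newton_stationary(grad, hess, q0, tol=1e-10, max_iter=50):
    """Newton-Raphson search for a stationary point (grad E = 0) over inputs q."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        g = grad(q)
        if np.linalg.norm(g) < tol:
            break
        q = q - np.linalg.solve(hess(q), g)   # full Newton step
    return q

# Toy energy E(q) = 0.5 q.A.q - b.q, stationary exactly where A q = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
q_star = newton_stationary(lambda q: A @ q - b, lambda q: A, np.zeros(2))
print(q_star)   # satisfies A q = b (one Newton step suffices for a quadratic)
```

The quoted memory restriction is visible in this shape of scheme: the second-derivative matrix over all charge and spin inputs grows rapidly with system size.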
NASA Astrophysics Data System (ADS)
Prettyman, T. H.; Gardner, R. P.; Verghese, K.
1993-08-01
A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
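The splitting and Russian roulette moves of the weight-windows technique can be sketched in a few lines. The window bounds and survivor weights below are illustrative choices, not McENL's:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return a list of (count, weight) pairs for the continuing particle(s)."""
    if weight > w_high:                      # split into several lighter copies
        n = int(weight / w_high) + 1
        return [(n, weight / n)]
    if weight < w_low:                       # Russian roulette
        if rng.random() < weight / w_low:    # survive with probability w / w_low
            return [(1, w_low)]              # survivor's weight boosted to w_low
        return []                            # killed: history terminated
    return [(1, weight)]                     # inside the window: unchanged

print(apply_weight_window(0.5, 0.1, 1.0))   # in-window -> [(1, 0.5)]
print(apply_weight_window(3.5, 0.1, 1.0))   # split -> [(4, 0.875)]
```

Both moves preserve expected weight (the roulette survivor carries weight w_low with probability w/w_low), which is why the game leaves the tally unbiased while concentrating work on important histories.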
The response of a radiophotoluminescent glass dosimeter in megavoltage photon and electron beams.
Araki, Fujio; Ohno, Takeshi
2014-12-01
This study investigated the response of a radiophotoluminescent glass dosimeter (RGD) in megavoltage photon and electron beams. The RGD response was compared with ion chamber measurements for 4-18 MV photons and 6-20 MeV electrons in plastic water phantoms. The response was also calculated via Monte Carlo (MC) simulations with EGSnrc/egs_chamber and Cavity user-codes, respectively. In addition, the response of the RGD cavity was analyzed as a function of field sizes and depths according to Burlin's general cavity theory. The perturbation correction factor, PQ, in the RGD cavity was also estimated from MC simulations for photon and electron beams. The calculated and measured RGD energy response at reference conditions with a 10 × 10 cm(2) field and 10 cm depth in photons was lower by up to 2.5% with increasing energy. The variation in RGD response in the field size range of 5 × 5 cm(2) to 20 × 20 cm(2) was 3.9% and 0.7%, at 10 cm depth for 4 and 18 MV, respectively. The depth dependence of the RGD response was constant within 1% for energies above 6 MV but it increased by 2.6% and 1.6% for a large (20 × 20 cm(2)) field at 4 and 6 MV, respectively. The dose contributions from photon interactions (1 - d) in the RGD cavity, according to Burlin's cavity theory, decreased with increasing energy and decreasing field size. The variation in (1 - d) between field sizes became larger with increasing depth for the lower energies of 4 and 6 MV. PQ for the RGD cavity was almost constant between 0.96 and 0.97 at 10 MV energies and above. Meanwhile, PQ depends strongly on field size and depth for 4 and 6 MV photons. In electron beams, the RGD response at a reference depth, dref, varied by less than 1% over the electron energy range but was on average 4% lower than the response for 6 MV photons. The RGD response for photon beams depends on both (1 - d) and perturbation effects in the RGD cavity. 
Therefore, it is difficult to predict the energy dependence of the RGD response with Burlin's theory; for practical use it is recommended to measure the RGD response directly or to use the MC-calculated response. The response for electron beams decreased rapidly at depths beyond dref for lower mean electron energies (<3 MeV), while PQ increased.
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates, of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field, yields the Newton gradient term (analogous to ∇f . δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset grid finite difference technique, and leading order radial dependence is factored from Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
THE POLARIZATION OF NEUTRONS FROM THE STRIPPING OF DEUTERONS ON C¹²
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzanowski, A.; Grotowski, K.; Niewodniczanski, H.
1961-01-01
The neutron polarization in the reaction ¹²C(d,n)¹³N at 12.9 MeV is measured as a function of the neutron emission angle. In addition, the neutron energy spectrum is measured at a fixed angle, in order to find the relative numbers of neutrons associated with various energy levels of ¹³N. The measured data are used to deduce the properties of the ¹³N energy levels studied. (T.F.H.)
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullen, D.E.
1978-07-04
The code SIGMA1 Doppler broadens evaluated cross sections in the ENDF/B format. The code can be applied only to data that vary as a linear function of energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code. 6 figures, 2 tables.
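The restriction noted above, that the input data must vary linearly in energy and cross section between tabulated points, is the lin-lin interpolation law sketched below. The grid and values are illustrative, not ENDF/B data:

```python
import bisect

def sigma_linlin(E, energies, xs):
    """Interpolate a pointwise cross-section table linearly (lin-lin) in energy."""
    i = bisect.bisect_right(energies, E) - 1     # locate the bracketing panel
    i = max(0, min(i, len(energies) - 2))
    E0, E1 = energies[i], energies[i + 1]
    s0, s1 = xs[i], xs[i + 1]
    return s0 + (s1 - s0) * (E - E0) / (E1 - E0)

grid = [1.0, 2.0, 5.0, 10.0]       # eV (illustrative)
vals = [10.0, 6.0, 3.0, 1.0]       # barns (illustrative)
print(sigma_linlin(3.5, grid, vals))   # halfway between 2 and 5 eV -> 4.5
```

Requiring lin-lin data lets the Doppler-broadening integrals be evaluated exactly panel by panel, which is why the code imposes it as a precondition.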
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. 
Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
NASA Astrophysics Data System (ADS)
Takeyama, Mirei; Kaji, Daiya; Morimoto, Kouji; Wakabayashi, Yasuo; Tokanai, Fuyuki; Morita, Kosuke
Detector response to spontaneous fission (SF) of heavy nuclides produced in the 206Pb(48Ca,2n)252No reaction was investigated using a gas-filled recoil ion separator (GARIS). Kinetic energy distributions of the SF originating from 252No were observed by tuning the implantation depth of the evaporation residues (ER) in the detector. The focal plane detector used in the GARIS experiments was calibrated against the known total kinetic energy (TKE) of SF from 252No. The correction value for the TKE calculation was deduced as a function of the implantation depth of 252No in the detector. Furthermore, we compared the results with those obtained from a computer simulation using the particle and heavy ion transport code system (PHITS).
Shahriari, Ali; Dawson, Neal J.; Bell, Ryan A. V.; Storey, Kenneth B.
2013-01-01
The intertidal marine snail, Littorina littorea, has evolved to withstand extended bouts of oxygen deprivation brought about by changing tides or other potentially harmful environmental conditions. Survival is dependent on a strong suppression of its metabolic rate and a drastic reorganization of its cellular biochemistry in order to maintain energy balance under fixed fuel reserves. Lactate dehydrogenase (LDH) is a crucial enzyme of anaerobic metabolism as it is typically responsible for the regeneration of NAD+, which allows for the continued functioning of glycolysis in the absence of oxygen. This study compared the kinetic and structural characteristics of the D-lactate-specific LDH (E.C. 1.1.1.28) from foot muscle of aerobic control versus 24 h anoxia-exposed L. littorea. Anoxic LDH displayed a near 50% decrease in V_max (pyruvate-reducing direction) as compared to control LDH. These kinetic differences suggest that there may be a stable modification and regulation of LDH during anoxia, and indeed, subsequent dot-blot analyses identified anoxic LDH as being significantly less acetylated than the corresponding control enzyme. Therefore, acetylation may be the regulatory mechanism that is responsible for the suppression of LDH activity during anoxia, which could allow for the production of alternative glycolytic end products that in turn would increase the ATP yield under fixed fuel reserves. PMID:24233354
COMPTEL neutron response at 17 MeV
NASA Technical Reports Server (NTRS)
O'Neill, Terrence J.; Ait-Ouamer, Farid; Morris, Joann; Tumer, O. Tumay; White, R. Stephen; Zych, Allen D.
1992-01-01
The Compton imaging telescope (COMPTEL) instrument of the Gamma Ray Observatory was exposed to 17 MeV d,t neutrons prior to launch. These data were analyzed and compared with Monte Carlo calculations using the MCNP(LANL) code. Energy and angular resolutions are compared and absolute efficiencies are calculated at 0 and 30 degrees incident angle. The COMPTEL neutron responses at 17 MeV and higher energies are needed to understand solar flare neutron data.
A test of the IAEA code of practice for absorbed dose determination in photon and electron beams
NASA Astrophysics Data System (ADS)
Leitner, Arnold; Tiefenboeck, Wilhelm; Witzani, Josef; Strachotinsky, Christian
1990-12-01
The IAEA (International Atomic Energy Agency) code of practice TRS 277 gives recommendations for absorbed dose determination in high energy photon and electron beams based on the use of ionization chambers calibrated in terms of exposure or air kerma. The scope of the work was to test the code for cobalt-60 gamma radiation and for several radiation qualities at four different types of electron accelerators, and to compare the ionization chamber dosimetry with ferrous sulphate dosimetry. The results show agreement between the two methods within about one per cent for all the investigated qualities. In addition, the response of the TLD capsules of the IAEA/WHO TL dosimetry service was determined.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging at video rates but are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected because of mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
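The calibration idea, a low-degree polynomial per pixel mapping each monotonic response onto a reference response, can be sketched as below. The synthetic logarithmic responses, gains, and offsets are assumptions for illustration, not the paper's sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = np.logspace(0, 4, 50)            # illustrative stimulus levels
reference = np.log(stimulus)                # reference (FPN-free) response

# Each of 100 pixels: same monotonic shape, but mismatched gain and offset (FPN)
gain = 1.0 + 0.05 * rng.standard_normal(100)
offset = 0.2 * rng.standard_normal(100)
pixels = gain[:, None] * reference[None, :] + offset[:, None]

# Degree-1 (approximately linear) calibration per pixel: response -> reference
coeffs = np.array([np.polyfit(p, reference, deg=1) for p in pixels])
corrected = np.array([np.polyval(c, p) for c, p in zip(coeffs, pixels)])

print(np.abs(corrected - reference).max())  # residual FPN (tiny in this toy case)
```

Because the nonlinearity is shared and only gain/offset vary, the pixel-to-reference map is nearly linear even though each response is highly nonlinear in the stimulus; that is the monotonicity trick the abstract describes, and correction then needs only two multiply-adds per pixel.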
Sparsey™: event recognition via deep hierarchical sparse distributed codes
Rinkus, Gerard J.
2014-01-01
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but, crucially, it uses sparse distributed coding (SDC) in every mac at all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap, and the size of the overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
PMID:25566046
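The overlap-as-similarity property of SDCs described above can be shown in a few lines. The codes are hand-picked for illustration (each item is a small subset of a mac's units, here 5 out of ~100); they are not produced by Sparsey's learning algorithm:

```python
# Hypothetical SDCs: small unit subsets within one mac of ~100 units
code_cat = frozenset({3, 17, 42, 58, 71})
code_dog = frozenset({3, 17, 42, 60, 93})   # similar item: large overlap
code_car = frozenset({5, 22, 48, 76, 99})   # dissimilar item: no overlap

def similarity(a, b):
    """Overlap fraction between two equal-size SDCs."""
    return len(a & b) / len(a)

print(similarity(code_cat, code_dog))   # 0.6 -- three of five units shared
print(similarity(code_cat, code_car))   # 0.0 -- disjoint codes
```

A localist code, by contrast, makes every pair of distinct items equally dissimilar; graded overlap is what lets a single pass retrieve the best-matching stored item.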
Collisional excitation of interstellar methyl cyanide
NASA Technical Reports Server (NTRS)
Green, Sheldon
1986-01-01
Theoretical calculations are used to determine the collisional excitation rates of methyl cyanide under interstellar molecular cloud conditions. The required Q(L,M) rates, as functions of kinetic temperature, were determined by averaging fixed-energy IOS (infinite order sudden) results over appropriate Boltzmann distributions of collision energies. At a kinetic temperature of 40 K, rates within a K ladder were found to be generally accurate to better than about 30 percent.
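The averaging step, taking fixed-energy results and weighting them with a Boltzmann distribution of collision energies to obtain temperature-dependent rates, can be sketched numerically. Units are arbitrary and the cross-section models are illustrative, not the paper's IOS output:

```python
import numpy as np

def boltzmann_average(sigma, T, n_grid=4000, e_max_factor=40.0):
    """Average sigma(E) over a Maxwell-Boltzmann energy weight at temperature T
    (E and T in the same energy units; uniform grid, Riemann-sum ratio)."""
    E = np.linspace(1e-6, e_max_factor * T, n_grid)
    w = E * np.exp(-E / T)                   # unnormalized MB weight in energy
    return (sigma(E) * w).sum() / w.sum()

# Sanity checks: a constant cross section averages to itself, and a sigma
# proportional to E averages to the MB mean energy 2T for this weight.
flat = boltzmann_average(lambda E: np.full_like(E, 2.0), T=40.0)
lin = boltzmann_average(lambda E: E, T=40.0)
print(flat, lin)
```

In practice the fixed-energy results are tabulated rather than closed-form, so the weight is applied to an interpolated table, but the thermal average has this same structure.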
The anatomy of choice: dopamine and decision-making.
Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J
2014-11-05
This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses-and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making.
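The softmax choice rule with its sensitivity (inverse temperature, i.e. precision) parameter mentioned above can be written out directly. The utility values are illustrative:

```python
import numpy as np

def softmax_choice(utilities, beta):
    """P(action) proportional to exp(beta * utility); beta plays the role of
    precision (inverse temperature) over beliefs about behaviour."""
    u = np.asarray(utilities, dtype=float)
    z = np.exp(beta * (u - u.max()))    # subtract the max for numerical stability
    return z / z.sum()

u = [1.0, 2.0, 0.5]
print(softmax_choice(u, beta=0.0))      # zero precision: uniform, indifferent
print(softmax_choice(u, beta=10.0))     # high precision: near-deterministic choice
```

On the paper's account, this beta is not a free parameter to be fit but acquires a Bayes-optimal value under free energy minimization, with its updates resembling dopaminergic responses.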
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, A.; Grote, D. P.; Vay, J. L.
2015-05-29
The Fusion Energy Sciences Advisory Committee’s subcommittee on non-fusion applications (FESAC NFA) is conducting a survey to obtain information from the fusion community about non-fusion work that has resulted from their DOE-funded fusion research. The subcommittee has requested that members of the community describe recent developments connected to the activities of the DOE Office of Fusion Energy Sciences. Two questions in particular were posed by the subcommittee. This document contains the authors’ responses to those questions.
Yasuda, Michiko; Miwa, Hiroki; Masuda, Sachiko; Takebayashi, Yumiko; Sakakibara, Hitoshi; Okazaki, Shin
2016-08-01
Symbiosis between legumes and rhizobia leads to the formation of N2-fixing root nodules. In soybean, several host genes, referred to as Rj genes, control nodulation. Soybean cultivars carrying the Rj4 gene restrict nodulation by specific rhizobia such as Bradyrhizobium elkanii. We previously reported that the restriction of nodulation was caused by B. elkanii possessing a functional type III secretion system (T3SS), which is known for its delivery of virulence factors by pathogenic bacteria. In the present study, we investigated the molecular basis for the T3SS-dependent nodulation restriction in Rj4 soybean. Inoculation tests revealed that soybean cultivar BARC-2 (Rj4/Rj4) restricted nodulation by B. elkanii USDA61, whereas its nearly isogenic line BARC-3 (rj4/rj4) formed nitrogen-fixing nodules with the same strain. Root-hair curling and infection threads were not observed in the roots of BARC-2 inoculated with USDA61, indicating that Rj4 blocked B. elkanii infection in the early stages. Accumulation of H2O2 and salicylic acid (SA) was observed in the roots of BARC-2 inoculated with USDA61. Transcriptome analyses revealed that inoculation of USDA61, but not its T3SS mutant, in BARC-2 induced defense-related genes, including those coding for hypersensitive-induced responsive protein, which act in effector-triggered immunity (ETI) in Arabidopsis. These findings suggest that B. elkanii T3SS triggers the SA-mediated ETI-type response in Rj4 soybean, which consequently blocks symbiotic interactions. This study revealed a common molecular mechanism underlying both plant-pathogen and plant-symbiont interactions, and suggests that establishment of a root nodule symbiosis requires the evasion or suppression of plant immune responses triggered by rhizobial effectors. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
Trainor, Laurel J
2012-02-01
Evidence is presented that predictive coding is fundamental to brain function and present in early infancy. Indeed, mismatch responses to unexpected auditory stimuli are among the earliest robust cortical event-related potential responses, and have been measured in young infants in response to many types of deviation, including in pitch, timing, and melodic pattern. Furthermore, mismatch responses change quickly with specific experience, suggesting that predictive coding reflects a powerful, early-developing learning mechanism. Copyright © 2011 Elsevier B.V. All rights reserved.
Resummation of jet veto logarithms at N3LLa + NNLO for W+W- production at the LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, S.; Jaiswal, P.; Li, Ye
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.
Resummation of jet veto logarithms at N 3 LL a + NNLO for W + W ? production at the LHC
Dawson, S.; Jaiswal, P.; Li, Ye; ...
2016-12-01
We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variationsmore » are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.« less
Gudhka, Reema K; Neilan, Brett A; Burns, Brendan P
2015-01-01
Halococcus hamelinensis was the first archaeon isolated from stromatolites. These geomicrobial ecosystems are thought to be some of the earliest known on Earth, yet, despite their evolutionary significance, the role of Archaea in these systems is still not well understood. Detailed here is the genome sequencing and analysis of an archaeon isolated from stromatolites. The genome of H. hamelinensis consisted of 3,133,046 base pairs with an average G+C content of 60.08% and contained 3,150 predicted coding sequences or ORFs, 2,196 (68.67%) of which were protein-coding genes with functional assignments and 954 (29.83%) of which were of unknown function. Codon usage of the H. hamelinensis genome was consistent with a highly acidic proteome, a major adaptive mechanism towards high salinity. Amino acid transport and metabolism, inorganic ion transport and metabolism, energy production and conversion, ribosomal structure, and unknown function COG genes were overrepresented. The genome of H. hamelinensis also revealed characteristics reflecting its survival in its extreme environment, including putative genes/pathways involved in osmoprotection, oxidative stress response, and UV damage repair. Finally, genome analyses indicated the presence of putative transposases as well as positive matches of genes of H. hamelinensis against various genomes of Bacteria, Archaea, and viruses, suggesting the potential for horizontal gene transfer.
NASA Astrophysics Data System (ADS)
Sarria, D.
2016-12-01
The field of High Energy Atmospheric Physics (HEAP) includes the study of energetic events related to thunderstorms, such as Terrestrial Gamma-ray Flashes (TGF), associated electron-positron beams (TEB), gamma-ray glows and Thunderstorm Ground Enhancements (TGE). Understanding these phenomena requires accurate models for the interaction of particles with atmospheric air and electromagnetic fields in the <100 MeV energy range. This study is the next step of the work presented in [C. Rutjes et al., 2016], which compared the performances of various codes in the absence of electromagnetic fields. In the first part, we quantify simple but informative test cases of electrons in various electric field profiles. We will compare the avalanche length (of the Relativistic Runaway Electron Avalanche (RREA) process), the photon/electron spectra and spatial scattering. In particular, we test the effect of the low-energy threshold, which was found to be very important [Skeltved et al., 2014]. Note that even without a field, it was found to be important because of the straggling effect [C. Rutjes et al., 2016]. For this first part, we will be comparing GEANT4 (different flavours), FLUKA and the custom-made code GRRR. In the second part, we test the propagation of these high energy particles in the atmosphere, from production altitude (around 10 km to 18 km) to satellite altitude (600 km). We use a simple and clearly fixed set-up for the atmospheric density, the geomagnetic field, the initial conditions, and the detection conditions of the particles. For this second part, we will be comparing GEANT4 (different flavours), FLUKA/CORSIKA and the custom-made code MC-PEPTITA. References: C. Rutjes et al., 2016. Evaluation of Monte Carlo tools for high energy atmospheric physics. Geosci. Model Dev. Under review. Skeltved, A. B. et al., 2014. Modelling the relativistic runaway electron avalanche and the feedback mechanism with geant4. JGRA, doi:10.1002/2014JA020504.
New Kohn-Sham density functional based on microscopic nuclear and neutron matter equations of state
NASA Astrophysics Data System (ADS)
Baldo, M.; Robledo, L. M.; Schuck, P.; Viñas, X.
2013-06-01
A new version of the Barcelona-Catania-Paris energy functional is applied to a study of nuclear masses and other properties. The functional is largely based on calculated ab initio nuclear and neutron matter equations of state. Compared to typical Skyrme functionals having 10-12 parameters apart from spin-orbit and pairing terms, the new functional has only 2 or 3 adjusted parameters, fine-tuning the nuclear matter binding energy and fixing the surface energy of finite nuclei. An energy rms value of 1.58 MeV is obtained from a fit of these three parameters to the 579 measured masses reported in the compilation of Audi and Wapstra [Nucl. Phys. A 729, 337 (2003)]. This rms value compares favorably with those obtained using other successful mean field theories, which range from 1.5 to 3.0 MeV for optimized Skyrme functionals and 0.7 to 3.0 MeV for the Gogny functionals. The other properties that have been calculated and compared to experiment are nuclear radii, the giant monopole resonance, and spontaneous fission lifetimes.
Ipe, N E; Rosser, K E; Moretti, C J; Manning, J W; Palmer, M J
2001-08-01
This paper evaluates the characteristics of ionization chambers for the measurement of absorbed dose to water using very low-energy x-rays. The values of the chamber correction factor, k(ch), used in the IPEMB 1996 code of practice for the UK secondary standard ionization chambers (PTW type M23342 and PTW type M23344), the Roos (PTW type 34001) and NACP electron chambers are derived. The responses in air of the small and large soft x-ray chambers (PTW type M23342 and PTW type M23344) and the NACP and Roos electron ionization chambers were compared. Besides the soft x-ray chambers, the NACP and Roos chambers can be used for very low-energy x-ray dosimetry provided that they are used in the restricted energy range for which their response does not change by more than 5%. The chamber correction factor was found by comparing the absorbed dose to water determined using the dosimetry protocol recommended for low-energy x-rays with that for very low-energy x-rays. The overlap energy range was extended using data from Grosswendt and Knight. Chamber correction factors given in this paper are chamber dependent, varying from 1.037 to 1.066 for a PTW type M23344 chamber, which is very different from the value of unity given in the IPEMB code. However, the values of k(ch) determined in this paper agree with those given in the DIN standard within experimental uncertainty. The authors recommend that the very low-energy section of the IPEMB code be amended to include the most up-to-date values of k(ch).
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for the evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance discriminant showed maximum differentiation between crosswind conditions. The visual response discriminant maximized differences between fixed- vs. moving-base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations agreed with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
OPTIMAL ELECTRON ENERGIES FOR DRIVING CHROMOSPHERIC EVAPORATION IN SOLAR FLARES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reep, J. W.; Bradshaw, S. J.; Alexander, D., E-mail: jr665@cam.ac.uk, E-mail: stephen.bradshaw@rice.edu, E-mail: dalex@rice.edu
2015-08-01
In the standard model of solar flares, energy deposition by a beam of electrons drives strong chromospheric evaporation, leading to a significantly denser corona and much brighter emission across the spectrum. Chromospheric evaporation was examined in great detail by Fisher et al., who described a distinction between two different regimes, termed explosive and gentle evaporation. In this work, we examine the importance of electron energy and stopping depths on the two regimes and on the atmospheric response. We find that with explosive evaporation, the atmospheric response does not depend strongly on electron energy. In the case of gentle evaporation, lower-energy electrons are significantly more efficient at heating the atmosphere and driving up-flows sooner than higher-energy electrons. We also find that the threshold between explosive and gentle evaporation is not fixed at a given beam energy flux, but also depends strongly on the electron energy and duration of heating. Further, at low electron energies, a much weaker beam flux is required to drive explosive evaporation.
Energy Levels and Oscillator Strengths for Ne-like Iron Ions
NASA Astrophysics Data System (ADS)
Zhong, J. Y.; Zhang, J.; Zhao, G.; Lu, X.
2004-02-01
Energy levels and oscillator strengths among the 27 fine-structure levels belonging to the (1s^2 2s^2)2p^6, 2p^5 3s, 2p^5 3p and 2p^5 3d configurations of the neon-like iron ion have been calculated using three atomic structure codes: RCN/RCG, AUTOSTRUCTURE (AS) and GRASP. Relativistic corrections to the wave functions are taken into account in the RCN/RCG calculations. The results agree well with experimental and theoretical data wherever available. Finally, the accuracy of the three codes is analyzed.
Simulations to study the static polarization limit for RHIC lattice
NASA Astrophysics Data System (ADS)
Duan, Zhe; Qin, Qing
2016-01-01
A study of spin dynamics based on simulations with the Polymorphic Tracking Code (PTC) is reported, exploring the dependence of the static polarization limit on various beam parameters and lattice settings for a practical RHIC lattice. It is shown that the behavior of the static polarization limit is dominated by the vertical motion, while the effect of beam-beam interaction is small. In addition, the “nonresonant beam polarization” observed and studied in the lattice-independent model is also observed in this lattice-dependent model. This simulation study therefore gives insight into the polarization evolution at fixed beam energies that is not available from simple spin tracking. Supported by the U.S. Department of Energy (DE-AC02-98CH10886), the Hundred-Talent Program (Chinese Academy of Sciences), and the National Natural Science Foundation of China (11105164).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barahona, B.; Jonkman, J.; Damiani, R.
2014-12-01
Coupled dynamic analysis has an important role in the design of offshore wind turbines because the systems are subject to complex operating conditions from the combined action of waves and wind. The aero-hydro-servo-elastic tool FAST v8 is framed in a novel modularization scheme that facilitates such analysis. Here, we present the verification of new capabilities of FAST v8 to model fixed-bottom offshore wind turbines. We analyze a series of load cases with both wind and wave loads and compare the results against those from the previous international code comparison projects: the International Energy Agency (IEA) Wind Task 23 Subtask 2 Offshore Code Comparison Collaboration (OC3) and the IEA Wind Task 30 OC3 Continued (OC4) projects. The verification is performed using the NREL 5-MW reference turbine supported by monopile, tripod, and jacket substructures. The substructure structural-dynamics models are built within the new SubDyn module of FAST v8, which uses a linear finite-element beam model with Craig-Bampton dynamic system reduction. This allows the modal properties of the substructure to be synthesized and coupled to hydrodynamic loads and tower dynamics. The hydrodynamic loads are calculated using a new strip-theory approach for multimember substructures in the updated HydroDyn module of FAST v8. These modules are linked to the rest of FAST through the new coupling scheme involving mapping between module-independent spatial discretizations and a numerically rigorous implicit solver. The results show that the new structural dynamics, hydrodynamics, and coupled solutions compare well to the results from the previous code comparison projects.
10 CFR 603.305 - Use of a fixed-support TIA.
Code of Federal Regulations, 2010 CFR
2010-01-01
Expenditure-Based and Fixed-Support Technology Investment Agreements, § 603.305 Use of a fixed-support TIA: The contracting officer may use a fixed-support TIA if: (a) The agreement is to support or stimulate RD&D with...
Simulation of Ionospheric Response During Solar Eclipse Events
NASA Astrophysics Data System (ADS)
Kordella, L.; Earle, G. D.; Huba, J.
2016-12-01
Total solar eclipses are rare, short-duration events that present interesting case studies of ionospheric behavior, because the structure of the ionosphere is determined and stabilized by varying energies of solar radiation (Lyman alpha, X-ray, UV, etc.). The ionospheric response to eclipse events has been a source of scientific interest studied in various capacities over the past 50 years. Unlike the daily terminator crossings, eclipses cause highly localized, steep gradients of ionization efficiency due to their comparatively small solar zenith angle. However, the corona remains present even at full obscuration, meaning that the energy reduction never falls to the levels seen at night. Previous eclipse studies performed by research groups in the US, UK, China and Russia have shown a range of effects, some counter-intuitive and others contradictory. In the shadowed region of an eclipse (i.e. the umbra) it is logical to assume a reduction in ionization rates correlating with the reduction of incident solar radiation. Results have shown that even this straightforward hypothesis may not be true; effects on plasma distribution, motion and temperature are more appreciable than might be expected. Recent advancements in ionospheric simulation codes present the opportunity to investigate the relationship between geophysical conditions, geomagnetic location, and the resulting eclipse-time ionosphere. Here we present computational simulation results using the Naval Research Lab (NRL) developed ionospheric modeling codes Sami2 and Sami3 (Sami2 is Another Model of the Ionosphere), modified with spatio-temporal photoionization attenuation functions derived from theory and empirical data.
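The photoionization attenuation idea above can be illustrated by a toy scale factor that reduces the ionizing flux as the solar disk is obscured while keeping a residual coronal contribution. This is a sketch only; the 10% coronal floor is an illustrative assumption, not a value from the study, and the real codes use wavelength-dependent attenuation functions:

```python
def eclipse_attenuation(obscuration, coronal_fraction=0.10):
    """Scale factor applied to a photoionization production rate.

    obscuration      : fraction of the solar disk covered (0..1)
    coronal_fraction : residual ionizing flux at totality attributed to
                       the corona (illustrative value, not from the study)
    """
    disk = 1.0 - obscuration                  # unobscured-disk contribution
    return disk + obscuration * coronal_fraction

print(eclipse_attenuation(0.0))   # no eclipse: factor 1.0
print(eclipse_attenuation(1.0))   # totality: residual coronal floor 0.1
```

Applying such a factor to the solar-zenith-angle-dependent production term is one simple way a model like Sami2/Sami3 could be driven through an eclipse, which is the spirit of the modification described.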
Towards self-correcting quantum memories
NASA Astrophysics Data System (ADS)
Michnicki, Kamil
This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.
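The cellular-automaton error correction described above can be illustrated in a much simpler classical setting: a majority-vote rule on a repetition code. This sketch is an analogue of the local-update idea only, not the toric-code automaton from the thesis:

```python
def majority_step(bits):
    """One synchronous local update: each cell becomes the majority of
    itself and its two neighbours (periodic boundary)."""
    n = len(bits)
    return [1 if bits[(i - 1) % n] + bits[i] + bits[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def correct(bits, steps):
    """Apply the local rule repeatedly, a stand-in for the decoder's
    update loop driven by syndrome measurements."""
    for _ in range(steps):
        bits = majority_step(bits)
    return bits

# Logical 0 encoded as the all-zero word; three isolated bit flips are
# erased by the local rule in a single sweep.
noisy = [0] * 21
for i in (3, 9, 15):
    noisy[i] = 1
print(sum(correct(noisy, 2)))   # → 0 (all flips removed)
```

Note that a pair of adjacent flips is stable under this particular rule, so large error domains are not shrunk; this loosely mirrors the thesis's observation that the automaton extends memory lifetimes without exhibiting a true threshold.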
Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.
Wilkinson, Karl; Skylaris, Chris-Kriton
2013-10-30
We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.
Characterization of the Gamma Response of a Cadmium Capture-gated Neutron Spectrometer
NASA Astrophysics Data System (ADS)
Hogan, Nathaniel; Rees, Lawrence; Czirr, Bart; Bastola, Suraj
2010-10-01
We have studied the gamma response of a newly developed capture-gated neutron spectrometer. Such spectrometers detect a dual signal from incoming neutrons, allowing for differentiation from other particles, such as gamma rays. The neutron provides a primary light pulse in either plastic or liquid scintillator through neutron-proton collisions. A capture material then delivers a second pulse as the moderated neutron captures in the intended material, which then de-excites with the release of gamma energy. The presented spectrometer alternates one-centimeter-thick plastic scintillators with sheets of cadmium inserted in between for neutron capture. Neutron capture in cadmium releases ~9 MeV of gamma energy. To verify that an interaction was caused by a neutron, the response functions of both events must be well known. Because many capture-gated neutron spectrometers already exist, the proton recoil pulse has been studied previously, but the capture pulse is unique to each spectrometer and must be measured. Experimental results agree with Monte Carlo simulations; both suggest that the optics and geometry of the spectrometer play a large role in its efficiency. The results are promising for the efficiency of the spectrometer.
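Capture gating as described pairs a prompt recoil pulse with a delayed capture pulse falling inside a time window. A schematic sketch; the window bounds and pulse times below are made-up values, not the spectrometer's actual gate:

```python
def find_capture_gated_events(pulses, min_dt=0.5, max_dt=20.0):
    """Pair each prompt pulse with a later capture pulse.

    pulses : time-sorted list of (time_us, amplitude) tuples
    Returns index pairs (i, j) whose separation lies in the capture-time
    window [min_dt, max_dt] microseconds (illustrative window).
    """
    pairs = []
    for i, (t1, _) in enumerate(pulses):
        for j in range(i + 1, len(pulses)):
            dt = pulses[j][0] - t1
            if dt > max_dt:
                break                 # later pulses are even further away
            if dt >= min_dt:
                pairs.append((i, j))
                break                 # gate closes after first candidate
    return pairs

# Three pulses: a recoil at 0 us, its capture at 8 us, and an
# uncorrelated gamma at 100 us.
events = [(0.0, 1.2), (8.0, 0.4), (100.0, 0.9)]
print(find_capture_gated_events(events))   # → [(0, 1)]
```

Only the paired events would then be histogrammed by recoil amplitude, which is how the dual-signal requirement rejects uncorrelated gamma backgrounds.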
Evaluation of a photon counting Medipix3RX CZT spectral x-ray detector
Jorgensen, Steven M.; Vercnocke, Andrew J.; Rundle, David S.; Butler, Philip H.; McCollough, Cynthia H.; Ritman, Erik L.
2016-01-01
We assessed the performance of a cadmium zinc telluride (CZT)-based Medipix3RX x-ray detector as a candidate for micro-computed tomography (micro-CT) imaging. This technology was developed at CERN for the Large Hadron Collider. It features an array of 128 by 128, 110 micrometer square pixels, each with eight simultaneous threshold counters, five of which utilize real-time charge summing, significantly reducing the charge sharing between contiguous pixels. Pixel response curves were created by imaging a range of x-ray intensities by varying x-ray tube current and by varying the exposure time with fixed x-ray current. Photon energy-related assessments were made by flooding the detector with the tin foil filtered emission of an I-125 radioisotope brachytherapy seed and sweeping the energy threshold of each of the four charge-summed counters of each pixel in 1 keV steps. Long term stability assessments were made by repeating exposures over the course of one hour. The high properly-functioning pixel yield (99%), long term stability (linear regression of whole-chip response over one hour of acquisitions: y = −0.0038x + 2284; standard deviation: 3.7 counts) and energy resolution (2.5 keV FWHM (single pixel), 3.7 keV FWHM across the full image) make this device suitable for spectral micro-CT. The charge summing performance effectively reduced the measurement corruption caused by charge sharing which, when unaccounted for, shifts the photon energy assignment to lower energies, degrading both count and energy accuracy. Effective charge summing greatly improves the potential for calibrated, energy-specific material decomposition and K edge difference imaging approaches. PMID:27795606
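Because each Medipix counter records photons above its threshold, the 1 keV threshold sweep described above yields an integral spectrum; the differential spectrum follows by differencing adjacent threshold steps. A minimal sketch with synthetic counts (not measured data):

```python
def differential_spectrum(integral_counts):
    """Difference adjacent counts-above-threshold values from an
    ascending threshold sweep to recover a per-bin spectrum."""
    return [integral_counts[k] - integral_counts[k + 1]
            for k in range(len(integral_counts) - 1)]

# Synthetic sweep in 1 keV steps around a ~27 keV I-125 emission line
# (made-up counts for illustration):
thresholds_kev = [24, 25, 26, 27, 28, 29]
counts_above = [1000, 990, 900, 300, 40, 30]
print(differential_spectrum(counts_above))   # → [10, 90, 600, 260, 10]
```

The large 26-27 keV bin in this toy sweep is the line; charge sharing, when uncorrected, smears such counts toward lower thresholds, which is the corruption the charge-summing counters are designed to suppress.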
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.
2013-12-01
A goal of the Source Physics Experiments (SPE) is to develop explosion source models expanding monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects of signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, an efficient parallel implementation of a high-order finite-element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping the yield fixed, we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them in regard to the impact of free-surface interactions and rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL to take into account the damage. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.
Zhang, Yao; Li, Yan; Xie, Jiang-Bo
2016-01-01
The response of plants to drought is controlled by the interaction between physiological regulation and morphological adjustment. Although recent studies have highlighted the long-term morphological acclimatization of plants to drought, there is still debate on how plant biomass allocation patterns respond to drought. In this study, we performed a greenhouse experiment with first-year seedlings of a desert shrub in control, drought and re-water treatments, to examine their physiological and morphological traits during drought and subsequent recovery. We found that (i) biomass was preferentially allocated to roots along a fixed allometric trajectory throughout the first year of development, irrespective of the variation in water availability; and (ii) this fixed biomass allocation pattern benefited the post-drought recovery. These results suggest that, in a stressful environment, natural selection has favoured a fixed biomass allocation pattern rather than plastic responses to environmental variation. The fixed ‘preferential allocation to root’ biomass suggests that roots may play a critical role in determining the fate of this desert shrub during prolonged drought. As the major organ for resource acquisition and storage, how the root system functions during drought requires further investigation. PMID:27073036
A novel neutron energy spectrum unfolding code using particle swarm optimization
NASA Astrophysics Data System (ADS)
Shahabinejad, H.; Sohrabpour, M.
2017-07-01
A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with those of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate. The SDPSO results match well with those of the TGASU code for both underdetermined and overdetermined problems. In addition, the SDPSO code is nearly two times faster than the TGASU code.
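A basic particle swarm applied to the unfolding problem, minimizing the residual between the response matrix applied to a candidate spectrum and the measured distribution, can be sketched as follows. The swarm coefficients and the toy 3-bin response matrix are illustrative assumptions, not the SDPSO code's actual settings:

```python
import numpy as np

def pso_unfold(response, measured, n_particles=40, n_iter=300, seed=1):
    """Unfold a non-negative spectrum x from measured ~= response @ x
    using a basic global-best particle swarm (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    n = response.shape[1]
    scale = measured.sum()                      # rough upper bound on bins
    pos = rng.uniform(0.0, scale, (n_particles, n))   # candidate spectra
    vel = np.zeros_like(pos)

    def cost(x):
        return np.sum((response @ x - measured) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                   # common PSO coefficients
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, None)     # keep fluence non-negative
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Toy 3-bin problem with a known spectrum (made-up response matrix):
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.2, 0.9]])
x_true = np.array([5.0, 2.0, 1.0])
x_hat = pso_unfold(R, R @ x_true)
print(np.round(x_hat, 2))
```

A real unfolding problem is ill-posed (many more energy bins than measured channels), which is why codes in this family add regularization or reference spectra; this toy problem is deliberately well-determined.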
Panda, Bandita; Basu, Bhakti; Acharya, Celin; Rajaram, Hema; Apte, Shree Kumar
2017-01-01
Two strains of the nitrogen-fixing cyanobacterium Anabaena, native to Indian paddy fields, displayed differential sensitivity to exposure to uranyl carbonate at neutral pH. Anabaena sp. strain PCC 7120 and Anabaena sp. strain L-31 displayed a 50% reduction in survival (LD50 dose) following 3 h exposure to 75 μM and 200 μM uranyl carbonate, respectively. Uranium-responsive proteome alterations were visualized by 2D gel electrophoresis, followed by protein identification by MALDI-ToF mass spectrometry. The two strains displayed significant differences in levels of proteins associated with photosynthesis, carbon metabolism, and oxidative stress alleviation, commensurate with their uranium tolerance. The higher uranium tolerance of Anabaena sp. strain L-31 could be attributed to sustained photosynthesis and carbon metabolism and superior oxidative stress defense, as compared to the uranium-sensitive Anabaena sp. strain PCC 7120. Uranium-responsive proteome modulations in two nitrogen-fixing strains of Anabaena, native to Indian paddy fields, revealed that rapid adaptation to better oxidative stress management, and maintenance of metabolic and energy homeostasis, underlies the superior uranium tolerance of Anabaena sp. strain L-31 compared to Anabaena sp. strain PCC 7120. Copyright © 2016 Elsevier B.V. All rights reserved.
Chaos in a restricted problem of rotation of a rigid body with a fixed point
NASA Astrophysics Data System (ADS)
Borisov, A. V.; Kilin, A. A.; Mamaev, I. S.
2008-06-01
In this paper, we consider the transition to chaos in the phase portrait of a restricted problem of rotation of a rigid body with a fixed point. Two interrelated mechanisms responsible for chaotization are indicated: (1) the growth of the homoclinic structure and (2) the development of cascades of period doubling bifurcations. On the zero level of the area integral, an adiabatic behavior of the system (as the energy tends to zero) is noted. Meander tori induced by the break of the torsion property of the mapping are found.
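The period-doubling cascade invoked as the second chaotization mechanism is the same route to chaos exhibited by the logistic map, the standard textbook example. This sketch illustrates the generic mechanism only, not the rigid-body system itself:

```python
def logistic_orbit(r, x0=0.2, transient=500, keep=8):
    """Iterate x -> r*x*(1-x), discard the transient, and return the
    distinct attractor values (rounded) to expose the orbit's period."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        orbit.append(round(x, 4))
        x = r * x * (1 - x)
    return sorted(set(orbit))

# Period doubles as the parameter r increases toward the chaotic regime:
print(len(logistic_orbit(2.8)))   # → 1  (stable fixed point)
print(len(logistic_orbit(3.2)))   # → 2  (after the first doubling)
print(len(logistic_orbit(3.5)))   # → 4  (after the second doubling)
```

Each bifurcation doubles the orbit's period, and the accumulation point of the cascade marks the onset of chaos, which is the qualitative behavior tracked in the rigid-body phase portrait as the homoclinic structure grows.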
State Dependency of Chemosensory Coding in the Gustatory Thalamus (VPMpc) of Alert Rats
Liu, Haixin
2015-01-01
The parvicellular portion of the ventroposteromedial nucleus (VPMpc) is the part of the thalamus that processes gustatory information. Anatomical evidence shows that the VPMpc receives ascending gustatory inputs from the parabrachial nucleus (PbN) in the brainstem and sends projections to the gustatory cortex (GC). Although taste processing in the PbN and GC has been the subject of intense investigation in behaving rodents, much less is known about how VPMpc neurons encode gustatory information. Here we present results from single-unit recordings in the VPMpc of alert rats receiving multiple tastants. Thalamic neurons respond to taste with time-varying modulations of firing rates, consistent with those observed in GC and PbN. These responses encode taste quality as well as palatability. Comparing responses to tastants either passively delivered or self-administered after a cue unveiled the effects of general expectation on taste processing in the VPMpc. General expectation improved taste coding by modulating response dynamics and the ability of single neurons to encode multiple tastants. Our results demonstrate that the time course of taste coding, as well as the ability of single neurons to encode multiple qualities, is not fixed but can be altered by the state of the animal. Together, the data presented here provide the first demonstration that taste coding in the VPMpc is dynamic and state-dependent. SIGNIFICANCE STATEMENT Over the past years, a great deal of attention has been devoted to understanding taste coding in the brainstem and cortex of alert rodents. Thanks to this research, we now know that taste coding is dynamic, distributed, and context-dependent. However, virtually nothing is known about how the gustatory thalamus (VPMpc) processes gustatory information in behaving rats. This manuscript investigates taste processing in the VPMpc of behaving rats.
Our results show that thalamic neurons encode taste and palatability with time-varying patterns of activity and that thalamic coding of taste is modulated by general expectation. Our data will appeal not only to researchers interested in taste, but also to a broader audience of sensory and systems neuroscientists interested in the thalamocortical system. PMID:26609147
Field Testing of Compartmentalization Methods for Multifamily Construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, K.; Lstiburek, J. W.
2015-03-01
The 2012 International Energy Conservation Code (IECC) has an airtightness requirement of 3 air changes per hour at 50 Pascals test pressure (3 ACH50) for single-family and multifamily construction (in climate zones 3–8). The Leadership in Energy & Environmental Design certification program and ASHRAE Standard 189 have comparable compartmentalization requirements. ASHRAE Standard 62.2 will soon be responsible for all multifamily ventilation requirements (low rise and high rise); it has an exceptionally stringent compartmentalization requirement. These code and program requirements are driving the need for easier and more effective methods of compartmentalization in multifamily buildings.
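The 3 ACH50 threshold ties a blower-door airflow measurement to the unit volume. A minimal sketch of the conversion, with hypothetical apartment numbers:

```python
def ach50(cfm50: float, volume_ft3: float) -> float:
    """Air changes per hour at 50 Pa: airflow (ft^3/min) times 60 min/h, over volume (ft^3)."""
    return cfm50 * 60.0 / volume_ft3

# Hypothetical 8,000 ft^3 apartment measuring 350 CFM at 50 Pa test pressure:
rate = ach50(350.0, 8000.0)
print(round(rate, 2))  # 2.62 -> below the 3 ACH50 code limit
```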
A neutron spectrum unfolding computer code based on artificial neural networks
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2014-02-01
The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to irradiate the spheres sequentially, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on artificial intelligence, mainly artificial neural networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural network technology is presented. The code, called NSDann (Neutron Spectrometry and Dosimetry with Artificial Neural networks), was designed with a graphical interface and is easy, friendly, and intuitive to use. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature is that, as input data for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes.
This code generates a full report with all information of the unfolding in HTML format. The NSDann unfolding code is freely available upon request to the authors.
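As a rough illustration of the unfolding step (not the NSDann network itself, whose trained architecture and weights are not published here), a feed-forward pass mapping seven Bonner-sphere count rates to a 60-bin spectrum might look like the following; the layer sizes, activations, and random weights are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_unfold(counts, W1, b1, W2, b2):
    """Forward pass of a small feed-forward net: 7 sphere count rates -> 60-bin spectrum.
    Sigmoid hidden layer, softplus output to keep fluence non-negative (hypothetical choices)."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ counts + b1)))   # hidden layer activation
    z = W2 @ h + b2
    return np.log1p(np.exp(z))                      # softplus: strictly positive output

# Hypothetical shapes: 7 inputs, 20 hidden units, 60 energy bins; random (untrained) weights
W1, b1 = rng.normal(size=(20, 7)), np.zeros(20)
W2, b2 = rng.normal(size=(60, 20)), np.zeros(60)
spectrum = mlp_unfold(rng.uniform(size=7), W1, b1, W2, b2)
print(spectrum.shape)  # (60,)
```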
NASA Astrophysics Data System (ADS)
Toma, G.; Apel, W. D.; Arteaga, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.
2010-11-01
Previous EAS investigations have shown that, for a fixed primary energy, the charged particle density becomes independent of the primary mass at certain (fixed) distances from the shower core. This feature can be used as an estimator for the primary energy. We present results on the reconstruction of the primary energy spectrum of cosmic rays from the experimentally recorded S(500) observable (the density of charged particles at 500 m distance from the shower core) using the KASCADE-Grande detector array. The KASCADE-Grande experiment is hosted by the Karlsruhe Institute of Technology-Campus North, Karlsruhe, Germany, and operated by an international collaboration. The constant intensity cut (CIC) method is applied to evaluate, and correct for, the attenuation of the S(500) observable with zenith angle. A calibration of S(500) values against the primary energy has been worked out by simulations and was applied to the data to obtain the primary energy spectrum (in the energy range log10[E0/GeV] ∈ [7.5, 9]). The systematic uncertainties induced by different sources are considered. In addition, a correction based on a response matrix is applied to account for the effects of shower-to-shower fluctuations on the spectral index of the reconstructed energy spectrum.
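A toy sketch of the constant intensity cut idea: in each zenith-angle bin, the S(500) value at a fixed integral intensity (i.e., a fixed flux quantile) corresponds to the same primary energy, so the ratio of these cut values across bins estimates the attenuation. The data, quantile choice, and scalings below are illustrative, not KASCADE-Grande values:

```python
import numpy as np

def cic_attenuation(s500_by_bin, intensity_quantile=0.99):
    """Constant intensity cut: the S(500) value at a fixed flux quantile in each
    zenith-angle bin tracks the same primary energy; ratios to a reference bin
    give the attenuation correction. The quantile choice is illustrative."""
    cuts = np.array([np.quantile(s, intensity_quantile) for s in s500_by_bin])
    return cuts / cuts[0]  # attenuation relative to the most vertical bin

rng = np.random.default_rng(1)
# Toy data: power-law-like S(500) samples, attenuated more in steeper bins
bins = [rng.pareto(2.0, 50_000) * scale for scale in (1.0, 0.8, 0.6)]
print(np.round(cic_attenuation(bins), 2))
```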
NASA Astrophysics Data System (ADS)
Schimeczek, C.; Engel, D.; Wunner, G.
2012-07-01
Our previously published code for calculating energies and bound-bound transitions of medium-Z elements at neutron star magnetic field strengths [D. Engel, M. Klews, G. Wunner, Comput. Phys. Comm. 180 (2009) 302-311] was based on the adiabatic approximation. It assumes a complete decoupling of the (fast) gyration of the electrons under the action of the magnetic field and the (slow) bound motion along the field under the action of the Coulomb forces. For the single-particle orbitals this implied that each is a product of a Landau state and an (unknown) longitudinal wave function whose B-spline coefficients were determined self-consistently by solving the Hartree-Fock equations for the many-electron problem on a finite-element grid. In the present code we go beyond the adiabatic approximation, by allowing the transverse part of each orbital to be a superposition of Landau states, while assuming that the longitudinal part can be approximated by the same wave function in each Landau level. Inserting this ansatz into the energy variational principle leads to a system of coupled equations in which the B-spline coefficients depend on the weights of the individual Landau states, and vice versa, and which therefore has to be solved in a doubly self-consistent manner. The extended ansatz takes into account the back-reaction of the Coulomb motion of the electrons along the field direction on their motion in the plane perpendicular to the field, an effect which cannot be captured by the adiabatic approximation. The new code allows for the inclusion of up to 8 Landau levels. This reduces the relative error of energy values as compared to the adiabatic approximation results by typically a factor of three (1/3 of the original error), and yields accurate results also in regions of lower neutron star magnetic field strengths where the adiabatic approximation fails. 
Further improvements in the code are a more sophisticated choice of the initial wave functions, which takes into account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors of up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].
New version program summary
Program title: HFFER II
Catalogue identifier: AECC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 55 130
No. of bytes in distributed program, including test data, etc.: 293 700
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Cluster of 1-13 HP Compaq dc5750
Operating system: Linux
Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
RAM: 1 GByte per node
Classification: 2.1
External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
Catalogue identifier of previous version: AECC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
Does the new version supersede the previous version?: Yes
Nature of problem: Quantitative modelling of features observed in the X-ray spectra of isolated magnetic neutron stars is hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths.
Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy, in a routine way, to create such databases.
Solution method: The Slater determinants of the atomic wave functions are constructed from single-particle orbitals ψ_i which are products of a wave function in the z direction (the direction of the magnetic field) and an expansion of the wave function perpendicular to the magnetic field in terms of Landau states, ψ_i(ρ, φ, z) = P_i(z) Σ_{n=0}^{N_L} t_{in} φ_{n,i}(ρ, φ). The t_{in} are expansion coefficients, and the expansion is cut off at a maximum Landau level quantum number n = N_L. In the previous version of the code only the lowest Landau level was included (N_L = 0); in the new version N_L can take values of up to 7. As in the previous version of the code, the longitudinal wave functions are expanded in terms of sixth-order B-splines on finite elements on the z axis, with a combination of equidistant and quadratically widening element borders. Both the B-spline expansion coefficients and the Landau weights t_{in} of all orbitals have to be determined in a doubly self-consistent way: for a given set of Landau weights t_{in}, the system of linear equations for the B-spline expansion coefficients, which is equivalent to the Hartree-Fock equations for the longitudinal wave functions, is solved numerically. In the second step, for frozen B-spline coefficients, new Landau weights are determined by minimizing the total energy with respect to the Landau expansion coefficients. Both steps require solving non-linear eigenvalue problems of Roothaan type. The procedure is repeated until convergence of both the B-spline coefficients and the Landau weights is achieved.
Reasons for new version: The former version of the code was restricted to the adiabatic approximation, which assumes the quantum dynamics of the electrons in the plane perpendicular to the magnetic field to be fixed in the lowest Landau level, n = 0. This approximation is valid only if the magnetic field strength is large compared to the reference magnetic field B_Z for a nuclear charge Z, B_Z = Z^2 × 4.70108 × 10^5 T.
Summary of revisions: In the new version, the transverse parts of the orbitals are expanded in terms of Landau states up to n = 7, and the expansion coefficients are determined, together with the longitudinal wave functions, in a doubly self-consistent way. Thus the back-reaction of the quantum dynamics along the magnetic field direction on the quantum dynamics in the plane perpendicular to it is taken into account. The new ansatz not only increases the accuracy of the results for energy values and transition strengths obtained so far, but also allows their calculation for magnetic field strengths down to B ≳ B_Z, where the adiabatic approximation fails.
Restrictions: Intense magnetic field strengths are required, since the expansion of the transverse single-particle wave functions using 8 Landau levels will no longer produce accurate results if the scaled magnetic field strength parameter β_Z = B/B_Z becomes much smaller than unity.
Unusual features: A huge program speed-up is achieved by making use of pre-calculated binary files. These can be calculated with additional programs provided with this package.
Running time: 1-30 min.
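The doubly self-consistent iteration described in the solution method can be sketched generically: alternate the two coupled solves until both parameter sets are stationary. The update functions below are toy stand-ins for illustration, not the Hartree-Fock equations:

```python
import numpy as np

def doubly_self_consistent(update_longitudinal, update_landau, p0, t0,
                           tol=1e-10, max_iter=500):
    """Alternate two coupled solves until BOTH parameter sets stop changing,
    mirroring the B-spline-coefficient / Landau-weight iteration described above
    (the update callables here are placeholders, not the physical equations)."""
    p, t = p0, t0
    for _ in range(max_iter):
        p_new = update_longitudinal(t)   # coefficients for fixed Landau weights
        t_new = update_landau(p_new)     # Landau weights for frozen coefficients
        if np.abs(p_new - p).max() < tol and np.abs(t_new - t).max() < tol:
            return p_new, t_new
        p, t = p_new, t_new
    raise RuntimeError("no double self-consistency within max_iter")

# Toy coupled problem with contraction updates p <- t/2, t <- p/3; fixed point is 0
p, t = doubly_self_consistent(lambda t: t / 2.0, lambda p: p / 3.0,
                              np.array([1.0]), np.array([1.0]))
print(p, t)  # both converge toward 0
```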
Impact of topology in foliated quantum Einstein gravity.
Houthoff, W B; Kurov, A; Saueressig, F
2017-01-01
We use a functional renormalization group equation tailored to the Arnowitt-Deser-Misner formulation of gravity to study the scale dependence of Newton's coupling and the cosmological constant on a background spacetime with topology [Formula: see text]. The resulting beta functions possess a non-trivial renormalization group fixed point, which may provide the high-energy completion of the theory through the asymptotic safety mechanism. The fixed point is robust with respect to changing the parametrization of the metric fluctuations and regulator scheme. The phase diagrams show that this fixed point is connected to a classical regime through a crossover. In addition the flow may exhibit a regime of "gravitational instability", modifying the theory in the deep infrared. Our work complements earlier studies of the gravitational renormalization group flow on a background topology [Formula: see text] (Biemans et al. Phys Rev D 95:086013, 2017, Biemans et al. arXiv:1702.06539, 2017) and establishes that the flow is essentially independent of the background topology.
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
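A minimal Monte Carlo sketch in the spirit of such testing, with toy parameters and no decoder (it only counts raw corruption events; the actual program's error-correction step would repair many of them):

```python
import random

def random_codeword(n, k, rng):
    """A block of length n with exactly k 'on' bits, stored as the set of on positions."""
    return frozenset(rng.sample(range(n), k))

def channel(word, n, flip_p, rng):
    """Binary symmetric channel: each of the n bit positions flips with probability flip_p."""
    return frozenset(i for i in range(n)
                     if (i in word) ^ (rng.random() < flip_p))

def word_error_rate(n=32, k=4, flip_p=0.01, trials=5000, seed=7):
    """Monte Carlo fraction of codewords corrupted in transit."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        sent = random_codeword(n, k, rng)
        if channel(sent, n, flip_p, rng) != sent:
            errors += 1
    return errors / trials

print(word_error_rate())  # roughly 1 - (1 - 0.01)**32, i.e. about 0.27
```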
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdanov, O. V., E-mail: bov@tpu.ru; Fiks, E. I.; Pivovarov, Yu. L.
2012-09-15
Numerical methods are used to study the dependence of the structure and width of the angular distribution of Vavilov-Cherenkov radiation at a fixed wavelength in the vicinity of the Cherenkov cone on the radiator parameters (thickness and refractive index), as well as on the parameters of the relativistic heavy ion beam (charge and initial energy). The deceleration of relativistic heavy ions in the radiator, which decreases the velocity of the ions, modifies the condition of constructive interference of the waves emitted from various segments of the trajectory; as a result, a complex distribution of Vavilov-Cherenkov radiation appears. The main quantity is the stopping power of a thin layer of the radiator (the average loss of the ion energy), which is calculated by the Bethe-Bloch formula and using the SRIM code package. A simple formula is obtained to estimate the angular distribution width of Cherenkov radiation (at a fixed wavelength) from relativistic heavy ions, taking into account the deceleration in the radiator. The measurement of this width can provide direct information on the charge of the ion that passes through the radiator, which extends the potentialities of Cherenkov detectors. The isotopic effect (the dependence of the angular distribution of Vavilov-Cherenkov radiation on the ion mass) is also considered.
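The underlying geometry is the Cherenkov relation cos θ_c = 1/(nβ): as deceleration lowers β across the radiator, the emission angle drifts, broadening the observed ring. A sketch with illustrative numbers (not taken from the paper):

```python
import math

def cherenkov_angle(beta, n):
    """Cherenkov emission angle theta_c = arccos(1/(n*beta)) in degrees; requires n*beta > 1."""
    return math.degrees(math.acos(1.0 / (n * beta)))

# Illustrative values: fused-silica-like n = 1.46, ion entering the radiator at
# beta = 0.90 and, after deceleration, leaving at beta = 0.88
n = 1.46
width = cherenkov_angle(0.90, n) - cherenkov_angle(0.88, n)
print(round(width, 2))  # angular spread (degrees) caused by deceleration alone
```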
Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1997-01-01
This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burgers equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
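The central point, that any response of a discrete-time linear system follows from its impulse response, is superposition: y[n] = Σ_k h[k] x[n−k]. A minimal sketch with a hypothetical identified kernel:

```python
import numpy as np

def response(h, x):
    """Discrete-time output y[n] = sum_k h[k] x[n-k]: once the impulse response h
    is identified, the response to ANY input (step, harmonic, gust) is a convolution."""
    return np.convolve(h, x)[: len(x)]

h = np.array([1.0, 0.5, 0.25, 0.125])  # hypothetical identified impulse response
step = np.ones(6)                      # unit step input
print(response(h, step))               # indicial response: running sums of h
```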
Sato, Tatsuhiko; Furuta, Takuya; Hashimoto, Shintaro; Kuga, Naoya
2015-01-01
PHITS is a general purpose Monte Carlo particle transport simulation code developed through the collaboration of several institutes, mainly in Japan. It can analyze the motion of nearly all types of radiation over wide energy ranges in three-dimensional matter. It has been used for various applications, including medical physics. This paper reviews the recent improvements of the code, together with the biological dose estimation method developed on the basis of the microdosimetric function implemented in PHITS.
Integrating risk assessment and life cycle assessment: a case study of insulation.
Nishioka, Yurika; Levy, Jonathan I; Norris, Gregory A; Wilson, Andrew; Hofstetter, Patrick; Spengler, John D
2002-10-01
Increasing residential insulation can decrease energy consumption and provide public health benefits, given changes in emissions from fuel combustion, but also has cost implications and ancillary risks and benefits. Risk assessment or life cycle assessment can be used to calculate the net impacts and determine whether more stringent energy codes or other conservation policies would be warranted, but few analyses have combined the critical elements of both methodologies. In this article, we present the first portion of a combined analysis, with the goal of estimating the net public health impacts of increasing residential insulation for new housing from current practice to the latest International Energy Conservation Code (IECC 2000). We model state-by-state residential energy savings and evaluate particulate matter less than 2.5 μm in diameter (PM2.5), NOx, and SO2 emission reductions. We use past dispersion modeling results to estimate reductions in exposure, and we apply concentration-response functions for premature mortality and selected morbidity outcomes using current epidemiological knowledge of the effects of PM2.5 (primary and secondary). We find that this insulation policy shift would save 3 × 10^14 British thermal units (3 × 10^17 J) over a 10-year period, resulting in reduced emissions of 1,000 tons of PM2.5, 30,000 tons of NOx, and 40,000 tons of SO2. These emission reductions yield an estimated 60 fewer fatalities during this period, with the geographic distribution of health benefits differing from the distribution of energy savings because of differences in energy sources, population patterns, and meteorology. We discuss the methodology to be used to integrate life cycle calculations, which can ultimately yield estimates that can be compared with costs to determine the influence of external costs on benefit-cost calculations.
10 CFR 603.300 - Difference between an expenditure-based and a fixed-support TIA.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Difference between an expenditure-based and a fixed-support TIA. 603.300 Section 603.300 Energy DEPARTMENT OF ENERGY (CONTINUED) ASSISTANCE REGULATIONS... Agreements § 603.300 Difference between an expenditure-based and a fixed-support TIA. The contracting officer...
Evaluation and Testing of the ADVANTG Code on SNM Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Pacific Northwest National Laboratory (PNNL) has been tasked with evaluating the effectiveness of ORNL's new hybrid transport code, ADVANTG, on scenarios of interest to our NA-22 sponsor, specifically detection of diversion of special nuclear material (SNM). PNNL staff have determined that acquisition and installation of ADVANTG was relatively straightforward for a code in its phase of development, but probably not yet sufficient for mass distribution to the general user. PNNL staff also determined that, with little effort, ADVANTG generated weight windows that typically worked for the problems and produced results consistent with MCNP. With the slightly greater effort of choosing a finer mesh around detectors or sample reaction tally regions, the figure of merit (FOM) could be further improved in most cases. This does take some limited knowledge of deterministic transport methods. The FOM could also be increased by limiting the energy range for a tally to the energy region of greatest interest. It was then found that an MCNP run with the full energy range for the tally showed improved statistics in the region used for the ADVANTG run. The specific case of interest chosen by the sponsor is the CIPN project from Los Alamos National Laboratory (LANL), which is an active interrogation, non-destructive assay (NDA) technique to quantify the fissile content in a spent fuel assembly and is also sensitive to cases of material diversion. Unfortunately, weight windows for the CIPN problem cannot currently be properly generated with ADVANTG due to inadequate accommodations for source definition. ADVANTG requires that a fixed neutron source be defined within the problem and cannot account for neutron multiplication. As such, it is rendered useless in active interrogation scenarios. It is also interesting to note that this is a difficult problem to solve and that the automated weight windows generator in MCNP actually slowed down the problem.
Therefore, PNNL has determined that there is not an effective tool available for speeding up MCNP for problems such as the CIPN scenario. With regard to the benchmark scenarios, ADVANTG performed very well for most of the difficult, long-running, standard radiation detection scenarios. Specifically, run-time speedups were observed for spatially large scenarios, or those having significant shielding or scattering geometries. ADVANTG performed on par with existing codes for moderate-sized scenarios, or those with little to moderate shielding, or multiple paths to the detectors. ADVANTG ran slower than MCNP for very simple, spatially small cases with little to no shielding that run very quickly anyway. Lastly, ADVANTG could not solve problems that do not consist of fixed source-to-detector geometries. For example, it could not solve scenarios with multiple detectors or secondary particles, such as active interrogation, neutron-induced gamma, or fission neutrons.
NASA Astrophysics Data System (ADS)
Liu, Yuxin; Huang, Zhitong; Li, Wei; Ji, Yuefeng
2016-03-01
Various patterns of device-to-device (D2D) communication, from Bluetooth to Wi-Fi Direct, are emerging due to the increasing requirements of information sharing between mobile terminals. This paper presents an innovative pattern named device-to-device visible light communication (D2D-VLC) to alleviate the growing traffic problem. However, occlusion is a key difficulty in D2D-VLC. This paper proposes a game-theory-based solution in which best-response dynamics and best-response strategies are used to realize a mode-cooperative selection mechanism. This mechanism uses system capacity as the utility function to optimize system performance and selects the optimal communication mode for each active user from three candidate modes. Moreover, the simulation and experimental results show that the mechanism attains a significant improvement in effectiveness and energy saving compared with the cases where users communicate only via the fixed transceivers (light-emitting diode and photodiode) or only via D2D.
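Best-response dynamics of the kind invoked here can be sketched with a toy congestion game: each user repeatedly switches to the mode that maximizes its own rate until no one wants to move (a Nash equilibrium). The capacities and the equal-sharing rule below are illustrative stand-ins, not the paper's utility model:

```python
def best_response_dynamics(n_users, capacities, max_rounds=100):
    """Each user in turn switches to the mode maximising its own rate, where a mode
    of capacity C shared by m users yields C/m each. Stops once a full round passes
    with no switches, i.e. at a Nash equilibrium. Capacities are illustrative
    stand-ins for the fixed LED/PD link versus D2D link rates."""
    modes = [0] * n_users                      # start everyone on mode 0
    for _ in range(max_rounds):
        changed = False
        for u in range(n_users):
            counts = [modes.count(m) for m in range(len(capacities))]
            counts[modes[u]] -= 1              # user u leaves its current mode
            best = max(range(len(capacities)),
                       key=lambda m: capacities[m] / (counts[m] + 1))
            if best != modes[u]:
                modes[u], changed = best, True
        if not changed:
            return modes
    return modes

print(best_response_dynamics(4, [100.0, 60.0, 30.0]))  # an equilibrium mode assignment
```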
30 CFR 75.1107-16 - Inspection of fire suppression devices.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Systems” (NFPA No. 11A—1970). National Fire Code No. 13A “Care and Maintenance of Sprinkler Systems” (NFPA No. 13A—1971). National Fire Code No. 15 “Water Spray Fixed Systems for Fire Protection” (NFPA No. 15—1969). National Fire Code No. 17 “Dry Chemical Extinguishing Systems” (NFPA No. 17—1969). National Fire...
30 CFR 75.1107-16 - Inspection of fire suppression devices.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Systems” (NFPA No. 11A—1970). National Fire Code No. 13A “Care and Maintenance of Sprinkler Systems” (NFPA No. 13A—1971). National Fire Code No. 15 “Water Spray Fixed Systems for Fire Protection” (NFPA No. 15—1969). National Fire Code No. 17 “Dry Chemical Extinguishing Systems” (NFPA No. 17—1969). National Fire...
30 CFR 75.1107-16 - Inspection of fire suppression devices.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Systems” (NFPA No. 11A—1970). National Fire Code No. 13A “Care and Maintenance of Sprinkler Systems” (NFPA No. 13A—1971). National Fire Code No. 15 “Water Spray Fixed Systems for Fire Protection” (NFPA No. 15—1969). National Fire Code No. 17 “Dry Chemical Extinguishing Systems” (NFPA No. 17—1969). National Fire...
NASA Technical Reports Server (NTRS)
Tinker, Michael L.
1998-01-01
Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to the use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise, which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.
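The FRF-based correlation step can be illustrated on a single-degree-of-freedom receptance H(ω) = 1/(k − mω² + icω): perturbing the interface stiffness shifts the resonances (and, in multi-DOF systems, the antiresonances) toward the measured curve. All values below are hypothetical:

```python
import numpy as np

def frf(omega, m, c, k):
    """Receptance FRF of a 1-DOF system, H(w) = 1 / (k - m w^2 + i c w).
    In the correlation procedure described above, interface stiffness (and lumped
    mass) are the perturbation knobs tuned until this matches the measured FRF."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

omega = np.linspace(1.0, 100.0, 1000)
baseline = np.abs(frf(omega, m=2.0, c=0.5, k=5000.0))
stiffened = np.abs(frf(omega, m=2.0, c=0.5, k=6000.0))
# Resonance moves from sqrt(5000/2) ~ 50 rad/s to sqrt(6000/2) ~ 54.8 rad/s
print(omega[baseline.argmax()], omega[stiffened.argmax()])
```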
Prediction suppression and surprise enhancement in monkey inferotemporal cortex.
Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R
2017-07-01
Exposing monkeys, over the course of days and weeks, to pairs of images presented in fixed sequence, so that each leading image becomes a predictor for the corresponding trailing image, affects neuronal visual responsiveness in area TE. At the end of the training period, neurons respond relatively weakly to a trailing image when it appears in a trained sequence and, thus, confirms prediction, whereas they respond relatively strongly to the same image when it appears in an untrained sequence and, thus, violates prediction. This effect could arise from prediction suppression (reduced firing in response to the occurrence of a probable event) or surprise enhancement (elevated firing in response to the omission of a probable event). To identify its cause, we compared firing under the prediction-confirming and prediction-violating conditions to firing under a prediction-neutral condition. The results provide strong evidence for prediction suppression and limited evidence for surprise enhancement. NEW & NOTEWORTHY In predictive coding models of the visual system, neurons carry signed prediction error signals. We show here that monkey inferotemporal neurons exhibit prediction-modulated firing, as posited by these models, but that the signal is unsigned. The response to a prediction-confirming image is suppressed, and the response to a prediction-violating image may be enhanced. These results are better explained by a model in which the visual system emphasizes unpredicted events than by a predictive coding model. Copyright © 2017 the American Physiological Society.
Simulation of the neutron response matrix of an EJ309 liquid scintillator
NASA Astrophysics Data System (ADS)
Bai, Huaiyong; Wang, Zhimin; Zhang, Luyu; Jiang, Haoyu; Lu, Yi; Chen, Jinxiang; Zhang, Guohui
2018-04-01
The neutron response matrix is the basis for measuring the neutron energy spectrum through unfolding the pulse height spectrum detected with a liquid scintillator. Based on the light output of the EJ309 liquid scintillator and the related reaction cross sections, a Monte Carlo code is developed to obtain the neutron response matrix. The effects of the related reactions, the contributions of different numbers of neutron interactions and the wall effect of the recoil proton are discussed. With the obtained neutron response matrix and the GRAVEL iterative unfolding method, the neutron energy spectra of the 252Cf and the 241AmBe neutron sources are measured, and the results are respectively compared with the theoretical prediction of the 252Cf neutron energy spectrum and the previous results of the 241AmBe neutron energy spectra.
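The unfolding step pairs the response matrix R with an iterative multiplicative update. A minimal sketch of a GRAVEL-style update is below, simplified to unit channel weights (the full GRAVEL algorithm weights channels by their measurement variances); the 3-bin response matrix and spectra are toy values, not the EJ309 matrix.

```python
import numpy as np

# Toy response matrix R[i, j]: probability that a neutron in energy bin j
# produces a count in pulse-height channel i (illustrative values only).
R = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.5, 0.3],
              [0.1, 0.2, 0.6]])
true_spectrum = np.array([100.0, 50.0, 20.0])
measured = R @ true_spectrum              # noise-free pulse-height spectrum

def unfold(R, measured, n_iter=2000):
    """GRAVEL-style multiplicative unfolding (unit channel weights)."""
    N = np.ones(R.shape[1])               # flat starting guess, stays positive
    for _ in range(n_iter):
        folded = R @ N
        w = (R * N) / folded[:, None]     # fraction of channel i due to bin j
        corr = (w * np.log(measured / folded)[:, None]).sum(axis=0) / w.sum(axis=0)
        N = N * np.exp(corr)
    return N

N = unfold(R, measured)
```

The multiplicative form keeps the unfolded spectrum nonnegative at every iteration, which is the main practical appeal of this family of methods.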
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation, land vehicle location, etc., require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is reformed based on the two EWL combinations for positioning. Theoretical and empirical analyses are given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be more than 98% when using BDS medium baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We can conclude that a relatively high accuracy and high fixing rate can be achieved for the triple-frequency WL method with single-epoch observations, a significant advantage compared with the traditional carrier-smoothed code differential positioning method.
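The appeal of EWL/WL combinations comes from their long effective wavelengths, which make integer ambiguities easy to fix. The wavelengths follow directly from the carrier frequencies; the sketch below uses the BDS B1I/B2I/B3I frequencies and common textbook combinations (the abstract does not identify which specific combinations the authors used).

```python
C = 299_792_458.0                     # speed of light, m/s

# BDS B1I / B2I / B3I carrier frequencies, Hz
f1, f2, f3 = 1561.098e6, 1207.140e6, 1268.520e6

def combo_wavelength(i, j, k):
    """Wavelength of the carrier-phase combination i*B1 + j*B2 + k*B3."""
    return C / (i * f1 + j * f2 + k * f3)

ewl = combo_wavelength(0, -1, 1)      # extra-wide-lane, ~4.88 m
wl12 = combo_wavelength(1, -1, 0)     # wide-lane, ~0.85 m
wl13 = combo_wavelength(1, 0, -1)     # wide-lane, ~1.02 m
```

A ~4.9 m wavelength means an ambiguity error of one cycle costs meters, so single-epoch rounding is reliable even with decimeter-level code noise; the shorter WL wavelengths then refine the position.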
Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula
Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian
2017-01-01
The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions reflect either stimulus intensity or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas posterior insula encoded stimulus intensity. This novel functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
Face Coding Is Bilateral in the Female Brain
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-01-01
Background It is currently believed that face processing predominantly activates the right hemisphere in humans, but available literature is very inconsistent. Methodology/Principal Findings In this study, ERPs were recorded in 50 right-handed women and men in response to 390 faces (of different age and sex), and 130 technological objects. Results showed no sex difference in the amplitude of N170 to objects; a much larger face-specific response over the right hemisphere in men, and a bilateral response in women; a lack of face-age coding effect over the left hemisphere in men, with no differences in N170 to faces as a function of age; a significant bilateral face-age coding effect in women. Conclusions/Significance LORETA reconstruction showed a significant left and right asymmetry in the activation of the fusiform gyrus (BA19), in women and men, respectively. The present data reveal a lesser degree of lateralization of brain functions related to face coding in women than men. In this light, they may provide an explanation of the inconsistencies in the available literature concerning the asymmetric activity of left and right occipito-temporal cortices devoted to face perception during processing of face identity, structure, familiarity or affective content. PMID:20574528
Is scale-invariance in gauge-Yukawa systems compatible with the graviton?
NASA Astrophysics Data System (ADS)
Christiansen, Nicolai; Eichhorn, Astrid; Held, Aaron
2017-10-01
We explore whether perturbative interacting fixed points in matter systems can persist under the impact of quantum gravity. We first focus on semisimple gauge theories and show that the leading order gravity contribution evaluated within the functional Renormalization Group framework preserves the perturbative fixed-point structure in these models discovered in [J. K. Esbensen, T. A. Ryttov, and F. Sannino, Phys. Rev. D 93, 045009 (2016), 10.1103/PhysRevD.93.045009]. We highlight that the quantum-gravity contribution alters the scaling dimension of the gauge coupling, such that the system exhibits an effective dimensional reduction. We secondly explore the effect of metric fluctuations on asymptotically safe gauge-Yukawa systems which feature an asymptotically safe fixed point [D. F. Litim and F. Sannino, J. High Energy Phys. 12 (2014) 178, 10.1007/JHEP12(2014)178]. The same effective dimensional reduction that takes effect in pure gauge theories also impacts gauge-Yukawa systems. There, it appears to lead to a split of the degenerate free fixed point into an interacting infrared attractive fixed point and a partially ultraviolet attractive free fixed point. The quantum-gravity induced infrared fixed point moves towards the asymptotically safe fixed point of the matter system, and annihilates it at a critical value of the gravity coupling. Even after that fixed-point annihilation, graviton effects leave behind new partially interacting fixed points for the matter sector.
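The merge-and-disappear behavior described here is the generic square-root structure of fixed-point annihilation. A toy quadratic beta function (not the paper's actual RG equations) makes it concrete: as a parameter standing in for the gravity contribution is raised, two real zeros of the beta function approach each other, collide, and move off into the complex plane.

```python
from math import sqrt

# Toy beta function beta(g) = c - g + g**2, with c playing the role of the
# gravity coupling's contribution. Two real fixed points exist for c < 1/4;
# they merge at c = 1/4 and annihilate above it.
def fixed_points(c):
    disc = 1.0 - 4.0 * c
    if disc < 0.0:
        return ()                     # fixed points have annihilated
    r = sqrt(disc)
    return ((1.0 - r) / 2.0, (1.0 + r) / 2.0)

low, high = fixed_points(0.1)         # two fixed points below the critical c
gone = fixed_points(0.3)              # none above it
```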
NASA Astrophysics Data System (ADS)
Nelson, N.; Azmy, Y.; Gardner, R. P.; Mattingly, J.; Smith, R.; Worrall, L. G.; Dewji, S.
2017-11-01
Detector response functions (DRFs) are often used for inverse analysis. We compute the DRF of a sodium iodide (NaI) nuclear material holdup field detector using the code named g03 developed by the Center for Engineering Applications of Radioisotopes (CEAR) at NC State University. Three measurement campaigns were performed in order to validate the DRFs constructed by g03: on-axis detection of calibration sources, off-axis measurements of a highly enriched uranium (HEU) disk, and on-axis measurements of the HEU disk with steel plates inserted between the source and the detector to provide attenuation. Furthermore, this work quantifies the uncertainty of the Monte Carlo simulations used in and with g03, as well as the uncertainties associated with each semi-empirical model employed in the full DRF representation. Overall, for the calibration source measurements, the response computed by the DRF for the prediction of the full-energy peak region of responses was good, i.e., within two standard deviations of the experimental response. In contrast, the DRF tended to overestimate the Compton continuum by about 45-65% due to inadequate tuning of the electron range multiplier fit variable that empirically represents physics associated with electron transport that is not modeled explicitly in g03. For the HEU disk measurements, computed DRF responses tended to significantly underestimate (more than 20%) the secondary full-energy peaks (any peak of lower energy than the highest-energy peak computed) due to scattering in the detector collimator and aluminum can, which is not included in the g03 model. We ran a sufficiently large number of histories to ensure for all of the Monte Carlo simulations that the statistical uncertainties were lower than their experimental counterparts' Poisson uncertainties.
The uncertainties associated with least-squares fits to the experimental data tended to have parameter relative standard deviations lower than the peak channel relative standard deviation in most cases and good reduced chi-square values. The highest sources of uncertainty were identified as the energy calibration polynomial factor (due to limited source availability and NaI resolution) and the Ba-133 peak fit (only a very weak source was available), which were 20% and 10%, respectively.
Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article
NASA Technical Reports Server (NTRS)
Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.
2017-01-01
Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of E-STA to the MVF allowed high-quality fixed base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed base methodology using measured constraint shapes as virtual references and measured frequency response functions, and post-survey comparison between measured and analysis fixed base modal parameters.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in an object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
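The core Monte Carlo ingredient of such swarm codes is sampling exponential free-flight times between collisions. A deliberately crude sketch (constant collision frequency, velocity fully reset at each collision — not this code's cross-section-based sampling) already reproduces the analytic drift velocity of that toy model, which is how such samplers are typically benchmarked:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy swarm model: uniform field, constant collision frequency NU, and
# collisions that reset the velocity to zero. All values are illustrative.
E_FIELD = 100.0          # V/m
Q_M = 1.758820e11        # electron charge-to-mass ratio, C/kg
NU = 1.0e9               # collision frequency, 1/s
a = Q_M * E_FIELD        # acceleration along the field, m/s^2

n_flights = 200_000
taus = rng.exponential(1.0 / NU, n_flights)   # free-flight times
z = np.sum(0.5 * a * taus ** 2)               # displacement accumulated per flight
t = np.sum(taus)
w_mc = z / t                                  # Monte Carlo drift velocity
w_theory = a / NU                             # analytic drift for this toy model
```

Real swarm codes replace the constant NU with energy-dependent collision frequencies built from LXCat cross sections, usually via the null-collision technique.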
Morphine tolerance as a function of ratio schedule: response requirement or unit price?
Hughes, Christine E; Sigmon, Stacey C; Pitts, Raymond C; Dykstra, Linda A
2005-05-01
Key pecking by 3 pigeons was maintained by a multiple fixed-ratio 10, fixed-ratio 30, fixed-ratio 90 schedule of food presentation. Components differed with respect to amount of reinforcement, such that the unit price was 10 responses per 1-s access to food. Acute administration of morphine, l-methadone, and cocaine dose-dependently decreased overall response rates in each of the components. When a rate decreasing dose of morphine was administered daily, tolerance, as measured by an increase in the dose that reduced response rates to 50% of control (i.e., the ED50 value), developed in each of the components; however, the degree of tolerance was smallest in the fixed-ratio 90 component (i.e., the ED50 value increased the least). When the l-methadone dose-effect curve was redetermined during the chronic morphine phase, the degree of cross-tolerance conferred to l-methadone was similar across components, suggesting that behavioral variables may not influence the degree of cross-tolerance between opioids. During the chronic phase, the cocaine dose-effect curve shifted to the right for 2 pigeons and to the left for 1 pigeon, which is consistent with predictions based on the lack of pharmacological similarity between morphine and cocaine. When the morphine, l-methadone, and cocaine dose-effect curves were redetermined after chronic morphine administration ended, the morphine and l-methadone ED50s replicated those obtained prior to chronic morphine administration. The morphine data suggest that the fixed-ratio value (i.e., the absolute output) determines the degree of tolerance and not the unit price.
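The schedule arithmetic behind the "response requirement vs. unit price" contrast can be made explicit. The per-component food-access durations below are inferred from the abstract's statement that unit price was held at 10 responses per 1-s access (the abstract does not list each duration), so treat them as illustrative:

```python
# Unit price = response requirement / reinforcer magnitude (seconds of food
# access). Components differ in absolute output but not in unit price.
schedules = {"FR10": (10, 1.0), "FR30": (30, 3.0), "FR90": (90, 9.0)}
unit_price = {name: resp / dur for name, (resp, dur) in schedules.items()}
requirements = {name: resp for name, (resp, _) in schedules.items()}
```

Because unit price is constant across components while the fixed-ratio value varies, differential tolerance across components points to the response requirement, not unit price, as the controlling variable — the abstract's conclusion.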
Xu, Yifang; Collins, Leslie M
2005-06-01
This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, N_ucl. However, no specific rule for determining N_ucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to N_ucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
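The signal-detection step — predicting discrimination from the probability mass function of spike counts — can be illustrated with an ideal observer comparing Poisson-distributed counts. This is a simplification (the paper's stochastic nerve model is not simply Poisson), but it shows how a count PMF translates into a proportion correct:

```python
from math import exp, factorial

def poisson_pmf(lam, n):
    # PMF of a Poisson spike count with mean lam
    return exp(-lam) * lam ** n / factorial(n)

def pc_2afc(lam1, lam2, n_max=120):
    """Ideal-observer proportion correct for discriminating two Poisson
    spike-count distributions (count-comparison rule, ties split 50/50).
    lam2 is the higher-intensity stimulus; n_max truncates the sums."""
    pc = 0.0
    for n1 in range(n_max):
        p1 = poisson_pmf(lam1, n1)
        p_greater = sum(poisson_pmf(lam2, n2) for n2 in range(n1 + 1, n_max))
        pc += p1 * (p_greater + 0.5 * poisson_pmf(lam2, n1))
    return pc
```

Increasing the mean-count separation raises the predicted proportion correct, so a discrimination limen can be read off as the intensity step needed to reach a criterion performance level.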
Linear microbunching analysis for recirculation machines
Tsai, C. -Y.; Douglas, D.; Li, R.; ...
2016-11-28
Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.
The role of water vapor in the ITCZ response to hemispherically asymmetric forcings
NASA Astrophysics Data System (ADS)
Clark, S.; Ming, Y.; Held, I.
2016-12-01
Studies using both comprehensive and simplified models have shown that changes to the inter-hemispheric energy budget can lead to changes in the position of the ITCZ. In these studies, the mean position of the ITCZ tends to shift toward the hemisphere receiving more energy. While included in many studies using comprehensive models, the role of the water vapor-radiation feedback in influencing ITCZ shifts has not been focused on in isolation in an idealized setting. Here we use an aquaplanet idealized moist general circulation model initially developed by Dargan Frierson, without clouds, newly coupled to a full radiative transfer code to investigate the role of water vapor in the ITCZ response to hemispherically asymmetric forcings. We induce a southward ITCZ shift by reducing the incoming solar radiation in the northern hemisphere. To isolate the radiative impact of water vapor, we run simulations where the radiation code sees the prognostic water vapor field, which responds dynamically to temperature, parameterized convection, and the circulation and also run simulations where the radiation code sees a prescribed static climatological water vapor field. We find that under Earth-like climate conditions, a shifting water vapor distribution's interaction with longwave radiation amplifies the latitudinal displacement of the ITCZ in response to a given hemispherically asymmetric forcing roughly by a factor of two; this effect appears robust to the convection scheme used. We argue that this amplifying effect can be explained using the energy flux equator theory for the position of the ITCZ.
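The energy flux equator (EFE) framework invoked here identifies the ITCZ with the latitude where the northward atmospheric energy transport — the cumulative meridional integral of the net energy input — crosses zero. The toy calculation below illustrates the mechanics; the input profile and the -10 W/m^2 northern-hemisphere cooling are illustrative stand-ins, not the paper's forcing or model output.

```python
import numpy as np

lat = np.linspace(-np.pi / 2, np.pi / 2, 36001)   # latitude, rad
w = np.cos(lat)                                   # area weight
a = 6.371e6                                       # Earth radius, m

Q = 80.0 * np.cos(lat) ** 3                       # net energy input shape (toy)
Q = np.where(lat > 0.0, Q - 10.0, Q)              # hemispherically asymmetric cooling
Q -= (Q * w).sum() / w.sum()                      # uniform response closes the budget

dphi = lat[1] - lat[0]
# Northward atmospheric energy transport, W (cumulative integral of Q)
F = 2.0 * np.pi * a ** 2 * np.cumsum(Q * w) * dphi

# EFE = interior zero crossing of F (endpoints are zero by construction)
interior = slice(1000, -1000)
cross = np.where(np.diff(np.sign(F[interior])) != 0)[0]
efe_deg = np.rad2deg(lat[interior][cross[0]])     # shifted south of the equator
```

Cooling the northern hemisphere demands anomalous northward cross-equatorial transport, so the zero crossing — and with it the ITCZ — moves into the southern hemisphere, consistent with the shift direction described above.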
Reversible RNA adenosine methylation in biological regulation
Jia, Guifang; Fu, Ye; He, Chuan
2012-01-01
N6-methyladenosine (m6A) is a ubiquitous modification in messenger RNA (mRNA) and other RNAs across most eukaryotes. For many years, however, the exact functions of m6A were not clearly understood. The discovery that the fat mass and obesity associated protein (FTO) is an m6A demethylase indicates that this modification is reversible and dynamically regulated, suggesting it has regulatory roles. In addition, it has been shown that m6A affects cell fate decisions in yeast and plant development. Recent affinity-based m6A profiling in mouse and human cells further showed that this modification is a widespread mark in coding and non-coding RNA transcripts and is likely dynamically regulated throughout developmental processes. Therefore, reversible RNA methylation, analogous to reversible DNA and histone modifications, may affect gene expression and cell fate decisions by modulating multiple RNA-related cellular pathways, which potentially provides rapid responses to various cellular and environmental signals, including energy and nutrient availability in mammals. PMID:23218460
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations on heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 including an important capability of wobbling beam illumination. The wobbling beam introduces a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in future.
New version program summary
Program title: OK3
Catalogue identifier: ADST_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 221 517
No. of bytes in distributed program, including test data, etc.: 2 471 015
Distribution format: tar.gz
Programming language: C++
Computer: PC (Pentium 4, 1 GHz or more recommended)
Operating system: Windows or UNIX
RAM: 2048 MBytes
Classification: 19.7
Catalogue identifier of previous version: ADST_v2_0
Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
Does the new version supersede the previous version?: Yes
Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., a precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional arbitrarily shaped targets [2,3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released.
Solution method: The OK3 code works on the basis of OK1 and OK2 [2,3]. The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function.
Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important being the beam wobbling function.
Summary of revisions: In the code OK3, beams are subdivided into many bunches, and the displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of a very complicated mesh structure with big internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. The energy conservation is checked at each step of the calculation process and corrected if necessary.
New procedures included in OK3:
Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam.
Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam.
Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in chamber and pellet systems.
Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target.
Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process.
Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process.
Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3:
Procedure InitBeam( ) initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm.
Procedure kBunch( ) is modified to allow beamlet-number variation from bunch to bunch during the deposition.
Procedures ijkSp( ) and Hole( ) are modified to perform faster.
Procedures Espl( ) and ChechE( ) are modified to increase the calculation accuracy.
Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only); it is taken from code OK1 and modified to work with code OK3.
Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU.
References:
[1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377.
[2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172.
[3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
An Energy Balance Model to Predict Chemical Partitioning in a Photosynthetic Microbial Mat
NASA Technical Reports Server (NTRS)
Hoehler, Tori M.; Albert, Daniel B.; DesMarais, David J.
2006-01-01
Studies of biosignature formation in photosynthetic microbial mat communities offer potentially useful insights with regard to both solar and extrasolar astrobiology. Biosignature formation in such systems results from the chemical transformation of photosynthetically fixed carbon by accessory microorganisms. This fixed carbon represents a source not only of reducing power, but also of energy, to these organisms, so that chemical and energy budgets should be coupled. We tested this hypothesis by applying an energy balance model to predict the fate of photosynthetic productivity under dark, anoxic conditions. Fermentation of photosynthetically fixed carbon is taken to be the only source of energy available to cyanobacteria in the absence of light and oxygen, and nitrogen fixation is the principal energy demand. The alternate fate for fixed carbon is to build cyanobacterial biomass with Redfield C:N ratio. The model predicts that, under completely nitrogen-limited conditions, growth is optimized when 78% of fixed carbon stores are directed into fermentative energy generation, with the remainder allocated to growth. These predictions were compared to measurements made on microbial mats that are known to be both nitrogen-limited and populated by actively nitrogen-fixing cyanobacteria. In these mats, under dark, anoxic conditions, 82% of fixed carbon stores were diverted into fermentation. The close agreement between these independent approaches suggests that energy balance models may provide a quantitative means of predicting chemical partitioning within such systems - an important step towards understanding how biological productivity is ultimately partitioned into biosignature compounds.
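A back-of-envelope version of the described energy balance reproduces the ~78% figure with assumed textbook parameters (not values taken from the paper): glycolytic fermentation yielding ~2 ATP per 6-carbon sugar (1/3 ATP per fixed C), nitrogenase costing ~16 ATP per N2 (8 ATP per N), and biomass built at Redfield C:N = 6.6.

```python
# Assumed bioenergetic parameters (illustrative, not from the paper)
ATP_PER_C_FERMENTED = 2.0 / 6.0   # glycolytic fermentation of stored carbon
ATP_PER_N_FIXED = 8.0             # nitrogenase: 16 ATP per N2
C_TO_N = 6.6                      # Redfield C:N ratio of biomass

# Growth is maximized when fermentative ATP supply from the fraction f of
# fixed carbon just meets the N2-fixation demand of the remaining (1 - f):
#   f * supply = (1 - f) / (C:N) * demand  ->  solve for f
demand_per_c = ATP_PER_N_FIXED / C_TO_N
f = demand_per_c / (ATP_PER_C_FERMENTED + demand_per_c)
```

With these assumptions f comes out near 0.78, matching the model prediction quoted above; the agreement suggests the optimum is driven by these few stoichiometric ratios.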
The Purine Bias of Coding Sequences is Determined by Physicochemical Constraints on Proteins.
Ponce de Leon, Miguel; de Miranda, Antonio Basilio; Alvarez-Valin, Fernando; Carels, Nicolas
2014-01-01
For this report, we analyzed protein secondary structures in relation to the statistics of three nucleotide codon positions. The purpose of this investigation was to find which properties of the ribosome, tRNA or protein level, could explain the purine bias (Rrr) as it is observed in coding DNA. We found that the Rrr pattern is the consequence of a regularity (the codon structure) resulting from physicochemical constraints on proteins and thermodynamic constraints on ribosomal machinery. The physicochemical constraints on proteins mainly come from the hydropathy and molecular weight (MW) of secondary structures as well as the energy cost of amino acid synthesis. These constraints appear through a network of statistical correlations, such as (i) the cost of amino acid synthesis, which is in favor of a higher level of guanine in the first codon position, (ii) the constructive contribution of hydropathy alternation in proteins, (iii) the spatial organization of secondary structure in proteins according to solvent accessibility, (iv) the spatial organization of secondary structure according to amino acid hydropathy, (v) the statistical correlation of MW with protein secondary structures and their overall hydropathy, (vi) the statistical correlation of thymine in the second codon position with hydropathy and the energy cost of amino acid synthesis, and (vii) the statistical correlation of adenine in the second codon position with amino acid complexity and the MW of secondary protein structures. Amino acid physicochemical properties and functional constraints on proteins constitute a code that is translated into a purine bias within the coding DNA via tRNAs. In that sense, the Rrr pattern within coding DNA is the effect of information transfer on nucleotide composition from protein to DNA by selection according to the codon positions. 
Thus, coding DNA structure and ribosomal machinery co-evolved to minimize the energy cost of protein coding given the functional constraints on proteins.
NASA Technical Reports Server (NTRS)
Chartas, G.; Flanagan, K.; Hughes, J. P.; Kellogg, E. M.; Nguyen, D.; Zombek, M.; Joy, M.; Kolodziejezak, J.
1993-01-01
The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area, and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies, one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target bremsstrahlung through various filters, correcting for the reflectivity of the mirror, and convolving with the detector response.
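The line-extraction step described above can be sketched generically. In this minimal illustration a Gaussian stands in for the Prescott detector-resolution function and a single exponential stands in for the filtered-continuum model; all channel numbers and amplitudes are synthetic, not the VETA-I data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pulse-height model: line peak (Gaussian, standing in for a Prescott
# function) plus a smooth continuum (exponential stand-in).
def model(ch, line_amp, line_ctr, line_sig, cont_amp, cont_slope):
    line = line_amp * np.exp(-0.5 * ((ch - line_ctr) / line_sig) ** 2)
    continuum = cont_amp * np.exp(-cont_slope * ch)
    return line + continuum

rng = np.random.default_rng(0)
ch = np.arange(256, dtype=float)
truth = model(ch, 500.0, 120.0, 8.0, 50.0, 0.01)
spectrum = rng.poisson(truth).astype(float)          # counting noise

popt, pcov = curve_fit(model, ch, spectrum, p0=[400, 115, 10, 40, 0.02])

# Counts originating in the line = area of the fitted peak, with a rough
# error propagated from the fitted amplitude's variance.
line_counts = popt[0] * popt[2] * np.sqrt(2 * np.pi)
line_counts_err = line_counts * np.sqrt(pcov[0, 0]) / popt[0]
print(line_counts, line_counts_err)
```

The same decomposition (fit, then integrate only the line component) isolates the characteristic-line photons from the bremsstrahlung continuum.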
NASA Technical Reports Server (NTRS)
Chartas, G.; Flanagan, Kathy; Hughes, John P.; Kellogg, Edwin M.; Nguyen, D.; Zombeck, M.; Joy, M.; Kolodziejezak, J.
1992-01-01
The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area, and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies, one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target bremsstrahlung through various filters, correcting for the reflectivity of the mirror, and convolving with the detector response.
Dynamic nesting and the incommensurate magnetic response in superconducting YBa2Cu3O6+y
NASA Astrophysics Data System (ADS)
Brinckmann, Jan; Lee, Patrick A.
1999-05-01
The dynamic magnetic susceptibility χ″(q, ω) of the t-t′-J model for YBCO compounds is studied in slave-boson mean-field theory. Within a renormalized random-phase approximation, χ″ is compared for different fixed energies ω in the superconducting state. At the energy ω = ω0, where χ″((π, π), ω) shows a sharp peak (the '41 meV resonance'), the response is commensurate in wave-vector space. At lower energies around ω = 0.7ω0, however, we find four peaks at q = (π ± δ, π) and (π, π ± δ). The results are in agreement with inelastic neutron scattering experiments, in particular with the incommensurate response recently observed in YBa2Cu3O6.6 by Mook et al. We argue that dynamic nesting in the dispersion of quasiparticles causes this effect.
Rotating full- and reduced-dimensional quantum chemical models of molecules
NASA Astrophysics Data System (ADS)
Fábri, Csaba; Mátyus, Edit; Császár, Attila G.
2011-02-01
A flexible protocol, applicable to semirigid as well as floppy polyatomic systems, is developed for the variational solution of the rotational-vibrational Schrödinger equation. The kinetic energy operator is expressed in terms of curvilinear coordinates, describing the internal motion, and rotational coordinates, characterizing the orientation of the frame fixed to the nonrigid body. Although the analytic form of the kinetic energy operator might be very complex, it does not need to be known a priori within this scheme as it is constructed automatically and numerically whenever needed. The internal coordinates can be chosen to best represent the system of interest and the body-fixed frame is not restricted to an embedding defined with respect to a single reference geometry. The features of the technique mentioned make it especially well suited to treat large-amplitude nuclear motions. Reduced-dimensional rovibrational models can be defined straightforwardly by introducing constraints on the generalized coordinates. In order to demonstrate the flexibility of the protocol and the associated computer code, the inversion-tunneling of the ammonia (14NH3) molecule is studied using one, two, three, four, and six active vibrational degrees of freedom, within both vibrational and rovibrational variational computations. For example, the one-dimensional inversion-tunneling model of ammonia is considered also for nonzero rotational angular momenta. It turns out to be difficult to significantly improve upon this simple model. Rotational-vibrational energy levels are presented for rotational angular momentum quantum numbers J = 0, 1, 2, 3, and 4.
Development of a new EMP code at LANL
NASA Astrophysics Data System (ADS)
Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.
2006-05-01
A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes: Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~ 3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two-dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three-body attachment, two-body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel, and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed, as well as the way forward towards an integrated modern EMP code.
NASA Astrophysics Data System (ADS)
Dima, R. S.; Maluta, N. E.; Maphanga, R. R.; Sankaran, V.
2017-10-01
Titanium dioxide (TiO2) polymorphs are widely used in many energy-related applications due to their peculiar electronic and physicochemical properties. The electronic structures of brookite TiO2 surfaces doped with the transition metal ruthenium have been investigated by ab initio band calculations based on density functional theory with the plane-wave ultrasoft pseudopotential method. The generalized gradient approximation (GGA) was used in the scheme of Perdew-Burke-Ernzerhof (PBE) to describe the exchange-correlation functional. All calculations were carried out with the CASTEP (Cambridge Sequential Total Energy Package) code in Materials Studio of Accelrys Inc. The surface structures of Ru-doped TiO2 were constructed by cleaving the 1 × 1 × 1 optimized bulk structure of brookite TiO2. The results indicate that Ru doping can narrow the band gap of TiO2, leading to improved photoreactivity while simultaneously maintaining strong redox potential. The theoretical calculations could provide a meaningful guide to developing more active photocatalysts with visible-light response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1991-04-01
The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, d{sigma}/dE, following nonelastic neutron interactions with {sup 12}C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael who used an intranuclear cascade code, including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.
NASA Technical Reports Server (NTRS)
Braun, W. R.
1981-01-01
Pseudo noise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm, including a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
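The variable-dwell acquisition strategy described above is, in essence, Wald's sequential probability ratio test applied per code phase. A minimal sketch under an assumed Gaussian model for the post-filter power measurements (the means, variance, and thresholds below are illustrative, not the paper's link parameters):

```python
import math
import random

def sprt_dwell(samples, mu0, mu1, sigma, alpha=1e-3, beta=1e-3):
    """Test one code phase: Gaussian power samples with mean mu1 if the
    phase is correct (H1) and mu0 otherwise (H0). Returns the decision
    and the (variable) number of samples consumed."""
    upper = math.log((1 - beta) / alpha)   # accept H1: phase found
    lower = math.log(beta / (1 - alpha))   # accept H0: step to next phase
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        # log-likelihood-ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "H0", n

random.seed(1)
# A wrong phase (noise-level power) is dismissed after few samples...
wrong = (random.gauss(1.0, 0.5) for _ in range(10_000))
d_wrong = sprt_dwell(wrong, mu0=1.0, mu1=2.0, sigma=0.5)
# ...while the correct phase (elevated power) is accepted.
right = (random.gauss(2.0, 0.5) for _ in range(10_000))
d_right = sprt_dwell(right, mu0=1.0, mu1=2.0, sigma=0.5)
print(d_wrong, d_right)
```

The dwell time adapts to the data: clear-cut phases are resolved in a handful of samples, which is the advantage over the fixed dwell-time system.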
Adaptive and reliably acknowledged FSO communications
NASA Astrophysics Data System (ADS)
Fitz, Michael P.; Halford, Thomas R.; Kose, Cenk; Cromwell, Jonathan; Gordon, Steven
2015-05-01
Atmospheric turbulence causes the received signal intensity on free space optical (FSO) communication links to vary over time. Scintillation fades can stymie connectivity for milliseconds at a time. To approach the information-theoretic limits of communication in such time-varying channels, it is necessary either to code across extremely long blocks of data - thereby inducing unacceptable delays - or to vary the code rate according to the instantaneous channel conditions. We describe the design, laboratory testing, and over-the-air testing of an FSO modem that employs a protocol with adaptive coded modulation (ACM) and hybrid automatic repeat request. For links with fixed throughput, this protocol provides a 10 dB reduction in the required received signal-to-noise ratio (SNR); for links with fixed range, it provides greater than a 3x increase in throughput. Independent U.S. Government tests demonstrate that our protocol effectively adapts the code rate to match the instantaneous channel conditions. The modem is able to provide throughputs in excess of 850 Mbps on links with ranges greater than 15 kilometers.
NASA Astrophysics Data System (ADS)
Bouderba, Yasmina; Nait Amor, Samir; Tribeche, Mouloud
2015-04-01
VLF radio waves propagating in the Earth-ionosphere waveguide are sensitive to ionospheric disturbances caused by solar X-ray flux. In order to understand the VLF signal response to solar flares, the LWPC code is used to simulate the signal perturbation parameters (amplitude and phase) at a fixed solar zenith angle. In this work, we used the NRK-Algiers signal data, and the study was done for different flare classes. The results show that the perturbation parameters increase with increasing solar flare flux. This increase is due to the growth of the electron density resulting from changes in Wait's parameters. However, the behavior of the perturbation parameters as a function of distance shows different forms of signal perturbation. It was also observed that the null points move towards the transmitter location when the flare flux increases, which is related to the modal composition of the propagating signal. Indeed, for a given mode, the attenuation coefficient decreases as the flare flux increases, an effect that is more pronounced for higher modes. Thus, the effect of solar flares is to amplify the VLF signal by reducing the attenuation coefficient.
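The "Wait's parameters" mentioned here are the reference height h′ and sharpness β of the classic exponential D-region profile of Wait and Spies, which is what LWPC perturbs to model a flare. A sketch of that profile (the quiet-day and flare values below are typical illustrative numbers, not the fitted values of this study):

```python
import math

def wait_electron_density(h_km, h_prime=74.0, beta=0.3):
    """Wait-Spies exponential D-region electron density (m^-3), set by the
    reference height h' (km) and sharpness parameter beta (km^-1)."""
    return 1.43e13 * math.exp(-0.15 * h_prime) * math.exp(
        (beta - 0.15) * (h_km - h_prime))

# A flare lowers h' and raises beta, so the electron density at a fixed
# height grows, changing the waveguide's modal attenuation coefficients.
quiet = wait_electron_density(74.0, h_prime=74.0, beta=0.3)
flare = wait_electron_density(74.0, h_prime=68.0, beta=0.45)
print(quiet, flare, flare / quiet)
```

Feeding the perturbed (h′, β) pair to the waveguide-mode solver is what yields the simulated amplitude and phase changes quoted in the abstract.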
Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webster, R., E-mail: ross.webster07@imperial.ac.uk; Harrison, N. M.; Bernasconi, L.
2015-06-07
We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br, based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems, and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are, however, markedly below estimated experimental and, where available, 2-particle Green’s function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange-correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange admixed in the ground state functional, c_HF, and show that there exists one value of c_HF (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.
Fixed-ratio discrimination: effects of response-produced blackouts
Lydersen, Tore; Crossman, E. K.
1974-01-01
For three pigeons, reinforcement depended upon a left side-key response after execution of a fixed ratio 10 on the center key, and upon a right side-key response after fixed ratio 20. Each response during the fixed ratios produced a 0.5-sec blackout. The time between the first and last response in fixed ratio 10 was then equated with the time between the first and last response in fixed ratio 20 by increasing the blackout duration. The accuracy of side-key choice was disrupted, thereby suggesting that time, rather than number of responses, controlled choice responding. When the time between the first and last response was equated during both ratios, asymptotic accuracy was approximately equal to (two birds) or somewhat higher than (one bird) that obtained previously. The results of probes with intermediate fixed ratios and blackouts suggested that control of side-key choice had transferred from the time between the first and last response in ratios to blackout duration. PMID:16811819
NASA Astrophysics Data System (ADS)
Medvigy, D.; Levy, J.; Xu, X.; Batterman, S. A.; Hedin, L.
2013-12-01
Ecosystems, by definition, involve a community of organisms. These communities generally exhibit heterogeneity in their structure and composition as a result of local variations in climate, soil, topography, disturbance history, and other factors. Climate-driven shifts in ecosystems will likely include an internal re-organization of community structure and composition as well as the introduction of novel species. In terms of vegetation, this ecosystem heterogeneity can occur at relatively small scales, sometimes of the order of tens of meters or even less. Because this heterogeneous landscape generally has a variable and nonlinear response to environmental perturbations, it is necessary to carefully aggregate the local competitive dynamics between individual plants to the large scales of tens or hundreds of kilometers represented in climate models. Accomplishing this aggregation in a computationally efficient way has proven to be an extremely challenging task. To meet this challenge, the Ecosystem Demography 2 (ED2) model statistically characterizes a distribution of local resource environments, and then simulates the competition between individuals of different sizes and species (or functional groupings). Within this framework, it is possible to explicitly simulate the impacts of climate change on ecosystem structure and composition, including both internal re-organization and the introduction of novel species or functional groups. This presentation will include several illustrative applications of the evolution of ecosystem structure and composition under climate change. One application pertains to the role of nitrogen-fixing species in tropical forests. Will increasing CO2 concentrations increase the demand for nutrients and perhaps give a competitive edge to nitrogen-fixing species?
Will potentially warmer and drier conditions make some tropical forests more water-limited, reducing the demand for nitrogen, thereby giving a competitive advantage to non-nitrogen-fixing species? Will the response of nitrogen-fixing species to climate change be sensitive to local disturbance histories?
The Nrf2-antioxidant response element pathway: a target for regulating energy metabolism
USDA-ARS?s Scientific Manuscript database
The nuclear factor E2-related factor 2 (Nrf2) is a transcription factor that responds to oxidative stress by binding to the antioxidant response element (ARE) in the promoter of genes coding for antioxidant enzymes like NAD(P)H:quinone oxidoreductase 1 (NQO1) and proteins for glutathione synthesis. ...
Physical stress, mass, and energy for non-relativistic matter
NASA Astrophysics Data System (ADS)
Geracie, Michael; Prabhu, Kartik; Roberts, Matthew M.
2017-06-01
For theories of relativistic matter fields there exist two possible definitions of the stress-energy tensor, one defined by a variation of the action with the coframes at fixed connection, and the other at fixed torsion. These two stress-energy tensors do not necessarily coincide and it is the latter that corresponds to the Cauchy stress measured in the lab. In this note we discuss the corresponding issue for non-relativistic matter theories. We point out that while the physical non-relativistic stress, momentum, and mass currents are defined by a variation of the action at fixed torsion, the energy current does not admit such a description and is naturally defined at fixed connection. Any attempt to define an energy current at fixed torsion results in an ambiguity which cannot be resolved from the background spacetime data or conservation laws. We also provide computations of these quantities for some simple non-relativistic actions.
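The two relativistic definitions being contrasted can be written schematically in first-order (tetrad) variables; this is only a sketch, and sign and index conventions vary between references:

```latex
% Matter action S[e,\omega] of the coframe e^a{}_\mu and connection \omega.
% Stress-energy from a coframe variation at fixed connection:
T^{\mu}{}_{a}\big|_{\omega}
  = \frac{1}{e}\,\frac{\delta S}{\delta e^{a}{}_{\mu}}
    \bigg|_{\omega\ \mathrm{fixed}},
% versus at fixed torsion T^a = \mathrm{d}e^a + \omega^a{}_b \wedge e^b:
T^{\mu}{}_{a}\big|_{T}
  = \frac{1}{e}\,\frac{\delta S}{\delta e^{a}{}_{\mu}}
    \bigg|_{T\ \mathrm{fixed}}
```

Because varying the coframe at fixed torsion forces a compensating variation of the connection, the two results differ by connection-dependent terms; the abstract's point is that the second, fixed-torsion quantity is the one matching the Cauchy stress, while in the non-relativistic setting the energy current admits only the fixed-connection definition.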
An international survey of building energy codes and their implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Meredydd; Roshchanka, Volha; Graham, Peter
Buildings are key to low-carbon development everywhere, and many countries have introduced building energy codes to improve energy efficiency in buildings. Yet, building energy codes can only deliver results when the codes are implemented. For this reason, studies of building energy codes need to consider implementation of building energy codes in a consistent and comprehensive way. This research identifies elements and practices in implementing building energy codes, covering codes in 22 countries that account for 70% of global energy demand from buildings. Access to benefits of building energy codes depends on comprehensive coverage of buildings by type, age, size, and geographic location; an implementation framework that involves a certified agency to inspect construction at critical stages; and independently tested, rated, and labeled building energy materials. Training and supporting tools are another element of successful code implementation, and their role is growing in importance, given the increasing flexibility and complexity of building energy codes. Some countries have also introduced compliance evaluation and compliance checking protocols to improve implementation. This article provides examples of practices that countries have adopted to assist with implementation of building energy codes.
Two degrees of freedom parallel linkage to track solar thermal platforms installed on ships
NASA Astrophysics Data System (ADS)
Visa, I.; Cotorcea, A.; Moldovan, M.; Neagoe, M.
2016-08-01
Transportation is responsible at the global level for one third of total energy consumption. Solutions to reduce conventional fuel consumption are under research, both to improve the systems' efficiency and to replace the current fossil fuels. There are already several applications, usually on small maritime vehicles, that use photovoltaic systems to cover the electric energy demand on board and to support the owners' commitment to sustainability. In most cases, these systems are fixed, aligned parallel with the deck; thus, the amount of solar energy received is heavily reduced (down to 50%) as compared to the available irradiance. Large-scale, feasible applications require maximizing the energy output of the solar converters installed on ships; using solar tracking systems is an obvious path, allowing a gain of up to 35-40% in output energy as compared to fixed systems. Spatial limitations, the continuous movement of the ship, and harsh navigation conditions are the main barriers to implementation. This paper proposes a solar tracking system with two degrees of freedom for a solar thermal platform, based on a parallel linkage with spherical joints, modeled as a multibody system. The analytical model for the mobile platform position and pressure angles, together with a numerical example, is given in the paper.
NASA Astrophysics Data System (ADS)
González Cornejo, Felipe A.; Cruchaga, Marcela A.; Celentano, Diego J.
2017-11-01
The present work reports a fluid-rigid solid interaction formulation described within the framework of a fixed-mesh technique. The numerical analysis is focussed on the study of a vortex-induced vibration (VIV) of a circular cylinder at low Reynolds number. The proposed numerical scheme encompasses the fluid dynamics computation in an Eulerian domain where the body is embedded using a collection of markers to describe its shape, and the rigid solid's motion is obtained from Newton's second law. The body's velocity is imposed on the fluid domain through a penalty technique on the embedded fluid-solid interface. The fluid tractions acting on the solid are computed from the fluid dynamic solution of the flow around the body. The resulting forces are considered to solve the solid motion. The numerical code is validated by contrasting the obtained results with those reported in the literature using different approaches for simulating the flow past a fixed circular cylinder as a benchmark problem. Moreover, a mesh convergence analysis is also done, providing a satisfactory response. In particular, a VIV problem is analyzed, emphasizing the description of the synchronization phenomenon.
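On the structural side, the cylinder reduces to a one-degree-of-freedom oscillator driven by the fluid force. The following is only an illustrative reduced sketch (a prescribed sinusoidal lift force instead of the paper's coupled Eulerian flow solution; mass, damping, and stiffness values are arbitrary), showing the resonant amplification behind the synchronization phenomenon:

```python
import math

def viv_response(mass, damping, stiffness, force_amp, force_freq_hz,
                 dt=1e-4, t_end=20.0):
    """Peak displacement of m*x'' + c*x' + k*x = F*sin(2*pi*f*t),
    integrated with semi-implicit Euler; the start-up transient
    (first half of the run) is ignored."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        f = force_amp * math.sin(2 * math.pi * force_freq_hz * t)
        a = (f - damping * v - stiffness * x) / mass
        v += a * dt
        x += v * dt
        t += dt
        if t > t_end / 2:
            peak = max(peak, abs(x))
    return peak

m, k = 1.0, (2 * math.pi * 1.0) ** 2        # natural frequency of 1 Hz
off_resonance = viv_response(m, 0.5, k, 1.0, 0.3)
lock_in = viv_response(m, 0.5, k, 1.0, 1.0)  # forcing at f_n
print(off_resonance, lock_in)
```

When the vortex-shedding frequency approaches the natural frequency, the response amplitude grows sharply, which is the lock-in regime the full fixed-mesh simulation resolves self-consistently.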
Calculations of the energy levels and oscillator strengths of the Ne-like Fe Ion (Fe XVII)
NASA Astrophysics Data System (ADS)
Zhong, Jia-yong; Zhang, Jie; Zhao, Gang; Lu, Xin
Energy levels and oscillator strengths among the 27 fine-structure levels belonging to the (1s²2s²)2p⁶, 2p⁵3s, 2p⁵3p, and 2p⁵3d configurations of the neon-like iron ion have been calculated using three atomic structure codes: RCN/RCG, AUTOSTRUCTURE (AS), and GRASP. Relativistic corrections of the wave functions are taken into account in the RCN/RCG calculation. The results agree well with the available experimental and theoretical data. The accuracy of the three codes is analysed.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
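The calibration idea, exploiting monotonicity so that a low-degree polynomial maps each pixel's response onto a common reference response, can be sketched as follows. The logarithmic response model and every number below are hypothetical stand-ins, not the sensor or coefficients characterized in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical logarithmic pixels: response y = a + b*log(x), where the
# per-pixel spread of offsets a and gains b is the fixed pattern noise.
n_pixels = 1000
a = rng.normal(100.0, 5.0, n_pixels)   # per-pixel offsets
b = rng.normal(20.0, 1.0, n_pixels)    # per-pixel gains

def respond(stimulus):
    return a + b * np.log(stimulus)

# Calibration: image uniform stimuli, then fit, per pixel, a degree-1
# polynomial mapping that pixel's response onto the array-mean response.
# Monotonicity is what makes this low-degree mapping well-defined.
stimuli = np.array([10.0, 100.0, 1000.0, 10000.0])
responses = np.stack([respond(s) for s in stimuli])   # shape (4, n_pixels)
reference = responses.mean(axis=1)                    # shape (4,)
coeffs = np.stack([np.polyfit(responses[:, p], reference, 1)
                   for p in range(n_pixels)])         # (n_pixels, 2)

# Correction at readout is pure arithmetic: corrected = c1*y + c0.
raw = respond(300.0)
corrected = coeffs[:, 0] * raw + coeffs[:, 1]
print(raw.std(), corrected.std())
```

After correction the pixel-to-pixel spread collapses, which is the FPN reduction the paper quantifies; a fixed-point version of the same arithmetic follows by scaling c1 and c0 to a chosen number of bits.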
NLC Luminosity as a Function of Beam Parameters
NASA Astrophysics Data System (ADS)
Nosochkov, Y.
2002-06-01
Realistic calculation of NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations in GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity was identified.
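The analytic benchmark such tracking and beam-beam simulations are compared against is the standard Gaussian-beam collider luminosity formula. A sketch with illustrative parameter values (round numbers of the right order for a linear collider, not official NLC design figures):

```python
import math

def gaussian_luminosity(n_per_bunch, n_bunches, rep_rate_hz,
                        sigma_x_m, sigma_y_m, enhancement=1.0):
    """Geometric luminosity of head-on Gaussian-beam collisions (m^-2 s^-1):
    L = H_D * N^2 * n_b * f_rep / (4 * pi * sigma_x * sigma_y),
    where H_D is the pinch-enhancement factor a beam-beam code such as
    GUINEA-PIG would supply, and sigma_x,y are the IP spot sizes set by
    the emittances and IP beta functions."""
    return (enhancement * n_per_bunch ** 2 * n_bunches * rep_rate_hz
            / (4 * math.pi * sigma_x_m * sigma_y_m))

# Illustrative numbers: 0.75e10 particles/bunch, 190 bunches/train,
# 120 Hz, 245 nm x 2.7 nm IP spots, pinch enhancement ~1.4.
L = gaussian_luminosity(0.75e10, 190, 120.0, 245e-9, 2.7e-9,
                        enhancement=1.4)
L_cgs = L / 1e4
print(L_cgs, "cm^-2 s^-1")
```

Since sigma scales as the square root of (emittance times IP beta), the formula makes explicit why scanning emittance and beta functions maps out an optimum luminosity range.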
Self-Avoiding Walks on the Random Lattice and the Random Hopping Model on a Cayley Tree
NASA Astrophysics Data System (ADS)
Kim, Yup
Using a field theoretic method based on the replica trick, it is proved that the three-parameter renormalization group for an n-vector model with quenched randomness reduces to a two-parameter one in the limit n → 0, which corresponds to self-avoiding walks (SAWs). This is also shown by explicit calculation of the renormalization group recursion relations to second order in ε. From this reduction we find that SAWs on the random lattice are in the same universality class as SAWs on the regular lattice. By analogy with the case of the n-vector model with cubic anisotropy in the limit n → 1, the fixed-point structure of the n-vector model with randomness is analyzed in the SAW limit, so that a physical interpretation of the unphysical fixed point is given. Corrections to the previously published values of the critical exponents of the unphysical fixed point are also given. Next we formulate an integral equation and recursion relations for the configurationally averaged one-particle Green's function of the random hopping model on a Cayley tree of coordination number (σ + 1). This formalism is tested by applying it successfully to the nonrandom model. Using this scheme for 1 ≪ σ < ∞ we calculate the density of states of this model with a Gaussian distribution of hopping matrix elements in the range of energy E² > E_c², where E_c is a critical energy described below. The singularity in the Green's function which occurs at energy E₁⁽⁰⁾ for σ = ∞ is shifted to complex energy E₁ (on the unphysical sheet of energy E) for small σ⁻¹. This calculation shows that the density of states is a smooth function of energy E around the critical energy E_c = Re E₁, in accord with Wegner's theorem. In this formulation the density of states has no sharp phase transition on the real axis of E because E₁ has developed an imaginary part.
Using the Lifshitz argument, we calculate the density of states near the band edge for the model when the hopping matrix elements are governed by a bounded probability distribution. It is also shown, within the dynamical-systems language, that the density of states of the model with a bounded distribution never vanishes inside the band, and we suggest a theoretical mechanism for the formation of energy bands.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
Crystal growth and furnace analysis
NASA Technical Reports Server (NTRS)
Dakhoul, Youssef M.
1986-01-01
A thermal analysis of Hg/Cd/Te solidification in a Bridgman cell is made using Continuum's VAST code. The energy equation is solved in an axisymmetric, quasi-steady domain for both the molten and solid alloy regions. Alloy composition is calculated by a simplified one-dimensional model to estimate its effect on melt thermal conductivity and, consequently, on the temperature field within the cell. Solidification is assumed to occur at a fixed temperature of 979 K. Simplified boundary conditions are included to model both the radiant and conductive heat exchange between the furnace walls and the alloy. Calculations are performed to show how the steady-state isotherms are affected by: the hot and cold furnace temperatures, boundary condition parameters, and the growth rate which affects the calculated alloy's composition. The Advanced Automatic Directional Solidification Furnace (AADSF), developed by NASA, is also thermally analyzed using the CINDA code. The objective is to determine the performance and the overall power requirements for different furnace designs.
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
NASA Astrophysics Data System (ADS)
Martin, Alexandre; Torrent, Marc; Caracas, Razvan
2015-03-01
A formulation of the response of a system to strain and electric field perturbations in the pseudopotential-based density functional perturbation theory (DFPT) has been proposed by D. R. Hamann and co-workers. It uses an elegant formalism based on the expression of the DFT total energy in reduced coordinates, the key quantity being the metric tensor and its first and second derivatives. We propose to extend this formulation to the Projector Augmented-Wave approach (PAW). In this context, we express the full elastic tensor including the clamped-atom tensor, the atomic-relaxation contributions (internal stresses) and the response to electric field change (piezoelectric tensor and effective charges). With this we are able to compute the elastic tensor for all materials (metals and insulators) within a fully analytical formulation. The comparison with finite differences calculations on simple systems shows an excellent agreement. This formalism has been implemented in the plane-wave based DFT ABINIT code. We apply it to the computation of elastic properties and seismic-wave velocities of iron with impurity elements. By analogy with the materials contained in meteorites, tested impurities are light elements (H, O, C, S, Si).
NASA Astrophysics Data System (ADS)
Hu, Z.; Chen, Z.; Peng, X.; Du, T.; Cui, Z.; Ge, L.; Zhu, W.; Wang, Z.; Zhu, X.; Chen, J.; Zhang, G.; Li, X.; Chen, J.; Zhang, H.; Zhong, G.; Hu, L.; Wan, B.; Gorini, G.; Fan, T.
2017-06-01
A Bonner sphere spectrometer (BSS) plays an important role in characterizing neutron spectra and determining their neutron dose in a neutron-gamma mixed field. A BSS consisting of a set of nine polyethylene spheres with a 3He proportional counter was developed at Peking University to perform neutron spectrum and dosimetry measurements. Response functions (RFs) of the BSS were calculated with the general Monte Carlo code MCNP5 for the neutron energy range from thermal up to 20 MeV, and were experimentally calibrated with monoenergetic neutron beams from 144 keV to 14 MeV on a 4.5 MV Van de Graaff accelerator. The calculated RFs were corrected with the experimental values, and the whole response matrix was completely established. The spectrum of a 241Am-Be source, obtained by unfolding the BSS measurement data for the source, was in fair agreement with the expected one. The integral ambient dose equivalent corresponding to the spectrum was 0.95 times the expected value. Results of the unfolded spectrum and the integral dose equivalent measured by the BSS verified that the RFs of the BSS were well established.
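The unfolding step mentioned above can be sketched with a simple multiplicative (MLEM-style) update, shown here with a toy response matrix (illustrative only; the study's actual response matrix, measurement data, and unfolding code are not reproduced here):

```python
import numpy as np

def unfold(R, counts, n_iter=2000):
    """Iteratively estimate a spectrum phi from counts ~= R @ phi,
    where R[i, j] is the response of sphere i to energy bin j.
    The multiplicative update keeps phi nonnegative throughout."""
    phi = np.ones(R.shape[1])                  # flat initial guess
    col_sum = np.maximum(R.sum(axis=0), 1e-12)
    for _ in range(n_iter):
        est = np.maximum(R @ phi, 1e-12)       # predicted sphere counts
        phi *= (R.T @ (counts / est)) / col_sum
    return phi
```

With a small well-conditioned toy matrix this iteration recovers the true spectrum; real Bonner-sphere unfolding is underdetermined (few spheres, many energy bins) and in practice requires regularization or prior spectral information.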
Evaluation of an Individually Paced Course for Airborne Radio Code Operators. Final Report.
ERIC Educational Resources Information Center
BALDWIN, ROBERT O.; JOHNSON, KIRK A.
In this study, comparisons were made between an individually paced version of the Airborne Radio Code Operator (ARCO) course and two versions of the course in which the students progressed at a fixed pace. The ARCO course is a Class C school in which the student learns to send and receive military messages using the International Morse Code. The…
Priority coding for control room alarms
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1994-01-01
Indicating the priority of a spatially fixed, activated alarm tile on an alarm tile array by a shape coding at the tile, and preferably using the same shape coding wherever the same alarm condition is indicated elsewhere in the control room. The status of an alarm tile can change automatically or by operator acknowledgement, but tones and/or flashing cues continue to provide status information to the operator.
Effects of Nonequilibrium Chemistry and Darcy-Forchheimer Pyrolysis Flow for Charring Ablator
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq; Milos, Frank S.
2013-01-01
The fully implicit ablation and thermal response code simulates pyrolysis and ablation of thermal protection materials and systems. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid. This work describes new modeling capabilities that are added to a special version of the code. These capabilities include a time-dependent pyrolysis gas flow momentum equation with Darcy-Forchheimer terms and pyrolysis gas species conservation equations with finite rate homogeneous chemical reactions. The total energy conservation equation is also enhanced for consistency with these new additions. Two groups of parametric studies of the phenolic impregnated carbon ablator are performed. In the first group, an Orion flight environment for a proposed lunar-return trajectory is considered. In the second group, various test conditions for arcjet models are examined. The central focus of these parametric studies is to understand the effect of pyrolysis gas momentum transfer on material in-depth thermal responses with finite-rate, equilibrium, or frozen homogeneous gas chemistry. Results indicate that the presence of chemical nonequilibrium pyrolysis gas flow does not significantly alter the in-depth thermal response performance predicted using the chemical equilibrium gas model.
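The Darcy-Forchheimer terms added to the pyrolysis-gas momentum equation take the standard form sketched below (an illustration only; the symbols are generic, c_f is a lumped inertial coefficient, and the code's actual formulation is not reproduced here):

```python
def darcy_forchheimer_dpdx(mu, K, rho, c_f, v):
    """Magnitude of the pressure gradient driving gas through a porous char:
    a viscous Darcy term (mu/K)*v plus an inertial Forchheimer term
    rho*c_f*v**2 that matters only at higher gas velocities.
    mu: gas viscosity, K: permeability, rho: gas density,
    c_f: lumped Forchheimer coefficient, v: superficial velocity."""
    return (mu / K) * v + rho * c_f * v * v
```

At the low velocities typical of in-depth pyrolysis flow the linear Darcy term dominates; the quadratic Forchheimer correction grows in importance near the heated surface where gas velocities peak.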
The response of a radiophotoluminescent glass dosimeter in megavoltage photon and electron beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Araki, Fujio, E-mail: f-araki@kumamoto-u.ac.jp; Ohno, Takeshi
Purpose: This study investigated the response of a radiophotoluminescent glass dosimeter (RGD) in megavoltage photon and electron beams. Methods: The RGD response was compared with ion chamber measurements for 4–18 MV photons and 6–20 MeV electrons in plastic water phantoms. The response was also calculated via Monte Carlo (MC) simulations with the EGSnrc/egs-chamber and Cavity user-codes, respectively. In addition, the response of the RGD cavity was analyzed as a function of field sizes and depths according to Burlin's general cavity theory. The perturbation correction factor, P_Q, in the RGD cavity was also estimated from MC simulations for photon and electron beams. Results: The calculated and measured RGD energy response at reference conditions with a 10 × 10 cm² field and 10 cm depth in photons was lower by up to 2.5% with increasing energy. The variation in RGD response in the field size range of 5 × 5 cm² to 20 × 20 cm² was 3.9% and 0.7%, at 10 cm depth for 4 and 18 MV, respectively. The depth dependence of the RGD response was constant within 1% for energies above 6 MV but it increased by 2.6% and 1.6% for a large (20 × 20 cm²) field at 4 and 6 MV, respectively. The dose contributions from photon interactions (1 − d) in the RGD cavity, according to Burlin's cavity theory, decreased with increasing energy and decreasing field size. The variation in (1 − d) between field sizes became larger with increasing depth for the lower energies of 4 and 6 MV. P_Q for the RGD cavity was almost constant between 0.96 and 0.97 at 10 MV energies and above. Meanwhile, P_Q depends strongly on field size and depth for 4 and 6 MV photons. In electron beams, the RGD response at a reference depth, d_ref, varied by less than 1% over the electron energy range but was on average 4% lower than the response for 6 MV photons. Conclusions: The RGD response for photon beams depends on both (1 − d) and perturbation effects in the RGD cavity. Therefore, it is difficult to predict the energy dependence of the RGD response with Burlin's theory, and for practical use it is recommended to measure the RGD response directly or to use the MC-calculated response. The response for electron beams decreased rapidly at depths beyond d_ref for lower mean electron energies (<3 MeV), while in contrast P_Q increased.
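Burlin's general cavity theory referenced above weights a Bragg-Gray (electron) term and a large-cavity (photon) term by the parameter d; a one-line sketch of the relation (the ratio values used below are placeholders, not data from this study):

```python
def burlin_response(d, s_ratio, mu_en_ratio):
    """Burlin general cavity theory: cavity-to-medium dose ratio
    f = d * s_ratio + (1 - d) * mu_en_ratio, where
    d           : fraction of cavity dose from electrons crossing the cavity,
    s_ratio     : mass collision stopping-power ratio (cavity/medium),
    mu_en_ratio : mass energy-absorption coefficient ratio (cavity/medium)."""
    return d * s_ratio + (1.0 - d) * mu_en_ratio
```

Taking d → 1 recovers the Bragg-Gray (small-cavity) limit and d → 0 the photon (large-cavity) limit, which is why the (1 − d) photon contribution grows with increasing field size and decreasing beam energy, as reported in the abstract.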
Warm Body Temperature Facilitates Energy Efficient Cortical Action Potentials
Yu, Yuguo; Hill, Adam P.; McCormick, David A.
2012-01-01
The energy efficiency of neural signal transmission is important not only as a limiting factor in brain architecture, but it also influences the interpretation of functional brain imaging signals. Action potential generation in mammalian, versus invertebrate, axons is remarkably energy efficient. Here we demonstrate that this increase in energy efficiency is due largely to a warmer body temperature. Increases in temperature result in an exponential increase in energy efficiency for single action potentials by increasing the rate of Na+ channel inactivation, resulting in a marked reduction in overlap of the inward Na+, and outward K+, currents and a shortening of action potential duration. This increase in single spike efficiency is, however, counterbalanced by a temperature-dependent decrease in the amplitude and duration of the spike afterhyperpolarization, resulting in a nonlinear increase in the spike firing rate, particularly at temperatures above approximately 35°C. Interestingly, the total energy cost, as measured by the product of total Na+ entry per spike and average firing rate in response to a constant input, reaches a global minimum between 37 and 42°C. Our results indicate that increases in temperature result in an unexpected increase in energy efficiency, especially near normal body temperature, thus allowing the brain to utilize an energy efficient neural code. PMID:22511855
Glueball spectra from a matrix model of pure Yang-Mills theory
NASA Astrophysics Data System (ADS)
Acharyya, Nirmalendu; Balachandran, A. P.; Pandey, Mahul; Sanyal, Sambuddha; Vaidya, Sachindeo
2018-05-01
We present variational estimates for the low-lying energies of a simple matrix model that approximates SU(3) Yang-Mills theory on a three-sphere of radius R. By fixing the ground state energy, we obtain the (integrated) renormalization group (RG) equation for the Yang-Mills coupling g as a function of R. This RG equation allows one to estimate the mass of other glueball states, which we find to be in excellent agreement with lattice simulations.
NASA Astrophysics Data System (ADS)
Susanty, W.; Helwani, Z.; Zulfansyah
2018-04-01
Oil palm frond can be used as an alternative energy source via the torrefaction process. Torrefaction is a treatment process that converts biomass into solid fuel by heating within a temperature range of 200-300°C in an inert environment. This research aims to produce solid fuel through torrefaction and to study the interaction of the process variables. Torrefaction of oil palm frond was performed in a fixed-bed horizontal reactor under operating conditions of temperature (225-275°C), time (15-45 minutes), and nitrogen flow rate (50-150 ml/min). The measured responses were the calorific value and the proximate analysis (moisture, ash, volatile matter, and fixed carbon). The results were processed using Design Expert v7.0.0. The calorific value obtained was 17,700-19,600 kJ/kg, and the proximate values were in the ranges of 3-4% moisture, 1.5-4% ash, 45-55% volatile matter, and 37-46% fixed carbon. The factor most significantly affecting the responses was temperature, followed by time and nitrogen flow rate.
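The proximate-analysis quantities reported above are linked by a simple closure: fixed carbon is conventionally obtained by difference from the other three fractions (a sketch; the sample values below are mid-range numbers from the reported ranges, not specific measurements from the study):

```python
def fixed_carbon(moisture, ash, volatile_matter):
    """Proximate-analysis closure (wt%): the four fractions sum to 100,
    so fixed carbon is obtained by difference."""
    return 100.0 - moisture - ash - volatile_matter

# Mid-range placeholder values from the reported ranges:
fc = fixed_carbon(moisture=3.5, ash=2.75, volatile_matter=50.0)  # 43.75 wt%
```

The result falls inside the reported 37-46% fixed-carbon range, and the closure makes clear why driving off volatile matter during torrefaction raises the fixed-carbon fraction (and hence the calorific value) of the product.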
Investigation on energy conversion technology using biochemical reaction elements, 2
NASA Astrophysics Data System (ADS)
1994-03-01
As a measure addressing resource/energy and environmental issues, a study is made on the utilization of microbial biochemical reactions. As a reaction system using chemical energy, the production of petroleum-substitute substances and food/feed by CO2 fixation using hydrogen energy and hydrogen bacteria is cited. For the utilization of light energy, CO2 fixation using light energy and microalgae, and the production of hydrogen and useful carbon compounds using photosynthetic organisms, are regarded as promising. For interconversion between biological and electric energy, the culture of chemoautotrophic bacteria which fix CO2 using electric energy is cited. To enhance its conversion efficiency, it is important to develop a technology for gene manipulation of the bacteria and a system using functional biochemical elements adaptable to the electrode reaction. With regard to utilization of the microbial metabolic function, the paper presents the release of soluble nitrogen from the hydrosphere into the atmosphere using denitrifying bacteria, the removal of phosphorus, the reduction of environmental pollution caused by dilute heavy-metal solutions, and their recovery as resources.