Sample records for depletion codes applied

  1. Development of the MCNPX depletion capability: A Monte Carlo linked depletion method that automates the coupling between MCNPX and CINDER90 for high fidelity burnup calculations

    NASA Astrophysics Data System (ADS)

    Fensin, Michael Lorne

    Monte Carlo-linked depletion methods have gained recent interest due to their ability to more accurately model complex 3-dimensional geometries and better track the evolution of the temporal nuclide inventory by simulating the actual physical process utilizing continuous-energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity, completely self-contained Monte Carlo-linked depletion capability in a well-established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX, and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results for the OECD/NEA Phase IB benchmark, the H. B. Robinson benchmark, and the OECD/NEA Phase IVB benchmark are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability establishes a significant foundation, in a well-established and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology, which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic-based methods.
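
    The coupling loop described above is easy to sketch. The following is a minimal, purely illustrative Python sketch of a Monte Carlo-linked depletion iteration: a stubbed "transport" solve supplies a one-group flux, a matrix-exponential Bateman solve advances the nuclide inventory, and the loop repeats. The two-nuclide chain and all data are invented; the real MCNPX/CINDER90 coupling tallies continuous-energy reaction rates for thousands of nuclides.

      # Minimal sketch of a Monte Carlo-linked depletion loop (illustrative only).
      import numpy as np
      from scipy.linalg import expm

      def transport(n):
          # Stub for the Monte Carlo solve; returns a one-group flux (n/cm^2/s).
          # In MCNPX this would be a KCODE calculation with the current inventory n.
          return 3.0e13

      sigma_a = np.array([680.0e-24, 2.0e-24])  # one-group absorption (cm^2), invented
      branch = 0.8                              # fraction of nuclide-1 absorptions feeding nuclide 2

      n = np.array([1.0e21, 0.0])               # atom densities (atoms/cm^3)
      dt = 30 * 86400.0                         # 30-day burnup step (s)

      for step in range(3):
          phi = transport(n)                    # transport solve with current inventory
          # Burnup matrix A such that dn/dt = A n over the step
          A = np.array([[-sigma_a[0] * phi, 0.0],
                        [branch * sigma_a[0] * phi, -sigma_a[1] * phi]])
          n = expm(A * dt) @ n                  # Bateman solve via matrix exponential
          print(f"step {step}: n = {n}")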

  2. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
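
    The "energy-integrated power renormalization" step has a standard shape, sketched below under generic assumptions (not Shift-specific): Monte Carlo tallies come per source particle, so a scale factor is chosen to make the tallied fission power match the specified system power, and the fluxes passed to the depletion solver are rescaled accordingly. All numbers are invented.

      # Sketch of power renormalization of Monte Carlo tallies (assumed generic form).
      import numpy as np

      P_target = 1.0e6                    # specified system power (W)
      kappa = 200e6 * 1.602e-19           # recoverable energy per fission (J)

      # Hypothetical per-source-particle tallies from a transport solve:
      fission_rate_tally = np.array([4.0e4, 6.0e4])  # fissions per source particle, by region
      flux_tally = np.array([2.0e5, 3.0e5])          # flux tally per source particle

      P_per_source = kappa * fission_rate_tally.sum()  # W per unit source rate
      scale = P_target / P_per_source                  # source normalization (particles/s)

      flux = scale * flux_tally    # physical fluxes handed to the depletion solver
      print(flux)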

  3. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which in turn requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed, and uncertainties in the multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
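
    The random-sampling scheme at the heart of XSUSA can be illustrated in a few lines. The toy below (invented one-group data and a trivial stand-in for the lattice solve) samples cross sections from a covariance matrix, reruns the "calculation" per sample, and reports output statistics; the real method samples multi-group data for the full HELIOS2 depletion chain.

      # Toy illustration of sampling-based nuclear data uncertainty propagation.
      import numpy as np

      rng = np.random.default_rng(42)
      mean = np.array([0.04, 0.01])              # [nu*Sigma_f, Sigma_c] (1/cm), invented
      rel_cov = np.array([[0.0004, 0.0001],
                          [0.0001, 0.0009]])     # relative covariance, invented
      cov = rel_cov * np.outer(mean, mean)

      def kinf(xs):
          # Stand-in for the lattice calculation: one-group k-infinity
          nu_sf, sig_c = xs
          return nu_sf / (nu_sf / 2.43 + sig_c)

      samples = rng.multivariate_normal(mean, cov, size=500)
      k = np.array([kinf(s) for s in samples])
      print(f"k-inf = {k.mean():.5f} +/- {k.std(ddof=1):.5f}")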

  4. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tippayakul, C.; Ivanov, K.; Misu, S.

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k-infinity, fission rate distributions and isotopic contents. (authors)

  5. Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skutnik, S.; Havloej, F.; Lago, D.

    2013-07-01

    The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, carried out as part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper will present an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)

  6. Depletion Calculations Based on Perturbations. Application to the Study of a REP-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The drawback is the prohibitive calculation time caused by calling the Monte Carlo solver at each time step. In this paper we present a methodology that avoids these repetitive and time-consuming Monte Carlo simulations by replacing them with perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring a significant speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion calculations by nearly an order of magnitude.
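
    The essence of correlated sampling is reweighting histories drawn under a reference kernel by the likelihood ratio of the perturbed kernel, so no new histories are needed. The toy below demonstrates this for exponential free-flight sampling only (the TRIPOLI-4 implementation handles full transport); it recovers the perturbed mean free path from reference-simulation samples.

      # Toy correlated-sampling (likelihood-ratio) estimate of a perturbed quantity.
      import numpy as np

      rng = np.random.default_rng(0)
      sigma_ref, sigma_pert = 1.0, 1.1   # reference and perturbed total cross sections (1/cm)

      # Free-flight distances sampled under the reference kernel sigma*exp(-sigma*x)
      x = rng.exponential(1.0 / sigma_ref, size=100_000)

      # Likelihood-ratio weights converting reference histories to perturbed ones
      w = (sigma_pert / sigma_ref) * np.exp(-(sigma_pert - sigma_ref) * x)

      # Perturbed mean free path estimated from reference histories vs. exact value
      print(np.mean(w * x), 1.0 / sigma_pert)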

  7. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  8. Status Report on NEAMS PROTEUS/ORIGEN Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieselquist, William A

    2016-02-18

    The US Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program has contributed significantly to the development of the PROTEUS neutron transport code at Argonne National Laboratory and to the Oak Ridge Isotope Generation and Depletion Code (ORIGEN) depletion/decay code at Oak Ridge National Laboratory. PROTEUS's key capability is the efficient and scalable (up to hundreds of thousands of cores) neutron transport solver on general, unstructured, three-dimensional finite-element-type meshes. The scalability and mesh generality enable the transfer of neutron and power distributions to other codes in the NEAMS toolkit for advanced multiphysics analysis. Recently, ORIGEN has received considerable modernization to provide the high-performance depletion/decay capability within the NEAMS toolkit. This work presents a description of the initial integration of ORIGEN in PROTEUS, mainly performed during FY 2015, with minor updates in FY 2016.

  9. Impact of Reactor Operating Parameters on Cask Reactivity in BWR Burnup Credit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Betzler, Benjamin R; Ade, Brian J

    This paper discusses the effect of reactor operating parameters used in fuel depletion calculations on spent fuel cask reactivity, with relevance for boiling-water reactor (BWR) burnup credit (BUC) applications. Assessments that used generic BWR fuel assembly and spent fuel cask configurations are presented. The considered operating parameters, which were independently varied in the depletion simulations for the assembly, included fuel temperature, bypass water density, specific power, and operating history. Different operating history scenarios were considered for the assembly depletion to determine the effect of relative power distribution during the irradiation cycles, as well as the downtime between cycles. Depletion, decay, and criticality simulations were performed using computer codes and associated nuclear data within the SCALE code system. Results quantifying the dependence of cask reactivity on the assembly depletion parameters are presented herein.

  10. CESAR: A Code for Nuclear Fuel and Waste Characterisation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal, J.M.; Grouiller, J.P.; Launay, A.

    2006-07-01

    CESAR (Simplified Evolution Code Applied to Reprocessing) is a depletion code developed through a joint program between CEA and COGEMA. In the late 1980's, the first use of this code dealt with nuclear measurement at the Laboratories of the La Hague reprocessing plant. The use of CESAR was then extended to the characterization of all entrance materials and to the characterization, via tracers, of all produced waste. The code can distinguish more than 100 heavy nuclides, 200 fission products and 100 activation products, and it can characterize both the fuel and the structural material of the fuel. CESAR can also make depletion calculations from 3 months to 1 million years of cooling time. Between 2003 and 2005, the fifth version of the code was developed. The modifications related to the harmonization of the code's nuclear data with the JEF2.2 nuclear data file. This paper describes the code and explains the extensive use of this code at the La Hague reprocessing plant and also for prospective studies. The second part focuses on the modifications of the latest version, and describes the application field and the qualification of the code. Many companies and the IAEA use CESAR today. CESAR offers a Graphical User Interface, which is very user-friendly. (authors)

  11. Development of a new lattice physics code robin for PWR application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Chen, G.

    2013-07-01

    This paper presents a description of the methodologies and preliminary verification results of a new lattice physics code, ROBIN, being developed for PWR application at Shanghai NuStar Nuclear Power Technology Co., Ltd. The methods used in ROBIN to fulfill the various tasks of lattice physics analysis are an integration of historical methods and methods developed very recently. They include not only established methods such as equivalence theory for resonance treatment and the method of characteristics for the neutron transport calculation, as applied in many of today's production-level LWR lattice codes, but also very useful new methods such as the enhanced neutron current method for Dancoff correction in large and complicated geometry and the log-linear rate constant power depletion method for Gd-bearing fuel. A small sample of verification results is provided to illustrate the type of accuracy achievable using ROBIN. It is demonstrated that ROBIN is capable of satisfying most of the needs for PWR lattice analysis and has the potential to become a production-quality code in the future. (authors)

  12. 75 FR 38182 - Proposed Collection; Comment Request for Regulation Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-01

    ... Deplete the Ozone Layer and on Products Containing Such Chemicals (Sec. Sec. 52.4682-1(b), 52.4682-2(b....gov . SUPPLEMENTARY INFORMATION: Title: Excise Tax on Chemicals That Deplete the Ozone Layer and on... Revenue Code sections 4681 and 4682 relating to the tax on chemicals that deplete the ozone layer and on...

  13. Nuclear Fuel Depletion Analysis Using Matlab Software

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Nematollahi, M. R.

    Coupled first-order initial value problems (IVPs) are frequently encountered in many areas of engineering and science. In this article, we present a code comprising three computer programs, working jointly with MATLAB, to solve and plot the solutions of first-order coupled stiff or non-stiff IVPs. Some engineering and scientific problems related to IVPs are given, and fuel depletion (production of the 239Pu isotope) in a Pressurized Water Nuclear Reactor (PWR) is computed by the present code.
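
    A Python analogue of the computation the paper performs in MATLAB is shown below: the coupled first-order (and mildly stiff) IVP for 239Pu production is integrated with a stiff-capable solver. The one-group data are illustrative, not the paper's values; U-239 and Np-239 are lumped into one precursor.

      # Coupled-IVP sketch of Pu-239 buildup in a PWR (illustrative data).
      import numpy as np
      from scipy.integrate import solve_ivp

      phi = 3.0e13                          # one-group flux (n/cm^2/s)
      sig_c8 = 2.7e-24                      # U-238 capture cross section (cm^2)
      lam39 = np.log(2) / (2.36 * 86400)    # Np-239 decay constant (1/s)
      sig_a9 = 1011.0e-24                   # Pu-239 absorption cross section (cm^2)

      def rhs(t, n):
          n8, n39, n9 = n                   # U-238, Np-239 (lumped), Pu-239
          return [-sig_c8 * phi * n8,
                  sig_c8 * phi * n8 - lam39 * n39,
                  lam39 * n39 - sig_a9 * phi * n9]

      sol = solve_ivp(rhs, (0.0, 3.15e7), [2.2e22, 0.0, 0.0], method="LSODA", rtol=1e-8)
      print(f"Pu-239 after one year: {sol.y[2, -1]:.3e} atoms/cm^3")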

  14. CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning

    NASA Astrophysics Data System (ADS)

    Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime

    2018-05-01

    CESAR stands in French for "simplified depletion applied to reprocessing". The current version is number 5.3; development started 30 years ago from a long-lasting cooperation with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most standard PWR power plants to the most unusual gas-cooled, graphite-moderated legacy research facilities. Each type of fuel can also cover numerous ranges of compositions such as UOX, MOX, LEU or HEU. Such versatility comes from a broad catalog of cross section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also provide an evaluation of structural material activation. The cross section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross section library benefits from the most recent CEA recommendations on deterministic physics options. The resulting cross sections are organized as functions of burnup and initial fuel enrichment, which allows this costly process to be condensed into a series of Legendre polynomials. The final outcome is a fast, accurate and compact CESAR cross section library. Each library is fully validated, against a stochastic transport code (CEA TRIPOLI 4) if needed and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise implemented in cross section library generation. It is based on top-quality nuclear data (JEFF3.1.1 for ~400 isotopes) and includes up-to-date Bateman equation solving algorithms. Nevertheless, defining a CESAR computation case can be very straightforward. Most results are only 3 steps away from any beginner's ambition: initial composition, in-core depletion, and pool decay scenario. On top of a simple utilization architecture, CESAR includes a portable Graphical User Interface which can be broadly deployed in R&D or industrial facilities. Aging facilities currently face decommissioning and dismantling issues. This path to the end of the nuclear fuel cycle requires a careful assessment of source terms in the fuel, core structures and all parts of a facility that must be disposed of under "industrial nuclear" constraints. In that perspective, several CESAR cross section libraries were constructed for early CEA Research and Testing Reactors (RTRs). The aim of this paper is to describe how CESAR operates and how it can be used to help these facilities handle waste disposal, nuclear materials transport or basic safety cases. The test case is based on the PHEBUS Facility located at CEA Cadarache.
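
    The Legendre-polynomial condensation mentioned above amounts to fitting tabulated one-group cross sections versus burnup with a few coefficients per isotope and enrichment. A minimal sketch, with an invented cross-section curve, is given below; the actual CESAR library format is certainly richer.

      # Sketch of condensing sigma(burnup) into Legendre coefficients.
      import numpy as np
      from numpy.polynomial import legendre as L

      burnup = np.linspace(0.0, 60.0, 13)              # burnup grid (GWd/t)
      sigma = 50.0 + 8.0 * np.exp(-burnup / 20.0)      # invented one-group cross section (barns)

      x = 2.0 * burnup / burnup[-1] - 1.0              # map burnup to [-1, 1]
      coeffs = L.legfit(x, sigma, deg=4)               # compact library entry

      bu = 37.5                                        # reconstruct at an arbitrary burnup
      sig_bu = L.legval(2.0 * bu / burnup[-1] - 1.0, coeffs)
      print(f"sigma({bu} GWd/t) ~ {sig_bu:.3f} barns")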

  15. The Long Noncoding RNA Transcriptome of Dictyostelium discoideum Development.

    PubMed

    Rosengarten, Rafael D; Santhanam, Balaji; Kokosar, Janez; Shaulsky, Gad

    2017-02-09

    Dictyostelium discoideum live in the soil as single cells, engulfing bacteria and growing vegetatively. Upon starvation, tens of thousands of amoebae enter a developmental program that includes aggregation, multicellular differentiation, and sporulation. Major shifts across the protein-coding transcriptome accompany these developmental changes. However, no study has presented a global survey of long noncoding RNAs (ncRNAs) in D. discoideum. To characterize the antisense and long intergenic noncoding RNA (lncRNA) transcriptome, we analyzed previously published developmental time course samples using an RNA-sequencing (RNA-seq) library preparation method that selectively depletes ribosomal RNAs (rRNAs). We detected the accumulation of transcripts for 9833 protein-coding messenger RNAs (mRNAs), 621 lncRNAs, and 162 putative antisense RNAs (asRNAs). The noncoding RNAs were interspersed throughout the genome, and were distinct in expression level, length, and nucleotide composition. The noncoding transcriptome displayed a temporal profile similar to the coding transcriptome, with stages of gradual change interspersed with larger leaps. The transcription profiles of some noncoding RNAs were strongly correlated with known differentially expressed coding RNAs, hinting at a functional role for these molecules during development. Examining the mitochondrial transcriptome, we modeled two novel antisense transcripts. We applied yet another ribosomal depletion method to a subset of the samples to better retain transfer RNA (tRNA) transcripts. We observed polymorphisms in tRNA anticodons that suggested a post-transcriptional means by which D. discoideum compensates for codons missing in the genomic complement of tRNAs. We concluded that the prevalence and characteristics of long ncRNAs indicate that these molecules are relevant to the progression of molecular and cellular phenotypes during development. Copyright © 2017 Rosengarten et al.

  16. Turtle 24.0 diffusion depletion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference. (auth)
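
    Although TURTLE itself is a 1971 FORTRAN IV code, the two-group diffusion eigenvalue problem it solves is compact enough to sketch. Below is a one-dimensional toy analogue in Python (invented group constants, simple zero-flux boundaries, power iteration); TURTLE's 2D geometry, xenon/enthalpy/Doppler feedback, and search options are all omitted.

      # 1D, two-group diffusion k-eigenvalue toy (downscatter only).
      import numpy as np

      N, h = 60, 1.0                           # mesh cells, cell width (cm)
      D1, D2 = 1.4, 0.4                        # diffusion coefficients (cm)
      Sa1, Sa2, S12 = 0.010, 0.085, 0.017      # absorption and downscatter (1/cm), invented
      nuSf1, nuSf2 = 0.007, 0.135              # nu-Sigma_f (1/cm), invented

      def diffusion_matrix(D, Sr):
          # Finite-difference operator for -D d2/dx2 + Sr with zero-flux boundaries
          A = np.diag(np.full(N, 2.0 * D / h**2 + Sr))
          A += np.diag(np.full(N - 1, -D / h**2), 1)
          A += np.diag(np.full(N - 1, -D / h**2), -1)
          return A

      A1 = diffusion_matrix(D1, Sa1 + S12)     # fast group: absorption + removal to group 2
      A2 = diffusion_matrix(D2, Sa2)           # thermal group

      phi1, phi2, k = np.ones(N), np.ones(N), 1.0
      for _ in range(200):                     # power iteration on the fission source
          F = nuSf1 * phi1 + nuSf2 * phi2
          phi1 = np.linalg.solve(A1, F / k)
          phi2 = np.linalg.solve(A2, S12 * phi1)
          k_new = k * np.sum(nuSf1 * phi1 + nuSf2 * phi2) / np.sum(F)
          if abs(k_new - k) < 1e-8:
              break
          k = k_new
      print(f"k-effective ~ {k:.5f}")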

  17. 26 CFR 1.613-7 - Application of percentage depletion rates provided in section 613(b) to certain taxable years...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... TAXES (CONTINUED) Natural Resources § 1.613-7 Application of percentage depletion rates provided in... Code). In the case of mines, wells, or other natural deposits listed in section 613(b), the election...

  18. Probabilistic approach for decay heat uncertainty estimation using URANIE platform and MENDEL depletion code

    NASA Astrophysics Data System (ADS)

    Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.

    2014-06-01

    The knowledge of the decay heat quantity and the associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, a CEA uncertainty analysis platform. As preliminary applications, single thermal fission of uranium-235 and of plutonium-239 and a PWR UOx spent fuel cell are investigated.
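
    The probabilistic approach reduces, in outline, to sampling the uncertain decay data and recomputing the decay heat per sample. The toy below does this for a two-nuclide inventory with invented data and uncertainty only on the energy per decay; URANIE/MENDEL handle full decay chains and correlated decay-data sampling.

      # Toy sampling-based decay heat uncertainty estimate.
      import numpy as np

      rng = np.random.default_rng(1)
      N0 = np.array([1.0e20, 5.0e19])     # inventories at shutdown (atoms), invented
      lam = np.array([1.0e-6, 3.0e-7])    # decay constants (1/s), invented
      Q = np.array([1.2e-13, 0.8e-13])    # recoverable energy per decay (J), invented
      rel_u = np.array([0.02, 0.05])      # 1-sigma relative uncertainty on Q

      t = 3600.0                          # cooling time (s)
      heats = []
      for _ in range(2000):
          Qs = Q * (1.0 + rel_u * rng.standard_normal(2))   # sampled decay data
          heats.append(np.sum(lam * N0 * np.exp(-lam * t) * Qs))
      heats = np.array(heats)
      print(f"decay heat: {heats.mean():.4e} W +/- {heats.std(ddof=1):.4e} W")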

  19. Depletion optimization of lumped burnable poisons in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodah, Z.H.

    1982-01-01

    Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes, LEOPARD, XSDRN, and EXTERMINATOR-2, were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied: a solid cylindrical poison rod and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons which were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Also, expressions for overall thermal self-shielding factors for one or more than one size group of poison spheroids inside solid and annular poison rods were derived and studied. Poison spheroids deplete at a slower rate than the poison mixture because each spheroid exhibits some self-shielding effects of its own. The larger the spheroid, the higher the self-shielding effects due to the increase in poison concentration.
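
    The size effect on spheroid self-shielding can be seen with a back-of-envelope lump model. The sketch below uses the Wigner rational approximation with the mean chord length of a sphere (4V/S = 4R/3); this is not the report's derivation, just an illustration of why larger spheroids deplete more slowly.

      # Wigner rational approximation to lump self-shielding for poison spheres.
      Sigma_a = 5.0                             # macroscopic absorption (1/cm), invented
      for R in (0.01, 0.05, 0.1, 0.2):          # spheroid radius (cm)
          chord = 4.0 * R / 3.0                 # mean chord length 4V/S of a sphere
          f = 1.0 / (1.0 + Sigma_a * chord)     # escape-probability-style shielding factor
          print(f"R = {R:5.2f} cm -> self-shielding factor ~ {f:.3f}")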

  20. Understanding the Haling power depletion (HPD) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, S.; Blyth, T.; Ivanov, K.

    2012-07-01

    The Pennsylvania State Univ. (PSU) is using the university version of the Studsvik Scandpower Code System (CMS) for research and education purposes. Preparations have been made to incorporate the CMS into the PSU Nuclear Engineering graduate course 'Nuclear Fuel Management'. The information presented in this paper was developed during the preparation of the material for the course. The Haling Power Depletion (HPD) was presented in the course for the first time. The HPD method has been criticized as not valid by many in the field even though it has been successfully applied at PSU for the past 20 years. It was noticed that the radial power distribution (RPD) for low-leakage cores during depletion remained similar to that of the HPD during most of the cycle. Thus, the HPD may be used conveniently mainly for low-leakage cores. Studies were then made to better understand the HPD, and the results are presented in this paper. Many different core configurations can be computed quickly with the HPD, without using Burnable Poisons (BP), to produce several excellent low-leakage core configurations that are viable for power production. Once the HPD core configuration is chosen for further analysis, techniques are available for establishing the BP design to prevent violating any of the safety constraints in such HPD-calculated cores. In summary, this paper has shown that the HPD method can be used for guiding the design of the low-leakage core. (authors)
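
    The defining fixed point of the HPD (a power shape that, held constant over the cycle, reproduces itself at end of cycle) can be caricatured on a two-node core. Everything below is invented for illustration; production HPD calculations live inside full nodal simulators.

      # Two-node caricature of the Haling power depletion fixed-point iteration.
      import numpy as np

      k0 = np.array([1.25, 1.20])      # BOC nodal k-infinity, invented
      alpha = 0.004                    # reactivity loss per unit nodal burnup, invented
      E_cycle = 20.0                   # total cycle energy (arbitrary units)

      shape = np.array([0.5, 0.5])     # initial guess for the constant power fractions
      for it in range(100):
          burnup = shape * E_cycle               # nodal burnup if shape held all cycle
          k_eoc = k0 - alpha * burnup            # end-of-cycle nodal k-infinity
          new_shape = k_eoc / k_eoc.sum()        # toy map from nodal k to power share
          if np.max(np.abs(new_shape - shape)) < 1e-10:
              break
          shape = new_shape
      print(f"Haling shape after {it} iterations: {shape}")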

  1. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  2. Gadolinia depletion analysis by CASMO-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Y.; Saji, E.; Toba, A.

    1993-01-01

    CASMO-4 is the most recent version of the lattice physics code CASMO introduced by Studsvik. The principal aspects of the CASMO-4 model that differ from the models in previous CASMO versions are as follows: (1) a heterogeneous model for two-dimensional transport theory calculations; and (2) a microregion depletion model for burnable absorbers, such as gadolinia. Of these aspects, the first has previously been benchmarked against measured data from critical experiments and Monte Carlo calculations, verifying its high degree of accuracy. To proceed with CASMO-4 benchmarking, it is desirable to benchmark the microregion depletion model, which enables CASMO-4 to calculate gadolinium depletion directly without the need for precalculated MICBURN cross-section data. This paper presents the benchmarking results for the microregion depletion model in CASMO-4 using the measured data of depleted gadolinium rods.

  3. Measured and calculated fast neutron spectra in a depleted uranium and lithium hydride shielded reactor

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.; Mueller, R. A.

    1973-01-01

    Measurements of MeV neutrons were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered: these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; the observed discrepancies in absolute flux may be due to cross section errors.

  4. Work plan for improving the DARWIN2.3 depleted material balance calculation of nuclides of interest for the fuel cycle

    NASA Astrophysics Data System (ADS)

    Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain

    2017-09-01

    DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both the deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.

  5. Fully depleted back illuminated CCD

    DOEpatents

    Holland, Stephen Edward

    2001-01-01

    A backside illuminated charge coupled device (CCD) is formed of a relatively thick high resistivity photon sensitive silicon substrate, with frontside electronic circuitry, and an optically transparent backside ohmic contact for applying a backside voltage which is at least sufficient to substantially fully deplete the substrate. A greater bias voltage which overdepletes the substrate may also be applied. One way of applying the bias voltage to the substrate is by physically connecting the voltage source to the ohmic contact. An alternate way of applying the bias voltage to the substrate is to physically connect the voltage source to the frontside of the substrate, at a point outside the depletion region. Thus both frontside and backside contacts can be used for backside biasing to fully deplete the substrate. Also, high resistivity gaps around the CCD channels and electrically floating channel stop regions can be provided in the CCD array around the CCD channels. The CCD array forms an imaging sensor useful in astronomy.
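
    The bias "at least sufficient to substantially fully deplete the substrate" can be estimated with the standard one-sided depletion relation V = q*N*d^2/(2*eps). The numbers below are merely representative of thick, high-resistivity silicon, not taken from the patent.

      # Rough full-depletion voltage estimate for a thick CCD substrate.
      q = 1.602e-19                 # elementary charge (C)
      eps_si = 11.7 * 8.854e-14     # permittivity of silicon (F/cm)
      N = 5.0e11                    # dopant density, high-resistivity Si (1/cm^3)
      d = 300e-4                    # substrate thickness (cm), i.e. 300 um

      V_full = q * N * d**2 / (2.0 * eps_si)
      print(f"full-depletion voltage ~ {V_full:.1f} V")   # ~35 V for these numbers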

  6. Transfers of proven oil and gas properties from individuals to controlled corporations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cash, L.S.; Dickens, T.L.

    1985-12-01

    Code Section 613A(c)(10) sets forth an exception, for transfers by individuals to their controlled corporations, to the general rule of section 613A(c)(9) denying percentage depletion to transferees of proven oil and gas properties. The proposed regulations attempt to provide guidelines to help taxpayers comply, and, although these regulations are not all-inclusive, they should be helpful to taxpayers who must rely on these provisions to prevent the loss of deductions for percentage depletion. Because of some apparent ambiguities in this area of the Internal Revenue Code and Treasury's inability to flesh out these ambiguities in its proposed regulations, affected taxpayers should be cautious. 2 tables.

  7. Modeling charge collection efficiency degradation in partially depleted GaAs photodiodes using the 1- and 2-carrier Hecht equations

    DOE PAGES

    Auden, E. C.; Vizkelethy, G.; Serkland, D. K.; ...

    2017-03-24

    Here, the Hecht equation can be used to model the nonlinear degradation of charge collection efficiency (CCE) in response to radiation-induced displacement damage in both fully and partially depleted GaAs photodiodes. CCE degradation is measured for laser-generated photocurrent as a function of fluence and bias in Al0.3Ga0.7As/GaAs/Al0.25Ga0.75As p-i-n photodiodes which have been irradiated with 12 MeV C and 7.5 MeV Si ions. CCE is observed to degrade more rapidly with fluence in partially depleted photodiodes than in fully depleted photodiodes. When the intrinsic GaAs layer is fully depleted, the 2-carrier Hecht equation describes CCE degradation as photogenerated electrons and holes recombine at defect sites created by radiation damage in the depletion region. If the GaAs layer is partially depleted, CCE degradation is more appropriately modeled as the sum of the 2-carrier Hecht equation applied to electrons and holes generated within the depletion region and the 1-carrier Hecht equation applied to minority carriers that diffuse from the field-free (non-depleted) region into the depletion region. Enhanced CCE degradation is attributed to holes that recombine within the field-free region of the partially depleted intrinsic GaAs layer before they can diffuse into the depletion region.
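
    For reference, the textbook forms of the two Hecht relations named above are easy to state in code. The diffusion contribution from the field-free region that the paper adds for partially depleted diodes is omitted here; the drift lengths (mu*tau*E products) and depletion width are invented.

      # 1- and 2-carrier Hecht equations for charge collection efficiency (CCE).
      import numpy as np

      def hecht_1carrier(lam, d):
          # CCE for a single carrier type drifting across a region of width d
          return (lam / d) * (1.0 - np.exp(-d / lam))

      def hecht_2carrier(lam_e, lam_h, d, x):
          # CCE for an electron-hole pair created at depth x in a region of width d
          return (lam_e / d) * (1.0 - np.exp(-(d - x) / lam_e)) \
               + (lam_h / d) * (1.0 - np.exp(-x / lam_h))

      d = 2.0e-4                         # depletion width (cm), invented
      lam_e, lam_h = 5.0e-4, 1.0e-4      # drift lengths (cm); these shrink with fluence
      print(hecht_2carrier(lam_e, lam_h, d, x=d / 2.0))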

  8. Modeling charge collection efficiency degradation in partially depleted GaAs photodiodes using the 1- and 2-carrier Hecht equations

    NASA Astrophysics Data System (ADS)

    Auden, E. C.; Vizkelethy, G.; Serkland, D. K.; Bossert, D. J.; Doyle, B. L.

    2017-05-01

    The Hecht equation can be used to model the nonlinear degradation of charge collection efficiency (CCE) in response to radiation-induced displacement damage in both fully and partially depleted GaAs photodiodes. CCE degradation is measured for laser-generated photocurrent as a function of fluence and bias in Al0.3Ga0.7As/GaAs/Al0.25Ga0.75As p-i-n photodiodes which have been irradiated with 12 MeV C and 7.5 MeV Si ions. CCE is observed to degrade more rapidly with fluence in partially depleted photodiodes than in fully depleted photodiodes. When the intrinsic GaAs layer is fully depleted, the 2-carrier Hecht equation describes CCE degradation as photogenerated electrons and holes recombine at defect sites created by radiation damage in the depletion region. If the GaAs layer is partially depleted, CCE degradation is more appropriately modeled as the sum of the 2-carrier Hecht equation applied to electrons and holes generated within the depletion region and the 1-carrier Hecht equation applied to minority carriers that diffuse from the field-free (non-depleted) region into the depletion region. Enhanced CCE degradation is attributed to holes that recombine within the field-free region of the partially depleted intrinsic GaAs layer before they can diffuse into the depletion region.

  9. Modeling charge collection efficiency degradation in partially depleted GaAs photodiodes using the 1- and 2-carrier Hecht equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auden, E. C.; Vizkelethy, G.; Serkland, D. K.

    Here, the Hecht equation can be used to model the nonlinear degradation of charge collection efficiency (CCE) in response to radiation-induced displacement damage in both fully and partially depleted GaAs photodiodes. CCE degradation is measured for laser-generated photocurrent as a function of fluence and bias in Al0.3Ga0.7As/GaAs/Al0.25Ga0.75As p-i-n photodiodes which have been irradiated with 12 MeV C and 7.5 MeV Si ions. CCE is observed to degrade more rapidly with fluence in partially depleted photodiodes than in fully depleted photodiodes. When the intrinsic GaAs layer is fully depleted, the 2-carrier Hecht equation describes CCE degradation as photogenerated electrons and holes recombine at defect sites created by radiation damage in the depletion region. If the GaAs layer is partially depleted, CCE degradation is more appropriately modeled as the sum of the 2-carrier Hecht equation applied to electrons and holes generated within the depletion region and the 1-carrier Hecht equation applied to minority carriers that diffuse from the field-free (non-depleted) region into the depletion region. Enhanced CCE degradation is attributed to holes that recombine within the field-free region of the partially depleted intrinsic GaAs layer before they can diffuse into the depletion region.

  10. New Approach For Prediction Groundwater Depletion

    NASA Astrophysics Data System (ADS)

    Moustafa, Mahmoud

    2017-01-01

    Current approaches to quantifying groundwater depletion involve water balance and satellite gravity. However, the water balance technique includes uncertain estimation of parameters such as evapotranspiration and runoff, while the satellite method consumes time and effort. The work reported in this paper proposes using failure theory in a novel way to predict groundwater saturated thickness depletion. An important issue in the proposed failure theory is to determine the failure point (depletion case). The proposed technique uses depth of water, as the net result of recharge/discharge processes in the aquifer, to calculate the remaining saturated thickness resulting from the applied pumping rates in an area and thereby evaluate the groundwater depletion. The Weibull function and Bayes analysis were used to model and analyze data collected from 1962 to 2009. The proposed methodology was tested in a nonrenewable aquifer, with no recharge. Consequently, the continuous decline in water depth has been the main criterion used to estimate the depletion. The value of the proposed approach is to predict the probable effect of the currently applied pumping rates on the saturated thickness, based on the remaining saturated thickness data. The limitation of the suggested approach is that it assumes the applied management practices are constant during the prediction period. The study predicted that after 300 years there would be an 80% probability that the saturated aquifer would be depleted. Lifetime or failure theory can give a simple alternative way to predict the remaining saturated thickness depletion without time-consuming processes or sophisticated software.
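
    Read as a reliability problem, the prediction step is just the evaluation of a fitted Weibull lifetime distribution. The sketch below uses invented shape/scale parameters standing in for values estimated from the 1962-2009 record, and prints the probability that the aquifer is depleted by a given year.

      # Weibull "lifetime" view of aquifer depletion probability.
      import numpy as np

      beta, eta = 2.0, 240.0            # Weibull shape and scale (years), invented

      def p_depleted(t):
          # Weibull CDF: probability of failure (depletion) by time t
          return 1.0 - np.exp(-(t / eta) ** beta)

      for t in (100, 200, 300):
          print(f"P(depleted by {t} yr) = {p_depleted(t):.2f}")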

  11. A common class of transcripts with 5'-intron depletion, distinct early coding sequence features, and N1-methyladenosine modification.

    PubMed

    Cenik, Can; Chua, Hon Nian; Singh, Guramrit; Akef, Abdalla; Snyder, Michael P; Palazzo, Alexander F; Moore, Melissa J; Roth, Frederick P

    2017-03-01

    Introns are found in 5' untranslated regions (5'UTRs) for 35% of all human transcripts. These 5'UTR introns are not randomly distributed: Genes that encode secreted, membrane-bound and mitochondrial proteins are less likely to have them. Curiously, transcripts lacking 5'UTR introns tend to harbor specific RNA sequence elements in their early coding regions. To model and understand the connection between coding-region sequence and 5'UTR intron status, we developed a classifier that can predict 5'UTR intron status with >80% accuracy using only sequence features in the early coding region. Thus, the classifier identifies transcripts with 5' proximal-intron-minus-like coding regions ("5IM" transcripts). Unexpectedly, we found that the early coding sequence features defining 5IM transcripts are widespread, appearing in 21% of all human RefSeq transcripts. The 5IM class of transcripts is enriched for non-AUG start codons, more extensive secondary structure both preceding the start codon and near the 5' cap, greater dependence on eIF4E for translation, and association with ER-proximal ribosomes. 5IM transcripts are bound by the exon junction complex (EJC) at noncanonical 5' proximal positions. Finally, N1-methyladenosines are specifically enriched in the early coding regions of 5IM transcripts. Taken together, our analyses point to the existence of a distinct 5IM class comprising ~20% of human transcripts. This class is defined by depletion of 5' proximal introns, presence of specific RNA sequence features associated with low translation efficiency, N1-methyladenosines in the early coding region, and enrichment for noncanonical binding by the EJC. © 2017 Cenik et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gleicher, Frederick; Ortensi, Javier; DeHart, Mark

    Accurate calculation of desired quantities to predict fuel behavior requires the solution of interlinked equations representing different physics. Traditional fuels performance codes often rely on internal empirical models for the pin power density and a simplified boundary condition on the cladding edge. These simplifications are made because of the difficulty of coupling applications or codes on differing domains and mapping the required data. To demonstrate an approach closer to first principles, the neutronics application Rattlesnake and the thermal hydraulics application RELAP-7 were coupled to the fuels performance application BISON under the master application MAMMOTH. A single fuel pin was modeled based on the dimensions of a Westinghouse 17x17 fuel rod. The simulation consisted of a depletion period of 1343 days, roughly equal to three full operating cycles, followed by a station blackout (SBO) event. The fuel rod was depleted for 1343 days at a near-constant total power loading of 65.81 kW. After 1343 days the fission power was reduced to zero (simulating a reactor shutdown). Decay heat calculations provided the time-varying energy source after this time. For this problem, Rattlesnake, BISON, and RELAP-7 are coupled under MAMMOTH in a split operator approach. Each system solves its physics on a separate mesh and, for RELAP-7 and BISON, on only a subset of the full problem domain. Rattlesnake solves the neutronics over the whole domain, which includes the fuel, cladding, gaps, water, and top and bottom rod holders. BISON is applied to the fuel and cladding with a 2D axisymmetric domain, and RELAP-7 is applied to the flow of the circular outer water channel with a set of 1D flow equations. The mesh on the Rattlesnake side can be either 3D (for low-order transport) or 2D (for diffusion). BISON has a matching ring-structure mesh for the fuel, so both the power density and local burnup are copied accurately from Rattlesnake. At each depletion time step, Rattlesnake calculates a power density, fission density rate, burnup distribution and fast flux based on the current water density and fuel temperature. These are then mapped to the BISON mesh for a fuels performance solve. BISON calculates the fuel temperature and cladding surface temperature based upon the current power density and bulk fluid temperature. RELAP-7 then calculates the fluid temperature, water density fraction and water phase velocity based upon the cladding surface temperature. The fuel temperature and the fluid density are then passed back to Rattlesnake for another neutronics calculation. Six Picard or fixed-point style iterations are performed in this manner to obtain consistent, tightly coupled and stable results. For this paper a set of results from the detailed calculation is provided for both the depletion period and the SBO event. We demonstrate that a detailed calculation closer to first principles can be done under MAMMOTH between different applications on differing domains.
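
    The six-pass Picard coupling described above has a very simple skeleton once the mesh transfers are stripped away. The two "physics" below are stand-in algebraic models (invented coefficients) for the neutronics and fuel-thermal solves; the point is the fixed-point structure, not the physics.

      # Skeleton of Picard (fixed-point) multiphysics coupling.
      def neutronics(T_fuel):
          # Stand-in: power density falls with fuel temperature (Doppler-like feedback)
          return 250.0 * (1.0 - 1.0e-4 * (T_fuel - 900.0))

      def fuel_thermal(q):
          # Stand-in: fuel temperature rises with power density
          return 600.0 + 1.5 * q

      q, T = 250.0, 900.0
      for it in range(6):              # six Picard iterations, as in the paper
          q = neutronics(T)
          T = fuel_thermal(q)
          print(f"iter {it}: q = {q:.3f} W/cm^3, T_fuel = {T:.2f} K")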

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  14. The influence of ego depletion on sprint start performance in athletes without track and field experience.

    PubMed

    Englert, Chris; Persaud, Brittany N; Oudejans, Raôul R D; Bertrams, Alex

    2015-01-01

    We tested the assumption that ego depletion would affect the sprint start in a sample of N = 38 athletes without track and field experience in an experiment by applying a mixed between- (depletion vs. non-depletion) within- (T1: before manipulation of ego depletion vs. T2: after manipulation of ego depletion) subjects design. We assumed that ego depletion would increase the possibility for a false start, as regulating the impulse to initiate the sprinting movement too soon before the starting signal requires self-control. In line with our assumption, we found a significant interaction as there was only a significant increase in the number of false starts from T1 to T2 for the depletion group while this was not the case for the non-depletion group. We conclude that ego depletion has a detrimental influence on the sprint start in athletes without track and field experience.

  15. The influence of ego depletion on sprint start performance in athletes without track and field experience

    PubMed Central

    Englert, Chris; Persaud, Brittany N.; Oudejans, Raôul R. D.; Bertrams, Alex

    2015-01-01

    We tested the assumption that ego depletion would affect the sprint start in a sample of N = 38 athletes without track and field experience in an experiment by applying a mixed between- (depletion vs. non-depletion) within- (T1: before manipulation of ego depletion vs. T2: after manipulation of ego depletion) subjects design. We assumed that ego depletion would increase the possibility for a false start, as regulating the impulse to initiate the sprinting movement too soon before the starting signal requires self-control. In line with our assumption, we found a significant interaction as there was only a significant increase in the number of false starts from T1 to T2 for the depletion group while this was not the case for the non-depletion group. We conclude that ego depletion has a detrimental influence on the sprint start in athletes without track and field experience. PMID:26347678

  16. 26 CFR 7.57(d)-1 - Election with respect to straight line recovery of intangibles.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Tax Reform Act of 1976. Under this election taxpayers may use cost depletion to compute straight line... wells to which the election applies, cost depletion to compute straight line recovery of intangibles for... whether or not the taxpayer uses cost depletion in computing taxable income. (5) The election is made by a...

  17. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
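
    The parser's job, a block-structured common input translated into XML for the downstream codes, can be illustrated generically. The input syntax and tag names below are invented, not VERA's actual format.

      # Toy block/key-value input to XML conversion.
      import xml.etree.ElementTree as ET

      text = """
      [CORE]
      power 3411.0
      ncycles 3
      [ASSEMBLY]
      pitch 21.5
      """

      root = ET.Element("case")
      block = None
      for line in text.strip().splitlines():
          line = line.strip()
          if line.startswith("["):                 # a new parameter block
              block = ET.SubElement(root, line.strip("[]").lower())
          elif line and block is not None:
              key, value = line.split(maxsplit=1)  # key-value pair within the block
              ET.SubElement(block, key).text = value

      print(ET.tostring(root, encoding="unicode"))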

  18. Ionospheric modification - An initial report on artificially created equatorial Spread F

    NASA Technical Reports Server (NTRS)

    Ossakow, S. L.; Zalesak, S. T.; Mcdonald, B. E.

    1978-01-01

    A numerical simulation code for investigating equatorial Spread F in the collisional Rayleigh-Taylor regime is utilized to follow the evolution of artificial plasma density depletions injected into the bottomside nighttime equatorial F region. The 70 km diameter hole rapidly rises and steepens, forming plasma density enhancements at altitudes below the rising hole. The distribution of enhancements and depletions is similar to natural equatorial Spread F phenomena, except it occurs on a much faster time scale. These predictions warrant carrying out artificial injection experiments in the nighttime equatorial F region.

  19. The lncRNA CASC9 and RNA binding protein HNRNPL form a complex and co-regulate genes linked to AKT signaling.

    PubMed

    Klingenberg, Marcel; Groß, Matthias; Goyal, Ashish; Polycarpou-Schwarz, Maria; Miersch, Thilo; Ernst, Anne-Sophie; Leupold, Jörg; Patil, Nitin; Warnken, Uwe; Allgayer, Heike; Longerich, Thomas; Schirmacher, Peter; Boutros, Michael; Diederichs, Sven

    2018-05-23

    The identification of viability-associated long non-coding RNAs (lncRNA) might be a promising rationale for new therapeutic approaches in liver cancer. Here, we applied the first RNAi screening approach in hepatocellular carcinoma (HCC) cell lines to find viability-associated lncRNAs. Among the multiple identified lncRNAs with a significant impact on HCC cell viability, we selected CASC9 (Cancer Susceptibility 9) due to the strength of its phenotype, expression, and upregulation in HCC versus normal liver. CASC9 regulated viability across multiple HCC cell lines as shown by CRISPR interference, single siRNA- and siPOOL-mediated depletion of CASC9. Further, CASC9 depletion caused an increase in apoptosis and decrease of proliferation. We identified the RNA binding protein heterogeneous nuclear ribonucleoprotein L (HNRNPL) as a CASC9 interacting protein by RNA affinity purification (RAP) and validated it by native RNA immunoprecipitation (RIP). Knockdown of HNRNPL mimicked the loss-of-viability phenotype observed upon CASC9 depletion. Analysis of the proteome (SILAC) of CASC9- and HNRNPL-depleted cells revealed a set of co-regulated genes which implied a role of the CASC9:HNRNPL complex in AKT-signaling and DNA damage sensing. CASC9 expression levels were elevated in patient-derived tumor samples compared to normal control tissue and had a significant association with overall survival of HCC patients. In a xenograft chicken chorioallantoic membrane model, we measured a decreased tumor size after knockdown of CASC9. Taken together, we provide a comprehensive list of viability-associated lncRNAs in HCC. We identified the CASC9:HNRNPL complex as a clinically relevant viability-associated lncRNA/protein complex which affects AKT-signaling and DNA damage sensing in HCC. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.

  20. DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OGDEN DM; KIRCH NW

    2007-10-31

    This document develops a supernatant hydroxide ion depletion model for carbon dioxide absorption, based on mechanistic principles. The report benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data, and compares the newly generated mechanistic model with previously applied empirical hydroxide depletion equations.
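
    A minimal mechanistic sketch of the kind of model meant here, with an assumed gas-side-limited absorption rate and invented constants: CO2 taken up at the waste surface consumes hydroxide as CO2 + 2 OH- -> CO3^2- + H2O, giving a near-linear decline in [OH-].

      # Zero-order hydroxide depletion from surface CO2 absorption (invented data).
      k_g = 2.0e-7       # effective CO2 absorption rate (mol/m^2/s), invented
      A = 400.0          # supernatant surface area (m^2), invented
      V = 4000.0         # supernatant volume (m^3), invented
      OH0 = 2.0          # initial hydroxide concentration (mol/L)

      for t_years in (0.0, 5.0, 10.0, 20.0):
          t = t_years * 3.156e7                          # seconds
          OH = OH0 - 2.0 * k_g * A * t / (V * 1000.0)    # 2 mol OH- per mol CO2
          print(f"t = {t_years:4.1f} yr: [OH-] = {OH:.3f} M")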

  1. Cross-site comparison of ribosomal depletion kits for Illumina RNAseq library construction.

    PubMed

    Herbert, Zachary T; Kershner, Jamie P; Butty, Vincent L; Thimmapuram, Jyothi; Choudhari, Sulbha; Alekseyev, Yuriy O; Fan, Jun; Podnar, Jessica W; Wilcox, Edward; Gipson, Jenny; Gillaspy, Allison; Jepsen, Kristen; BonDurant, Sandra Splinter; Morris, Krystalynne; Berkeley, Maura; LeClerc, Ashley; Simpson, Stephen D; Sommerville, Gary; Grimmett, Leslie; Adams, Marie; Levine, Stuart S

    2018-03-15

    Ribosomal RNA (rRNA) comprises at least 90% of total RNA extracted from mammalian tissue or cell line samples. Informative transcriptional profiling using massively parallel sequencing technologies requires either enrichment of mature poly-adenylated transcripts or targeted depletion of the rRNA fraction. The latter method is of particular interest because it is compatible with degraded samples such as those extracted from FFPE tissue and also captures transcripts that are not poly-adenylated, such as some non-coding RNAs. Here we provide a cross-site study that evaluates the performance of ribosomal RNA removal kits from Illumina, Takara/Clontech, Kapa Biosystems, Lexogen, New England Biolabs and Qiagen on intact and degraded RNA samples. We find that all of the kits are capable of performing significant ribosomal depletion, though there are differences in their ease of use. All kits were able to remove ribosomal RNA to below 20% with intact RNA and identify ~14,000 protein-coding genes from the Universal Human Reference RNA sample at >1 FPKM. Analysis of differentially detected genes between kits suggests that transcript length may be a key factor in library production efficiency. These results provide a roadmap for labs on the strengths of each of these methods and how best to utilize them.
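
    A detection count like the ~14,000 genes at >1 FPKM reported above reduces to a simple threshold query on the expression matrix. The sketch below assumes a hypothetical CSV of FPKM values with one column per kit; the file name and column layout are illustrative, not the study's actual pipeline:

        import pandas as pd

        # Hypothetical table: rows = protein-coding genes, columns = rRNA-depletion kits.
        fpkm = pd.read_csv("uhr_fpkm_by_kit.csv", index_col="gene_id")

        detected = (fpkm > 1.0).sum(axis=0)   # genes detected at >1 FPKM, per kit
        print(detected)                       # expect values near 14,000 for intact UHR RNA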

  2. A combined PHREEQC-2/parallel fracture model for the simulation of laminar/non-laminar flow and contaminant transport with reactions

    NASA Astrophysics Data System (ADS)

    Masciopinto, Costantino; Volpe, Angela; Palmiotta, Domenico; Cherubini, Claudia

    2010-09-01

    A combination of a parallel fracture model with the PHREEQC-2 geochemical model was developed to simulate sequential flow and chemical transport with reactions in fractured media where both laminar and turbulent flows occur. The integration of non-laminar flow resistances in one model produced relevant effects on water flow velocities, thus improving model predictions of contaminant transport. The proposed conceptual model consists of 3D rock blocks separated by horizontal bedding-plane fractures with variable apertures. Particle tracking solved the transport equations for conservative compounds and provided input for PHREEQC-2. For each cluster of contaminant pathways, PHREEQC-2 determined the concentration for mass transfer, sorption/desorption, ion exchange, mineral dissolution/precipitation and biodegradation, under kinetically controlled reactive processes of equilibrated chemical species. Field tests were performed for code verification. As an example, the combined model was applied to a contaminated fractured aquifer of southern Italy to simulate phenol transport. The code correctly fitted the available field data and also predicted a possible rapid depletion of phenols as a result of an increased biodegradation rate induced by a simulated artificial injection of nitrates upgradient of the sources.
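
    The sequential flow/reaction coupling described above is an operator-splitting scheme: advance conservative transport, then react each batch of particles. Below is a minimal self-contained 1-D stand-in; the velocity, rate constant, and the first-order decay used in place of the PHREEQC-2 step are all invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.zeros(1000)                  # particle positions along one fracture [m]
        c = np.ones(1000)                   # phenol concentration carried by each particle

        v, dt, k_bio = 1e-4, 3600.0, 1e-6   # velocity [m/s], step [s], biodegradation rate [1/s]
        for _ in range(240):
            x += v * dt + rng.normal(0.0, 1e-2, x.size)   # transport step: advection + dispersion
            c *= np.exp(-k_bio * dt)                      # reaction step (stand-in for PHREEQC-2)

        print(x.mean(), c.mean())           # plume centroid and depleted mean concentration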

  3. Environmental performance of green building code and certification systems.

    PubMed

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database from NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models, measured in life-cycle impact reduction, were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).
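
    The sensitivity coefficients quoted above follow the usual normalized definition (standard LCA practice; the interpretation below restates the abstract, it adds no new results):

        S = \frac{\Delta R / R}{\Delta P / P}

    so an average S of 0.26-0.29 means a 10% change in a behavioral parameter such as employee transportation shifts the life-cycle result by roughly 2.6-2.9%.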

  4. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ~50 Myr to ~4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  5. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ~50 Myr to ~4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  6. Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.

    2008-05-01

    The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter, at field sites contaminated with dense non-aqueous phase liquids (DNAPLs) in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation decreased proportionally with the mean (i.e., the coefficient of variation of the flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve a target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
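
    The approximately linear mass-flux relation reported above is often written as a power law in the DNAPL source-depletion literature (a commonly used Lagrangian source-strength model, not notation taken from this abstract):

        \frac{J(t)}{J_0} = \left( \frac{M(t)}{M_0} \right)^{\Gamma}

    where J is the source-plane contaminant flux, M the remaining source mass, and \Gamma an exponent reflecting the DNAPL architecture and its correlation with the flow field; \Gamma \approx 1 reproduces the linear depletion behavior observed in the simulations and field data.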

  7. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

    One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (keff) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.

  8. Methods used to calculate doses resulting from inhalation of Capstone depleted uranium aerosols.

    PubMed

    Miller, Guthrie; Cheng, Yung Sung; Traub, Richard J; Little, Tom T; Guilmette, Raymond A

    2009-03-01

    The methods used to calculate radiological and toxicological doses to hypothetical persons inside either a U.S. Army Abrams tank or Bradley Fighting Vehicle that has been perforated by depleted uranium munitions are described. Data from time- and particle-size-resolved measurements of depleted uranium aerosol as well as particle-size-resolved measurements of aerosol solubility in lung fluids for aerosol produced in the breathing zones of the hypothetical occupants were used. The aerosol was approximated as a mixture of nine monodisperse (single particle size) components corresponding to particle size increments measured by the eight stages plus the backup filter of the cascade impactors used. A Markov Chain Monte Carlo Bayesian analysis technique was employed, which straightforwardly calculates the uncertainties in doses. Extensive quality control checking of the various computer codes used is described.
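
    A Markov Chain Monte Carlo treatment of this kind propagates measurement uncertainty directly into a dose posterior. The toy Metropolis sampler below illustrates the idea only; the prior, likelihood, and dose coefficient are invented, not Capstone values:

        import numpy as np

        rng = np.random.default_rng(1)
        obs, sigma = 2.0, 0.5        # hypothetical measured intake quantity and its uncertainty
        dose_per_unit = 0.3          # hypothetical committed-dose coefficient

        def log_post(intake):
            if intake <= 0.0:
                return -np.inf       # positivity prior on intake
            return -0.5 * ((obs - intake) / sigma) ** 2

        chain, cur = [], 1.0
        for _ in range(20000):
            prop = cur + rng.normal(0.0, 0.2)
            if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
                cur = prop           # accept the proposed intake
            chain.append(cur)

        doses = dose_per_unit * np.array(chain[2000:])   # discard burn-in
        print(doses.mean(), doses.std())                 # dose estimate with its uncertainty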

  9. Spent fuel pool storage calculations using the ISOCRIT burnup credit tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucukboyaci, Vefa; Marshall, William BJ J

    2012-01-01

    In order to conservatively apply burnup credit in spent fuel pool criticality safety analyses, Westinghouse has developed a software tool, ISOCRIT, for generating depletion isotopics. This tool is used to create isotopics data based on specific reactor input parameters, such as design basis assembly type, bounding power/burnup profiles, reactor-specific moderator temperature profiles, pellet percent theoretical density, burnable absorbers, axial blanket regions, and bounding ppm boron concentration. ISOCRIT generates burnup-dependent isotopics using PARAGON, Westinghouse's state-of-the-art licensed lattice physics code. Generation of isotopics and passing the data to the subsequent 3D KENO calculations are performed in an automated fashion, thus reducing the chance for human error. Furthermore, ISOCRIT provides the means for responding to any customer request regarding re-analysis due to changed parameters (e.g., power uprate, exit temperature changes, etc.) with a quick turnaround.

  10. Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Sovinec, Carl; Schmitz, Oliver

    2017-10-01

    The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulation can now be done within the SSUBPIC code. Recent SSUBPIC modifications include implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and implementation of a neutral particle tracking mode with charge-exchange and electron-impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.

  11. Comparative functional characterization of the CSR-1 22G-RNA pathway in Caenorhabditis nematodes

    PubMed Central

    Tu, Shikui; Wu, Monica Z.; Wang, Jie; Cutter, Asher D.; Weng, Zhiping; Claycomb, Julie M.

    2015-01-01

    As a champion of small RNA research for two decades, Caenorhabditis elegans has revealed the essential Argonaute CSR-1 to play key nuclear roles in modulating chromatin, chromosome segregation and germline gene expression via 22G-small RNAs. Despite CSR-1 being preserved among diverse nematodes, the conservation and divergence in function of the targets of small RNA pathways remains poorly resolved. Here we apply comparative functional genomic analysis between C. elegans and Caenorhabditis briggsae to characterize the CSR-1 pathway, its targets and their evolution. C. briggsae CSR-1-associated small RNAs that we identified by immunoprecipitation-small RNA sequencing overlap with 22G-RNAs depleted in cbr-csr-1 RNAi-treated worms. By comparing 22G-RNAs and target genes between species, we defined a set of CSR-1 target genes with conserved germline expression, enrichment in operons and more slowly evolving coding sequences than other genes, along with a small group of evolutionarily labile targets. We demonstrate that the association of CSR-1 with chromatin is preserved, and show that depletion of cbr-csr-1 leads to chromosome segregation defects and embryonic lethality. This first comparative characterization of a small RNA pathway in Caenorhabditis establishes a conserved nuclear role for CSR-1 and highlights its key role in germline gene regulation across multiple animal species. PMID:25510497

  12. A definition of depletion of fish stocks

    USGS Publications Warehouse

    Van Oosten, John

    1949-01-01

    Attention was focused on the need for a common and better understanding of the term depletion as applied to the fisheries, in order to eliminate if possible the existing inexactness of thought on the subject. Depletion has been confused at various times with at least ten different ideas associated with it but which, as has been pointed out, are not synonymous at all. In defining depletion we must recognize that the term represents a condition and must not be confounded with the cause (overfishing) that leads to this condition or with the symptoms that identify it. Depletion was defined as a reduction, through overfishing, in the level of abundance of the exploitable segment of a stock that prevents the realization of the maximum productive capacity.

  13. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    DOE PAGES

    Pizzocri, D.; Cappia, F.; Luzzi, L.; ...

    2017-01-31

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
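
    One way to write the assumed exponential grain-size reduction is sketched below; the symbols are generic placeholders, not the paper's fitted parameters:

        \bar{d}(bu) = d_{\infty} + \left( d_0 - d_{\infty} \right) e^{-\alpha\,bu}

    where d_0 is the as-fabricated average grain size, d_\infty the fully restructured HBS grain size, bu the local effective burnup, and \alpha a fitted restructuring rate.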

  14. The association between controlled interpersonal affect regulation and resource depletion.

    PubMed

    Martínez-Íñigo, David; Poerio, Giulia Lara; Totterdell, Peter

    2013-07-01

    This investigation focuses on what occurs to individuals' self-regulatory resource during controlled Interpersonal Affect Regulation (IAR) which is the process of deliberately influencing the internal feeling states of others. Combining the strength model of self-regulation and the resources conservation model, the investigation tested whether: (1) IAR behaviors are positively related to ego-depletion because goal-directed behaviors demand self-regulatory processes, and (2) the use of affect-improving strategies benefits from a source of resource-recovery because it initiates positive feedback from targets, as proposed from a resource-conservation perspective. To test this, a lab study based on an experimental dual-task paradigm using a sample of pairs of friends in the UK and a longitudinal field study of a sample of healthcare workers in Spain were conducted. The experimental study showed a depleting effect of interpersonal affect-improving IAR on a subsequent self-regulation task. The field study showed that while interpersonal affect-worsening was positively associated with depletion, as indicated by the level of emotional exhaustion, interpersonal affect-improving was only associated with depletion after controlling for the effect of positive feedback from clients. The findings indicate that IAR does have implications for resource depletion, but that social reactions play a role in the outcome. © 2013 The Authors. Applied Psychology: Health and Well-Being © 2013 The International Association of Applied Psychology.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal, Jean-Marc; Eschbach, Romain; Launay, Agnes

    CEA and AREVA-NC have developed and used a depletion code named CESAR for 30 years. This user-friendly industrial tool provides fast characterizations for all types of nuclear fuel (PWR UOX, MOX, or reprocessed uranium; BWR UOX or MOX; MTR; and SFR) and the associated wastes. CESAR can evaluate 100 heavy nuclides, 200 fission products and 150 activation products (including helium and tritium formation). It can also characterize the structural material of the fuel (Zircaloy, stainless steel, M5 alloy). CESAR provides depletion calculations for any reactor irradiation history and from 3 months to 1 million years of cooling time. CESAR5.3 is based on the latest calculation schemes recommended by the CEA and on an international nuclear data base (JEFF-3.1.1). It is constantly checked against DARWIN, the CEA reference and qualified depletion code. CESAR incorporates the CEA qualification based on the dissolution analyses of fuel rod samples and the La Hague reprocessing plant feedback experience. AREVA-NC uses CESAR intensively at the La Hague plant, not only for prospective studies but also for characterizations at different industrial facilities all along the reprocessing process and waste conditioning (nearly 150,000 calculations per year). CESAR is the reference code for AREVA-NC. CESAR is used directly or indirectly with other software, data banks, or special equipment in many parts of the La Hague plants. The great flexibility of CESAR has quickly attracted interest from other projects, and CESAR became a tool directly integrated into other software. Finally, coupled with a graphical user interface, it can easily be used independently, responding to many needs for prospective studies in support of nuclear facilities or transport. An English version is available. For the principal isotopes of U and Pu, CESAR5 benefits from the CEA experimental validation for PWR UOX fuels, up to a burnup of 60 GWd/t, and for PWR MOX fuels, up to 45 GWd/t. CESAR version 5.3 uses the CEA reference calculation codes for neutron physics with the JEFF-3.1.1 nuclear data set. (authors)

  16. MPACT Standard Input User's Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.

  17. Decay heat of sodium fast reactor: Comparison of experimental measurements on the PHENIX reactor with calculations performed with the French DARWIN package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benoit, J. C.; Bourdot, P.; Eschbach, R.

    2012-07-01

    A Decay Heat (DH) experiment on the whole core of the French Sodium-Cooled Fast Reactor PHENIX was conducted in May 2008. The measurements began an hour and a half after the shutdown of the reactor and lasted twelve days. It is one of the experiments used for the experimental validation of the depletion code DARWIN, thereby confirming the excellent performance of that code. Discrepancies between measured and calculated decay heat do not exceed 8%. (authors)
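
    For reference, the decay heat computed by a depletion package is the standard summation over nuclide inventories (textbook form, not DARWIN's internal notation):

        P(t) = \sum_i \lambda_i\,N_i(t)\,\left( \bar{E}_{\beta,i} + \bar{E}_{\gamma,i} + \bar{E}_{\alpha,i} \right)

    where N_i(t) is the inventory of nuclide i from the depletion calculation, \lambda_i its decay constant, and the \bar{E} terms the mean beta, gamma, and alpha energies released per decay.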

  18. Single Event Upset Rate Estimates for a 16-K CMOS (Complementary Metal Oxide Semiconductor) SRAM (Static Random Access Memory).

    DTIC Science & Technology

    1986-09-30

    TABLES: I. SA3240 Single Event Upset Test, 1140-MeV Krypton, 9/18/84; II. CRUP Simulation... The cosmic ray interaction analyses described in the remainder of this report were calculated using the CRUP computer code [3] modified for funneling. The CRUP code requires, as inputs, the size of a depletion region specified as a rectangular parallelepiped with dimensions a × b × c, the effective funnel

  19. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  20. High pressure elasticity and thermal properties of depleted uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobsen, M. K., E-mail: mjacobsen@lanl.gov; Velisavljevic, N., E-mail: nenad@lanl.gov

    2016-04-28

    Studies of the phase diagram of uranium have revealed a wealth of high pressure and temperature phases. Under ambient conditions the crystal structure is well defined up to 100 gigapascals (GPa), but very little information on thermal conduction or elasticity is available over this same range. This work has applied ultrasonic interferometry to determine the elasticity, mechanical, and thermal properties of depleted uranium to 4.5 GPa. Results show general strengthening with applied load, including an overall increase in acoustic thermal conductivity. Further implications are discussed within. This work presents the first high pressure studies of the elasticity and thermal properties of depleted uranium metal and the first real-world application of a previously developed containment system for making such measurements.

  1. High pressure elasticity and thermal properties of depleted uranium

    DOE PAGES

    Jacobsen, M. K.; Velisavljevic, N.

    2016-04-28

    Studies of the phase diagram of uranium have revealed a wealth of high pressure and temperature phases. Under ambient conditions the crystal structure is well defined up to 100 gigapascals (GPa), but very little information on thermal conduction or elasticity is available over this same range. This work has applied ultrasonic interferometry to determine the elasticity, mechanical, and thermal properties of depleted uranium to 4.5 GPa. Results show general strengthening with applied load, including an overall increase in acoustic thermal conductivity. Further implications are discussed within. Lastly, this work presents the first high pressure studies of the elasticity and thermal properties of depleted uranium metal and the first real-world application of a previously developed containment system for making such measurements.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sublet, J.-Ch.; Koning, A.J.; Forrest, R.A.

    The reasons for the conversion of the European Activation File, EAF, into ENDF-6 format are threefold. First, it significantly enhances the JEFF-3.0 release by the addition of an activation file. Second, it considerably increases its usage by adopting a recognized, official file format, allowing existing plug-in processes to be effective. Third, it moves towards a universal nuclear data file, in contrast to the current separate general- and special-purpose files. The format chosen for the JEFF-3.0/A file uses reaction cross sections (MF-3), cross sections (MF-10), and multiplicities (MF-9). Having the data in ENDF-6 format allows the ENDF suite of utilities and checker codes to be used alongside many other utility, visualization, and processing codes. It is based on the EAF activation file used for many applications from fission to fusion, including dosimetry, inventories, depletion-transmutation, and geophysics. JEFF-3.0/A takes advantage of four generations of EAF files. Extensive benchmarking activities on these files provide feedback and validation with integral measurements. These, in parallel with a detailed graphical analysis based on EXFOR, have been applied, stimulating new measurements and significantly increasing the quality of this activation file. The next step is to include the EAF uncertainty data for all channels in JEFF-3.0/A.

  3. 76 FR 56167 - Marine Mammals; Pinniped Removal Authority

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-12

    ... depleted or strategic stock under the MMPA. Pursuant to section 120(b) and (c), a state may request... ``endangered'' under the Endangered Species Act, nor as ``depleted'' or ``strategic'' under the MMPA. The... the U.S. Army Corps of Engineers observers using records of applied brands and natural markings. The...

  4. 76 FR 65721 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-24

    ... Stratospheric Ozone Protection regulations, the science of ozone layer depletion, and related topics... Layer (Protocol) and the CAA. Entities applying for this exemption are asked to submit to EPA... Substances that Deplete the Ozone Layer (Protocol). The information collection request is required to obtain...

  5. Assessing local planning to control groundwater depletion: California as a microcosm of global issues

    NASA Astrophysics Data System (ADS)

    Nelson, Rebecca L.

    2012-01-01

    Groundwater pumping has caused excessive groundwater depletion around the world, yet regulating pumping remains a profound challenge. California uses more groundwater than any other U.S. state, and serves as a microcosm of the adverse effects of pumping felt worldwide—land subsidence, impaired water quality, and damaged ecosystems, all against the looming threat of climate change. The state largely entrusts the control of depletion to the local level. This study uses internationally accepted water resources planning theories systematically to investigate three key aspects of controlling groundwater depletion in California, with an emphasis on local-level action: (a) making decisions and engaging stakeholders; (b) monitoring groundwater; and (c) using mandatory, fee-based and voluntary approaches to control groundwater depletion (e.g., pumping restrictions, pumping fees, and education about water conservation, respectively). The methodology used is the social science-derived technique of content analysis, which involves using a coding scheme to record these three elements in local rules and plans, and State legislation, then analyzing patterns and trends. The study finds that Californian local groundwater managers rarely use, or plan to use, mandatory and fee-based measures to control groundwater depletion. Most use only voluntary approaches or infrastructure to attempt to reduce depletion, regardless of whether they have more severe groundwater problems, or problems which are more likely to have irreversible adverse effects. The study suggests legal reforms to the local groundwater planning system, drawing upon its empirical findings. Considering the content of these recommendations may also benefit other jurisdictions that use a local groundwater management planning paradigm.

  6. Applying functional metagenomics to search for novel lignocellulosic enzymes in a microbial consortium derived from a thermophilic composting phase of sugarcane bagasse and cow manure.

    PubMed

    Colombo, Lívia Tavares; de Oliveira, Marcelo Nagem Valério; Carneiro, Deisy Guimarães; de Souza, Robson Assis; Alvim, Mariana Caroline Tocantins; Dos Santos, Josenilda Carlos; da Silva, Cynthia Canêdo; Vidigal, Pedro Marcus Pereira; da Silveira, Wendel Batista; Passos, Flávia Maria Lopes

    2016-09-01

    Environments where lignocellulosic biomass is naturally decomposed are sources for discovery of new hydrolytic enzymes that can reduce the high cost of enzymatic cocktails for second-generation ethanol production. Metagenomic analysis was applied to discover genes coding for carbohydrate-depleting enzymes from a microbial laboratory subculture using a mix of sugarcane bagasse and cow manure in the thermophilic composting phase. From a fosmid library, 182 clones had the ability to hydrolyse carbohydrates. Sequencing of 30 fosmids resulted in 12 contigs encoding 34 putative carbohydrate-active enzymes belonging to 17 glycosyl hydrolase (GH) families. One third of the putative proteins belong to the GH3 family, which includes β-glucosidase enzymes known to be important in the cellulose-deconstruction process but present with low activity in commercial enzyme preparations. Phylogenetic analysis of the amino acid sequences of seven selected proteins, including three β-glucosidases, showed low relatedness with protein sequences deposited in databases. These findings highlight microbial consortia obtained from a mixture of decomposing biomass residues, such as sugarcane bagasse and cow manure, as a rich resource of novel enzymes potentially useful in biotechnology for saccharification of lignocellulosic substrates.

  7. How to establish, maintain and use timber depletion accounts

    Treesearch

    William C. Siegel

    2001-01-01

    Section 1221 of the Internal Revenue Code defines capital expenditures. In general, these are amounts spent to acquire real estate or equipment, or to make improvements that increase the value of real estate or equipment already owned. Forestry examples include land, buildings, standing timber, reforestation costs, and tractors and trucks. Property owners who incur...

  8. Estimates of radiological risk from depleted uranium weapons in war scenarios.

    PubMed

    Durante, Marco; Pugliese, Mariagabriella

    2002-01-01

    Several weapons used during the recent conflict in Yugoslavia contain depleted uranium, including missiles and armor-piercing incendiary rounds. Health concerns are related to the use of these weapons because of the heavy-metal toxicity and radioactivity of uranium. Although chemical toxicity is considered the more important source of health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict, and uranium munitions are a possible source of contamination in the environment. Actual measurements of radioactive contamination are needed to assess the risk. In this paper, a computer simulation is proposed to estimate radiological risk related to different exposure scenarios. Dose caused by inhalation of radioactive aerosols and ground contamination induced by Tomahawk missile impact are simulated using a Gaussian plume model (HOTSPOT code). Environmental contamination and committed dose to the population resident in contaminated areas are predicted by a food-web model (RESRAD code). Small values of committed effective dose equivalent appear to be associated with missile impacts (50-y CEDE < 5 mSv), or population exposure by water-independent pathways (50-y CEDE < 80 mSv). The greatest hazard is related to water contamination in conditions of effective leaching of uranium into the groundwater (50-y CEDE < 400 mSv). Even in this worst-case scenario, the chemical toxicity largely predominates over the radiological risk. These computer simulations suggest that little radiological risk is associated with the use of depleted uranium weapons.

  9. Discrete influx events refill depleted Ca2+ stores in a chick retinal neuron

    PubMed Central

    Borges, Salvador; Lindstrom, Sarah; Walters, Cameron; Warrier, Ajithkumar; Wilson, Martin

    2008-01-01

    The depletion of ER Ca2+ stores, following the release of Ca2+ during intracellular signalling, triggers the Ca2+ entry across the plasma membrane known as store-operated calcium entry (SOCE). We show here that brief, local [Ca2+]i increases (motes) in the thin dendrites of cultured retinal amacrine cells derived from chick embryos represent the Ca2+ entry events of SOCE and are initiated by sphingosine-1-phosphate (S1P), a sphingolipid with multiple cellular signalling roles. Externally applied S1P elicits motes but not through a G protein-coupled membrane receptor. The endogenous precursor to S1P, sphingosine, also elicits motes but its action is suppressed by dimethylsphingosine (DMS), an inhibitor of sphingosine phosphorylation. DMS also suppresses motes induced by store depletion and retards the refilling of depleted stores. These effects are reversed by exogenously applied S1P. In these neurons formation of S1P is a step in the SOCE pathway that promotes Ca2+ entry in the form of motes. PMID:18033816

  10. Discrete influx events refill depleted Ca2+ stores in a chick retinal neuron.

    PubMed

    Borges, Salvador; Lindstrom, Sarah; Walters, Cameron; Warrier, Ajithkumar; Wilson, Martin

    2008-01-15

    The depletion of ER Ca2+ stores, following the release of Ca2+ during intracellular signalling, triggers the Ca2+ entry across the plasma membrane known as store-operated calcium entry (SOCE). We show here that brief, local [Ca2+]i increases (motes) in the thin dendrites of cultured retinal amacrine cells derived from chick embryos represent the Ca2+ entry events of SOCE and are initiated by sphingosine-1-phosphate (S1P), a sphingolipid with multiple cellular signalling roles. Externally applied S1P elicits motes but not through a G protein-coupled membrane receptor. The endogenous precursor to S1P, sphingosine, also elicits motes but its action is suppressed by dimethylsphingosine (DMS), an inhibitor of sphingosine phosphorylation. DMS also suppresses motes induced by store depletion and retards the refilling of depleted stores. These effects are reversed by exogenously applied S1P. In these neurons formation of S1P is a step in the SOCE pathway that promotes Ca2+ entry in the form of motes.

  11. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    PubMed

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  12. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
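
    The dual-version scheme can be illustrated in miniature: execute the aggressively optimized path and roll back to the conservative one on failure. The Python sketch below only mirrors the control flow described in the abstract; the patent concerns compiled code versions, not an interpreter:

        # Run the aggressive version; roll back to the conservative version on failure.
        def run_with_rollback(aggressive, conservative, *args):
            try:
                return aggressive(*args)       # fast path: unsafe optimization applied
            except Exception:
                return conservative(*args)     # rollback path: safely compiled version

        fast = lambda xs: sum(xs) / len(xs)                  # may raise on empty input
        safe = lambda xs: sum(xs) / len(xs) if xs else 0.0   # guards the new exception source
        print(run_with_rollback(fast, safe, []))             # falls back and prints 0.0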

  13. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  14. Dietary arginine depletion reduces depressive-like responses in male, but not female, mice.

    PubMed

    Workman, Joanna L; Weber, Michael D; Nelson, Randy J

    2011-09-30

    Previous behavioral studies have manipulated nitric oxide (NO) production either by pharmacological inhibition of its synthetic enzyme, nitric oxide synthase (NOS), or by deletion of the genes that code for NOS. However manipulation of dietary intake of the NO precursor, L-arginine, has been understudied in regard to behavioral regulation. L-Arginine is a common amino acid present in many mammalian diets and is essential during development. In the brain L-arginine is converted into NO and citrulline by the enzyme, neuronal NOS (nNOS). In Experiment 1, paired mice were fed a diet comprised either of an L-arginine-depleted, L-arginine-supplemented, or standard level of L-arginine during pregnancy. Offspring were continuously fed the same diets and were tested in adulthood in elevated plus maze, forced swim, and resident-intruder aggression tests. L-Arginine depletion reduced depressive-like responses in male, but not female, mice and failed to significantly alter anxiety-like or aggressive behaviors. Arginine depletion throughout life reduced body mass overall and eliminated the sex difference in body mass. Additionally, arginine depletion significantly increased corticosterone concentrations, which negatively correlated with time spent floating. In Experiment 2, adult mice were fed arginine-defined diets two weeks prior to and during behavioral testing, and again tested in the aforementioned tests. Arginine depletion reduced depressive-like responses in the forced swim test, but did not alter behavior in the elevated plus maze or the resident intruder aggression test. Corticosterone concentrations were not altered by arginine diet manipulation in adulthood. These results indicate that arginine depletion throughout development, as well as during a discrete period during adulthood ameliorates depressive-like responses. These results may yield new insights into the etiology and sex differences of depression. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma

    NASA Astrophysics Data System (ADS)

    Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl

    2016-10-01

    Helicon discharges provide an attractive radio-frequency-driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron-impact ionization of neutrals, charge-exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by the NSF CAREER Award PHY-1455210.
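
    The depletion effect can be demonstrated with a few lines of Monte Carlo: neutrals stream inward and are removed by electron-impact ionization before reaching the core. The 1-D slab sketch below is self-contained but far cruder than EIRENE; the density profile and rate coefficient are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        L, n0, v_n, sv = 0.05, 1e19, 500.0, 1e-14   # depth [m], peak n_e [m^-3], neutral speed [m/s], <sigma*v> [m^3/s]
        dx = 1e-4                                    # spatial step [m]

        depths = []
        for _ in range(20000):
            x = 0.0
            while x < L:
                n_e = n0 * x / L                         # electron density rising toward the core
                if rng.uniform() < n_e * sv / v_n * dx:
                    break                                # neutral ionized at depth x
                x += dx
            depths.append(x)

        print(np.mean(depths) / L)   # mean penetration well below 1 signals core neutral depletion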

  16. Comparative functional characterization of the CSR-1 22G-RNA pathway in Caenorhabditis nematodes.

    PubMed

    Tu, Shikui; Wu, Monica Z; Wang, Jie; Cutter, Asher D; Weng, Zhiping; Claycomb, Julie M

    2015-01-01

    As a champion of small RNA research for two decades, Caenorhabditis elegans has revealed the essential Argonaute CSR-1 to play key nuclear roles in modulating chromatin, chromosome segregation and germline gene expression via 22G-small RNAs. Despite CSR-1 being preserved among diverse nematodes, the conservation and divergence in function of the targets of small RNA pathways remains poorly resolved. Here we apply comparative functional genomic analysis between C. elegans and Caenorhabditis briggsae to characterize the CSR-1 pathway, its targets and their evolution. C. briggsae CSR-1-associated small RNAs that we identified by immunoprecipitation-small RNA sequencing overlap with 22G-RNAs depleted in cbr-csr-1 RNAi-treated worms. By comparing 22G-RNAs and target genes between species, we defined a set of CSR-1 target genes with conserved germline expression, enrichment in operons and more slowly evolving coding sequences than other genes, along with a small group of evolutionarily labile targets. We demonstrate that the association of CSR-1 with chromatin is preserved, and show that depletion of cbr-csr-1 leads to chromosome segregation defects and embryonic lethality. This first comparative characterization of a small RNA pathway in Caenorhabditis establishes a conserved nuclear role for CSR-1 and highlights its key role in germline gene regulation across multiple animal species. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Electrical Properties of MWCNT/HDPE Composite-Based MSM Structure Under Neutron Irradiation

    NASA Astrophysics Data System (ADS)

    Kasani, H.; Khodabakhsh, R.; Taghi Ahmadi, M.; Rezaei Ochbelagh, D.; Ismail, Razali

    2017-04-01

    Because of their low cost, low energy consumption, high performance, and exceptional electrical properties, nanocomposites containing carbon nanotubes are suitable for use in many applications such as sensing systems. In this research work, a metal-semiconductor-metal (MSM) structure based on a multiwall carbon nanotube/high-density polyethylene (MWCNT/HDPE) nanocomposite is introduced as a neutron sensor. Scanning electron microscopy, Fourier-transform infrared, and infrared spectroscopy techniques were used to characterize the morphology and structure of the fabricated device. Current-voltage (I-V) characteristic modeling showed that the device can be assumed to be a reverse-biased Schottky diode, if the voltage is high enough. To estimate the depletion layer length of the Schottky contact, impedance spectroscopy was employed. Therefore, the real and imaginary parts of the impedance of the MSM system were used to obtain electrical parameters such as the carrier mobility and dielectric constant. Experimental observations of the MSM structure under irradiation from an americium-beryllium (Am-Be) neutron source showed that the current level in the device decreased significantly. Subsequently, current pulses appeared in in situ I-V and current-time (I-t) curve measurements when increasing voltage was applied to the MSM system. The experimentally determined depletion region length as well as the space-charge-limited current mechanism for carrier transport were compared with the range for protons calculated using the Monte Carlo N-Particle eXtended (MCNPX) code, yielding the maximum energy of recoiled protons detectable by the device.
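
    The depletion-layer length extracted from impedance data is usually compared with the textbook one-sided Schottky expression (standard semiconductor physics, quoted here for context only, not the paper's fitted model):

        W = \sqrt{ \frac{2\,\varepsilon\,(V_{bi} + V_R)}{q\,N_d} }

    where \varepsilon is the semiconductor permittivity, V_{bi} the built-in potential, V_R the applied reverse bias, q the elementary charge, and N_d the effective doping (carrier) density.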

  18. AN UPDATED 6Li(p,α)3He REACTION RATE AT ASTROPHYSICAL ENERGIES WITH THE TROJAN HORSE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamia, L.; Spitaleri, C.; Sergi, M. L.

    2013-05-01

    The lithium problem influencing primordial and stellar nucleosynthesis is one of the most interesting unsolved issues in astrophysics. 6Li is the most fragile of lithium's stable isotopes and is largely destroyed in most stars during the pre-main-sequence (PMS) phase. For these stars, the convective envelope easily reaches, at least at its bottom, the relatively low 6Li ignition temperature. Thus, gaining an understanding of 6Li depletion also gives hints about the extent of convective regions. For this reason, charged-particle-induced reactions in lithium have been the subject of several studies. Low-energy extrapolations of these studies provide information about both the zero-energy astrophysical S(E) factor and the electron screening potential, Ue. Thanks to recent direct measurements, new estimates of the 6Li(p,α)3He bare-nucleus S(E) factor and the corresponding Ue value have been obtained by applying the Trojan Horse method to the 2H(6Li,α3He)n reaction in quasi-free kinematics. The calculated reaction rate covers the temperature window 0.01 to 2 T9 and its impact on the surface lithium depletion in PMS models with different masses and metallicities has been evaluated in detail by adopting an updated version of the FRANEC evolutionary code.
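
    Two standard relations underpin this kind of analysis (textbook definitions, not new results of the paper): the astrophysical S(E) factor parameterization of the cross section and the laboratory electron-screening enhancement,

        \sigma(E) = \frac{S(E)}{E}\,e^{-2\pi\eta}, \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v}, \qquad f_{lab}(E) = \frac{\sigma_{shielded}(E)}{\sigma_{bare}(E)} \simeq \exp\!\left( \pi\eta\,\frac{U_e}{E} \right)

    where \eta is the Sommerfeld parameter. Because the Trojan Horse method accesses the bare-nucleus cross section, Ue follows from comparison with direct (shielded) measurements.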

  19. Energy deposition measurements of single 1H, 4He and 12C ions of therapeutic energies in a silicon pixel detector

    NASA Astrophysics Data System (ADS)

    Gehrke, T.; Burigo, L.; Arico, G.; Berke, S.; Jakubek, J.; Turecek, D.; Tessonnier, T.; Mairani, A.; Martišíková, M.

    2017-04-01

    In the field of ion-beam radiotherapy and space applications, measurements of the energy deposition of single ions in thin layers are of interest for dosimetry and imaging. The present work investigates the capability of a pixelated detector Timepix to measure the energy deposition of single ions in therapeutic proton, helium- and carbon-ion beams in a 300 μm-thick sensitive silicon layer. For twelve different incident beams, the measured energy deposition distributions of single ions are compared to the expected energy deposition spectra, which were predicted by detailed Monte Carlo simulations using the FLUKA code. A methodology for the analysis of the measured data is introduced in order to identify and reject signals that are either degraded or caused by multiple overlapping ions. Applying a newly proposed linear recalibration, the energy deposition measurements are in good agreement with the simulations. The twelve measured mean energy depositions between 0.72 MeV/mm and 56.63 MeV/mm in a partially depleted silicon sensor do not deviate more than 7% from the corresponding simulated values. Measurements of energy depositions above 10 MeV/mm with a fully depleted sensor are found to suffer from saturation effects due to the too high per-pixel signal. The utilization of thinner sensors, in which a lower signal is induced, could further improve the performance of the Timepix detector for energy deposition measurements.

  20. 16 CFR 260.7 - Environmental marketing claims.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....” Also printed on the bag is a disclosure that the bag is not designed for use in home compost piles. The... the Society of the Plastics Industry (SPI) code (which consists of a design of arrows in a triangular...% less ozone depletion. The qualified comparative claim is not likely to be deceptive. [57 FR 36363, Aug...

  1. 16 CFR 260.7 - Environmental marketing claims.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....” Also printed on the bag is a disclosure that the bag is not designed for use in home compost piles. The... the Society of the Plastics Industry (SPI) code (which consists of a design of arrows in a triangular...% less ozone depletion. The qualified comparative claim is not likely to be deceptive. [57 FR 36363, Aug...

  2. Bond rupture between colloidal particles with a depletion interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, Kathryn A.; Furst, Eric M., E-mail: furst@udel.edu

    The force required to break the bonds of a depletion gel is measured by dynamically loading pairs of colloidal particles suspended in a solution of a nonadsorbing polymer. Sterically stabilized poly(methyl methacrylate) colloids 2.7 μm in diameter are brought into contact in a solvent mixture of cyclohexane-cyclohexyl bromide and polystyrene polymer depletant. The particle pairs are subjected to a tensile load at a constant loading rate over many approach-retraction cycles. The stochastic nature of the thermal rupture events results in a distribution of bond rupture forces whose average magnitude and variance increase with increasing depletant concentration. The measured force distribution is described by the flux of particle pairs sampling the energy barrier of the bond interaction potential based on the Asakura–Oosawa depletion model. A transition state model demonstrates the significance of lubrication hydrodynamic interactions and the effect of the applied loading rate on the rupture force of bonds in a depletion gel.
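
    For orientation, the Asakura–Oosawa model invoked here gives the attraction between two colloids of radius R in an ideal depletant of radius r_p as the osmotic pressure times the overlap volume of their depletion shells (standard form, not transcribed from the paper):

      U_{AO}(r) = -\,n_p k_B T\, V_{ov}(r), \qquad
      V_{ov}(r) = \frac{4\pi}{3}(R+r_p)^3
      \left[1 - \frac{3r}{4(R+r_p)} + \frac{r^3}{16(R+r_p)^3}\right],

    valid for centre-to-centre separations 2R \le r \le 2(R+r_p) and zero beyond; n_p is the depletant number density, so deeper bonds at higher depletant concentration follow directly.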

  3. SMITHERS: An object-oriented modular mapping methodology for MCNP-based neutronic–thermal hydraulic multiphysics

    DOE PAGES

    Richard, Joshua; Galloway, Jack; Fensin, Michael; ...

    2015-04-04

    A novel object-oriented modular mapping methodology for externally coupled neutronics–thermal hydraulics multiphysics simulations was developed. The Simulator using MCNP with Integrated Thermal-Hydraulics for Exploratory Reactor Studies (SMITHERS) code performs on-the-fly mapping of material-wise power distribution tallies implemented by MCNP-based neutron transport/depletion solvers for use in estimating coolant temperature and density distributions with a separate thermal-hydraulic solver. The key development of SMITHERS is that it reconstructs the hierarchical geometry structure of the material-wise power generation tallies from the depletion solver automatically, with only a modicum of additional information required from the user. In addition, it performs the basis mapping from the combinatorial geometry of the depletion solver to the required geometry of the thermal-hydraulic solver in a generalizable manner, such that it can transparently accommodate varying levels of thermal-hydraulic solver geometric fidelity, from the nodal geometry of multi-channel analysis solvers to the pin-cell level of discretization for sub-channel analysis solvers.

  4. Numerical study of phase conjugation in stimulated Brillouin scattering from an optical waveguide

    NASA Astrophysics Data System (ADS)

    Lehmberg, R. H.

    1983-05-01

    Stimulated Brillouin scattering (SBS) in a multimode optical waveguide is examined, and the parameters that affect the wavefront conjugation fidelity are studied. The nonlinear propagation code is briefly described and the calculated quantities are defined. The parameter study in the low-reflectivity limit is described, and the effects of pump depletion are considered. The waveguide produced significantly higher fidelities than the focused configuration, in agreement with several experimental studies. The light scattered back through the phase aberrator exhibited a far-field intensity profile closely matching that of the incident beam; however, the near-field intensity exhibited large and rapid spatial inhomogeneities across the entire aberrator, even for conjugation fidelities as high as 98 percent. In the absence of pump depletion, the fidelity increased with average pump intensity for amplitude gains up to around e^10 and then decreased slowly and monotonically with higher intensity. For all cases, pump depletion significantly enhanced the fidelity of the wavefront conjugation by inhibiting the small-scale pulling effect.

  5. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
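
    A minimal sketch of the Monte Carlo uncertainty-sampling idea described above: perturb the calculated nuclide concentrations by factors drawn from validation-based distributions, re-evaluate keff for each realization, and read the bias and bias uncertainty off the resulting sample. The function run_keff and all distribution parameters are hypothetical stand-ins for the criticality solver and the isotopic validation data.

      import numpy as np

      rng = np.random.default_rng(42)

      nominal = {"u235": 1.2e-4, "pu239": 6.0e-5, "sm149": 1.1e-7}  # atoms/b-cm, assumed
      rel_sigma = {"u235": 0.02, "pu239": 0.04, "sm149": 0.10}      # assumed 1-sigma

      def run_keff(comp):
          # placeholder for a transport calculation (e.g., a KENO or MCNP run)
          return (0.92 + 50.0 * comp["u235"] + 80.0 * comp["pu239"]
                  - 1.0e3 * comp["sm149"])

      samples = []
      for _ in range(500):
          comp = {n: c * rng.normal(1.0, rel_sigma[n]) for n, c in nominal.items()}
          samples.append(run_keff(comp))

      samples = np.asarray(samples)
      print(f"bias ~ {samples.mean() - run_keff(nominal):+.5f}, "
            f"bias uncertainty ~ {samples.std(ddof=1):.5f}")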

  6. Genomic analysis, cytokine expression, and microRNA profiling reveal biomarkers of human dietary zinc depletion and homeostasis.

    PubMed

    Ryu, Moon-Suhn; Langkamp-Henken, Bobbi; Chang, Shou-Mei; Shankar, Meena N; Cousins, Robert J

    2011-12-27

    Implementation of zinc interventions for subjects suspected of being zinc-deficient is a global need, but is limited by the absence of reliable biomarkers. To discover molecular signatures of human zinc deficiency, a combination of transcriptome, cytokine, and microRNA analyses was applied to a dietary zinc depletion/repletion protocol with young male human subjects. Concomitant with a decrease in serum zinc concentration, changes in buccal and blood gene transcripts related to zinc homeostasis occurred with zinc depletion. Microarray analyses of whole blood RNA revealed zinc-responsive genes, particularly those associated with cell cycle regulation and immunity. Responses of potential signature genes of dietary zinc depletion were further assessed by quantitative real-time PCR. The diagnostic properties of specific serum microRNAs for dietary zinc deficiency were identified by acute responses to zinc depletion, which were reversible by subsequent zinc repletion. Depression of immune-stimulated TNFα secretion by blood cells was observed after low zinc consumption and may serve as a functional biomarker. Our findings introduce numerous novel candidate biomarkers for dietary zinc status assessment, identified with a variety of contemporary technologies, which detect changes earlier, or with greater sensitivity, than the serum zinc concentration, the current marker of zinc status. In addition, the results of gene network analysis reveal potential clinical outcomes attributable to suboptimal zinc intake, including immune function defects and predisposition to cancer. These results demonstrate, through a controlled depletion/repletion dietary protocol, that the elusive zinc biomarker(s) can be identified and applied to assessment and intervention strategies.

  7. Analysis of the depletion of a stored aerosol in low gravity

    NASA Technical Reports Server (NTRS)

    Squires, P.

    1977-01-01

    The depletion of an aerosol stored in a container has been studied in 1-g and in low gravity. Models were developed for sedimentation, coagulation, and diffusional losses to the walls. The overall depletion caused by these three mechanisms is predicted to be of order 5 to 8 percent per hour in terrestrial conditions, which agrees with laboratory experience. Applying the models to a low-gravity situation indicates that only coagulation will be significant there. (Gravity influences diffusional losses because of convection currents caused by random temperature gradients.) For the types of aerosol studied, the rate of depletion of particles should be somewhat less than 0.001 N percent per hour, where N is the concentration per cu cm.
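
    A minimal sketch of a combined depletion model of the type described: first-order wall losses (sedimentation plus diffusion) and second-order coagulation, dN/dt = -k1*N - k2*N^2. The rate constants are illustrative placeholders chosen to give a few percent loss per hour, not values from the paper.

      def deplete(n0, k1, k2, hours, dt=1.0):
          """Integrate dN/dt = -k1*N - k2*N**2 with explicit Euler (dt in s)."""
          n = n0
          for _ in range(int(hours * 3600 / dt)):
              n += dt * (-k1 * n - k2 * n * n)
          return n

      n0 = 1.0e3   # particles/cm^3, assumed
      k1 = 1.5e-5  # 1/s, assumed first-order wall-loss rate
      k2 = 1.0e-9  # cm^3/s, assumed coagulation kernel
      print(f"depletion in 1 h: {1 - deplete(n0, k1, k2, 1.0) / n0:.1%}")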

  8. Response of a depleted sagebrush steppe riparian system to grazing control and woody plantings

    Treesearch

    Warren P. Clary; Nancy L. Shaw; Jonathan G. Dudley; Victoria A. Saab; John W. Kinney; Lynda C. Smithman

    1996-01-01

    To find out if a depleted riparian system in the sagebrush steppe of eastern Oregon would respond quickly to improved management, five management treatments, ranging from ungrazed to heavily grazed and including, in some cases, planting of woody species, were applied for 7 years. While the results varied, all treatments were too limited to significantly restore...

  9. Activating RNAs associate with Mediator to enhance chromatin architecture and transcription.

    PubMed

    Lai, Fan; Orom, Ulf A; Cesaroni, Matteo; Beringer, Malte; Taatjes, Dylan J; Blobel, Gerd A; Shiekhattar, Ramin

    2013-02-28

    Recent advances in genomic research have revealed the existence of a large number of transcripts devoid of protein-coding potential in multiple organisms. Although the functional role for long non-coding RNAs (lncRNAs) has been best defined in epigenetic phenomena such as X-chromosome inactivation and imprinting, different classes of lncRNAs may have varied biological functions. We and others have identified a class of lncRNAs, termed ncRNA-activating (ncRNA-a), that function to activate their neighbouring genes using a cis-mediated mechanism. To define the precise mode by which such enhancer-like RNAs function, we depleted factors with known roles in transcriptional activation and assessed their role in RNA-dependent activation. Here we report that depletion of the components of the co-activator complex, Mediator, specifically and potently diminished the ncRNA-induced activation of transcription in a heterologous reporter assay using human HEK293 cells. In vivo, Mediator is recruited to ncRNA-a target genes and regulates their expression. We show that ncRNA-a interact with Mediator to regulate its chromatin localization and kinase activity towards histone H3 serine 10. The Mediator complex harbouring disease-associated mutations displays diminished ability to associate with activating ncRNAs. Chromosome conformation capture confirmed the presence of DNA looping between the ncRNA-a loci and their targets. Importantly, depletion of Mediator subunits or ncRNA-a reduced the chromatin looping between the two loci. Our results identify the human Mediator complex as the transducer of activating ncRNAs and highlight the importance of Mediator and activating ncRNA association in human disease.

  10. Regions of extreme synonymous codon selection in mammalian genes

    PubMed Central

    Schattner, Peter; Diekhans, Mark

    2006-01-01

    Recently there has been increasing evidence that purifying selection occurs among synonymous codons in mammalian genes. This selection appears to be a consequence of either cis-regulatory motifs, such as exonic splicing enhancers (ESEs), or mRNA secondary structures, being superimposed on the coding sequence of the gene. We have developed a program to identify regions likely to be enriched for such motifs by searching for extended regions of extreme codon conservation between homologous genes of related species. Here we present the results of applying this approach to five mammalian species (human, chimpanzee, mouse, rat and dog). Even with very conservative selection criteria, we find over 200 regions of extreme codon conservation, ranging in length from 60 to 178 codons. The regions are often found within genes involved in DNA-binding, RNA-binding or zinc-ion-binding. They are highly depleted for synonymous single nucleotide polymorphisms (SNPs) but not for non-synonymous SNPs, further indicating that the observed codon conservation is being driven by negative selection. Forty-three percent of the regions overlap conserved alternative transcript isoforms and are enriched for known ESEs. Other regions are enriched for TpA dinucleotides and may contain conserved motifs/structures relating to mRNA stability and/or degradation. We anticipate that this tool will be useful for detecting regions enriched in other classes of coding-sequence motifs and structures as well. PMID:16556911
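
    A minimal sketch of the windowed search described above: given two aligned coding sequences from homologous genes, flag windows in which codons are (nearly) identical. The window length and identity threshold are illustrative, not the paper's selection criteria.

      def codons(seq):
          return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

      def conserved_windows(seq_a, seq_b, win=60, min_identity=0.98):
          """Yield (start_codon, identity) for windows of near-total codon identity."""
          ca, cb = codons(seq_a), codons(seq_b)
          n = min(len(ca), len(cb))
          for start in range(n - win + 1):
              pairs = zip(ca[start:start + win], cb[start:start + win])
              identity = sum(x == y for x, y in pairs) / win
              if identity >= min_identity:
                  yield start, identity

      # usage (human_cds and mouse_cds are hypothetical aligned coding sequences):
      # hits = list(conserved_windows(human_cds, mouse_cds))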

  11. Impact of Nuclear Data Uncertainties on Advanced Fuel Cycles and their Irradiated Fuel - a Comparison between Libraries

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Cabellos, O.; Martínez, J. S.

    2014-04-01

    The uncertainties in the isotopic composition throughout burnup due to nuclear data uncertainties are analysed. The different sources of uncertainty (decay data, fission yields, and cross sections) are propagated individually, and their effects assessed. Two applications are studied: EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor). The impacts of the cross-section uncertainties provided by the EAF-2010, SCALE6.1, and COMMARA-2.0 libraries are compared. These uncertainty quantification (UQ) studies have been carried out with a Monte Carlo sampling approach implemented in the depletion/activation code ACAB. This implementation has been improved to handle depletion/activation problems with variations of the neutron spectrum.
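
    A minimal sketch of one ingredient of such a Monte Carlo propagation: sample a decay constant and push each sample through the Bateman solution for a two-member chain A -> B (stable). The half-life and its uncertainty are illustrative placeholders, not library values.

      import numpy as np

      rng = np.random.default_rng(7)
      LN2 = np.log(2.0)

      t_half = 5.0e4    # s, assumed nominal half-life of A
      sig_rel = 0.05    # assumed 5% relative decay-data uncertainty
      t = 2.0e5         # s, length of the cooling step
      n_a0 = 1.0        # initial inventory of A (arbitrary units)

      n_b = []
      for _ in range(1000):
          lam = LN2 / (t_half * rng.normal(1.0, sig_rel))
          n_b.append(n_a0 * (1.0 - np.exp(-lam * t)))  # Bateman, stable daughter

      n_b = np.asarray(n_b)
      print(f"N_B at end of step: {n_b.mean():.4f} +/- {n_b.std(ddof=1):.4f}")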

  12. Long non-coding RNA ZFAS1 interacts with miR-150-5p to regulate Sp1 expression and ovarian cancer cell malignancy.

    PubMed

    Xia, Bairong; Hou, Yan; Chen, Hong; Yang, Shanshan; Liu, Tianbo; Lin, Mei; Lou, Ge

    2017-03-21

    We reported that long non-coding RNA ZFAS1 was upregulated in epithelial ovarian cancer tissues, and was negatively correlated to the overall survival rate of patients with epithelial ovarian cancer in this study. While depletion of ZFAS1 inhibited proliferation, migration, and development of chemoresistance, overexpression of ZFAS1 exhibited an even higher proliferation rate, migration activity, and chemoresistance in epithelial ovarian cancer cell lines. We further found miR-150-5p was a potential target of ZFAS1, which was downregulated in epithelial ovarian cancer tissue. MiR-150-5p subsequently inhibited expression of transcription factor Sp1, as evidence by luciferase assays. Inhibition of miR-150-5p rescued the suppressed proliferation and migration induced by depletion of ZFAS1 in epithelial ovarian cancer cells, at least in part. Taken together, our findings revealed a critical role of ZFAS1/miR-150-5p/Sp1 axis in promoting proliferation rate, migration activity, and development of chemoresistance in epithelial ovarian cancer. And ZFAS1/miR-150-5p may serve as novel markers and therapeutic targets of epithelial ovarian cancer.

  13. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
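
    A minimal sketch of steps (3)-(5): draw stratified (Latin hypercube) samples for two input parameters and push them through a toy dose model. Parameter names, ranges, and the dose function are hypothetical placeholders, not RESRAD defaults.

      import numpy as np

      rng = np.random.default_rng(0)

      def latin_hypercube(n, low, high):
          """One stratified sample per equal-probability bin, shuffled."""
          edges = np.linspace(0.0, 1.0, n + 1)
          u = rng.uniform(edges[:-1], edges[1:])
          rng.shuffle(u)
          return low + u * (high - low)

      n = 100
      kd = latin_hypercube(n, 0.1, 10.0)        # distribution coefficient, assumed
      erosion = latin_hypercube(n, 1e-4, 1e-2)  # erosion rate, assumed

      dose = 1.0 / kd + 50.0 * erosion          # toy dose response
      print(f"dose: mean {dose.mean():.3f}, 95th pct {np.percentile(dose, 95):.3f}")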

  14. Molecularly imprinted composite cryogels for hemoglobin depletion from human blood.

    PubMed

    Baydemir, Gözde; Andaç, Müge; Perçin, Işιk; Derazshamshir, Ali; Denizli, Adil

    2014-09-01

    A molecularly imprinted composite cryogel (MICC) was prepared for depletion of hemoglobin from human blood prior to use in proteome applications. The poly(hydroxyethyl methacrylate)-based MICC was prepared with high gel fraction yields, up to 90%, and characterized by Fourier-transform infrared spectroscopy, scanning electron microscopy, swelling studies, flow dynamics, and surface area measurements. MICC exhibited a high binding capacity and selectivity for hemoglobin in the presence of immunoglobulin G, albumin, and myoglobin. The MICC column was successfully applied in a fast protein liquid chromatography system for selective depletion of hemoglobin from human blood. The depletion ratio was greatly increased, to 93.2%, by embedding microspheres into the cryogel. Finally, MICC can be reused many times with no apparent decrease in hemoglobin adsorption capacity.

  15. Large-Scale Physical Separation of Depleted Uranium from Soil

    DTIC Science & Technology

    2012-09-01

    [Only report front matter was captured for this record: a contractor address block (Applied Research Associates, Inc., Somerset, NJ), part of a unit-conversion table, and an acronym list (DU, depleted uranium; EL, Environmental Laboratory; ERDC, Engineer Research and Development Center; ICP-MS, inductively coupled plasma mass spectroscopy).]

  16. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
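
    A minimal sketch of the syndrome idea for the simpler case of a rate-1/2 systematic convolutional code with generator (1, g(D)) (not the rate-3/4 Wyner-Ash code treated in the paper): the syndrome s(D) = r1(D)g(D) + r2(D) over GF(2) vanishes exactly when the received pair is a codeword, so decoding can search a small error trellis driven by s alone.

      import numpy as np

      def poly_mul_gf2(a, b):
          """Multiply binary polynomials (coefficient arrays, lowest degree first)."""
          return np.convolve(a, b) % 2

      def gf2_add(a, b):
          n = max(len(a), len(b))
          return (np.pad(a, (0, n - len(a))) + np.pad(b, (0, n - len(b)))) % 2

      g = np.array([1, 1, 1])     # g(D) = 1 + D + D^2, assumed example code
      u = np.array([1, 0, 1, 1])  # information bits u(D)

      r1, r2 = u, poly_mul_gf2(u, g)           # systematic and parity branches
      print(gf2_add(poly_mul_gf2(r1, g), r2))  # all zeros: no channel errors

      r2_err = r2.copy(); r2_err[2] ^= 1           # flip one parity bit
      print(gf2_add(poly_mul_gf2(r1, g), r2_err))  # nonzero syndrome flags it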

  17. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  18. Protecting the Ozone Shield: A New Public Policy

    DTIC Science & Technology

    1991-04-01

    [Only report documentation-page fragments were captured for this record: subject terms (Public Policy Issue; Alternatives; Risk Management; Clean Air Act; Global Warming) and fragmentary abstract text noting that the ozone depletion issue is entangled with global warming ("the greenhouse effect"), that the issue dates to atmospheric research published in 1974, and that the policy dilemma involves protecting inhabitants from the harmful effects of increased UV-B radiation and global warming.]

  19. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    NASA Astrophysics Data System (ADS)

    Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.

    1992-06-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate of the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-Gaussian positron implantation profile. Inadequacies of the present analysis scheme are evident from the derived quantities, and alternative methods are required for better predictions.
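
    The fitting equation referred to is, in its standard one-dimensional steady-state form (as generally used in slow-positron-beam analyses; details may differ from the authors' implementation):

      D_+ \frac{d^2 n(z)}{dz^2}
      - \frac{d}{dz}\left[v_d(z)\, n(z)\right]
      - \frac{n(z)}{\tau_{eff}(z)}
      + P(z, E) = 0,

    where n is the positron density, D_+ the positron diffusion coefficient, v_d the field-induced drift velocity, \tau_{eff} the effective annihilation lifetime, and P(z, E) the implantation profile at beam energy E (here of derivative-Gaussian form).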

  20. An Analytical Model for Assessing Stability of Pre-Existing Faults in Caprock Caused by Fluid Injection and Extraction in a Reservoir

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Bai, Bing; Li, Xiaochun; Liu, Mingze; Wu, Haiqing; Hu, Shaobin

    2016-07-01

    Induced seismicity and fault reactivation associated with fluid injection and depletion have been reported in hydrocarbon, geothermal, and waste-fluid-injection fields worldwide. Here, we establish an analytical model to assess the reactivation of faults surrounding a reservoir during fluid injection and extraction that considers the stress concentrations at the fault tips and the effects of fault length. In this model, the induced-stress analysis is implemented for a full space under plane-strain conditions, based on Eshelby's theory of inclusions for a homogeneous, isotropic, poroelastic medium. The stress intensity factor concept of linear elastic fracture mechanics is adopted as an instability criterion for pre-existing faults in the surrounding rock. To characterize fault reactivation caused by fluid injection and extraction, we define a new index, the "fault reactivation factor" η, which can be interpreted as an index of fault stability per unit change in reservoir fluid pressure resulting from injection or extraction. The critical fluid pressure change within a reservoir is also determined by the superposition principle, using the in situ stress surrounding a fault. Our parameter sensitivity analyses show that the fault reactivation tendency is strongly sensitive to fault location, fault length, fault dip angle, and the Poisson's ratio of the surrounding rock. Our case study demonstrates that the proposed model captures the mechanical behavior of the whole fault, unlike conventional methodologies. The proposed method can be applied to engineering cases involving injection into and depletion of a reservoir owing to its efficient computational implementation.
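
    The paper defines η through stress intensity factors at the fault tips; purely as a generic point of reference (not the paper's derivation), the stress intensity factor of a crack of half-length a under a uniform effective driving stress Δσ, and an η-style sensitivity to a unit reservoir pressure change Δp, take the familiar forms

      K = \Delta\sigma \sqrt{\pi a},
      \qquad
      \eta \sim \frac{\partial K}{\partial (\Delta p)},

    with instability flagged when K reaches the mode-appropriate fracture toughness; this makes explicit why longer faults (larger a) and stronger pressure coupling raise the reactivation tendency.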

  1. Impact investigation of reactor fuel operating parameters on reactivity for use in burnup credit applications

    NASA Astrophysics Data System (ADS)

    Sloma, Tanya Noel

    When representing the behavior of commercial spent nuclear fuel (SNF), credit is sought for the reduced reactivity associated with the net depletion of fissile isotopes and the creation of neutron-absorbing isotopes, a process that begins when a commercial nuclear reactor is first operated at power. Burnup credit accounts for the reduced reactivity potential of a fuel assembly and varies with the fuel burnup, cooling time, and the initial enrichment of fissile material in the fuel. With regard to long-term SNF disposal and transportation, tremendous benefits, such as increased capacity, flexibility of design and system operations, and reduced overall costs, provide an incentive to seek burnup credit for criticality safety evaluations. The Nuclear Regulatory Commission issued Interim Staff Guidance 8, Revision 2 in 2002, endorsing burnup credit for actinide composition changes only; credit due to actinides encompasses approximately 30% of the existing pressurized water reactor SNF inventory and could potentially be increased to 90% if fission product credit were accepted. However, one significant issue for utilizing full burnup credit (crediting both actinide and fission product composition changes) is establishing a set of depletion parameters that produce an adequately conservative representation of the fuel's isotopic inventory. Depletion parameters can have a significant effect on the isotopic inventory of the fuel, and thus on the residual reactivity. This research seeks to quantify the reactivity impact on a system of the dominant depletion parameters (i.e., fuel temperature, moderator density, burnable poison rod presence, burnable poison rod history, and soluble boron concentration). Bounding depletion parameters were developed by statistical evaluation of a database containing reactor operating histories; the database was generated from summary reports of commercial reactor criticality data. Through depletion calculations utilizing the SCALE 6 code package, several light water reactor assembly designs and in-core locations are analyzed to establish a combination of depletion parameters that conservatively represents the fuel's isotopic inventory, as a step toward taking credit for fuel burnup in criticality safety evaluations for transportation and storage of SNF.

  2. Relationship between the ability of sunscreens containing 2-ethylhexyl-4'-methoxycinnamate to protect against UVR-induced inflammation, depletion of epidermal Langerhans (Ia+) cells and suppression of alloactivating capacity of murine skin in vivo.

    PubMed

    Walker, S L; Morris, J; Chu, A C; Young, A R

    1994-01-01

    The UVB sunscreen 2-ethylhexyl-4'-methoxycinnamate was evaluated in hairless albino mouse skin for its ability to inhibit UVR-induced (i) oedema, (ii) epidermal Langerhans cell (Ia+) depletion and (iii) suppression of the alloactivating capacity of epidermal cells (mixed epidermal cell-lymphocyte reaction, MECLR). The sunscreen, prepared at 9% in ethanol or a cosmetic lotion, was applied prior to UVB/UVA irradiation. In some experiments there was a second application halfway through the irradiation. Single applications in both vehicles gave varying degrees of protection from oedema and Langerhans cell depletion but afforded no protection from suppression of MECLR. When the sunscreens were applied twice there was improved protection from oedema and Langerhans cell depletion and complete protection was afforded from suppression of MECLR. There was a clear linear relationship between Langerhans cell numbers and oedema with and without sunscreen application. The relationship between Langerhans cell numbers and MECLR was more complex. These data confirm published discrepancies between protection from oedema (a model for human erythema) and endpoints with immunological significance, but show that 2-ethylhexyl-4'-methoxycinnamate can afford complete immunoprotection, although protection is dependent on the application rate and vehicle.

  3. 36 CFR 1234.20 - What rules apply if there is a conflict between NARA standards and other regulatory standards...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...

  4. 36 CFR 1234.20 - What rules apply if there is a conflict between NARA standards and other regulatory standards...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...

  5. 36 CFR 1234.20 - What rules apply if there is a conflict between NARA standards and other regulatory standards...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...

  6. 36 CFR § 1234.20 - What rules apply if there is a conflict between NARA standards and other regulatory standards...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...

  7. Pricing of Water Resources With Depletable Externality: The Effects of Pollution Charges

    NASA Astrophysics Data System (ADS)

    Kitabatake, Yoshifusa

    1990-04-01

    With an abstraction of a real-world situation, the paper views water resources as a depletable capital asset that yields a stream of services such as water supply and the assimilation of pollution discharge. The concept of a concave or convex water resource depletion function is then introduced and applied to a general two-sector, three-factor model. The main theoretical contribution is to prove that when the water resource depletion function is a concave rather than a convex function of pollution, it is more likely that gross regional income will increase under a higher pollution charge policy. The concavity of the function implies that as more pollution is released, the ability to supply water at a given minimum quality level diminishes ever faster. A numerical example is also provided.

  8. Development of Selective Clk1 and -4 Inhibitors for Cellular Depletion of Cancer-Relevant Proteins.

    PubMed

    ElHady, Ahmed K; Abdel-Halim, Mohammad; Abadi, Ashraf H; Engel, Matthias

    2017-07-13

    In cancer cells, kinases of the Clk family control the supply of full-length, functional mRNAs coding for a variety of proteins essential to cell growth and survival. Thus, inhibition of Clks might become a novel anticancer strategy, leading to a selective depletion of cancer-relevant proteins after turnover. On the basis of a Weinreb amide hit compound, we designed and synthesized a diverse set of methoxybenzothiophene-2-carboxamides, of which the N-benzylated derivative showed enhanced Clk1 inhibitory activity. Introduction of a m-fluorine in the benzyl moiety eventually led to the discovery of compound 21b, a potent inhibitor of Clk1 and -4 (IC50 = 7 and 2.3 nM, respectively), exhibiting an unprecedented selectivity over Dyrk1A. 21b triggered the depletion of EGFR, HDAC1, and p70S6 kinase from the cancer cells, with potencies in line with the measured GI50 values. In contrast, the cellular effects of congener 21a, which inhibited Clk1 only weakly, were substantially lower.

  9. Expression of Telomere-Associated Proteins is Interdependent to Stabilize Native Telomere Structure and Telomere Dysfunction by G-Quadruplex Ligand Causes TERRA Upregulation.

    PubMed

    Sadhukhan, Ratan; Chowdhury, Priyanka; Ghosh, Sourav; Ghosh, Utpal

    2018-06-01

    Telomere DNA can form specialized nucleoprotein structures with telomere-associated proteins to hide free DNA ends, or G-quadruplex structures under certain conditions, especially in the presence of a G-quadruplex ligand. Telomere DNA is transcribed into non-coding telomere repeat-containing RNA (TERRA), whose biogenesis and function are poorly understood. Our aim was to determine the role of telomere-associated proteins and telomere structures in TERRA transcription. We silenced four telomere-associated genes [two shelterin (TRF1, TRF2) and two non-shelterin (PARP-1, SLX4)] using siRNA and verified depletion at the protein level. Knockdown of one gene modulated the expression of other telomere-associated genes and increased TERRA from the 10q, 15q, XpYp, and XqYq chromosomes in A549 cells. Telomeres were destabilized or damaged by the G-quadruplex ligand pyridostatin (PDS) and by bleomycin. Telomere dysfunction-induced foci (TIFs) were observed in each case of depletion of these proteins and of treatment with PDS or bleomycin. The TERRA level was elevated by PDS and bleomycin treatment alone or in combination with depletion of telomere-associated proteins.

  10. Public money and human purpose: The future of taxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roodman, D.M.

    1995-09-01

    Most countries use taxes and subsidies that undermine the well-being of both taxpayers and the environment. But there are some positive, and now proven, alternatives. One of the most powerful tools that a government can use to guide its economy is its tax code. What politicians often overlook is that even though taxes are inevitable, distortionary ones are not. In fact, some taxes do no harm to the economy, and others, such as pollution taxes, actually help it work better. However, there is a chronic tendency to undertax destructive activities such as pollution and resource depletion, activities which can threaten long-term economic security. By making environmental destruction cheap or even free, governments let people and businesses ignore the costs they impose on others and on the future. This article explores the possibilities of turning today's taxing philosophy and subsidizing priorities completely around. To shore up economic security and brake economic decline, good activities need to be taxed less. To preserve the environmental viability of modern economies over the long term, bad activities need to be taxed more. The topics discussed include the following: What should tax codes do? (1) Shift from taxing income and sales to taxing exploitation of resources, when that exploitation generates windfall profits; (2) calibrate the new taxes so polluters and depleters will feel the costs of the harm they do to others of their own and future generations; (3) shape the tax code to help people participate and survive in the modern economy. And: What will fiscal reform do to businesses?

  11. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. Such a preconditioner can be very attractive in scenarios where one must repeatedly solve a large system of linear equations and has an extremely fast parallel code for applying the underlying fixed linear operator.
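
    A minimal sketch of the idea in Python: build a low-degree polynomial p such that p(A) approximates A^{-1}, using only matrix-vector products with A. Here the coefficients are fit by plain linear least squares over a sampled spectral interval, a simplification of the nonlinear least squares construction the abstract describes.

      import numpy as np

      def fit_poly_precond(lmin, lmax, degree=4, n_samples=200):
          """Fit c so that lam * p(lam) ~ 1 on [lmin, lmax], p(lam) = sum c_k lam^k."""
          lam = np.linspace(lmin, lmax, n_samples)
          V = np.vander(lam, degree + 1, increasing=True) * lam[:, None]
          c, *_ = np.linalg.lstsq(V, np.ones_like(lam), rcond=None)
          return c

      def apply_precond(matvec, c, r):
          """Evaluate p(A) r by Horner's rule, using only calls to matvec."""
          y = c[-1] * r
          for ck in reversed(c[:-1]):
              y = ck * r + matvec(y)
          return y

      # toy check on a diagonal SPD operator with spectrum in [1, 10]
      A = np.diag(np.linspace(1.0, 10.0, 50))
      c = fit_poly_precond(1.0, 10.0)
      r = np.ones(50)
      print(np.linalg.norm(A @ apply_precond(lambda v: A @ v, c, r) - r))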

  12. Publication bias and the limited strength model of self-control: has the evidence for ego depletion been overestimated?

    PubMed

    Carter, Evan C; McCullough, Michael E

    2014-01-01

    Few models of self-control have generated as much scientific interest as has the limited strength model. One of the entailments of this model, the depletion effect, is the expectation that acts of self-control will be less effective when they follow prior acts of self-control. Results from a previous meta-analysis concluded that the depletion effect is robust and medium in magnitude (d = 0.62). However, when we applied methods for estimating and correcting for small-study effects (such as publication bias) to the data from this previous meta-analysis effort, we found very strong signals of publication bias, along with an indication that the depletion effect is actually no different from zero. We conclude that until greater certainty about the size of the depletion effect can be established, circumspection about the existence of this phenomenon is warranted, and that rather than elaborating on the model, research efforts should focus on establishing whether the basic effect exists. We argue that the evidence for the depletion effect is a useful case study for illustrating the dangers of small-study effects as well as some of the possible tools for mitigating their influence in psychological science.

  13. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    NASA Astrophysics Data System (ADS)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, but the resulting crosstalk must be dealt with. Building on investigations of LDPC codes in optical data storage, we apply low-density parity-check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage. A crosstalk-reduction method is applied, and simulation results show that this operation improves bit error rate (BER) performance. At the same time, we conclude that LDPC codes outperform Reed-Solomon (RS) codes in the crosstalk channel.
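
    To illustrate the error-correction machinery named above, a minimal sketch of hard-decision bit-flipping decoding on a tiny parity-check code (a toy (6,3) cycle code in which every bit enters exactly two checks, so one flip corrects any single error); real LDPC codes are far larger and typically decoded with soft-decision message passing.

      import numpy as np

      H = np.array([[1, 1, 1, 0, 0, 0],
                    [1, 0, 0, 1, 1, 0],
                    [0, 1, 0, 1, 0, 1],
                    [0, 0, 1, 0, 1, 1]])  # toy parity-check matrix, assumed

      def bit_flip_decode(r, H, max_iter=10):
          """Repeatedly flip the bit in the largest number of unsatisfied checks."""
          r = r.copy()
          for _ in range(max_iter):
              s = H @ r % 2
              if not s.any():
                  break                    # all parity checks satisfied
              r[np.argmax(s @ H)] ^= 1     # flip the most-suspect bit
          return r

      c = np.array([1, 1, 0, 1, 0, 0])     # a valid codeword of H
      r = c.copy(); r[4] ^= 1              # inject a single channel error
      assert (bit_flip_decode(r, H) == c).all()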

  14. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    PubMed Central

    Cunningham, Michael R.; Baumeister, Roy F.

    2016-01-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger et al., 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter et al.’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and Funnel Plot Asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. (2015) meta-analysis results actually indicate that there is a real depletion effect – contrary to their title. PMID:27826272
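
    For concreteness, a minimal sketch of what the PEESE procedure criticized above computes: a weighted least-squares meta-regression of effect sizes on their sampling variances, whose intercept is read as the bias-adjusted effect. The data arrays are hypothetical illustrations, not values from either meta-analysis.

      import numpy as np

      d = np.array([0.70, 0.55, 0.48, 0.30, 0.25, 0.10])   # effect sizes, assumed
      se = np.array([0.30, 0.25, 0.22, 0.15, 0.12, 0.08])  # standard errors, assumed

      X = np.column_stack([np.ones_like(se), se**2])  # intercept + variance term
      W = np.diag(1.0 / se**2)                        # inverse-variance weights
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
      print(f"PEESE intercept (bias-adjusted effect): {beta[0]:.3f}")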

  15. Depletion of CD52-positive cells inhibits the development of central nervous system autoimmune disease, but deletes an immune-tolerance promoting CD8 T-cell population. Implications for secondary autoimmunity of alemtuzumab in multiple sclerosis.

    PubMed

    von Kutzleben, Stephanie; Pryce, Gareth; Giovannoni, Gavin; Baker, David

    2017-04-01

    The objective was to determine whether CD52 lymphocyte depletion can act to promote immunological tolerance induction by way of intravenous antigen administration such that it could be used to either improve efficiency of multiple sclerosis (MS) inhibition or inhibit secondary autoimmunities that may occur following alemtuzumab use in MS. Relapsing experimental autoimmune encephalomyelitis was induced in ABH mice and immune cell depletion was therapeutically applied using mouse CD52 or CD4 (in conjunction with CD8 or CD20) depleting monoclonal antibodies. Immunological unresponsiveness was then subsequently induced using intravenous central nervous system antigens and responses were assessed clinically. A dose-response of CD4 monoclonal antibody depletion indicated that the 60-70% functional CD4 T-cell depletion achieved in perceived failed trials in MS was perhaps too low to even stop disease in animals. However, more marked (~75-90%) physical depletion of CD4 T cells by CD4 and CD52 depleting antibodies inhibited relapsing disease. Surprisingly, in contrast to CD4 depletion, CD52 depletion blocked robust immunological unresponsiveness through a mechanism involving CD8 T cells. Although efficacy was related to the level of CD4 T-cell depletion, the observations that CD52 depletion of CD19 B cells was less marked in lymphoid organs than in the blood provides a rationale for the rapid B-cell hyper-repopulation that occurs following alemtuzumab administration in MS. That B cells repopulate in the relative absence of T-cell regulatory mechanisms that promote immune tolerance may account for the secondary B-cell autoimmunities, which occur following alemtuzumab treatment of MS.

  16. Internationalizing professional codes in engineering.

    PubMed

    Harris, Charles E

    2004-07-01

    Professional engineering societies which are based in the United States, such as the American Society of Mechanical Engineers (ASME, now ASME International), are recognizing that their codes of ethics must apply to engineers working throughout the world. An examination of the ethical code of ASME International shows that its provisions pose many problems of application, especially in societies outside the United States. In applying the codes effectively in the international environment, two principal issues must be addressed. First, some culture-transcending guidelines must be identified and justified; nine such guidelines are identified. Second, some methods for applying the codes to particular situations must be identified; three such methods are specification, balancing, and finding a creative middle way.

  17. Quantum Mechanical Modeling of Ballistic MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The objective of this project was to develop theory, approximations, and computer code to model quasi-1D structures such as nanotubes, DNA, and MOSFETs: (1) nanotubes: influence of defects on ballistic transport, electro-mechanical properties, and metal-nanotube coupling; (2) DNA: model electron transfer (biochemistry) and transport experiments, and sequence dependence of conductance; and (3) MOSFETs: 2D doping profiles, polysilicon depletion, source-to-drain and gate tunneling, and understanding the ballistic limit.

  18. XPOSE: the Exxon Nuclear revised LEOPARD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skogen, F.B.

    1975-04-01

    The main differences between the XPOSE and LEOPARD codes, which are used to generate fast and thermal neutron spectra and cross sections, are presented. The models used for the fast and thermal spectrum calculations, as well as for the depletion calculations considering the U-238 chain, the U-235 chain, xenon and samarium, fission products, and boron-10, are described. A detailed description of the input required to run XPOSE and a description of the output are included.

  19. Lower hybrid wave phenomena associated with density depletions

    NASA Technical Reports Server (NTRS)

    Seyler, C. E.

    1994-01-01

    A fluid description of lower hybrid, whistler, and magnetosonic waves is applied to study wave phenomena near the lower hybrid resonance associated with plasma density depletions. The goal is to understand the nature of the lower hybrid cavitons and spikelets often associated with transverse ion acceleration events in the auroral ionosphere. Three-dimensional simulations show that the ponderomotive force leads to the formation of a density cavity (caviton) in which lower hybrid wave energy is concentrated (spikelet), resulting in a three-dimensional collapse of the configuration. Plasma density depletions of the order of a few percent are shown to greatly modify the homogeneous linear properties of lower hybrid waves and account for many of the observed features of lower hybrid spikelets.

  20. A long and abundant non-coding RNA in Lactobacillus salivarius.

    PubMed

    Cousin, Fabien J; Lynch, Denise B; Chuat, Victoria; Bourin, Maxence J B; Casey, Pat G; Dalmasso, Marion; Harris, Hugh M B; McCann, Angela; O'Toole, Paul W

    2017-09-01

    Lactobacillus salivarius, found in the intestinal microbiota of humans and animals, is studied as an example of the sub-dominant intestinal commensals that may impart benefits upon their host. Strains typically harbour at least one megaplasmid that encodes functions contributing to contingency metabolism and environmental adaptation. RNA sequencing (RNA-seq) transcriptomic analysis of L. salivarius strain UCC118 identified a novel, unusually abundant long non-coding RNA (lncRNA) encoded by the megaplasmid, which represented more than 75% of the total RNA-seq reads after depletion of rRNA species. The expression level of this 520 nt lncRNA in L. salivarius UCC118 exceeded that of the 16S rRNA; it accumulated during growth, was very stable over time, and was also expressed during intestinal transit in a mouse. This lncRNA sequence is specific to the L. salivarius species; however, among 45 L. salivarius genomes analysed, not all (only 34) harboured the sequence for the lncRNA. This lncRNA was produced in 27 tested L. salivarius strains, but at strain-specific expression levels. High-level lncRNA expression correlated with high megaplasmid copy number. Transcriptome analysis of a deletion mutant lacking this lncRNA identified altered expression levels of genes in a number of pathways, but a definitive function of this new lncRNA was not identified. This lncRNA presents distinctive and unique properties and suggests potential basic and applied scientific developments of this phenomenon.

  1. Comparative evaluation of rRNA depletion procedures for the improved analysis of bacterial biofilm and mixed pathogen culture transcriptomes

    PubMed Central

    Petrova, Olga E.; Garcia-Alcalde, Fernando; Zampaloni, Claudia; Sauer, Karin

    2017-01-01

    Global transcriptomic analysis via RNA-seq is often hampered by the high abundance of ribosomal (r)RNA in bacterial cells. To remove rRNA and enrich coding sequences, subtractive hybridization procedures have become the approach of choice prior to RNA-seq, with their efficiency varying in a manner dependent on sample type and composition. Yet, despite an increasing number of RNA-seq studies, comparative evaluation of bacterial rRNA depletion methods has remained limited. Moreover, no such study has utilized RNA derived from bacterial biofilms, which have potentially higher rRNA:mRNA ratios and higher rRNA carryover during RNA-seq analysis. Here, we evaluated the efficiency of three subtractive hybridization-based kits in depleting rRNA from samples derived from biofilm, as well as planktonic cells, of the opportunistic human pathogen Pseudomonas aeruginosa. Our results indicated different rRNA removal efficiencies for the three procedures, with the Ribo-Zero kit yielding the highest degree of rRNA depletion, which translated into enhanced enrichment of non-rRNA transcripts and increased depth of RNA-seq coverage. The results indicated that, in addition to improving RNA-seq sensitivity, efficient rRNA removal enhanced detection of low-abundance transcripts via qPCR. Finally, we demonstrate that the Ribo-Zero kit also exhibited the highest efficiency when P. aeruginosa/Staphylococcus aureus co-culture RNA samples were tested.

  2. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    DOE PAGES

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference, the new capabilities addressed included the combined distributed and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H. B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named "tebow" was constructed to maximize available RAM per CPU, as well as to leverage swap space on solid-state drives, allowing larger-scale depletion calculations (with significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was observed. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2. This paper details the two specific computational performance strategies behind those upgrades: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP.
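
    A hedged illustration (not MCNP6 source code) of why cross-section retrieval during transport is a performance lever: caching grid lookups avoids repeating a binary search over an energy grid at every collision. All names and structures here are invented for the sketch.

      import bisect
      from functools import lru_cache

      ENERGY_GRID = [1.0e-5 * 1.1**i for i in range(300)]  # toy unionized grid, eV

      @lru_cache(maxsize=65536)
      def grid_index(energy):
          """Cache the per-collision grid search by incident energy."""
          i = bisect.bisect_right(ENERGY_GRID, energy) - 1
          return max(0, min(i, len(ENERGY_GRID) - 2))

      def xs_lookup(energy, xs_table):
          """Linearly interpolate a cross section on the cached grid index."""
          i = grid_index(energy)
          e0, e1 = ENERGY_GRID[i], ENERGY_GRID[i + 1]
          t = (energy - e0) / (e1 - e0)
          return (1 - t) * xs_table[i] + t * xs_table[i + 1]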

  3. Nonlinear pulse propagation and phase velocity of laser-driven plasma waves

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Rossi, Francesco; Schroeder, Carl; Esarey, Eric; Leemans, Wim

    2014-10-01

    We investigate and characterize the laser evolution and plasma wave excitation by a relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel, including the effects of pulse steepening, frequency redshifting, and energy depletion. We derived, in 3D and in the weakly relativistic intensity regime, analytical expressions for the laser energy depletion, the pulse self-steepening rate, the laser intensity centroid velocity, and the phase velocity of the plasma wave. The analytical results have been validated numerically using the 2D-cylindrical ponderomotive code INF&RNO. We also discuss the extension of these results to the nonlinear regime, where an analytical theory of the nonlinear wake phase velocity is lacking. Work supported by the Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  4. The Impact of Operating Parameters and Correlated Parameters for Extended BWR Burnup Credit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, Brian J.; Marshall, William B. J.; Ilas, Germina

    Applicants for certificates of compliance for spent nuclear fuel (SNF) transportation and dry storage systems perform analyses to demonstrate that these systems are adequately subcritical per the requirements of Title 10 of the Code of Federal Regulations (10 CFR) Parts 71 and 72. For pressurized water reactor (PWR) SNF, these analyses may credit the reduction in assembly reactivity caused by the depletion of fissile nuclides and the buildup of neutron-absorbing nuclides during power operation. This credit for reactivity reduction during depletion is commonly referred to as burnup credit (BUC). US Nuclear Regulatory Commission (NRC) staff review BUC analyses according to the guidance in the Division of Spent Fuel Storage and Transportation Interim Staff Guidance (ISG) 8, Revision 3, Burnup Credit in the Criticality Safety Analyses of PWR Spent Fuel in Transportation and Storage Casks.

  5. Coherent optical adaptive technique improves the spatial resolution of STED microscopy in thick samples

    PubMed Central

    Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong

    2018-01-01

    Stimulated emission depletion (STED) microscopy is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted by aberrations of the optical system and inhomogeneity of the specimen's optical properties, resulting in compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect, no matter how much depletion power is applied to the specimen. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
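
    As background for the claim that resolution is set by the depletion-beam profile and power, the commonly quoted STED resolution scaling (a standard relation, not taken from this paper) is

      d \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I_{STED}/I_{sat}}},

    where \lambda is the wavelength, NA the numerical aperture, I_{STED} the depletion intensity, and I_{sat} the saturation intensity; aberrations that fill in the central zero of the depletion focus degrade the effective gain no matter how large I_{STED} becomes.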

  6. Hydrologic Drought Decision Support System (HyDroDSS)

    USGS Publications Warehouse

    Granato, Gregory E.

    2014-01-01

    The hydrologic drought decision support system (HyDroDSS) was developed by the U.S. Geological Survey (USGS) in cooperation with the Rhode Island Water Resources Board (RIWRB) for use in the analysis of hydrologic variables that may indicate the risk for streamflows to be below user-defined flow targets at a designated site of interest, which is defined herein as data-collection site on a stream that may be adversely affected by pumping. Hydrologic drought is defined for this study as a period of lower than normal streamflows caused by precipitation deficits and (or) water withdrawals. The HyDroDSS is designed to provide water managers with risk-based information for balancing water-supply needs and aquatic-habitat protection goals to mitigate potential effects of hydrologic drought. This report describes the theory and methods for retrospective streamflow-depletion analysis, rank correlation analysis, and drought-projection analysis. All three methods are designed to inform decisions made by drought steering committees and decisionmakers on the basis of quantitative risk assessment. All three methods use estimates of unaltered streamflow, which is the measured or modeled flow without major withdrawals or discharges, to approximate a natural low-flow regime. Retrospective streamflow-depletion analysis can be used by water-resource managers to evaluate relations between withdrawal plans and the potential effects of withdrawal plans on streams at one or more sites of interest in an area. Retrospective streamflow-depletion analysis indicates the historical risk of being below user-defined flow targets if different pumping plans were implemented for the period of record. Retrospective streamflow-depletion analysis also indicates the risk for creating hydrologic drought conditions caused by use of a pumping plan. Retrospective streamflow-depletion analysis is done by calculating the net streamflow depletions from withdrawals and discharges and applying these depletions to a simulated record of unaltered streamflow. Rank correlation analysis in the HyDroDSS indicates the persistence of hydrologic measurements from month to month for the prediction of developing hydrologic drought conditions and quantitatively indicates which hydrologic variables may be used to indicate the onset of hydrologic drought conditions. Rank correlation analysis also indicates the potential use of each variable for estimating the monthly minimum unaltered flow at a site of interest for use in the drought-projection analysis. Rank correlation analysis in the HyDroDSS is done by calculating Spearman’s rho for paired samples and the 95-percent confidence limits of this rho value. Rank correlation analysis can be done by using precipitation, groundwater levels, measured streamflows, and estimated unaltered streamflows. Serial correlation analysis, which indicates relations between current and future values, can be done for a single site. Cross correlation analysis, which indicates relations among current values at one site and current and future values at a second site, also can be done. Drought-projection analysis in the HyDroDSS indicates the risk for being in a hydrologic drought condition during the current month and the five following months with and without pumping. 
Drought-projection analysis also indicates the potential effectiveness of water-conservation methods for mitigating the effect of withdrawals in the coming months on the basis of the amount of depletion caused by different pumping plans and on the risk of unaltered flows being below streamflow targets. Drought-projection analysis in the HyDroDSS is done with Monte Carlo methods by using the position analysis method. In this method the initial value of estimated unaltered streamflows is calculated by correlation to a measured hydrologic variable (monthly precipitation, groundwater levels, or streamflows from an index station identified with the rank correlation analysis). Then a pseudorandom number generator is used to create 251 six-month-long flow traces by using a bootstrap method. Serial correlation of the estimated unaltered monthly minimum streamflows determined from the rank correlation analysis is preserved within each flow trace. The sample of unaltered streamflows indicates the risk of being below flow targets in the coming months under simulated natural conditions (without historic withdrawals). The streamflow-depletion algorithms are then used to estimate risks of flow being below targets if selected pumping plans are used. This report also describes the implementation of the HyDroDSS. The HyDroDSS was developed as a Microsoft Access® database application to facilitate storage, handling, and use of hydrologic datasets with a simple graphical user interface. The program is implemented in the database by using the Visual Basic for Applications® (VBA) programming language. Program source code for the analytical techniques is provided in the HyDroDSS and in electronic text files accompanying this report. Program source code for the graphical user interface and for data-handling code, which is specific to Microsoft Access® and the HyDroDSS, is provided in the database. An installation package with a run-time version of the software is available with this report for potential users who do not have a compatible copy of Microsoft Access®. Administrative rights are needed to install this version of the HyDroDSS. A case study, to demonstrate the use of HyDroDSS and interpretation of results for a site of interest, is detailed for the USGS streamgage on the Hunt River (station 01117000) near East Greenwich in central Rhode Island. The Hunt River streamgage was used because it has a long record of streamflow and is in a well-studied basin with a substantial amount of hydrologic and water-use data including groundwater pumping for municipal water supply.
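
    The position-analysis step above lends itself to a compact sketch. The Python fragment below is a hypothetical simplification, not the HyDroDSS itself (which is implemented in Microsoft Access/VBA): it keeps the report's count of 251 traces and the idea of preserving month-to-month persistence, here by resampling historical month-over-month flow ratios, but the function names, the ratio-resampling scheme, and the flow target are invented for illustration.

```python
import numpy as np

def bootstrap_flow_traces(monthly_flows, start_flow, n_traces=251, horizon=6, seed=1):
    """Generate flow traces by resampling historical month-to-month flow
    ratios, a simple way to carry serial dependence into each trace
    (an illustrative stand-in for the HyDroDSS position-analysis method)."""
    rng = np.random.default_rng(seed)
    flows = np.asarray(monthly_flows, dtype=float)
    ratios = flows[1:] / flows[:-1]          # historical persistence structure
    traces = np.empty((n_traces, horizon))
    for i in range(n_traces):
        q = start_flow
        for m in range(horizon):
            q *= rng.choice(ratios)          # bootstrap one month ahead
            traces[i, m] = q
    return traces

# Risk of falling below a hypothetical flow target in each coming month:
rng = np.random.default_rng(0)
history = 10 + 5 * np.sin(np.linspace(0, 12, 120)) + rng.gamma(2, 1, 120)
traces = bootstrap_flow_traces(history, start_flow=8.0)
print((traces < 6.0).mean(axis=0))           # fraction of traces below target
```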

  7. Hard-sphere fluid adsorbed in an annular wedge: The depletion force of hard-body colloidal physics

    NASA Astrophysics Data System (ADS)

    Herring, A. R.; Henderson, J. R.

    2007-01-01

    Many important issues of colloidal physics can be expressed in the context of inhomogeneous fluid phenomena. When two large colloids approach one another in solvent, they interact at least partly through the response of the solvent to finding itself adsorbed in the annular wedge formed between the two colloids. At shortest range, this fluid-mediated interaction is known as the depletion force/interaction, because solvent is squeezed out of the wedge when the colloids approach closer than the diameter of a solvent molecule. An equivalent situation arises when a single colloid approaches a substrate/wall. Accurate treatment of this interaction is essential for any theory developed to model the phase diagrams of homogeneous and inhomogeneous colloidal systems. The aim of our paper is to test whether we possess knowledge of statistical mechanics sufficient to be trusted when applied to systems of large size asymmetry, and to the depletion force in particular. When the colloid particles are much larger than a solvent diameter, the depletion force is dominated by the effective two-body interaction experienced by a pair of solvated colloids. This low-concentration limit of the depletion force has therefore received considerable attention. One route, which can be rigorously based on statistical mechanical sum rules, leads to an analytic result for the depletion force when evaluated by a key theoretical tool of colloidal science known as the Derjaguin approximation. A rival approach has been based on the assumption that modern density functional theories (DFT) can be trusted for systems of large size asymmetry. Unfortunately, these two theoretical predictions differ qualitatively for hard-sphere models as soon as the solvent density is higher than about 2/3 of that at freezing. Recent theoretical attempts to understand this dramatic disagreement have led to the proposal that the Derjaguin and DFT routes represent opposite limiting behavior, for very large size asymmetry and molecular-sized mixtures, respectively. This proposal implies that nanocolloidal systems lie in between the two limits, so that the depletion force no longer scales linearly with the colloid radius. That is, by decreasing the size ratio from mesoscopic to molecular-sized solutes, one moves smoothly between the Derjaguin and the DFT predictions for the depletion force scaled by the colloid radius. We describe the results of a simulation study designed specifically as a test of compatibility with this complex scenario. Grand canonical simulation procedures applied to a hard-sphere fluid adsorbed in a series of annular wedges, representing the depletion regime of hard-body colloidal physics, confirm that neither the Derjaguin approximation nor advanced formulations of DFT apply at moderate to high solvent density when the geometry is appropriate to nanosized colloids. Our simulations also allow us to report structural characteristics of hard-body solvent adsorbed in hard annular wedges. Both these aspects are key ingredients in the proposal that unifies the disparate predictions via the introduction of new physics. Our data are consistent with this proposed physics, although as yet limited to a single colloidal size asymmetry.
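
    For orientation, the Derjaguin route referred to above reduces the sphere geometry to the planar limit. A standard textbook statement (not quoted from this paper) for a sphere of radius R at surface separation h from a wall is:

```latex
% Derjaguin approximation: the sphere-wall force follows from the excess
% free energy per unit area W(h) of two parallel plates at separation h.
F_{\text{sphere-wall}}(h) \simeq 2\pi R\, W(h),
\qquad
W(h) = \int_{h}^{\infty} f_{\text{plates}}(h')\,\mathrm{d}h' .
```

    This linear scaling of the force with the colloid radius is precisely the behavior that, according to the proposal discussed above, breaks down for nanosized colloids.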

  8. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  9. Depletion of CD11c+ Cells Does Not Influence Outcomes in Mice Subjected to Transient Middle Cerebral Artery Occlusion.

    PubMed

    Kraft, Peter; Scholtyschik, Karolina; Schuhmann, Michael K; Kleinschnitz, Christoph

    2017-01-01

    While it has been shown that different T-cell subsets have a detrimental role in the acute phase of ischemic stroke, data on the impact of dendritic cells (DC) are missing. Classic DC can be characterized by the cluster of differentiation (CD)11c surface antigen. In this study, we depleted CD11c+ cells by using a CD11c-diphtheria toxin (DTX) receptor mouse strain that allows selective depletion of CD11c+ cells by DTX injection. For stroke induction, we used the model of transient middle cerebral artery occlusion (tMCAO) and analyzed stroke volume and functional outcome on days 1 and 3, as well as expression of prototypical pro- and anti-inflammatory cytokines on day 1 after tMCAO. Three different protocols, varying in CD11c+ cell depletion regimen, tMCAO duration, and readout time point, were applied. Injection of DTX (5 or 100 ng/g) reliably depleted CD11c+ cells without influencing the fractions of other immune cell subsets. CD11c+ cell depletion had no impact on stroke volume, but mice with a longer DTX pretreatment performed worse than those with vehicle treatment. CD11c+ cell depletion led to a decrease in cortical interleukin (IL)-1β and IL-6 messenger ribonucleic acid levels. We show, for the first time, that CD11c+ cell depletion does not influence stroke volume in a mouse model of focal cerebral ischemia. Nevertheless, given that the CD11c surface antigen is not specific for DC, mouse models that allow a more selective depletion of DC are needed to investigate the role of DC in stroke pathophysiology. © 2017 S. Karger AG, Basel.

  10. Redwing: A MOOSE application for coupling MPACT and BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frederick N. Gleicher; Michael Rose; Tom Downar

    Fuel performance and whole core neutron transport programs are often used to analyze fuel behavior as it is depleted in a reactor. For fuel performance programs, internal models provide the local intra-pin power density, fast neutron flux, burnup, and fission rate density, which are needed for a fuel performance analysis. The fuel performance internal models have a number of limitations. These include effects on the intra-pin power distribution from nearby assembly elements, such as water channels and control rods, and the further limitation of applicability to a specified fuel type such as low-enriched UO2. In addition, whole core neutron transport codes need an accurate intra-pin temperature distribution in order to calculate neutron cross sections. Fuel performance simulations are able to model the intra-pin fuel displacement as the fuel expands and densifies. These displacements must be accurately modeled in order to capture the eventual mechanical contact of the fuel and the clad; the correct radial gap width is needed for an accurate calculation of the temperature distribution of the fuel rod. Redwing is a MOOSE-based application that enables coupling between MPACT and BISON for transport and fuel performance coupling. MPACT is a 3D neutron transport and reactor core simulator based on the method of characteristics (MOC). The development of MPACT began at the University of Michigan (UM) and it is now under the joint development of ORNL and UM as part of the DOE CASL Simulation Hub. MPACT is able to model the effects of local assembly elements and is able to calculate intra-pin quantities such as the local power density on a volumetric mesh for any fuel type. BISON is a fuel performance application of the Multiphysics Object Oriented Simulation Environment (MOOSE), which is under development at Idaho National Laboratory. BISON is able to solve the nonlinearly coupled mechanical deformation and heat transfer finite element equations that model a fuel element as it is depleted in a nuclear reactor. Redwing couples BISON and MPACT in a single application. Redwing maps and transfers the individual intra-pin quantities such as fission rate density, power density, and fast neutron flux from the MPACT volumetric mesh to the individual BISON finite element meshes. For a two-way coupling, Redwing maps and transfers the individual pin temperature field and axially dependent coolant densities from the BISON mesh to the MPACT volumetric mesh. Details of the mapping are given. Redwing advances the simulation with the MPACT solution for each depletion time step and then advances the multiple BISON simulations for fuel performance calculations. Sub-cycle advancement can be applied to the individual BISON simulations and allows multiple time steps to be applied to the fuel performance simulations. Currently, only loose coupling, in which data from the previous time step are applied to the current time step, is performed.
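
    The loose-coupling cycle just described, transport advancing first with a lagged temperature field and fuel performance following with the fresh power, can be caricatured in a few lines. The sketch below is an illustrative assumption about the data flow only; the solver functions, feedback coefficients, and sub-cycling scheme are invented toys, not Redwing's MOOSE implementation.

```python
from dataclasses import dataclass

@dataclass
class Fields:
    power_density: float = 1.0       # relative units, transport -> fuel performance
    fuel_temperature: float = 600.0  # K, fuel performance -> transport

def transport_step(f: Fields) -> None:
    # Transport solve using the *previous* step's temperature (loose coupling):
    # hotter fuel -> Doppler feedback -> slightly lower power (toy model).
    f.power_density = 1.0 + 0.5 * (900.0 - f.fuel_temperature) / 900.0

def fuel_performance_step(f: Fields, dt: float, n_subcycles: int = 4) -> None:
    # Fuel-performance solve, sub-cycled within one depletion step.
    sub_dt = dt / n_subcycles
    for _ in range(n_subcycles):
        target = 500.0 + 400.0 * f.power_density   # toy heat balance
        f.fuel_temperature += (target - f.fuel_temperature) * min(1.0, sub_dt)

f = Fields()
for step in range(5):                  # depletion time steps
    transport_step(f)                  # uses lagged temperature
    fuel_performance_step(f, dt=0.5)   # uses fresh power
    print(step, round(f.power_density, 4), round(f.fuel_temperature, 1))
```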

  11. Induced nanoparticle aggregation for short nucleic acid quantification by depletion isotachophoresis.

    PubMed

    Marczak, Steven; Senapati, Satyajyoti; Slouka, Zdenek; Chang, Hsueh-Chia

    2016-12-15

    A rapid (<20 min) gel-membrane biochip platform for the detection and quantification of short nucleic acids is presented, based on a sandwich assay with probe-functionalized gold nanoparticles and their separation into concentrated bands by depletion-generated gel isotachophoresis. The platform sequentially exploits the enrichment and depletion phenomena of an ion-selective cation-exchange membrane created under an applied electric field. Enrichment is used to concentrate the nanoparticles and targets at a localized position at the gel-membrane interface for rapid hybridization. The depletion generates an isotachophoretic zone without the need for different-conductivity buffers, and is used to separate linked nanoparticles from isolated ones in the gel medium and then by field-enhanced aggregation of only the linked particles at the depletion front. The selective field-induced aggregation of the linked nanoparticles during the subsequent depletion step produces two lateral-flow-like bands within 1 cm for easy visualization and quantification, as the aggregates have negligible electrophoretic mobility in the gel and the isolated nanoparticles are isotachophoretically packed against the migrating depletion front. The detection limit for 69-base single-stranded DNA targets is 10 pM (about 10 million copies for our sample volume) with high selectivity against nontargets and a three-decade linear range for quantification. The selectivity and signal intensity are maintained in heterogeneous mixtures where the nontargets outnumber the targets 10,000 to 1. The selective field-induced aggregation of DNA-linked nanoparticles at the ion depletion front is attributed to their trailing position at the isotachophoretic front with a large field gradient. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Folate depletion changes gene expression of fatty acid metabolism, DNA synthesis, and circadian cycle in male mice.

    PubMed

    Champier, Jacques; Claustrat, Francine; Nazaret, Nicolas; Fèvre Montange, Michelle; Claustrat, Bruno

    2012-02-01

    Folate is essential for purine and thymidylate biosynthesis and in methyl transfer for DNA methylation. Folate deficiency alters the secretion of melatonin, a hormone involved in circadian rhythm entrainment, and causes hyperhomocysteinemia because of disruption of homocysteine metabolism. Adverse effects of homocysteine include the generation of free radicals, activation of proliferation or apoptosis, and alteration of gene expression. The liver is an important organ for folate metabolism, and its genome analysis has revealed numerous clock-regulated genes. The variations at the level of their expression during folate deficiency are not known. The aim of our study was to investigate the effects of folate deficiency on gene expression in the mouse liver. A control group receiving a synthetic diet and a folate-depleted group were housed for 4 weeks on a 12-hour/12-hour light/dark cycle. Three mice from each group were euthanized under dim red light at the beginning of the light cycle, and 3, at the beginning of the dark period. Gene expression was studied in a microarray analysis. Of the 53 genes showing modified daily expression in the controls, 52 showed a less marked or no difference after folate depletion. Only 1, lpin1, showed a more marked difference. Ten genes coding for proteins involved in lipid metabolism did not show a morning/evening difference in controls but did after folate depletion. This study shows that, in the mouse liver, dietary folate depletion leads to major changes in expression of several genes involved in fatty acid metabolism, DNA synthesis, and expression of circadian genes. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. The effect of catastrophic collisional fragmentation and diffuse medium accretion on a computational interstellar dust system

    NASA Technical Reports Server (NTRS)

    Liffman, Kurt

    1990-01-01

    The effects of catastrophic collisional fragmentation and diffuse medium accretion on the interstellar dust system are computed using a Monte Carlo computer model. The Monte Carlo code has as its basis an analytic solution of the bulk chemical evolution of a two-phase interstellar medium, described by Liffman and Clayton (1989). The model dust is subjected to numerous different interstellar processes as it transfers from one interstellar phase to another. Collisional fragmentation was found to be the dominant physical process shaping the size spectrum of interstellar dust. It was found that, in the diffuse cloud phase, 90 percent of the refractory material is locked up in the dust grains, primarily due to accretion in the molecular medium. This result is consistent with the observed depletions of silicon. Depletions were found to be affected only slightly by diffuse cloud accretion.

  14. Understanding Oxyaquic Classification in Light of Field Data

    NASA Astrophysics Data System (ADS)

    Lindbo, David L.; Anderson, Debbie; Vick, Roy; Vepraskas, Michael; Amoozegar, Aziz

    2014-05-01

    Hydropedologic studies of seasonal saturation and hydraulic conductivity add to the knowledge needed to make accurate land-use interpretations, particularly as related to land application of waste (liquid and solid) and many urban land uses. Soils mapped in the Carolina Slate Belt in the southeastern region of the United States, including the benchmark Tatum and Chewacla Series, are no exception to this, and proper identification of seasonal saturation in these soils is critical as urban and suburban development increases in this region. Soils related to the catena may lack the typical chroma-2 redox depletions commonly used to identify seasonal saturation, even though a high water table is often directly observed in these soils. When a seasonal high water table is determined, the soil may be classified as oxyaquic. However, if chroma-2 depletions are absent (or present at greater depths than the seasonal saturation), local or state land use codes may misidentify the depth to saturation. The hydropedologic data from this study have shown that the redox depletions in this area are indeed related to saturation. This fact has been debated by consultants and local health departments. Prior to this study, one prevailing view was that the low-chroma features were simply due to stripping or leaching of Fe in old cotton or tobacco fields and were in no way related to saturation. Based on the evidence in this study, the interpretation of the redox depletions, oxyaquic conditions, and occurrence of episaturation will need to be reconsidered.

  15. RSC-dependent constructive and destructive interference between opposing arrays of phased nucleosomes in yeast

    PubMed Central

    Ganguli, Dwaipayan; Chereji, Răzvan V.; Iben, James R.; Cole, Hope A.

    2014-01-01

    RSC and SWI/SNF are related ATP-dependent chromatin remodeling machines that move nucleosomes, regulating access to DNA. We addressed their roles in nucleosome phasing relative to transcription start sites in yeast. SWI/SNF has no effect on phasing at the global level. In contrast, RSC depletion results in global nucleosome repositioning: Both upstream and downstream nucleosomal arrays shift toward the nucleosome-depleted region (NDR), with no change in spacing, resulting in a narrower and partly filled NDR. The global picture of RSC-depleted chromatin represents the average of a range of chromatin structures, with most genes showing a shift of the +1 or the −1 nucleosome into the NDR. Using RSC ChIP data reported by others, we show that RSC occupancy is highest on the coding regions of heavily transcribed genes, though not at their NDRs. We propose that RSC has a role in restoring chromatin structure after transcription. Analysis of gene pairs in different orientations demonstrates that phasing patterns reflect competition between phasing signals emanating from neighboring NDRs. These signals may be in phase, resulting in constructive interference and a regular array, or out of phase, resulting in destructive interference and fuzzy positioning. We propose a modified barrier model, in which a stable complex located at the NDR acts as a bidirectional phasing barrier. In RSC-depleted cells, this barrier has a smaller footprint, resulting in narrower NDRs. Thus, RSC plays a critical role in organizing yeast chromatin. PMID:25015381

  16. RSC-dependent constructive and destructive interference between opposing arrays of phased nucleosomes in yeast.

    PubMed

    Ganguli, Dwaipayan; Chereji, Răzvan V; Iben, James R; Cole, Hope A; Clark, David J

    2014-10-01

    RSC and SWI/SNF are related ATP-dependent chromatin remodeling machines that move nucleosomes, regulating access to DNA. We addressed their roles in nucleosome phasing relative to transcription start sites in yeast. SWI/SNF has no effect on phasing at the global level. In contrast, RSC depletion results in global nucleosome repositioning: Both upstream and downstream nucleosomal arrays shift toward the nucleosome-depleted region (NDR), with no change in spacing, resulting in a narrower and partly filled NDR. The global picture of RSC-depleted chromatin represents the average of a range of chromatin structures, with most genes showing a shift of the +1 or the -1 nucleosome into the NDR. Using RSC ChIP data reported by others, we show that RSC occupancy is highest on the coding regions of heavily transcribed genes, though not at their NDRs. We propose that RSC has a role in restoring chromatin structure after transcription. Analysis of gene pairs in different orientations demonstrates that phasing patterns reflect competition between phasing signals emanating from neighboring NDRs. These signals may be in phase, resulting in constructive interference and a regular array, or out of phase, resulting in destructive interference and fuzzy positioning. We propose a modified barrier model, in which a stable complex located at the NDR acts as a bidirectional phasing barrier. In RSC-depleted cells, this barrier has a smaller footprint, resulting in narrower NDRs. Thus, RSC plays a critical role in organizing yeast chromatin. Published by Cold Spring Harbor Laboratory Press.

  17. Lack of harmonization in sweat testing for cystic fibrosis - a national survey.

    PubMed

    Christiansen, Anne Lindegaard; Nybo, Mads

    2014-11-01

    Sweat testing is used in the diagnosis of cystic fibrosis. Interpretation of the sweat test depends, however, on the method performed, since conductivity, osmolality, and chloride concentration can all be measured as part of a sweat test. The aim of this study was to investigate how performance of the test is organized in Denmark. Departments conducting the sweat test were contacted and interviewed following a premade questionnaire. They were asked about methods performed, applied NPU (Nomenclature for Properties and Units) code, reference interval, recommended interpretation, and referred literature. Fourteen departments performed the sweat test. One department measured chloride and sodium concentration, while 13 departments measured conductivity. One department used a non-existing NPU code, two departments applied NPU codes inconsistent with the method performed, four departments applied no NPU code, and seven applied a correct NPU code. Ten of the departments measuring conductivity applied reference intervals. Nine departments measuring conductivity had recommendations for a normal area, a grey zone, and a pathological value, while four departments applied only a normal and grey zone or a pathological value. Cut-off values for the normal, grey, and pathological areas were, like the reference intervals, inconsistent. There is inconsistent use of NPU codes, reference intervals, and interpretation of sweat conductivity in the process of diagnosing cystic fibrosis. Because diagnosing cystic fibrosis is a combined effort between local pediatric departments, biochemical and genetic departments, and cystic fibrosis centers, national harmonization is necessary to assure correct clinical use.

  18. Molecular Pathways: Disrupting polyamine homeostasis as a therapeutic strategy for neuroblastoma

    PubMed Central

    Evageliou, Nicholas F.; Hogarty, Michael D.

    2009-01-01

    MYC genes are deregulated in a plurality of human cancers. Through direct and indirect mechanisms, the MYC network regulates the expression of >15% of the human genome, including both protein-coding and non-coding RNAs. This complexity has complicated efforts to define the principal pathways mediating MYC’s oncogenic activity. MYC plays a central role in providing for the bioenergetic and biomass needs of proliferating cells, and polyamines are essential cell constituents supporting many of these functions. The rate-limiting enzyme in polyamine biosynthesis, ODC, is a bona fide MYC target, as are other regulatory enzymes in this pathway. A wealth of data link enhanced polyamine biosynthesis to cancer progression, and polyamine depletion may limit malignant transformation of pre-neoplastic lesions. Studies using transgenic cancer models also support the view that the effect of MYC on tumor initiation and progression can be attenuated through repression of polyamine production. High-risk neuroblastomas (an often lethal embryonal tumor in which MYC activation is paramount) deregulate numerous polyamine enzymes to promote expansion of intracellular polyamine pools. Selective inhibition of key enzymes in this pathway, e.g., using DFMO and/or SAM486, reduces tumorigenesis and synergizes with chemotherapy to regress tumors in pre-clinical models. Here we review the potential clinical application of these and additional polyamine-depletion agents to neuroblastoma and other advanced cancers in which MYC is operative. PMID:19789308

  19. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
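
    To make the idea of capturing dominant input-output relationships concrete, the sketch below samples a hypothetical black-box model and compresses its outputs with an SVD. It is a generic caricature of reduced order modeling under invented dimensions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, rank = 50, 200, 3
# Stand-in for a coupled assembly calculation: a high-dimensional response
# that secretly depends on only a few input directions.
A = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in)) / n_in

def expensive_model(x):
    return np.tanh(A @ x)

X = rng.standard_normal((n_in, 40))                     # 40 random input samples
Y = np.column_stack([expensive_model(x) for x in X.T])  # output snapshots
U, s, _ = np.linalg.svd(Y, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
print("reduced dimension:", k, "of", n_out)  # subsequent runs live in U[:, :k]
```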

  20. Analysis of protein-coding genetic variation in 60,706 humans.

    PubMed

    Lek, Monkol; Karczewski, Konrad J; Minikel, Eric V; Samocha, Kaitlin E; Banks, Eric; Fennell, Timothy; O'Donnell-Luria, Anne H; Ware, James S; Hill, Andrew J; Cummings, Beryl B; Tukiainen, Taru; Birnbaum, Daniel P; Kosmicki, Jack A; Duncan, Laramie E; Estrada, Karol; Zhao, Fengmei; Zou, James; Pierce-Hoffman, Emma; Berghout, Joanne; Cooper, David N; Deflaux, Nicole; DePristo, Mark; Do, Ron; Flannick, Jason; Fromer, Menachem; Gauthier, Laura; Goldstein, Jackie; Gupta, Namrata; Howrigan, Daniel; Kiezun, Adam; Kurki, Mitja I; Moonshine, Ami Levy; Natarajan, Pradeep; Orozco, Lorena; Peloso, Gina M; Poplin, Ryan; Rivas, Manuel A; Ruano-Rubio, Valentin; Rose, Samuel A; Ruderfer, Douglas M; Shakir, Khalid; Stenson, Peter D; Stevens, Christine; Thomas, Brett P; Tiao, Grace; Tusie-Luna, Maria T; Weisburd, Ben; Won, Hong-Hee; Yu, Dongmei; Altshuler, David M; Ardissino, Diego; Boehnke, Michael; Danesh, John; Donnelly, Stacey; Elosua, Roberto; Florez, Jose C; Gabriel, Stacey B; Getz, Gad; Glatt, Stephen J; Hultman, Christina M; Kathiresan, Sekar; Laakso, Markku; McCarroll, Steven; McCarthy, Mark I; McGovern, Dermot; McPherson, Ruth; Neale, Benjamin M; Palotie, Aarno; Purcell, Shaun M; Saleheen, Danish; Scharf, Jeremiah M; Sklar, Pamela; Sullivan, Patrick F; Tuomilehto, Jaakko; Tsuang, Ming T; Watkins, Hugh C; Wilson, James G; Daly, Mark J; MacArthur, Daniel G

    2016-08-18

    Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. Here we describe the aggregation and analysis of high-quality exome (protein-coding region) DNA sequence data for 60,706 individuals of diverse ancestries generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation, identifying 3,230 genes with near-complete depletion of predicted protein-truncating variants, 72% of which have no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human 'knockout' variants in protein-coding genes.
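
    At its core, the constraint metric mentioned here is an observed-versus-expected comparison per gene. The toy calculation below is only a schematic of that idea, with invented numbers; the actual ExAC constraint model derives its expectations from a detailed sequence-context mutation-rate model.

```python
def ptv_depletion(observed: int, expected: float) -> float:
    """Fraction of expected protein-truncating variants (PTVs) that are
    missing from a gene; values near 1.0 suggest strong selective constraint."""
    return 1.0 - observed / expected

# Hypothetical gene: 25 PTVs expected across 60,706 exomes, 1 observed.
print(f"PTV depletion: {ptv_depletion(1, 25.0):.2f}")  # -> 0.96
```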

  1. The stopping powers and energy straggling of heavy ions in polymer foils

    NASA Astrophysics Data System (ADS)

    Mikšová, R.; Macková, A.; Malinský, P.; Hnatowicz, V.; Slepička, P.

    2014-07-01

    The stopping power and energy straggling of 7Li, 12C and 16O ions in thin poly(etheretherketone) (PEEK), polyethylene terephthalate (PET) and polycarbonate (PC) foils were measured in the incident beam energy range of 9.4-11.8 MeV using an indirect transmission method. Ions scattered from a thin gold target at an angle of 150° were registered by a partially depleted PIPS detector, partly shielded with a polymer foil placed in front of the detector. Therefore, the signals from both direct and slowed down ions were visible in the same energy spectrum, which was evaluated by the ITAP code, developed at our laboratory. The ITAP code was employed to perform a Gaussian-fitting procedure to provide a complete analysis of each measured spectrum. The measured stopping powers were compared with the predictions obtained from the SRIM-2008 and MSTAR codes and with previous experimental data. The energy straggling data were compared with those calculated by using Bohr's, Lindhard-Scharff and Bethe-Livingston theories.
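
    The arithmetic behind such a transmission measurement is simple to state: fit the direct and foil-shielded peaks, difference the centroids to get the energy loss, and subtract the widths in quadrature to get the straggling. The sketch below illustrates this on synthetic data; it is not the ITAP code, and the foil areal density and peak parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(E, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

# Synthetic spectrum: direct peak at 10.0 MeV, foil-shielded peak at 8.2 MeV.
rng = np.random.default_rng(2)
E = np.linspace(7.0, 11.0, 400)
y = gauss(E, 900, 10.0, 0.05) + gauss(E, 600, 8.2, 0.11) + rng.poisson(5, E.size)

# Fit each peak in its own energy window.
p_direct, _ = curve_fit(gauss, E[E > 9.5], y[E > 9.5], p0=(800, 10.0, 0.1))
p_foil, _ = curve_fit(gauss, E[E < 9.5], y[E < 9.5], p0=(500, 8.2, 0.1))

areal_density = 2.0                                   # mg/cm^2, assumed
dE = p_direct[1] - p_foil[1]                          # energy loss in the foil
stopping = dE / areal_density                         # MeV cm^2/mg
straggling = np.sqrt(p_foil[2]**2 - p_direct[2]**2)   # quadrature subtraction
print(f"S = {stopping:.3f} MeV cm^2/mg, straggling sigma = {straggling*1e3:.0f} keV")
```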

  2. A genome-wide survey of maternal and embryonic transcripts during Xenopus tropicalis development.

    PubMed

    Paranjpe, Sarita S; Jacobi, Ulrike G; van Heeringen, Simon J; Veenstra, Gert Jan C

    2013-11-06

    Dynamics of polyadenylation vs. deadenylation determine the fate of several developmentally regulated genes. Decay of a subset of maternal mRNAs and new transcription define the maternal-to-zygotic transition, but the full complement of polyadenylated and deadenylated coding and non-coding transcripts has not yet been assessed in Xenopus embryos. To analyze the dynamics and diversity of coding and non-coding transcripts during development, both polyadenylated mRNA and ribosomal RNA-depleted total RNA were harvested across six developmental stages and subjected to high throughput sequencing. The maternally loaded transcriptome is highly diverse and consists of both polyadenylated and deadenylated transcripts. Many maternal genes show peak expression in the oocyte and include genes which are known to be the key regulators of events like oocyte maturation and fertilization. Of all the transcripts that increase in abundance between early blastula and larval stages, about 30% of the embryonic genes are induced by fourfold or more by the late blastula stage and another 35% by late gastrulation. Using a gene model validation and discovery pipeline, we identified novel transcripts and putative long non-coding RNAs (lncRNA). These lncRNA transcripts were stringently selected as spliced transcripts generated from independent promoters, with limited coding potential and a codon bias characteristic of noncoding sequences. Many lncRNAs are conserved and expressed in a developmental stage-specific fashion. These data reveal dynamics of transcriptome polyadenylation and abundance and provides a high-confidence catalogue of novel and long non-coding RNAs.

  3. Mesoscopic Field-Effect-Induced Devices in Depleted Two-Dimensional Electron Systems

    NASA Astrophysics Data System (ADS)

    Bachsoliani, N.; Platonov, S.; Wieck, A. D.; Ludwig, S.

    2017-12-01

    Nanoelectronic devices embedded in the two-dimensional electron system (2DES) of a GaAs/(Al,Ga)As heterostructure enable a large variety of applications ranging from fundamental research to high-speed transistors. Electrical circuits are thereby commonly defined by creating barriers for carriers by the selective depletion of a preexisting 2DES. We explore an alternative approach: we deplete the 2DES globally by applying a negative voltage to a global top gate and screen the electric field of the top gate only locally using nanoscale gates placed on the wafer surface between the plane of the 2DES and the top gate. Free carriers are located beneath the screen gates, and their properties can be controlled by means of geometry and applied voltages. This method promises considerable advantages for the definition of complex circuits by the electric-field effect, as it allows us to reduce the number of gates and simplify gate geometries. Examples are carrier systems with ring topology or large arrays of quantum dots. We present a first exploration of this method pursuing field effect, Hall effect, and Aharonov-Bohm measurements to study electrostatic, dynamic, and coherent properties.

  4. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Díez, C.J., E-mail: cj.diez@upm.es; Cabellos, O.; Instituto de Fusión Nuclear, Universidad Politécnica de Madrid, 28006 Madrid

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One approach proposed was the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence, their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems like EFIT (an ADS-like reactor) or ESFR (a sodium fast reactor) to assess uncertainties in the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.

  5. Two-photon excitation spectroscopy of carotenoid-containing and carotenoid-depleted LH2 complexes from purple bacteria.

    PubMed

    Stepanenko, Ilya; Kompanetz, Viktor; Makhneva, Zoya; Chekalin, Sergey; Moskalenko, Andrei; Razjivin, Andrei

    2009-08-27

    We applied two-photon fluorescence excitation spectroscopy to LH2 complexes from the purple bacteria Allochromatium minutissimum and Rhodobacter sphaeroides. Bacteriochlorophyll fluorescence was measured under two-photon excitation of the samples within the 1200-1500 nm region. Spectra were obtained for both carotenoid-containing and carotenoid-depleted complexes of each bacterium to allow their direct comparison. The depletion of carotenoids did not alter the two-photon excitation spectra of either bacterium. The spectra featured a wide excitation band around 1350 nm (2x675 nm, 14,800 cm⁻¹) which strongly resembled the two-photon fluorescence excitation spectra of similar complexes published by other authors. We consider the experimental data obtained to be evidence of direct two-photon excitation of bacteriochlorophyll excitonic states in this spectral region.

  6. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Cabellos, O.; Martínez, J. S.

    2015-01-01

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One approach proposed was the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence, their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems like EFIT (an ADS-like reactor) or ESFR (a sodium fast reactor) to assess uncertainties in the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.

  7. Simulations of the plasma dynamics in high-current ion diodes

    NASA Astrophysics Data System (ADS)

    Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.

    Our time-implicit fluid/particle-in-cell (PIC) code DYNAID [1] is applied to problems relevant to applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test-bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.

  8. Integral experiments on thorium assemblies with D-T neutron source

    NASA Astrophysics Data System (ADS)

    Liu, Rong; Yang, Yiwei; Feng, Song; Zheng, Lei; Lai, Caifeng; Lu, Xinxin; Wang, Mei; Jiang, Li

    2017-09-01

    To validate nuclear data and codes for the neutronics design of a hybrid reactor with thorium, integral experiments on two kinds of benchmark thorium assemblies with a D-T fusion neutron source have been performed. The first kind, the 1D assemblies, consists of polyethylene and depleted uranium shells; the second kind, the 2D assemblies, consists of three thorium oxide cylinders. The capture reaction rates, fission reaction rates, and (n, 2n) reaction rates in 232Th in the assemblies are measured with ThO2 foils. The leakage neutron spectra from the ThO2 cylinders are measured with a liquid scintillation detector. The experimental uncertainties in all the results are analyzed. The measured results are compared to those calculated with the MCNP code and ENDF/B-VII.0 library data.

  9. National Emission Standards for Hazardous Air Pollutants (NESHAP) Memorandum of Agreement (MOA) Between NASA Headquarters and MSFC (Marshall Space Flight Center) for NASA Principal Center for Review of Clean Air Regulations

    NASA Technical Reports Server (NTRS)

    Caruso, Salvadore V.; Clark-Ingram, Marceia A.

    2000-01-01

    This paper presents a memorandum of agreement on Clean Air regulations. NASA Headquarters (Code JE and Code M) has asked MSFC to serve as the principal center for review of Clean Air Act (CAA) regulations. The purpose of the principal center is to provide centralized support to NASA Headquarters for the management and leadership of NASA's CAA regulation review process and to identify the potential impact of proposed CAA regulations on NASA program hardware and supporting facilities. The materials and processes utilized in the manufacture of NASA's programmatic hardware contain HAPs (hazardous air pollutants), VOCs (volatile organic compounds), and ODCs (ozone-depleting chemicals). This paper is presented in viewgraph form.

  10. Preparation and Immunoaffinity Depletion of Fresh Frozen Tissue Homogenates for Mass Spectrometry-Based Proteomics in the Context of Drug Target/Biomarker Discovery.

    PubMed

    Prieto, DaRue A; Chan, King C; Johann, Donald J; Ye, Xiaoying; Whitely, Gordon; Blonder, Josip

    2017-01-01

    The discovery of novel drug targets and biomarkers via mass spectrometry (MS)-based proteomic analysis of clinical specimens has proven to be challenging. The wide dynamic range of protein concentration in clinical specimens and the high background/noise originating from highly abundant proteins in tissue homogenates and serum/plasma constitute two major analytical obstacles. Immunoaffinity depletion of highly abundant blood-derived proteins from serum/plasma is a well-established approach adopted by numerous researchers; however, the utilization of this technique for immunodepletion of tissue homogenates obtained from fresh frozen clinical specimens is lacking. We first developed immunoaffinity depletion of highly abundant blood-derived proteins from tissue homogenates, using renal cell carcinoma as a model disease, and followed this study by applying the method to different tissue types. Immunoaffinity depletion of highly abundant proteins from tissue homogenates may be as important as the recognized need for depletion of serum/plasma, enabling more sensitive MS-based discovery of novel drug targets and/or clinical biomarkers from complex clinical samples. Provided is a detailed protocol designed to guide the researcher through the preparation and immunoaffinity depletion of fresh frozen tissue homogenates for two-dimensional liquid chromatography, tandem mass spectrometry (2D-LC-MS/MS)-based molecular profiling of tissue specimens in the context of drug target and/or biomarker discovery.

  11. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  12. Impact of spatial variation in snow water equivalent and snow ablation on spring snowcover depletion over an alpine ridge

    NASA Astrophysics Data System (ADS)

    Schirmer, Michael; Harder, Phillip; Pomeroy, John

    2016-04-01

    The spatial and temporal dynamics of mountain snowmelt are controlled by the spatial distribution of snow accumulation and redistribution and by the pattern of melt energy applied to the snowcover. In order to better quantify the spatial variations of accumulation and ablation, Structure-from-Motion techniques were applied to sequential aerial photographs of an alpine ridge in the Canadian Rocky Mountains taken from an Unmanned Aerial Vehicle (UAV). Seven spatial maps of snow depth and of changes in depth during late melt (May-July) were generated at very high resolution, covering an area of 800 x 600 m. The accuracy was assessed with over 100 GPS measurements, and the RMSE was found to be less than 10 cm. Low-resolution manual measurements of density permitted calculation of snow water equivalent (SWE) and change in SWE (ablation rate). The results indicate a highly variable initial SWE distribution, which was five times more variable than the spatial variation in ablation rate. Spatial variation in ablation rate was still substantial, with a factor-of-two difference between north and south aspects and small-scale variations due to local dust deposition. However, the impact of spatial variations in ablation rate on the snowcover depletion curve could not be discerned. The reason for this is that only a weak spatial correlation developed between SWE and ablation rate. These findings suggest that despite substantial variations in ablation rate, snowcover depletion curve calculations should emphasize the spatial variation of initial SWE rather than the variation in ablation rate. While there is scientific evidence from other field studies that supports this, there are also studies suggesting that spatial variations in ablation rate can influence snowcover depletion curves in complex terrain, particularly in early melt. The development of UAV photogrammetry has provided an opportunity for further detailed measurement of ablation rates, SWE, and snowcover depletion over complex terrain, and UAV field studies are recommended to clarify the relative importance of SWE and melt variability on snowcover depletion in various environmental conditions.
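
    The conclusion can be illustrated with a toy snowcover depletion calculation: the snow-covered area on day t is just the fraction of pixels whose SWE has not yet melted out. In the sketch below (all distributions invented), replacing spatially variable melt by its mean barely changes the depletion curve, because the high-variance, weakly correlated SWE distribution dominates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000                                              # pixels
swe0 = rng.lognormal(mean=5.0, sigma=0.8, size=n)       # initial SWE, mm (high CV)
melt = rng.normal(loc=10.0, scale=2.0, size=n).clip(1)  # mm/day (lower CV)

days = np.arange(60)
swe = swe0[None, :] - melt[None, :] * days[:, None]
sca = (swe > 0).mean(axis=1)                            # snow-covered area fraction

# Same mean melt everywhere: the depletion curve barely moves.
sca_uniform = (swe0[None, :] - 10.0 * days[:, None] > 0).mean(axis=1)
print(np.abs(sca - sca_uniform).max())                  # small difference
```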

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING AND CODING VERIFICATION (HAND ENTRY) (UA-D-14.0)

    EPA Science Inventory

    The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...

  14. Effectiveness of pulmonary rehabilitation in exercise capacity and quality of life in chronic obstructive pulmonary disease patients with and without global fat-free mass depletion.

    PubMed

    Berton, Danilo C; Silveira, Leonardo; Da Costa, Cassia C; De Souza, Rafael Machado; Winter, Claudia D; Zimermann Teixeira, Paulo José

    2013-08-01

    To investigate the effectiveness of pulmonary rehabilitation (PR) in exercise capacity and quality of life in patients with chronic obstructive pulmonary disease (COPD) with and without global fat-free mass (FFM) depletion. Retrospective case-control. Outpatient clinic, university center. COPD patients (N=102) who completed PR were initially evaluated. PR included whole-body and weight training for 12 weeks, 3 times per week. St. George's Respiratory Questionnaire (SGRQ), 6-minute walk distance (6MWD), and FFM evaluation applied before and after PR. Patients were stratified according to their FFM status measured by bioelectric impedance. They were considered depleted if the FFM index was ≤15 kg/m² in women and ≤16 kg/m² in men. From the initial sample, all depleted patients (n=31) composed the FFM-depleted group. It was composed predominantly of women (68%) with a mean age ± SD of 64.4±7.3 years and a forced expiratory volume in 1 second of 33.6±13.2% predicted. Paired for sex and age, 31 nondepleted patients were selected from the initial sample to compose the nondepleted group. Improvement in the 6MWD was similar in these 2 groups after PR. Both groups improved SGRQ scores, although the observed power was small and did not allow adequate comparison between depleted and nondepleted patients. There was no difference between groups in weight change, whereas the FFM gain tended to be greater in depleted patients. This increase had no correlation with the 6MWD or the SGRQ. Benefits of PR to exercise capacity were similar comparing FFM-depleted and nondepleted COPD patients. Although the FFM change tended to be greater in depleted patients, this increase had no definite relation with clinical outcomes. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  15. 17 CFR 229.406 - (Item 406) Code of ethics.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...

  16. LIDT-DD: A new self-consistent debris disc model that includes radiation pressure and couples dynamical and collisional evolution

    NASA Astrophysics Data System (ADS)

    Kral, Q.; Thébault, P.; Charnoz, S.

    2013-10-01

    Context. In most current debris disc models, the dynamical and the collisional evolutions are studied separately, with N-body and statistical codes, respectively, because of stringent computational constraints. In particular, incorporating collisional effects (especially destructive collisions) into an N-body scheme has proven a very arduous task because of the exponential increase in the number of particles it would imply. Aims: We present here LIDT-DD, the first code able to mix both approaches in a fully self-consistent way. Our aim is for it to be generic enough to be applied to any astrophysical case where we expect dynamics and collisions to be deeply interlocked with one another: planets in discs, violent massive breakups, destabilized planetesimal belts, bright exozodiacal discs, etc. Methods: The code takes its basic architecture from the LIDT3D algorithm for protoplanetary discs, but has been strongly modified and updated to handle the very constraining specificities of debris disc physics: high-velocity fragmenting collisions, radiation-pressure-affected orbits, absence of gas that never relaxes initial conditions, etc. It has a 3D Lagrangian-Eulerian structure, where grains of a given size at a given location in a disc are grouped into super-particles or tracers whose orbits are evolved with an N-body code and whose mutual collisions are individually tracked and treated using a particle-in-a-box prescription designed to handle fragmenting impacts. To cope with the wide range of possible dynamics for same-sized particles at any given location in the disc, and in order not to lose important dynamical information, tracers are sorted and regrouped into dynamical families depending on their orbits. A complex reassignment routine, which searches for redundant tracers in each family and reassigns them where they are needed, prevents the number of tracers from diverging. Results: The LIDT-DD code has been successfully tested on simplified cases for which robust results have been obtained in past studies: we retrieve the classical features of particle size distributions in unperturbed discs, the outer radial density profiles falling off as ~r^-1.5 outside narrow collisionally active rings, and the depletion of small grains in dynamically cold discs. The potential of the new code is illustrated with the test case of the violent breakup of a massive planetesimal within a debris disc. Preliminary results show that we are able for the first time to quantify the timescale over which the signature of such massive breakups can be detected. In addition to studying such violent transient events, the main potential future applications of the code are planet-disc interactions and, more generally, any configurations where dynamics and collisions are expected to be intricately connected.

  17. Hybrid concatenated codes and iterative decoding

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)

    2000-01-01

    Several improved turbo code apparatuses and methods are disclosed. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
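
    Class (1) above, one source feeding two encoders through an interleaver, is the classic parallel concatenation. A minimal sketch, using a toy recursive convolutional encoder rather than any encoder specified in the patent:

```python
import random

def rsc_parity(bits):
    """Toy recursive systematic convolutional encoder with a 2-bit state;
    returns one parity bit per input bit."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2           # recursive feedback
        parity.append(fb ^ s2)     # parity output
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, seed=0):
    perm = list(range(len(bits)))
    random.Random(seed).shuffle(perm)              # the interleaver
    interleaved = [bits[i] for i in perm]
    # Systematic bits plus two parity streams: overall rate ~1/3.
    return bits, rsc_parity(bits), rsc_parity(interleaved)

systematic, parity1, parity2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
print(systematic, parity1, parity2, sep="\n")
```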

  18. Photoelectrochemical molecular comb

    DOEpatents

    Thundat, Thomas G.; Ferrell, Thomas L.; Brown, Gilbert M.

    2006-08-15

    A method and apparatus for separating molecules. The apparatus includes a substrate having a surface. A film in contact with the surface defines a substrate/film interface. An electrode electrically connected to the film applies a voltage potential between the electrode and the substrate to form a depletion region in the substrate at the substrate/film interface. A photon energy source having an energy level greater than the potential is directed at the depletion region to form electron-hole pairs in the depletion region. At least one of the electron-hole pairs is separated by the potential into an independent electron and an independent hole, which have opposite charges and move in opposing directions. Either the electron or the hole reaches the substrate/film interface, creating a photopotential in the film that causes charged molecules in the film to move in response to the localized photovoltage.

  19. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory are able to improve the efficiency of neuro-inspired and neural associative memories by imposing some construction and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase its performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: performing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field GF(4), which have very interesting properties and a simple graph representation.
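
    The clustered clique storage the letter builds on can be sketched directly: one active unit per cluster, a stored pattern becoming a clique of pairwise links, and retrieval scoring candidate units against the known ones. The bare-bones version below uses invented sizes and omits the local coding and precoding steps that are the letter's actual contribution.

```python
import itertools
import numpy as np

n_clusters, fanals = 4, 8                    # 4 clusters of 8 units ("fanals")
W = np.zeros((n_clusters * fanals, n_clusters * fanals), dtype=bool)

def unit(cluster, value):
    return cluster * fanals + value          # index of one unit

def store(pattern):                          # one value per cluster -> a clique
    units = [unit(c, v) for c, v in enumerate(pattern)]
    for a, b in itertools.combinations(units, 2):
        W[a, b] = W[b, a] = True

def retrieve(partial):                       # None marks an erased cluster
    known = [unit(c, v) for c, v in enumerate(partial) if v is not None]
    out = []
    for c, v in enumerate(partial):
        if v is not None:
            out.append(v)
        else:                                # value best supported by the clique
            scores = [W[unit(c, x), known].sum() for x in range(fanals)]
            out.append(int(np.argmax(scores)))
    return out

store([1, 5, 2, 7])
print(retrieve([1, None, 2, None]))          # -> [1, 5, 2, 7] absent interference
```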

  20. LITHIUM DEPLETION IS A STRONG TEST OF CORE-ENVELOPE RECOUPLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somers, Garrett; Pinsonneault, Marc H., E-mail: somers@astronomy.ohio-state.edu

    2016-09-20

    Rotational mixing is a prime candidate for explaining the gradual depletion of lithium from the photospheres of cool stars during the main sequence. However, previous mixing calculations have relied primarily on treatments of angular momentum transport in stellar interiors incompatible with solar and stellar data in the sense that they overestimate the internal differential rotation. Instead, recent studies suggest that stars are strongly differentially rotating at young ages but approach solid-body rotation during their lifetimes. We modify our rotating stellar evolution code to include an additional source of angular momentum transport, a necessary ingredient for explaining the open cluster rotation pattern, and examine the consequences for mixing. We confirm that core-envelope recoupling with a ∼20 Myr timescale is required to explain the evolution of the mean rotation pattern along the main sequence, and demonstrate that it also provides a more accurate description of the Li depletion pattern seen in open clusters. Recoupling produces a characteristic pattern of efficient mixing at early ages and little mixing at late ages, thus predicting a flattening of Li depletion at a few Gyr, in agreement with the observed late-time evolution. Using Li abundances we argue that the timescale for core-envelope recoupling during the main sequence decreases sharply with increasing mass. We discuss the implications of this finding for stellar physics, including the viability of gravity waves and magnetic fields as agents of angular momentum transport. We also raise the possibility of intrinsic differences in initial conditions in star clusters using M67 as an example.
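
    Core-envelope recoupling of the kind invoked here is often idealized as relaxation of the two zones toward co-rotation on a timescale tau. The sketch below integrates such a two-zone toy model; the coefficients and moment-of-inertia ratio are invented, and this is not the authors' stellar evolution code. The differential rotation, which drives the mixing that depletes lithium, is large early and decays away, matching the efficient-early/weak-late pattern described above.

```python
import numpy as np

def recouple(omega_core, omega_env, tau_myr=20.0, t_end_myr=200.0, dt=0.1,
             icore_over_ienv=0.1):
    """Relax core and envelope rotation toward co-rotation on timescale tau,
    conserving angular momentum between two constant-moment-of-inertia zones."""
    t = np.arange(0.0, t_end_myr, dt)
    wc, we = [omega_core], [omega_env]
    for _ in t[1:]:
        dw = wc[-1] - we[-1]                 # differential rotation ~ mixing rate
        wc.append(wc[-1] - dw / tau_myr * dt)
        we.append(we[-1] + dw / tau_myr * dt * icore_over_ienv)
    return t, np.array(wc), np.array(we)

t, wc, we = recouple(omega_core=10.0, omega_env=2.0)
print(wc[-1], we[-1])    # zones converge; differential rotation (and mixing) fades
```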

  1. Comparison of the effects of mechanical and osmotic pressures on the collagen fiber architecture of intact and proteoglycan-depleted articular cartilage.

    PubMed

    Saar, Galit; Shinar, Hadassah; Navon, Gil

    2007-04-01

    One of the functions of articular cartilage is to withstand the recurrent pressure applied in everyday life. In previous studies, osmotic pressure has been used to mimic the effects of mechanical pressure. In the present study, the response of the collagen network of intact and proteoglycan (PG)-depleted cartilage to mechanical and osmotic pressures is compared. The technique used is one-dimensional ²H double quantum filtered spectroscopic MRI, which gives information about the degree of order and the density of the collagen fibers at different locations throughout the intact tissue. For the nonpressurized plugs, the depletion had no effect on these parameters. Major differences were found in the zones near the bone between the effects of the two types of application of pressure, for both intact and depleted plugs. While the order is lost in these zones as a result of mechanical load, it is preserved under osmotic pressure. For both intact and PG-depleted plugs under osmotic stress, most of the collagen fibers become disordered. Our results indicate that different modes of strain are produced by unidirectional mechanical load and by isotropic osmotic stress. Thus, osmotic stress cannot serve as a model for the effect of load on cartilage in vivo.

  2. The influence of ozone forcing on blocking in the Southern Hemisphere

    NASA Astrophysics Data System (ADS)

    Dennison, Fraser W.; McDonald, Adrian; Morgenstern, Olaf

    2016-12-01

    We investigate the influence of ozone depletion and recovery on tropospheric blocking in the Southern Hemisphere. Blocking events are identified using a persistent positive anomaly method applied to 500 hPa geopotential height. Using the National Institute of Water and Atmospheric Research-United Kingdom Chemistry and Aerosols chemistry-climate model, we compare reference runs that include forcing due to greenhouse gases (GHGs) and ozone-depleting substances to sensitivity simulations in which ozone-depleting substances are fixed at their 1960 abundances, and other sensitivity simulations with GHGs fixed at their 1960 abundances. Blocking events in the South Atlantic are shown to follow stratospheric positive anomalies in the Southern Annular Mode (SAM) index; this is not the case for South Pacific blocking events. This relationship means that summer ozone depletion, and the corresponding positive SAM anomalies, leads to an increased frequency of blocking in the South Atlantic while having little effect in the South Pacific. Similarly, ozone recovery, having the opposite effect on the SAM, leads to a decline in blocking frequency in the South Atlantic, although this may be somewhat counteracted by the effect of increasing GHGs.
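
    The persistent-positive-anomaly identification step can be sketched compactly; in the Python sketch below, the 100 m threshold and 5-day persistence are placeholder choices, not necessarily the values used in the study:

        import numpy as np

        def blocking_events(z500_anom, threshold=100.0, min_days=5):
            """Return (start, end) index pairs of persistent positive anomalies.

            z500_anom : 1-D daily 500 hPa geopotential height anomaly [m]
            """
            above = z500_anom > threshold
            events, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start >= min_days:
                        events.append((start, i - 1))
                    start = None
            if start is not None and len(above) - start >= min_days:
                events.append((start, len(above) - 1))
            return events

        rng = np.random.default_rng(0)
        anom = rng.normal(0, 60, 365)
        anom[120:128] += 200        # inject a synthetic 8-day blocking episode
        print(blocking_events(anom))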

  3. Radiological and toxicological assessment of an external heat (burn) test of the 105MM cartridge, APFSDS-T, XM-774

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilchrist, R.L.; Parker, G.B.; Mishima, J.

    1978-03-01

    The potential radiological and toxicological hazard of depleted uranium aerosol release was investigated. This type of release might arise from accidents with XM-774 ammunition involving great heat. Twelve rounds of packaged ammunition were subjected to an external heat (burn) test. Examination of the site on the day following the test revealed that all 12 depleted uranium penetrators were completely intact. Oxidation of the penetrators was not apparent, even on the most severely burned projectile located at ground zero. Eleven of the 12 projectiles were recovered with the sabots intact; some sabots appeared charred. It was concluded that no airborne release of depleted uranium had occurred and consequently that there had been no radiological or toxicological hazard from DU during this test. However, this conclusion may not apply to the release of depleted uranium in other types of fires involving this ammunition because other factors may affect the fire. These factors include the type of fuel, the number of ammunition rounds, and the type of structure housing the ammunition.

  4. Molecularly imprinted composite cryogel for albumin depletion from human serum.

    PubMed

    Andaç, Müge; Baydemir, Gözde; Yavuz, Handan; Denizli, Adil

    2012-11-01

    A new composite protein-imprinted macroporous cryogel was prepared for depletion of albumin from human serum prior to use in proteome applications. Poly(hydroxyethyl methacrylate)-based molecularly imprinted polymer (MIP) composite cryogel was prepared with high gel fraction yields up to 83%, and its morphology and porosity were characterized by Fourier transform infrared spectroscopy, scanning electron microscopy, swelling studies, flow dynamics, and surface area measurements. Selective binding experiments were performed in the presence of the competitive proteins human transferrin (HTR) and myoglobin (MYB). MIP composite cryogel exhibited a high binding capacity and selectivity for human serum albumin (HSA) in the presence of HTR and MYB. The competitive adsorption amount for HSA in MIP composite cryogel is 722.1 mg/dL in the presence of the competitive proteins (HTR and MYB). The MIP composite cryogel column was successfully applied in a fast protein liquid chromatography system for selective depletion of albumin from human serum. The depletion ratio was substantially increased (to 85%) by embedding beads into the cryogel. Finally, MIP composite cryogel can be reused many times with no apparent decrease in HSA adsorption capacity. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Depletion of substance P and glutamate by capsaicin blocks respiratory rhythm in neonatal rat in vitro

    PubMed Central

    Morgado-Valle, Consuelo; Feldman, Jack L

    2004-01-01

    The specific role of the neuromodulator substance P (SP) and its target, the neurokinin 1 receptor (NK1R), in the generation and regulation of respiratory activity is not known. The preBötzinger complex (preBötC), an essential site for respiratory rhythm generation, contains glutamatergic NK1R-expressing neurones that are strongly modulated by exogenously applied SP or acute pharmacological blockade of NK1Rs. We investigated the effects of capsaicin, which depletes neuropeptides (including SP) and glutamate from presynaptic terminals, on respiratory motor output in medullary slice preparations of neonatal rat that generate respiratory-related activity. Bath application of capsaicin slowed respiratory motor output in a dose- and time-dependent manner. Respiratory rhythm could be restored by bath application of SP or glutamate transporter blockers. Capsaicin also evoked dose-dependent glutamate release and depleted SP in fibres within the preBötC. Our results suggest that depletion of SP (or other peptides) and/or glutamate by capsaicin causes a cessation of respiratory rhythm in neonatal rat slices. PMID:14724197

  6. Depletion of substance P and glutamate by capsaicin blocks respiratory rhythm in neonatal rat in vitro.

    PubMed

    Morgado-Valle, Consuelo; Feldman, Jack L

    2004-03-16

    The specific role of the neuromodulator substance P (SP) and its target, the neurokinin 1 receptor (NK1R), in the generation and regulation of respiratory activity is not known. The preBötzinger complex (preBötC), an essential site for respiratory rhythm generation, contains glutamatergic NK1R-expressing neurones that are strongly modulated by exogenously applied SP or acute pharmacological blockade of NK1Rs. We investigated the effects of capsaicin, which depletes neuropeptides (including SP) and glutamate from presynaptic terminals, on respiratory motor output in medullary slice preparations of neonatal rat that generate respiratory-related activity. Bath application of capsaicin slowed respiratory motor output in a dose- and time-dependent manner. Respiratory rhythm could be restored by bath application of SP or glutamate transporter blockers. Capsaicin also evoked dose-dependent glutamate release and depleted SP in fibres within the preBötC. Our results suggest that depletion of SP (or other peptides) and/or glutamate by capsaicin causes a cessation of respiratory rhythm in neonatal rat slices.

  7. A Verification-Driven Approach to Traceability and Documentation for Auto-Generated Mathematical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Fischer, Bernd

    2009-01-01

    Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.

  8. Connection anonymity analysis in coded-WDM PONs

    NASA Astrophysics Data System (ADS)

    Sue, Chuan-Ching

    2008-04-01

    A coded wavelength division multiplexing passive optical network (WDM PON) is presented for fiber to the home (FTTH) systems to protect against eavesdropping. The proposed scheme applies spectral amplitude coding (SAC) with a unipolar maximal-length sequence (M-sequence) code matrix to generate a specific signature address (coding) and to retrieve its matching address codeword (decoding) by exploiting the cyclic properties inherent in array waveguide grating (AWG) routers. In addition to ensuring the confidentiality of user data, the proposed coded-WDM scheme is also a suitable candidate for the physical layer with connection anonymity. Under the assumption that the eavesdropper applies a photo-detection strategy, it is shown that the coded WDM PON outperforms the conventional TDM PON and WDM PON schemes in terms of a higher degree of connection anonymity. Additionally, the proposed scheme allows the system operator to partition the optical network units (ONUs) into appropriate groups so as to achieve a better degree of anonymity.
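
    The encoding and balanced decoding can be illustrated numerically: rows of the code matrix are cyclic shifts of one unipolar M-sequence, and correlating the received spectrum with a codeword minus its complement cancels the multiple-access interference. The length-7 sequence below is an assumption for illustration:

        import numpy as np

        m = np.array([1, 1, 1, 0, 1, 0, 0])                 # length-7 m-sequence
        code = np.array([np.roll(m, k) for k in range(7)])  # cyclic code matrix

        bits = np.array([1, 0, 1, 1, 0, 0, 1])              # one bit per user
        received = bits @ code                              # superposed spectra

        # balanced receiver: codeword correlation minus complement correlation;
        # cyclic cross-correlations are constant, so interference cancels
        w = m.sum()                                         # codeword weight (= 4)
        decoded = [(received @ c - received @ (1 - c)) // w for c in code]
        print(decoded)                                      # recovers `bits` exactly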

  9. Neural code alterations and abnormal time patterns in Parkinson’s disease

    NASA Astrophysics Data System (ADS)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.
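
    A minimal sketch of a temporal structure function analysis of the kind described (the toy signal and the first-order statistic are assumptions, not the recorded GPi data or the authors' exact estimator):

        import numpy as np

        def structure_function(x, max_lag):
            """S(tau) = <|x(t + tau) - x(t)|> for tau = 1..max_lag."""
            return np.array([np.mean(np.abs(x[lag:] - x[:-lag]))
                             for lag in range(1, max_lag + 1)])

        rng = np.random.default_rng(0)
        # toy "firing rate" signal: low-scale structure plus broadband noise
        t = np.arange(4096)
        rate = 10 + 3 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)
        S = structure_function(rate, 200)
        # a plateau in S(tau) marks the scale where stochasticity dominates
        print(S[:5], S[-5:])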

  10. Radial Diffusion study of the 1 June 2013 CME event using MHD simulations.

    NASA Astrophysics Data System (ADS)

    Patel, M.; Hudson, M.; Wiltberger, M. J.; Li, Z.; Boyd, A. J.

    2016-12-01

    The June 1, 2013 storm was a CME-shock driven geomagnetic storm (Dst = -119 nT) that caused a dropout affecting all radiation belt electron energies measured by the Energetic Particle, Composition and Thermal Plasma Suite (ECT) instrument on the Van Allen Probes at higher L-shells following a dynamic pressure enhancement in the solar wind. Lower energies (up to about 700 keV) were enhanced by the storm while MeV electrons were depleted throughout the belt. We focus on depletion through radial diffusion caused by the enhanced ULF wave activity due to the CME shock. This study utilizes the Lyon-Fedder-Mobarry (LFM) model, a 3D global magnetospheric simulation code based on the ideal MHD equations, coupled with the Magnetosphere Ionosphere Coupler (MIX) and the Rice Convection Model (RCM). The MHD electric and magnetic fields are used, following the equations described by Fei et al. [JGR, 2006], to calculate radial diffusion coefficients (DLL). These DLL values are input into a radial diffusion code to recreate the dropouts observed by the Van Allen Probes. In this approach, the MHD simulations supply the diffusion coefficients, the ECT instrument suite supplies the initial phase space density and the outer boundary condition, and the radial diffusion model reproduces fluxes that compare favorably with Van Allen Probes ECT measurements, helping to clarify the complex role that ULF waves play in radial transport and the effects of CME-driven storms on relativistic electrons in the radiation belts.
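
    The final step, driving a radial diffusion model with externally supplied coefficients, can be sketched with an explicit finite-difference solver for df/dt = L² ∂/∂L (DLL/L² ∂f/∂L); in the Python sketch below, the power-law DLL and the boundary values are placeholders, not the MHD-derived coefficients of this study:

        import numpy as np

        L = np.linspace(3.0, 6.5, 71)
        dL = L[1] - L[0]
        DLL = 1e-3 * (L / 4.0) ** 10           # assumed power-law D_LL [1/day]
        f = np.exp(-((L - 4.5) / 0.5) ** 2)    # initial phase space density

        dt = 0.2 * dL**2 / DLL.max()           # explicit stability limit
        for _ in range(2000):
            flux = DLL[:-1] / L[:-1]**2 * np.diff(f) / dL    # (D_LL/L^2) df/dL
            f[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
            f[0], f[-1] = f[1], 1.0            # zero-gradient inner, fixed outer
        print(f.max(), f[-5:])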

  11. APOLLO: a general code for transport, slowing-down and thermalization calculations in heterogeneous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavenoky, A.

    1973-01-01

    From the national topical meeting on mathematical models and computational techniques for analysis of nuclear systems; Ann Arbor, Michigan, USA (8 Apr 1973). In mathematical models and computational techniques for analysis of nuclear systems. APOLLO calculates the space- and energy-dependent flux for a one-dimensional medium, in the multigroup approximation of the transport equation. For a one-dimensional medium, refined collision probabilities have been developed for the resolution of the integral form of the transport equation; these collision probabilities increase accuracy and save computing time. The interaction between a few cells can also be treated by the multicell option of APOLLO. The diffusion coefficient and the material buckling can be computed in the various B and P approximations with a linearly anisotropic scattering law, even in the thermal range of the spectrum. Optionally, this coefficient is corrected for streaming by use of Benoist's theory. The self-shielding of the heavy isotopes is treated by a new and accurate technique which preserves the reaction rates of the fundamental fine-structure flux. APOLLO can perform a depletion calculation for one cell, a group of cells, or a complete reactor. The results of an APOLLO calculation are the space- and energy-dependent flux, the material buckling, or any reaction rate; these results can also be macroscopic cross sections used as input data for a 2D or 3D depletion and diffusion code in reactor geometry. 10 references. (auth)

  12. Adaptive partially hidden Markov models with application to bilevel image coding.

    PubMed

    Forchhammer, S; Rasmussen, T S

    1999-01-01

    Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
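
    The conditioning-on-the-past idea can be conveyed with a plain adaptive context model for bilevel images, where the ideal code length is -log2 p per pixel; this sketch is far simpler than the PHMM itself, and the two-pixel causal context is an assumption:

        import math
        from collections import defaultdict

        def code_length(img):
            # adaptive counts of 0s and 1s per causal context (left, above)
            counts = defaultdict(lambda: [1, 1])      # Laplace-smoothed
            bits = 0.0
            H, W = len(img), len(img[0])
            for y in range(H):
                for x in range(W):
                    ctx = (img[y][x - 1] if x else 0, img[y - 1][x] if y else 0)
                    c0, c1 = counts[ctx]
                    p = (c1 if img[y][x] else c0) / (c0 + c1)
                    bits += -math.log2(p)             # ideal code length
                    counts[ctx][img[y][x]] += 1
            return bits

        img = [[1 if (x // 4 + y // 4) % 2 else 0 for x in range(32)]
               for y in range(32)]                    # checkerboard test image
        print(round(code_length(img), 1), "bits for", 32 * 32, "pixels")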

  13. Network analysis for the visualization and analysis of qualitative data.

    PubMed

    Pokorny, Jennifer J; Norman, Alex; Zanesco, Anthony P; Bauer-Wu, Susan; Sahdra, Baljinder K; Saron, Clifford D

    2018-03-01

    We present a novel manner in which to visualize the coding of qualitative data that enables representation and analysis of connections between codes using graph theory and network analysis. Network graphs are created from codes applied to a transcript or audio file using the code names and their chronological location. The resulting network is a representation of the coding data that characterizes the interrelations of codes. This approach enables quantification of qualitative codes using network analysis and facilitates examination of associations of network indices with other quantitative variables using common statistical procedures. Here, as a proof of concept, we applied this method to a set of interview transcripts that had been coded in 2 different ways and the resultant network graphs were examined. The creation of network graphs allows researchers an opportunity to view and share their qualitative data in an innovative way that may provide new insights and enhance transparency of the analytical process by which they reach their conclusions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
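
    A minimal sketch of the graph-construction step (Python with networkx assumed available; the code names and their chronological order are invented placeholders):

        import networkx as nx

        # chronological sequence of qualitative codes from one transcript
        codes = ["stress", "coping", "family", "coping", "stress",
                 "mindfulness", "coping", "family"]

        G = nx.Graph()
        for a, b in zip(codes, codes[1:]):     # link consecutive codes
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1         # repeated transitions add weight
            else:
                G.add_edge(a, b, weight=1)

        # network indices can then be correlated with quantitative variables
        print(nx.degree_centrality(G))
        print(sorted(G.edges(data=True)))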

  14. Hollow tin/chromium whiskers

    NASA Astrophysics Data System (ADS)

    Cheng, Jing; Vianco, Paul T.; Li, James C. M.

    2010-05-01

    Tin whiskers have been an engineering challenge for over five decades, and the mechanism has not been agreed upon thus far. This experiment aimed to identify a mechanism by applying compressive stresses to a tin film evaporated on a silicon substrate with an adhesion layer of chromium in between. A phenomenon was observed in which hollow whiskers grew inside depleted areas. Using a focused ion beam, the hollow whiskers were found to contain both tin and chromium. At the bottom of the depleted areas, a thin tin/tin oxide film remained over the chromium layer. This indicates that tin transport occurred along the interface between the tin and chromium layers.

  15. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    NASA Technical Reports Server (NTRS)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can, thus, greatly reduce software development cost and time. The use of a formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to the component-based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition effort. In this paper, we discuss the issues and techniques for applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key for inferring appropriate component compositions. We use code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  16. A new method of applying a controlled soil water stress, and its effect on the growth of cotton and soybean seedlings at ambient and elevated carbon dioxide

    USDA-ARS?s Scientific Manuscript database

    While numerous studies have shown that elevated carbon dioxide can delay soil water depletion by causing partial stomatal closure, few studies have compared responses of plant growth to the same soil water deficits imposed at ambient and elevated carbon dioxide. We applied a vacuum to ceramic cups ...

  17. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    PubMed

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.
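
    A highly simplified sketch of the selection idea, scoring windows against a Markov-style background and greedily keeping non-overlapping high scorers, is given below; it does not reproduce the paper's exact tail probabilities or its false-discovery-rate control:

        def cpg_count(s):
            return sum(1 for i in range(len(s) - 1) if s[i:i + 2] == "CG")

        def expected_cpg(s, w):
            # expected CpGs per width-w window from the global CG frequency
            return cpg_count(s) / (len(s) - 1) * (w - 1)

        def greedy_islands(s, w=20, k=3):
            exp = expected_cpg(s, w)
            scored = sorted(((cpg_count(s[i:i + w]) - exp, i)
                             for i in range(len(s) - w + 1)), reverse=True)
            chosen = []
            for score, i in scored:          # greedily keep top disjoint windows
                if score > 0 and all(abs(i - j) >= w for _, j in chosen):
                    chosen.append((score, i))
                if len(chosen) == k:
                    break
            return [(i, i + w, round(sc, 2))
                    for sc, i in sorted(chosen, key=lambda t: t[1])]

        exon = "ATGCGCGTACGCGCGATCGATATATATGCGCGCGCGTATATATATATGCATG" * 2
        print(greedy_islands(exon))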

  18. Status of BOUT fluid turbulence code: improvements and verification

    NASA Astrophysics Data System (ADS)

    Umansky, M. V.; Lodestro, L. L.; Xu, X. Q.

    2006-10-01

    BOUT is an electromagnetic fluid turbulence code for tokamak edge plasma [1]. BOUT performs time integration of reduced Braginskii plasma fluid equations, using spatial discretization in realistic geometry and employing a standard ODE integration package PVODE. BOUT has been applied to several tokamak experiments and in some cases calculated spectra of turbulent fluctuations compared favorably to experimental data. On the other hand, the desire to understand better the code results and to gain more confidence in it motivated investing effort in rigorous verification of BOUT. Parallel to the testing, the code underwent substantial modification, mainly to improve its readability and the tractability of physical terms, with some algorithmic improvements as well. In the verification process, a series of linear and nonlinear test problems was applied to BOUT, targeting different subgroups of physical terms. The tests include reproducing basic electrostatic and electromagnetic plasma modes in simplified geometry, axisymmetric benchmarks against the 2D edge code UEDGE in real divertor geometry, and neutral fluid benchmarks against the hydrodynamic code LCPFCT. After completion of the testing, the new version of the code is being applied to actual tokamak edge turbulence problems, and the results will be presented. [1] X. Q. Xu et al., Contr. Plas. Phys., 36, 158 (1998). *Work performed for USDOE by Univ. Calif. LLNL under contract W-7405-ENG-48.

  19. Statistical properties of DNA sequences

    NASA Technical Reports Server (NTRS)

    Peng, C. K.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Simons, M.; Stanley, H. E.

    1995-01-01

    We review evidence supporting the idea that the DNA sequence in genes containing non-coding regions is correlated, and that the correlation is remarkably long range--indeed, nucleotides thousands of base pairs distant are correlated. We do not find such a long-range correlation in the coding regions of the gene. We resolve the problem of the "non-stationarity" feature of the sequence of base pairs by applying a new algorithm called detrended fluctuation analysis (DFA). We address the claim of Voss that there is no difference in the statistical properties of coding and non-coding regions of DNA by systematically applying the DFA algorithm, as well as standard FFT analysis, to every DNA sequence (33301 coding and 29453 non-coding) in the entire GenBank database. Finally, we describe briefly some recent work showing that the non-coding sequences have certain statistical features in common with natural and artificial languages. Specifically, we adapt to DNA the Zipf approach to analyzing linguistic texts. These statistical properties of non-coding sequences support the possibility that non-coding regions of DNA may carry biological information.
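
    A minimal Python sketch of the DFA algorithm itself, with the usual purine/pyrimidine to ±1 "DNA walk" mapping assumed:

        import numpy as np

        def dfa(x, scales):
            y = np.cumsum(x - np.mean(x))            # integrated "walk"
            F = []
            for n in scales:
                m = len(y) // n
                seg = y[:m * n].reshape(m, n)
                t = np.arange(n)
                rms = []
                for s in seg:
                    a, b = np.polyfit(t, s, 1)       # local linear detrending
                    rms.append(np.mean((s - (a * t + b)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        seq = "ATGCGT" * 500
        walk = np.array([1.0 if c in "AG" else -1.0 for c in seq])  # purine=+1
        scales = np.array([4, 8, 16, 32, 64, 128])
        F = dfa(walk, scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(F, alpha)   # alpha ~ 0.5: uncorrelated; > 0.5: long-range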

  20. A Method for Selective Depletion of Zn(II) Ions from Complex Biological Media and Evaluation of Cellular Consequences of Zn(II) Deficiency

    PubMed Central

    Richardson, Christopher E. R.; Cunden, Lisa S.; Butty, Vincent L.; Nolan, Elizabeth M.; Lippard, Stephen J.; Shoulders, Matthew D.

    2018-01-01

    We describe the preparation, evaluation, and application of an S100A12 protein-conjugated solid support, hereafter the “A12-resin,” that can remove 99% of Zn(II) from complex biological solutions without significantly perturbing the concentrations of other metal ions. The A12-resin can be applied to selectively deplete Zn(II) from diverse tissue culture media and from other biological fluids, including human serum. To further demonstrate the utility of this approach, we investigated metabolic, transcriptomic, and metallomic responses of HEK293 cells cultured in medium depleted of Zn(II) using S100A12. The resulting data provide insight into how cells respond to acute Zn(II) deficiency. We expect that the A12-resin will facilitate interrogation of disrupted Zn(II) homeostasis in biological settings, uncovering novel roles for Zn(II) in biology. PMID:29334734

  1. How Actuated Particles Effectively Capture Biomolecular Targets

    PubMed Central

    2017-01-01

    Because of their high surface-to-volume ratio and adaptable surface functionalization, particles are widely used in bioanalytical methods to capture molecular targets. In this article, a comprehensive study is reported of the effectiveness of protein capture by actuated magnetic particles. Association rate constants are quantified in experiments as well as in Brownian dynamics simulations for different particle actuation configurations. The data reveal how the association rate depends on the particle velocity, particle density, and particle assembly characteristics. Interestingly, single particles appear to exhibit target depletion zones near their surface, caused by the high density of capture molecules. The depletion effects are even more limiting in cases with high particle densities. The depletion effects are overcome and protein capture rates are enhanced by applying dynamic particle actuation, resulting in an increase in the association rate constants by up to 2 orders of magnitude. PMID:28192952
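
    For context on the magnitudes involved, a worked diffusion-limited (Smoluchowski) estimate of the association rate onto a single spherical particle is sketched below; the radii and viscosity are generic assumptions, not the article's fitted values:

        import math

        k_B, T = 1.381e-23, 298.0            # J/K, K
        eta = 1e-3                           # Pa*s, water
        r_protein = 3e-9                     # m, ~protein hydrodynamic radius
        D = k_B * T / (6 * math.pi * eta * r_protein)   # Stokes-Einstein

        R = 0.5e-6                           # m, capture-particle radius
        k_on = 4 * math.pi * D * R           # m^3/s per particle (perfect sink)
        N_A = 6.022e23
        print(f"D = {D:.2e} m^2/s, k_on = {k_on * N_A * 1e3:.2e} L/(mol*s)")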

  2. Regulated Formation of lncRNA-DNA Hybrids Enables Faster Transcriptional Induction and Environmental Adaptation.

    PubMed

    Cloutier, Sara C; Wang, Siwen; Ma, Wai Kit; Al Husini, Nadra; Dhoondia, Zuzer; Ansari, Athar; Pascuzzi, Pete E; Tran, Elizabeth J

    2016-02-04

    Long non-coding (lnc)RNAs, once thought to merely represent noise from imprecise transcription initiation, have now emerged as major regulatory entities in all eukaryotes. In contrast to the rapidly expanding identification of individual lncRNAs, mechanistic characterization has lagged behind. Here we provide evidence that the GAL lncRNAs in the budding yeast S. cerevisiae promote transcriptional induction in trans by formation of lncRNA-DNA hybrids or R-loops. The evolutionarily conserved RNA helicase Dbp2 regulates formation of these R-loops as genomic deletion or nuclear depletion results in accumulation of these structures across the GAL cluster gene promoters and coding regions. Enhanced transcriptional induction is manifested by lncRNA-dependent displacement of the Cyc8 co-repressor and subsequent gene looping, suggesting that these lncRNAs promote induction by altering chromatin architecture. Moreover, the GAL lncRNAs confer a competitive fitness advantage to yeast cells because expression of these non-coding molecules correlates with faster adaptation in response to an environmental switch. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: approved for public release; distribution unlimited. [Only report documentation form fragments survive in this record; no abstract is available.]

  4. Fundamental Studies in the Molecular Basis of Laser Induced Retinal Damage

    DTIC Science & Technology

    1988-01-01

    Cornell University, School of Applied & Engineering Physics, Ithaca, NY 14853. DOD distribution statement: approved for public release; distribution unlimited. [Only report documentation form fragments survive in this record; no abstract is available.]

  5. 48 CFR 304.7001 - Numbering acquisitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... contracting office identification codes currently in use is contained in the DCIS Users' Manual, available at... than one code may apply in a specific situation, or for additional codes, refer to the DCIS Users' Manual or consult with the cognizant DCIS coordinator/focal point for guidance on which code governs...

  6. Definite Integrals, Some Involving Residue Theory Evaluated by Maple Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o

    2010-01-01

    The calculus of residues is applied to evaluate certain integrals over the range (-∞, ∞) using the Maple symbolic code. These integrals are of the form ∫_{-∞}^{∞} cos(x)/[(x² + a²)(x² + b²)(x² + c²)] dx and similar extensions. The Maple code is also applied to expressions for maximum likelihood estimator moments when sampling from the negative binomial distribution. In general the Maple code approach to the integrals gives correct answers to specified decimal places, but the symbolic result may be extremely long and complex.
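
    The quoted integral family can be cross-checked against the residue-theorem closed form; the sketch below uses SciPy in place of Maple, with a, b, c taken positive and distinct:

        # Residue theory gives, for distinct a, b, c > 0:
        #   integral = pi * sum_cyc exp(-a) / [a (b^2 - a^2)(c^2 - a^2)]
        import numpy as np
        from scipy.integrate import quad

        a, b, c = 1.0, 2.0, 3.0

        def residue_sum(a, b, c):
            terms = [(a, b, c), (b, c, a), (c, a, b)]
            return np.pi * sum(np.exp(-p) / (p * (q**2 - p**2) * (r**2 - p**2))
                               for p, q, r in terms)

        num, _ = quad(lambda x: np.cos(x) / ((x**2 + a**2) * (x**2 + b**2)
                                             * (x**2 + c**2)),
                      -np.inf, np.inf, limit=200)
        print(num, residue_sum(a, b, c))    # the two values agree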

  7. Code Switching and Code Superimposition in Music. Working Papers in Sociolinguistics, No. 63.

    ERIC Educational Resources Information Center

    Slobin, Mark

    This paper illustrates how the sociolinguistic concept of code switching applies to the use of different styles of music. The two bases for the analogy are Labov's definition of code-switching as "moving from one consistent set of co-occurring rules to another," and the finding of sociolinguistics that code switching tends to be part of…

  8. Monitoring Aquifer Depletion from Space: Case Studies from the Saharan and Arabian Aquifers

    NASA Astrophysics Data System (ADS)

    Ahmed, M.; Sultan, M.; Wahr, J. M.; Yan, E.

    2013-12-01

    Access to potable fresh water resources is a human right and a basic requirement for economic development in any society. In arid and semi-arid areas, the characterization and understanding of the geologic and hydrologic settings of, and the controlling factors affecting, these resources is gaining increasing importance due to the challenges posed by increasing population. In these areas, there are immense natural fossil fresh water resources stored in large extensive aquifers, the transboundary aquifers. Yet, natural phenomena (e.g., rainfall patterns and climate change) together with human-related factors (e.g., population growth, unsustainable over-exploitation, and pollution) are threatening the sustainability of these resources. In this study, we are developing and applying an integrated cost-effective approach to investigate the nature (i.e., natural and anthropogenic) and the controlling factors affecting the hydrologic settings of the Saharan (i.e., Nubian Sandstone Aquifer System [NSAS], Northwest Sahara Aquifer System [NWSA]) and Arabian (i.e., Arabian Peninsula Aquifer System [APAS]) aquifer systems. Analysis of the Gravity Recovery and Climate Experiment (GRACE)-derived Terrestrial Water Storage (TWS) inter-annual trends over the NSAS and the APAS revealed two areas of significant TWS depletion; the first correlated with the Dakhla Aquifer System (DAS) in the NSAS and the second with the Saq Aquifer System (SAS) in the APAS. Annual depletion rates were estimated at 1.3 ± 0.66 × 10⁹ m³/yr and 6.95 ± 0.68 × 10⁹ m³/yr for the DAS and SAS, respectively. Findings include: (1) excessive groundwater extraction, not climatic change, is responsible for the observed TWS depletions; (2) the DAS could be consumed in 350 years if extraction rates continue to double every 50 years, and the APAS available reserves could be consumed within 60-140 years at present extraction (7.08 × 10⁹ m³/yr) and depletion rates; and (3) the observed depletions over the DAS and SAS, and their absence across the remaining segments of the NSAS and the APAS, suggest that the aquifers are at near-steady conditions except for the DAS and SAS, which are witnessing unsteady transient conditions. The implications for applying the advocated methodologies to the assessment and optimum management of a large suite of fossil aquifers worldwide are clear.
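
    The "~350 years under doubling extraction" figure can be illustrated with back-of-envelope arithmetic; in the sketch below, the reserve volume is an assumed placeholder chosen to reproduce the quoted order of magnitude, not a number from the study:

        def years_until_depleted(reserve, rate, doubling=50.0, dt=1.0):
            # draw down `reserve` at a rate that doubles every `doubling` years
            t = 0.0
            while reserve > 0 and t < 10000:
                reserve -= rate * dt
                rate *= 2 ** (dt / doubling)
                t += dt
            return t

        # assumed ~1.2e13 m^3 recoverable at the 1.3e9 m^3/yr initial draw
        print(years_until_depleted(reserve=1.2e13, rate=1.3e9))   # ~ 350 years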

  9. Creatine pretreatment protects cortical axons from energy depletion in vitro

    PubMed Central

    Shen, Hua; Goldberg, Mark P.

    2012-01-01

    Creatine is a natural nitrogenous guanidino compound involved in bioenergy metabolism. Although creatine has been shown to protect neurons of the central nervous system (CNS) from experimental hypoxia/ischemia, it remains unclear if creatine may also protect CNS axons, and if the potential axonal protection depends on glial cells. To evaluate the direct impact of creatine on CNS axons, cortical axons were cultured in a separate compartment from their somas and proximal neurites using a modified two-compartment culture device. Axons in the axon compartment were subjected to acute energy depletion, an in vitro model of white matter ischemia, by exposure to 6 mM sodium azide for 30 min in the absence of glucose and pyruvate. Energy depletion reduced axonal ATP by 65%, depolarized axonal resting potential, and damaged 75% of axons. Application of creatine (10 mM) to both compartments of the culture at 24 h prior to energy depletion significantly reduced axonal damage by 50%. In line with the role of creatine in the bioenergy metabolism, this application also alleviated the axonal ATP loss and depolarization. Inhibition of axonal depolarization by blocking sodium influx with tetrodotoxin also effectively reduced the axonal damage caused by energy depletion. Further study revealed that the creatine effect was independent of glial cells, as axonal protection was sustained even when creatine was applied only to the axon compartment (free from somas and glial cells) for as little as 2 h. In contrast, application of creatine after energy depletion did not protect axons. The data provide the first evidence that creatine pretreatment may directly protect CNS axons from energy deficiency. PMID:22521466

  10. Nature-based solutions to promote human resilience and wellbeing in cities during increasingly hot summers.

    PubMed

    Panno, Angelo; Carrus, Giuseppe; Lafortezza, Raffaele; Mariani, Luigi; Sanesi, Giovanni

    2017-11-01

    Air temperatures are increasing because of global climate change. A warming phenomenon strongly related to global climate change is the urban heat island. It has been shown that the hotter temperatures occurring in cities during the summer negatively affect human wellbeing, but little is known about the potential mechanisms underlying the relationships between hotter temperatures, cognitive psychological resources and wellbeing. The aim of the present research is to understand whether, and how, spending time in urban green spaces, which can be considered as a specific kind of Nature-Based Solution (NBS), helps the recovery of cognitive resources and wellbeing. The main hypothesis is that contact with urban green is related to wellbeing through the depletion of cognitive resources (i.e., ego depletion). Moreover, we expected that individuals showing higher scores of ego depletion also report a higher estimate of the maximum temperature reached during the summer. The results of a survey (N = 115) conducted among visitors to Parco Nord Milano, a large urban park located in Milan (Italy), point out that people visiting the park during the summer show a higher level of wellbeing as well as a lower level of ego depletion. A mediation analysis shows that visiting urban green spaces is associated with greater wellbeing through less ego depletion. Our results also point out that, as expected, people showing a higher level of ego depletion tend to overestimate the maximum air temperature. Implications for future studies and applied interventions regarding the role of NBS to promote human wellbeing are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Technical Basis for Peak Reactivity Burnup Credit for BWR Spent Nuclear Fuel in Storage and Transportation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Ade, Brian J; Bowman, Stephen M

    2015-01-01

    Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission have initiated a multiyear project to investigate application of burnup credit for boiling-water reactor (BWR) fuel in storage and transportation casks. This project includes two phases. The first phase (1) investigates applicability of peak reactivity methods currently used in spent fuel pools (SFPs) to storage and transportation systems and (2) evaluates validation of both reactivity (keff) calculations and burnup credit nuclide concentrations within these methods. The second phase will focus on extending burnup credit beyond peak reactivity. This paper documents the first phase, including an analysis of lattice design parameters and depletion effects, as well as both validation components. Initial efforts related to extended burnup credit are discussed in a companion paper. Peak reactivity analyses have been used in criticality analyses for licensing of BWR fuel in SFPs over the last 20 years. These analyses typically combine credit for the gadolinium burnable absorber present in the fuel with a modest amount of burnup credit. Gadolinium burnable absorbers are used in BWR assemblies to control core reactivity. The burnable absorber significantly reduces assembly reactivity at beginning of life, potentially leading to significant increases in assembly reactivity for burnups less than 15–20 GWd/MTU. The reactivity of each fuel lattice is dependent on gadolinium loading. The number of gadolinium-bearing fuel pins lowers initial lattice reactivity, but it has a small impact on the burnup and reactivity of the peak. The gadolinium concentration in each pin has a small impact on initial lattice reactivity but a significant effect on the reactivity of the peak and the burnup at which the peak occurs. The importance of the lattice parameters and depletion conditions are primarily determined by their impact on the gadolinium depletion. Criticality code validation for BWR burnup credit at peak reactivity requires a different set of experiments than for pressurized-water reactor burnup credit analysis because of differences in actinide compositions, presence of residual gadolinium absorber, and lower fission product concentrations. A survey of available critical experiments is presented along with a sample criticality code validation and determination of undercoverage penalties for some nuclides. The validation of depleted fuel compositions at peak reactivity presents many challenges which largely result from a lack of radiochemical assay data applicable to BWR fuel in this burnup range. In addition, none of the existing low burnup measurement data include residual gadolinium measurements. An example bias and uncertainty associated with validation of actinide-only fuel compositions is presented.
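
    The peak-reactivity behavior described above can be caricatured with a toy model in which fuel reactivity falls roughly linearly with burnup while the gadolinium worth burns out roughly exponentially; all coefficients below are invented for illustration and carry no licensing significance:

        import numpy as np

        B = np.linspace(0, 25, 251)            # burnup [GWd/MTU]
        k_fuel = 1.15 - 0.006 * B              # linear fuel depletion (assumed)
        gd_worth = 0.12 * np.exp(-B / 5.0)     # Gd burnout, ~5 GWd/MTU scale
        k_inf = k_fuel - gd_worth              # net lattice reactivity

        i = k_inf.argmax()                     # peak below ~15 GWd/MTU
        print(f"peak k_inf = {k_inf[i]:.4f} at {B[i]:.1f} GWd/MTU")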

  12. Low-resistivity photon-transparent window attached to photo-sensitive silicon detector

    DOEpatents

    Holland, Stephen Edward

    2000-02-15

    The invention comprises a combination of a low resistivity, or electrically conducting, silicon layer that is transparent to long or short wavelength photons and is attached to the backside of a photon-sensitive layer of silicon, such as a silicon wafer or chip. The window is applied to photon sensitive silicon devices such as photodiodes, charge-coupled devices, active pixel sensors, low-energy x-ray sensors and other radiation detectors. The silicon window is applied to the back side of a photosensitive silicon wafer or chip so that photons can illuminate the device from the backside without interference from the circuit printed on the frontside. A voltage sufficient to fully deplete the high-resistivity photosensitive silicon volume of charge carriers is applied between the low-resistivity back window and the front, patterned, side of the device. This allows photon-induced charge created at the backside to reach the front side of the device and to be processed by any circuitry attached to the front side. Using the inventive combination, the photon sensitive silicon layer does not need to be thinned beyond standard fabrication methods in order to achieve full charge-depletion in the silicon volume. In one embodiment, the inventive backside window is applied to high resistivity silicon to allow backside illumination while maintaining charge isolation in CCD pixels.
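
    The full-depletion condition underlying the biasing scheme follows from the standard one-sided junction result V_fd ≈ q·N·d²/(2ε_Si); a worked example with illustrative values, not values from the patent:

        q   = 1.602e-19          # C, elementary charge
        eps = 11.7 * 8.854e-12   # F/m, silicon permittivity
        N_d = 1e18               # m^-3 (1e12 cm^-3, high-resistivity n-type)
        d   = 300e-6             # m, wafer thickness

        V_fd = q * N_d * d**2 / (2 * eps)
        print(f"full-depletion voltage ~ {V_fd:.1f} V")   # ~ 70 V here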

  13. Monte Carlo characterization of PWR spent fuel assemblies to determine the detectability of pin diversion

    NASA Astrophysics Data System (ADS)

    Burdo, James S.

    This research is based on the concept that the detection of nuclear fuel pin diversion from Light Water Reactor (LWR) spent fuel assemblies is feasible through a careful comparison of spontaneous fission neutron and gamma levels in the guide tube locations of the fuel assemblies. The goal is to be able to determine whether some of the assembly fuel pins are either missing or have been replaced with dummy or fresh fuel pins. It is known that for typical commercial power spent fuel assemblies, the dominant spontaneous neutron emissions come from Cm-242 and Cm-244. Because of the shorter half-life of Cm-242 (0.45 yr) relative to that of Cm-244 (18.1 yr), Cm-244 is practically the only neutron source contributing to the neutron source term after the spent fuel assemblies are more than two years old. Initially, this research focused upon developing MCNP5 models of PWR fuel assemblies, modeling their depletion using the MONTEBURNS code, and carrying out a preliminary depletion of a ¼ model 17x17 assembly from the TAKAHAMA-3 PWR. Later, the depletion and the more accurate isotopic distribution in the pins at discharge were modeled using the TRITON depletion module of the SCALE computer code. Benchmarking comparisons were performed between the MONTEBURNS and TRITON results. Subsequently, the neutron flux in each of the guide tubes of the TAKAHAMA-3 PWR assembly at two years after discharge, as calculated by the MCNP5 computer code, was determined for various scenarios. Cases were considered for all spent fuel pins present and for replacement of a single pin at a position near the center of the assembly (10,9) and at the corner (17,1). Some scenarios were duplicated with a gamma flux calculation for the high energies associated with Cm-244. For each case, the difference between the flux (neutron or gamma) for all spent fuel pins and with a pin removed or replaced is calculated for each guide tube. Different detection criteria were established. The first was whether the relative error of the difference was less than 1.00, allowing for the existence of the difference within the margin of error. The second was whether the difference between the two values was big enough to prevent their error bars from overlapping. Error analysis was performed using both a one-second count and pseudo-Maxwell statistics for a projected 60-second count, giving four criteria for detection. The number of guide tubes meeting these criteria was compared and graphed for each case. Further analysis at extremes of high and low enrichment and long and short burnup times was done using data from assemblies at the Beaver Valley 1 and 2 PWRs. In all neutron flux cases, at least two guide tube locations meet all the criteria for detection of pin diversion. At least one location does in almost all of the gamma flux cases. These results show that placing detectors in the empty guide tubes of spent fuel bundles to identify possible pin diversion is feasible.
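
    The two detection criteria can be written compactly; the Python sketch below applies them to a single guide-tube location using invented placeholder tallies and 1-sigma relative errors:

        import numpy as np

        def detectable(flux_ref, rel_ref, flux_div, rel_div):
            """Return (criterion1, criterion2) for one guide-tube location."""
            diff = abs(flux_ref - flux_div)
            s1, s2 = flux_ref * rel_ref, flux_div * rel_div
            sigma_diff = np.hypot(s1, s2)       # propagated error of difference
            c1 = sigma_diff / diff < 1.00 if diff > 0 else False
            c2 = diff > (s1 + s2)               # error bars do not overlap
            return c1, c2

        # all-pins-present vs. one pin replaced, placeholder tallies:
        print(detectable(flux_ref=3.20e5, rel_ref=0.02,
                         flux_div=3.05e5, rel_div=0.02))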

  14. Learning by Doing: Teaching Decision Making through Building a Code of Ethics.

    ERIC Educational Resources Information Center

    Hawthorne, Mark D.

    2001-01-01

    Notes that applying abstract ethical principles to the practical business of building a code of applied ethics for a technical communication department teaches students that they share certain unarticulated or unconscious values that they can translate into ethical principles. Suggests that combining abstract theory with practical policy writing…

  15. A novel use of QR code stickers after orthopaedic cast application.

    PubMed

    Gough, A T; Fieraru, G; Gaffney, Pav; Butler, M; Kincaid, R J; Middleton, R G

    2017-07-01

    INTRODUCTION We present a novel solution to ensure that information and contact details are always available to patients while in cast. An information sticker containing both telephone numbers and a Quick Response (QR) code is applied to the cast. When scanned with a smartphone, the QR code loads the plaster team's webpage. This contains information and videos about cast care, complications and enhancing recovery. METHODS A sticker was designed and applied to all synthetic casts fitted in our fracture clinic. On cast removal, patients completed a questionnaire about the sticker. A total of 101 patients were surveyed between November 2015 and February 2016. The questionnaire comprised ten binary choice questions. RESULTS The vast majority (97%) of patients had the sticker still on their cast when they returned to clinic for cast removal. Eighty-four per cent of all patients felt reassured by the presence of the QR code sticker. Nine per cent used the contact details on the cast to seek advice. Over half (56%) had a smartphone and a third (33%) of these scanned the QR code. Of those who scanned the code, 95% found the information useful. CONCLUSIONS This study indicates that use of a QR code reassures patients and is an effective tool in the proactive management of potential cast problems. The QR code sticker is now applied to all casts across our trust. In line with NHS England's Five Year Forward View calling for enhanced use of smartphone technology, our trust is continuing to expand its portfolio of patient information accessible via QR codes. Other branches of medicine may benefit from incorporating QR codes as portals to access such information.

  16. Remanent Activation in the Mini-SHINE Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micklich, Bradley J.

    2015-04-16

    Argonne National Laboratory is assisting SHINE Medical Technologies in developing a domestic source of the medical isotope 99Mo through the fission of low-enrichment uranium in a uranyl sulfate solution. In Phase 2 of these experiments, electrons from a linear accelerator create neutrons by interacting in a depleted uranium target, and these neutrons are used to irradiate the solution. The resulting neutron and photon radiation activates the target, the solution vessels, and a shielded cell that surrounds the experimental apparatus. When the experimental campaign is complete, the target must be removed into a shielding cask, and the experimental components must be disassembled. The radiation transport code MCNPX and the transmutation code CINDER were used to calculate the radionuclide inventories of the solution, the target assembly, and the shielded cell, and to determine the dose rates and shielding requirements for selected removal scenarios for the target assembly and the solution vessels.

  17. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...

  18. 77 FR 27164 - Butylate, Clethodim, Dichlorvos, Dicofol, Isopropyl Carbanilate, et al.; Proposed Tolerance Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-09

    ...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be exhaustive, but... apply to me? You may be potentially affected by this action if you are an agricultural producer, food...

  19. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
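
    A minimal Golomb-Rice coder conveys the flavor of such universal noiseless coding routines (written in Python rather than the package's FORTRAN); the fixed parameter k would normally be chosen adaptively per block:

        def rice_encode(values, k):
            out = []
            for v in values:                     # v: nonnegative integer
                q, r = v >> k, v & ((1 << k) - 1)
                word = "1" * q + "0"             # unary quotient, '0' terminator
                if k:
                    word += format(r, f"0{k}b")  # k-bit remainder
                out.append(word)
            return "".join(out)

        def rice_decode(bits, n, k):
            vals, i = [], 0
            for _ in range(n):
                q = 0
                while bits[i] == "1":
                    q, i = q + 1, i + 1
                i += 1                           # skip the '0' terminator
                r = int(bits[i:i + k], 2) if k else 0
                i += k
                vals.append((q << k) | r)
            return vals

        data = [0, 3, 5, 1, 12, 2]
        code = rice_encode(data, k=2)
        print(code, rice_decode(code, len(data), k=2) == data)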

  20. αCP Poly(C) Binding Proteins Act as Global Regulators of Alternative Polyadenylation

    PubMed Central

    Ji, Xinjun; Wan, Ji; Vishnu, Melanie

    2013-01-01

    We have previously demonstrated that the KH-domain protein αCP binds to a 3′ untranslated region (3′UTR) C-rich motif of the nascent human alpha-globin (hα-globin) transcript and enhances the efficiency of 3′ processing. Here we assess the genome-wide impact of αCP RNA-protein (RNP) complexes on 3′ processing with a specific focus on its role in alternative polyadenylation (APA) site utilization. The major isoforms of αCP were acutely depleted from a human hematopoietic cell line, and the impact on mRNA representation and poly(A) site utilization was determined by direct RNA sequencing (DRS). Bioinformatic analysis revealed 357 significant alterations in poly(A) site utilization that could be specifically linked to the αCP depletion. These APA events correlated strongly with the presence of C-rich sequences in close proximity to the impacted poly(A) addition sites. The most significant linkage was the presence of a C-rich motif within a window 30 to 40 bases 5′ to poly(A) signals (AAUAAA) that were repressed upon αCP depletion. This linkage is consistent with a general role for αCPs as enhancers of 3′ processing. These findings predict a role for αCPs in posttranscriptional control pathways that can alter the coding potential and/or levels of expression of subsets of mRNAs in the mammalian transcriptome. PMID:23629627
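
    The positional association reported here, a C-rich element 30 to 40 bases 5' of the AAUAAA signal, can be sketched as a simple scan; the C-richness rule (at least four C's in a 6-mer) and the toy sequence are assumptions:

        import re

        def c_rich_upstream(rna, lo=30, hi=40, win=6, min_c=4):
            # for each AAUAAA, test for a C-rich win-mer 30-40 nt upstream
            hits = []
            for m in re.finditer("AAUAAA", rna):
                start = m.start()
                region = rna[max(0, start - hi):max(0, start - lo) + win]
                rich = any(region[i:i + win].count("C") >= min_c
                           for i in range(max(0, len(region) - win + 1)))
                hits.append((start, rich))
            return hits

        rna = "G" * 20 + "CCUCCC" + "G" * 28 + "AAUAAA" + "G" * 15
        print(c_rich_upstream(rna))   # [(54, True)]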

  1. An Interaction between RRP6 and SU(VAR)3-9 Targets RRP6 to Heterochromatin and Contributes to Heterochromatin Maintenance in Drosophila melanogaster.

    PubMed

    Eberle, Andrea B; Jordán-Pla, Antonio; Gañez-Zapater, Antoni; Hessle, Viktoria; Silberberg, Gilad; von Euler, Anne; Silverstein, Rebecca A; Visa, Neus

    2015-09-01

    RNA surveillance factors are involved in heterochromatin regulation in yeast and plants, but less is known about the possible roles of ribonucleases in the heterochromatin of animal cells. Here we show that RRP6, one of the catalytic subunits of the exosome, is necessary for silencing heterochromatic repeats in the genome of Drosophila melanogaster. We show that a fraction of RRP6 is associated with heterochromatin, and the analysis of the RRP6 interaction network revealed physical links between RRP6 and the heterochromatin factors HP1a, SU(VAR)3-9 and RPD3. Moreover, genome-wide studies of RRP6 occupancy in cells depleted of SU(VAR)3-9 demonstrated that SU(VAR)3-9 contributes to the tethering of RRP6 to a subset of heterochromatic loci. Depletion of the exosome ribonucleases RRP6 and DIS3 stabilizes heterochromatic transcripts derived from transposons and repetitive sequences, and renders the heterochromatin less compact, as shown by micrococcal nuclease and proximity-ligation assays. Such depletion also increases the amount of HP1a bound to heterochromatic transcripts. Taken together, our results suggest that SU(VAR)3-9 targets RRP6 to a subset of heterochromatic loci where RRP6 degrades chromatin-associated non-coding RNAs in a process that is necessary to maintain the packaging of the heterochromatin.

  2. An Interaction between RRP6 and SU(VAR)3-9 Targets RRP6 to Heterochromatin and Contributes to Heterochromatin Maintenance in Drosophila melanogaster

    PubMed Central

    Eberle, Andrea B.; Jordán-Pla, Antonio; Gañez-Zapater, Antoni; Hessle, Viktoria; Silberberg, Gilad; von Euler, Anne; Silverstein, Rebecca A.; Visa, Neus

    2015-01-01

    RNA surveillance factors are involved in heterochromatin regulation in yeast and plants, but less is known about the possible roles of ribonucleases in the heterochromatin of animal cells. Here we show that RRP6, one of the catalytic subunits of the exosome, is necessary for silencing heterochromatic repeats in the genome of Drosophila melanogaster. We show that a fraction of RRP6 is associated with heterochromatin, and the analysis of the RRP6 interaction network revealed physical links between RRP6 and the heterochromatin factors HP1a, SU(VAR)3-9 and RPD3. Moreover, genome-wide studies of RRP6 occupancy in cells depleted of SU(VAR)3-9 demonstrated that SU(VAR)3-9 contributes to the tethering of RRP6 to a subset of heterochromatic loci. Depletion of the exosome ribonucleases RRP6 and DIS3 stabilizes heterochromatic transcripts derived from transposons and repetitive sequences, and renders the heterochromatin less compact, as shown by micrococcal nuclease and proximity-ligation assays. Such depletion also increases the amount of HP1a bound to heterochromatic transcripts. Taken together, our results suggest that SU(VAR)3-9 targets RRP6 to a subset of heterochromatic loci where RRP6 degrades chromatin-associated non-coding RNAs in a process that is necessary to maintain the packaging of the heterochromatin. PMID:26389589

  3. The First ASME Code Stamped Cryomodule at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, M P; Crofford, M T; Douglas, D L

    The first spare cryomodule for the Spallation Neutron Source (SNS) has been designed, fabricated, and tested by SNS personnel. The approach to design for this cryomodule was to hold critical design features identical to the original design such as bayonet positions, coupler positions, cold mass assembly, and overall footprint. However, this is the first SNS cryomodule that meets the pressure requirements put forth in 10 CFR 851: Worker Safety and Health Program. The most significant difference is that Section VIII of the ASME Boiler and Pressure Vessel Code was applied to the vacuum vessel of this cryomodule. Applying the pressure code to the helium vessels within the cryomodule was considered. However, it was determined to be schedule prohibitive because it required a code case for materials that are not currently covered by the code. Good engineering practice was applied to the internal components to verify the quality and integrity of the entire cryomodule. The design of the cryomodule, fabrication effort, and cryogenic test results will be reported in this paper.

  4. Polymerization of non-complementary RNA: systematic symmetric nucleotide exchanges mainly involving uracil produce mitochondrial RNA transcripts coding for cryptic overlapping genes.

    PubMed

    Seligmann, Hervé

    2013-03-01

    Usual DNA→RNA transcription exchanges T→U. Assuming different systematic symmetric nucleotide exchanges during translation, some GenBank RNAs match exactly human mitochondrial sequences (exchange rules listed in decreasing transcript frequencies): C↔U, A↔U, A↔U+C↔G (two nucleotide pairs exchanged), G↔U, A↔G, C↔G, none for A↔C, A↔G+C↔U, and A↔C+G↔U. Most unusual transcripts involve exchanging uracil. Independent measures of rates of rare replicational enzymatic DNA nucleotide misinsertions predict frequencies of RNA transcripts systematically exchanging the corresponding misinserted nucleotides. Exchange transcripts self-hybridize less than other gene regions, self-hybridization increases with length, suggesting endoribonuclease-limited elongation. Blast detects stop codon depleted putative protein coding overlapping genes within exchange-transcribed mitochondrial genes. These align with existing GenBank proteins (mainly metazoan origins, prokaryotic and viral origins underrepresented). These GenBank proteins frequently interact with RNA/DNA, are membrane transporters, or are typical of mitochondrial metabolism. Nucleotide exchange transcript frequencies increase with overlapping gene densities and stop densities, indicating finely tuned counterbalancing regulation of expression of systematic symmetric nucleotide exchange-encrypted proteins. Such expression necessitates combined activities of suppressor tRNAs matching stops, and nucleotide exchange transcription. Two independent properties confirm predicted exchanged overlap coding genes: discrepancy of third codon nucleotide contents from replicational deamination gradients, and codon usage according to circular code predictions. Predictions from both properties converge, especially for frequent nucleotide exchange types. Nucleotide exchanging transcription apparently increases coding densities of protein coding genes without lengthening genomes, revealing unsuspected functional DNA coding potential. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
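
    As a concrete illustration of the exchange rules above, the following Python sketch applies a systematic symmetric nucleotide exchange to an RNA string; the sequence and the helper name are invented for illustration.

        # Minimal sketch: apply a systematic symmetric nucleotide exchange
        # (e.g., C<->U) to an RNA sequence, as described in the abstract.
        # The example sequence is invented for illustration.

        def exchange(seq, pairs):
            """Swap nucleotides according to a symmetric exchange rule.

            pairs -- iterable of 2-tuples, e.g. [("C", "U")] for the C<->U
                     rule, or [("A", "U"), ("C", "G")] for a two-pair rule.
            """
            table = {}
            for a, b in pairs:
                table[a] = b
                table[b] = a
            return "".join(table.get(nt, nt) for nt in seq)

        rna = "AUGCUUCGAUAG"                 # hypothetical transcript fragment
        print(exchange(rna, [("C", "U")]))   # C<->U rule -> ACGUCCUGACAG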

  5. ON THE POSSIBILITY OF SIGNIFICANT ELECTRON DEPLETION DUE TO NANOGRAIN CHARGING IN THE COMA OF COMET 67P/CHURYUMOV-GERASIMENKO NEAR PERIHELION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigren, E.; Eriksson, A. I.; Wahlund, J.-E.

    2015-01-10

    We approach the complicated phenomena of gas-dust interactions in a cometary ionosphere, focusing in particular on the possibility of significant depletion in electron number density due to grain charging. Our one-dimensional ionospheric model, accounting for grain charging processes, is applied to the subsolar direction and the diamagnetic cavity of 67P/Churyumov-Gerasimenko, the target comet for the ESA Rosetta mission, at perihelion (∼1.25-1.30 AU). We argue on the one hand that grains with radii >100 nm are unlikely to significantly affect the overall ionospheric particle balance within this environment, at least for cometocentric distances >10 km. On the other hand, if nanograins with radii in the 1-3 nm range are ejected into the coma at a level of ∼1% with respect to the mass of the sublimated gas, a significant electron depletion is expected up to cometocentric distances of several tens of kilometers. We relate these results to the recent Cassini discoveries of very pronounced electron depletion compared with the positive ion population in the plume of Enceladus, which has been attributed to nanograin charging.

  6. Evaluating bandgap distributions of carbon nanotubes via scanning electron microscopy imaging of the Schottky barriers.

    PubMed

    He, Yujun; Zhang, Jin; Li, Dongqi; Wang, Jiangtao; Wu, Qiong; Wei, Yang; Zhang, Lina; Wang, Jiaping; Liu, Peng; Li, Qunqing; Fan, Shoushan; Jiang, Kaili

    2013-01-01

    We show that the Schottky barrier at the metal-single walled carbon nanotube (SWCNT) contact can be clearly observed in scanning electron microscopy (SEM) images as a bright contrast segment with length up to micrometers due to the space charge distribution in the depletion region. The lengths of the charge depletion increase with the diameters of semiconducting SWCNTs (s-SWCNTs) when connected to one metal electrode, which enables direct and efficient evaluation of the bandgap distributions of s-SWCNTs. Moreover, this approach can also be applied for a wide variety of semiconducting nanomaterials, adding a new function to conventional SEM.

  7. MPD work at MIT

    NASA Technical Reports Server (NTRS)

    Martinez-Sanchez, Manuel

    1991-01-01

    MPD work at MIT is presented in the form of view-graphs. The following subject areas are covered: the MIT program, its goals, achievements, and roadblocks; quasi one-dimensional modeling; two-dimensional modeling, including transport effects and the Hall effect; microscopic instabilities in MPD flows and the modified two-stream instability; electrothermal stability theory; separation of onset and anode depletion; exit plane spectroscopic measurements; the phenomenon of onset as a performance limiter; explanations of onset; geometry effects on onset; onset at full ionization and its consequences; the relationship to anode depletion; a summary on self-field MPD; applied-field MPD as the logical growth path; the case for AF; the challenges of AF MPD; and recommendations.

  8. Impact of the Primary Care Exception on Family Medicine Resident Coding.

    PubMed

    Cawse-Lucas, Jeanne; Evans, David V; Ruiz, David R; Allcut, Elizabeth A; Andrilla, C Holly A; Thompson, Matthew; Norris, Thomas E

    2016-03-01

    The Medicare Primary Care Exception (PCE) allows residents to see and bill for less-complex patients independently in the primary care setting, requiring attending physicians only to see patients for higher-level visits and complete physical exams in order to bill for them as such. Primary care residencies apply the PCE in various ways. We investigated the impact of the PCE on resident coding practices. Family medicine residency directors in a five-state region completed a survey regarding interpretation and application of the PCE, including the number of established patient evaluation and management codes entered by residents and attending faculty at their institution. The percentage of high-level codes was compared between residencies using chi-square tests. We analyzed coding data for 125,016 visits from 337 residents and 172 faculty physicians in 15 of 18 eligible family medicine residencies. Among programs applying the PCE criteria to all patients, residents billed 86.7% low-mid complexity and 13.3% high-complexity visits. In programs that only applied the PCE to Medicare patients, residents billed 74.9% low-mid complexity visits and 25.2% high-complexity visits. Attending physicians coded more high-complexity visits at both types of programs. The estimated revenue loss over the 1,650 RRC-required outpatient visits was $2,558.66 per resident and $57,569.85 per year for the average residency in our sample. Residents at family medicine programs that apply the PCE to all patients bill significantly fewer high-complexity visits. This finding leads to compliance and regulatory concerns and suggests significant revenue loss. Further study is required to determine whether this discrepancy also reflects inaccuracy in coding.

  9. Practical implications of applied irrigation research

    USDA-ARS?s Scientific Manuscript database

    Groundwater is essential to irrigated agriculture in the semi-arid Texas High Plains. Concerns over groundwater depletion have led to increased emphasis on water conservation. Irrigation scheduling coupled with accurate crop water use (ET) estimation is one of the most effective means to both conser...

  10. Cadmium (II) removal mechanisms in microbial electrolysis cells.

    PubMed

    Colantonio, Natalie; Kim, Younggy

    2016-07-05

    Cadmium is a toxic heavy metal, causing serious environmental and human health problems. Conventional methods for removing cadmium from wastewater are expensive and inefficient for low concentrations. Microbial electrolysis cells (MECs) can simultaneously treat wastewater, produce hydrogen gas, and remove heavy metals with low energy requirements. Lab-scale MECs were operated to remove cadmium under various electric conditions: applied voltages of 0.4, 0.6, 0.8, and 1.0 V; and a fixed cathode potential of -1.0 V vs. Ag/AgCl. Regardless of the electric condition, rapid removal of cadmium was demonstrated (50-67% in 24 h); however, cadmium concentration in solution increased after the electric current dropped with depleted organic substrate under applied voltage conditions. For the fixed cathode potential, the electric current was maintained even after substrate depletion and thus cadmium concentration did not increase. These results can be explained by three different removal mechanisms: cathodic reduction; Cd(OH)2 precipitation; and CdCO3 precipitation. When the current decreased with depleted substrates, local pH at the cathode was no longer high due to slowed hydrogen evolution reaction (2H(+)+2e(-)→H2); thus, the precipitated Cd(OH)2 and CdCO3 started dissolving. To prevent their dissolution, sufficient organic substrates should be provided when MECs are used for cadmium removal. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Experimental scrambling and noise reduction applied to the optical encryption of QR codes.

    PubMed

    Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto

    2014-08-25

    In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to QR codes but which could be applied in a general encoding situation. To our knowledge, we present the first experimental positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular one dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.

  12. 21 CFR 19.6 - Code of ethics for government service.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Code of ethics for government service. 19.6... STANDARDS OF CONDUCT AND CONFLICTS OF INTEREST General Provisions § 19.6 Code of ethics for government service. The following code of ethics, adopted by Congress on July 11, 1958, shall apply to all Food and...

  13. Emergent rules for codon choice elucidated by editing rare arginine codons in Escherichia coli

    PubMed Central

    Napolitano, Michael G.; Landon, Matthieu; Gregg, Christopher J.; Lajoie, Marc J.; Govindarajan, Lakshmi; Mosberg, Joshua A.; Kuznetsov, Gleb; Goodman, Daniel B.; Vargas-Rodriguez, Oscar; Isaacs, Farren J.; Söll, Dieter; Church, George M.

    2016-01-01

    The degeneracy of the genetic code allows nucleic acids to encode amino acid identity as well as noncoding information for gene regulation and genome maintenance. The rare arginine codons AGA and AGG (AGR) present a case study in codon choice, with AGRs encoding important transcriptional and translational properties distinct from the other synonymous alternatives (CGN). We created a strain of Escherichia coli with all 123 instances of AGR codons removed from all essential genes. We readily replaced 110 AGR codons with the synonymous CGU codons, but the remaining 13 “recalcitrant” AGRs required diversification to identify viable alternatives. Successful replacement codons tended to conserve local ribosomal binding site-like motifs and local mRNA secondary structure, sometimes at the expense of amino acid identity. Based on these observations, we empirically defined metrics for a multidimensional “safe replacement zone” (SRZ) within which alternative codons are more likely to be viable. To evaluate synonymous and nonsynonymous alternatives to essential AGRs further, we implemented a CRISPR/Cas9-based method to deplete a diversified population of a wild-type allele, allowing us to evaluate exhaustively the fitness impact of all 64 codon alternatives. Using this method, we confirmed the relevance of the SRZ by tracking codon fitness over time in 14 different genes, finding that codons that fall outside the SRZ are rapidly depleted from a growing population. Our unbiased and systematic strategy for identifying unpredicted design flaws in synthetic genomes and for elucidating rules governing codon choice will be crucial for designing genomes exhibiting radically altered genetic codes. PMID:27601680
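
    The core recoding step described above, replacing AGA/AGG with the synonymous CGU codon, can be sketched in a few lines of Python; the ORF below is invented, and the real study additionally screened alternatives for the 13 recalcitrant sites.

        # Minimal sketch of the basic recoding step: replace rare arginine
        # AGR codons (AGA/AGG) with the synonymous CGU codon. The gene
        # sequence is invented for illustration.

        def recode_agr(orf):
            codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
            return "".join("CGU" if c in ("AGA", "AGG") else c for c in codons)

        orf = "AUGAGAAAAGGAAGGUAA"   # hypothetical ORF (mRNA sense, start..stop)
        print(recode_agr(orf))      # -> AUGCGUAAAGGACGUUAA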

  14. A coding single-nucleotide polymorphism in lysine demethylase KDM4A associates with increased sensitivity to mTOR inhibitors.

    PubMed

    Van Rechem, Capucine; Black, Joshua C; Greninger, Patricia; Zhao, Yang; Donado, Carlos; Burrowes, Paul D; Ladd, Brendon; Christiani, David C; Benes, Cyril H; Whetstine, Johnathan R

    2015-03-01

    SNPs occur within chromatin-modulating factors; however, little is known about how these variants within the coding sequence affect cancer progression or treatment. Therefore, there is a need to establish their biochemical and/or molecular contribution, their use in subclassifying patients, and their impact on therapeutic response. In this report, we demonstrate that coding SNP-A482 within the lysine tridemethylase gene KDM4A/JMJD2A has different allelic frequencies across ethnic populations, associates with differential outcome in patients with non-small cell lung cancer (NSCLC), and promotes KDM4A protein turnover. Using an unbiased drug screen against 87 preclinical and clinical compounds, we demonstrate that homozygous SNP-A482 cells have increased mTOR inhibitor sensitivity. mTOR inhibitors significantly reduce SNP-A482 protein levels, which parallels the increased drug sensitivity observed with KDM4A depletion. Our data emphasize the importance of using variant status as candidate biomarkers and highlight the importance of studying SNPs in chromatin modifiers to achieve better targeted therapy. This report documents the first coding SNP within a lysine demethylase that associates with worse outcome in patients with NSCLC. We demonstrate that this coding SNP alters the protein turnover and associates with increased mTOR inhibitor sensitivity, which identifies a candidate biomarker for mTOR inhibitor therapy and a therapeutic target for combination therapy. ©2015 American Association for Cancer Research.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, H.R.

    This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to study neutral and charged particle dynamics in the plasma scrape-off region and has been extended into a full MHD-particle code.

  16. An Analysis of Language Code Used by the Cross-Married Couples, Banjarese-Javanese Ethnics: A Case Study in South Kalimantan Province, Indonesia

    ERIC Educational Resources Information Center

    Supiani

    2016-01-01

    This research aims to describe the use of language code applied by the participants and to find out the factors influencing the choice of language codes. This research is qualitative research that describe the use of language code in the cross married couples. The data are taken from the discourses about language code phenomena dealing with the…

  17. Current and anticipated use of thermal-hydraulic codes for BWR transient and accident analyses in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arai, Kenji; Ebata, Shigeo

    1997-07-01

    This paper summarizes the current and anticipated use of thermal-hydraulic and neutronic codes for BWR transient and accident analyses in Japan. The codes may be categorized into licensing codes and best estimate codes. Most of the licensing codes were originally developed by General Electric. Some codes have been updated based on the technical knowledge obtained in thermal-hydraulic studies in Japan and in response to BWR design changes. The best estimate codes have been used to support the licensing calculations and to obtain a phenomenological understanding of the thermal-hydraulic phenomena during a BWR transient or accident. The best estimate codes can also be applied to design studies for a next-generation BWR to which the current licensing models may not be directly applicable. In order to rationalize the margin included in the current BWR design and develop a next-generation reactor with an appropriate design margin, the accuracy of the thermal-hydraulic and neutronic models will need to be improved. In addition, regarding the current best estimate codes, improvements in the user interface and the numerics will be needed.

  18. 77 FR 40271 - Pasteuria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-09

    ... production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532... all food commodities when applied as a nematicide and used in accordance with label directions and... Food, Drug, and Cosmetic Act (FFDCA), requesting an exemption from the requirement of a tolerance. This...

  19. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection with lower computational complexity than the tTN code.

  20. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed that requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
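
    The double delta idea, encoding differences of differences so that correlated pixels yield small, cheaply coded values, can be sketched as follows; the scan line is invented, and the actual quantizers and background-skipping logic of the system are not reproduced.

        # Hedged sketch of delta and "double delta" coding: encode first
        # (and second) differences of adjacent pixels, which are small for
        # correlated imagery and thus cheap to entropy-code.

        def delta(xs):
            return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

        def undelta(ds):
            out = [ds[0]]
            for d in ds[1:]:
                out.append(out[-1] + d)
            return out

        scanline = [120, 121, 123, 126, 126, 125]   # invented pixel values
        d1 = delta(scanline)    # first differences:  [120, 1, 2, 3, 0, -1]
        d2 = delta(d1)          # second differences: [120, -119, 1, 1, -3, -1]
        assert undelta(undelta(d2)) == scanline     # lossless round trip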

  1. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not effectively optimize the dynamic programming codes of RNA structure prediction. The purpose of this paper is to present a novel approach allowing for generation of a parallel tiled Nussinov RNA loop nest exposing significantly higher performance than that of known related codes. This effect is achieved by improving code locality and parallelizing the calculation. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by applying the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as a part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factor of the generated Nussinov RNA parallel code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
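
    For reference, the untiled recurrence the paper optimizes is the classic O(n^3) Nussinov dynamic program; a plain Python version (not the TRACO-generated tiled code) is sketched below.

        # Minimal reference implementation of the Nussinov base-pair
        # maximization recurrence that the tiled loop nest computes; this
        # is the plain O(n^3) triple loop, without a minimum-loop constraint.

        def nussinov(seq, pairs=frozenset({"AU", "UA", "GC", "CG", "GU", "UG"})):
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(1, n):                  # subsequence length - 1
                for i in range(n - span):
                    j = i + span
                    N[i][j] = max(N[i + 1][j],        # i unpaired
                                  N[i][j - 1],        # j unpaired
                                  N[i + 1][j - 1] + (seq[i] + seq[j] in pairs),
                                  max((N[i][k] + N[k + 1][j]
                                       for k in range(i + 1, j)), default=0))
            return N[0][n - 1]                        # max number of base pairs

        print(nussinov("GGGAAAUCC"))   # small invented example; prints 3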

  2. REE speciation in low-temperature acidic waters and the competitive effects of aluminum

    USGS Publications Warehouse

    Gimeno, Serrano M.J.; Auque, Sanz L.F.; Nordstrom, D. Kirk

    2000-01-01

    The effect of simultaneous competitive speciation of dissolved rare earth elements (REEs) in acidic waters (pH 3.3 to 5.2) has been evaluated by applying the PHREEQE code to the speciation of water analyses from Spain, Brazil, USA, and Canada. The main ions that might affect REE are Al3+, F-, SO42-, and PO43-. Fluoride, normally a significant complexer of REEs, is strongly associated with Al3+ in acid waters and consequently has little influence on REEs. The inclusion of aluminum concentrations in speciation calculations for acidic waters is essential for reliable speciation of REEs. Phosphate concentrations are too low (10-4 to 10-7 m) to affect REE speciation. Consequently, SO42- is the only important complexing ligand for REEs under these conditions. According to Millero [Millero, F.J., 1992. Stability constants for the formation of rare earth inorganic complexes as a function of ionic strength. Geochim. Cosmochim. Acta, 56, 3123-3132], the lanthanide sulfate stability constants are nearly constant with increasing atomic number so that no REE fractionation would be anticipated from aqueous complexation in acidic waters. Hence, REE enrichments or depletions must arise from mass transfer reactions. (C) 2000 Elsevier Science B.V. All rights reserved.

  3. Less haste, less waste: on recycling and its limits in strand displacement systems

    PubMed Central

    Condon, Anne; Hu, Alan J.; Maňuch, Ján; Thachuk, Chris

    2012-01-01

    We study the potential for molecule recycling in chemical reaction systems and their DNA strand displacement realizations. Recycling happens when a product of one reaction is a reactant in a later reaction. Recycling has the benefits of reducing consumption, or waste, of molecules and of avoiding fuel depletion. We present a binary counter that recycles molecules efficiently while incurring just a moderate slowdown compared with alternative counters that do not recycle strands. This counter is an n-bit binary reflecting Gray code counter that advances through 2n states. In the strand displacement realization of this counter, the waste—total number of nucleotides of the DNA strands consumed—is polynomial in n, the number of bits of the counter, while the waste of alternative counters grows exponentially in n. We also show that our n-bit counter fails to work correctly when many (Θ(n)) copies of the species that represent the bits of the counter are present initially. The proof applies more generally to show that in chemical reaction systems where all but one reactant of each reaction are catalysts, computations longer than a polynomial function of the size of the system are not possible when there are polynomially many copies of the system present. PMID:22649584
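
    The counter's state sequence is the standard n-bit binary reflected Gray code, in which successive words differ in exactly one bit; the short Python sketch below shows the property the construction relies on.

        # Sketch of the n-bit binary reflected Gray code the counter steps
        # through: successive states differ in exactly one bit.

        def gray(i):
            """i-th Gray code word as an integer."""
            return i ^ (i >> 1)

        n = 3
        states = [format(gray(i), f"0{n}b") for i in range(2 ** n)]
        print(states)
        # ['000', '001', '011', '010', '110', '111', '101', '100']
        assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1
                   for i in range(2 ** n - 1))        # one bit flips per step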

  4. Theory-based model for the pedestal, edge stability and ELMs in tokamaks

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Bateman, G.; Brennan, D. P.; Schnack, D. D.; Snyder, P. B.; Voitsekhovitch, I.; Kritz, A. H.; Janeschitz, G.; Kruger, S.; Onjun, T.; Pacher, G. W.; Pacher, H. D.

    2006-04-01

    An improved model for triggering edge localized mode (ELM) crashes is developed for use within integrated modelling simulations of the pedestal and ELM cycles at the edge of H-mode tokamak plasmas. The new model is developed by using the BALOO, DCON and ELITE ideal MHD stability codes to derive parametric expressions for the ELM triggering threshold. The whole toroidal mode number spectrum is studied with these codes. The DCON code applies to low mode numbers, while the BALOO code applies to only high mode numbers and the ELITE code applies to intermediate and high mode numbers. The variables used in the parametric stability expressions are the normalized pressure gradient and the parallel current density, which drive ballooning and peeling modes. Two equilibria motivated by DIII-D geometry with different plasma triangularities are studied. It is found that the stable region in the high triangularity discharge covers a much larger region of parameter space than the corresponding stability region in the low triangularity discharge. The new ELM trigger model is used together with a previously developed model for pedestal formation and ELM crashes in the ASTRA integrated modelling code to follow the time evolution of the temperature profiles during ELM cycles. The ELM frequencies obtained in the simulations of low and high triangularity discharges are observed to increase with increasing heating power. There is a transition from second stability to first ballooning mode stability as the heating power is increased in the high triangularity simulations. The results from the ideal MHD stability codes are compared with results from the resistive MHD stability code NIMROD.

  5. Depletion of nucleus accumbens dopamine leads to impaired reward and aversion processing in mice: Relevance to motivation pathologies.

    PubMed

    Bergamini, Giorgio; Sigrist, Hannes; Ferger, Boris; Singewald, Nicolas; Seifritz, Erich; Pryce, Christopher R

    2016-10-01

    Dopamine (DA) neurotransmission, particularly the ventral tegmental area-nucleus accumbens (VTA-NAcc) projection, underlies reward and aversion processing, and deficient DA function could underlie motivational impairments in psychiatric disorders. 6-hydroxydopamine (6-OHDA) injection is an established method for chronic DA depletion, principally applied in rat to study NAcc DA regulation of reward motivation. Given the increasing focus on studying environmental and genetic regulation of DA function in mouse models, it is important to establish the effects of 6-OHDA DA depletion in mice, in terms of reward and aversion processing. This mouse study investigated effects of 6-OHDA-induced NAcc DA depletion using the operant behavioural test battery of progressive ratio schedule (PRS), learned non-reward (LNR), learned helplessness (LH), treadmill, and in addition Pavlovian fear conditioning. 6-OHDA NAcc DA depletion, confirmed by ex vivo HPLC-ED, reduced operant responding: for gustatory reward under effortful conditions in the PRS test; to a stimulus recently associated with gustatory non-reward in the LNR test; to escape footshock recently experienced as uncontrollable in the LH test; and to avoid footshock by physical effort in the treadmill test. Evidence for specificity of effects to NAcc DA was provided by lack of effect of medial prefrontal cortex DA depletion in the LNR and LH tests. These findings add significantly to the evidence that NAcc DA is a major regulator of behavioural responding, particularly at the motivational level, to both reward and aversion. They demonstrate the suitability of mouse models for translational study of causation and reversal of pathophysiological DA function underlying motivation psychopathologies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Self-control depletion in tufted capuchin monkeys (Sapajus spp.): does delay of gratification rely on a limited resource?

    PubMed Central

    Petrillo, Francesca De; Gori, Emanuele; Truppa, Valentina; Ariely, Dan; Addessi, Elsa

    2015-01-01

    Self-control failure has enormous personal and societal consequences. One of the most debated models explaining why self-control breaks down is the Strength Model, according to which self-control depends on a limited resource. Either previous acts of self-control or taking part in highly demanding cognitive tasks have been shown to reduce self-control, possibly due to a reduction in blood glucose levels. However, several studies yielded negative findings, and recent meta-analyses questioned the robustness of the depletion effect in humans. We investigated, for the first time, whether the Strength Model applies to a non-human primate species, the tufted capuchin monkey. We tested five capuchins in a self-control task (the Accumulation task) in which food items were accumulated within individual’s reach for as long as the subject refrained from taking them. We evaluated whether capuchins’ performance decreases: (i) when tested before receiving their daily meal rather than after consuming it (Energy Depletion Experiment), and (ii) after being tested in two tasks with different levels of cognitive complexity (Cognitive Depletion Experiment). We also tested, in both experiments, how implementing self-control in each trial of the Accumulation task affected this capacity within each session and/or across consecutive sessions. Repeated acts of self-control in each trial of the Accumulation task progressively reduced this capacity within each session, as predicted by the Strength Model. However, neither experiencing a reduction in energy level nor taking part in a highly demanding cognitive task decreased performance in the subsequent Accumulation task. Thus, whereas capuchins seem to be vulnerable to within-session depletion effects, to other extents our findings are in line with the growing body of studies that failed to find a depletion effect in humans. Methodological issues potentially affecting the lack of depletion effects in capuchins are discussed. PMID:26322001

  7. Self-control depletion in tufted capuchin monkeys (Sapajus spp.): does delay of gratification rely on a limited resource?

    PubMed

    Petrillo, Francesca De; Micucci, Antonia; Gori, Emanuele; Truppa, Valentina; Ariely, Dan; Addessi, Elsa

    2015-01-01

    Self-control failure has enormous personal and societal consequences. One of the most debated models explaining why self-control breaks down is the Strength Model, according to which self-control depends on a limited resource. Either previous acts of self-control or taking part in highly demanding cognitive tasks have been shown to reduce self-control, possibly due to a reduction in blood glucose levels. However, several studies yielded negative findings, and recent meta-analyses questioned the robustness of the depletion effect in humans. We investigated, for the first time, whether the Strength Model applies to a non-human primate species, the tufted capuchin monkey. We tested five capuchins in a self-control task (the Accumulation task) in which food items were accumulated within individual's reach for as long as the subject refrained from taking them. We evaluated whether capuchins' performance decreases: (i) when tested before receiving their daily meal rather than after consuming it (Energy Depletion Experiment), and (ii) after being tested in two tasks with different levels of cognitive complexity (Cognitive Depletion Experiment). We also tested, in both experiments, how implementing self-control in each trial of the Accumulation task affected this capacity within each session and/or across consecutive sessions. Repeated acts of self-control in each trial of the Accumulation task progressively reduced this capacity within each session, as predicted by the Strength Model. However, neither experiencing a reduction in energy level nor taking part in a highly demanding cognitive task decreased performance in the subsequent Accumulation task. Thus, whereas capuchins seem to be vulnerable to within-session depletion effects, to other extents our findings are in line with the growing body of studies that failed to find a depletion effect in humans. Methodological issues potentially affecting the lack of depletion effects in capuchins are discussed.

  8. The depletion of donor macrophages reduces ischaemia-reperfusion injury after mouse lung transplantation.

    PubMed

    Tsushima, Yukio; Jang, Jae-Hwi; Yamada, Yoshito; Schwendener, Reto; Suzuki, Kenji; Weder, Walter; Jungraithmayr, Wolfgang

    2014-04-01

    Macrophages (Mφ) are one of the most important cells of the innate immune system for first-line defense. Upon transplantation (Tx), Mφ play a prominent role during lung ischaemia-reperfusion (I/R) injury. Here, we hypothesize that the depletion of donor Mφ ameliorates post-transplant lung I/R injury. Orthotopic single-lung Tx was performed between syngeneic BALB/c mice after a cold ischaemic time of 8 h and a reperfusion time of 10 h. Prior to graft implantation, alveolar macrophages of donor lungs were selectively depleted applying the 'suicide technique' by intratracheal application of clodronate liposomes (experimental, n = 6) vs the application of empty liposomes (control, n = 6). Cell count (number of F4/80(+) macrophages) and graft injury were evaluated by histology and immunohistochemistry, and levels of lactate dehydrogenase (LDH) (apoptosis assay), enzyme-linked immunosorbent assay for nuclear protein high-mobility-group protein B1 (HMGB1), tumor necrosis factor alpha (TNF-α) and transforming growth factor beta1 (TGF-β1) in plasma were analysed. Clodronate liposomes successfully depleted 70% of Mφ from donor lungs when compared with grafts treated with empty liposomes only. Mφ-depleted transplants showed improved histology and revealed considerably less graft damage when compared with control recipients (LDH, P = 0.03; HMGB1, P = 0.3). Oxygenation capacity was ameliorated in Mφ-depleted transplants, albeit not significantly (P = 0.114); however, wet/dry ratio did not differ between groups (P = 0.629). The inflammatory response was significantly reduced in Mφ-depleted mice when compared with control recipients (TNF-α, P = 0.042; TGF-β1, P = 0.039). The selective depletion of Mφ in donor lung transplants can be successfully performed and results in a sustained anti-inflammatory response upon I/R injury. The beneficial effect of this preconditioning method should be further evaluated as a promising tool for the attenuation of I/R injury prior to graft implantation in clinical Tx.

  9. Code Analysis and Refactoring with Clang Tools, Version 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, Timothy M.

    2016-12-23

    Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
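
    As a rough illustration of the sort of analysis in the example set, the sketch below uses the libclang Python bindings (a stand-in for the C++ LibTooling APIs the examples presumably use) to report where variables are referenced; the file name "demo.c" and the compiler flags are assumptions.

        # Hedged sketch: list where each variable is referenced in a source
        # file, via the libclang Python bindings (pip install libclang).
        import clang.cindex as ci

        index = ci.Index.create()
        tu = index.parse("demo.c", args=["-std=c11"])   # stand-in path/flags

        def walk(cursor):
            for child in cursor.get_children():
                if child.kind == ci.CursorKind.DECL_REF_EXPR:
                    loc = child.location
                    print(f"{child.spelling} used at {loc.file}:{loc.line}")
                walk(child)

        walk(tu.cursor)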

  10. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
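
    The evaluation approach, judging lossy compression by a physics-based metric rather than a signal-processing norm, can be illustrated with a toy sketch in which uniform quantization stands in for a real compressor and conservation of total mass is the metric; all numbers are invented.

        # Toy illustration: apply a lossy round trip (simple uniform
        # quantization standing in for a real compressor) to a field and
        # check a physics-based metric (total mass) instead of an L2 norm.
        import numpy as np

        rng = np.random.default_rng(0)
        density = rng.lognormal(mean=0.0, sigma=0.5, size=(64, 64))

        def lossy_roundtrip(field, step=0.01):
            return np.round(field / step) * step    # ~ compress + decompress

        mass_before = density.sum()
        mass_after = lossy_roundtrip(density).sum()
        rel_err = abs(mass_after - mass_before) / mass_before
        print(f"relative mass error: {rel_err:.2e}")  # tiny for modest steps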

  11. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
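
    For orientation, the add-compare-select recursion at the heart of any such decoder is sketched below for a toy rate-1/2 binary convolutional code; a real TCM decoder would use Euclidean branch metrics over the signal constellation, but the survivor-path bookkeeping is the same. All values are invented for illustration.

        # Hedged sketch of the Viterbi algorithm for a toy rate-1/2,
        # constraint-length-3 convolutional code (generators 7 and 5 octal).
        G = (0b111, 0b101)              # generator polynomials (7, 5)

        def encode_bit(state, bit):
            reg = (bit << 2) | state            # 3-bit register contents
            out = tuple(bin(reg & g).count("1") & 1 for g in G)
            return out, reg >> 1                # output pair, next 2-bit state

        def viterbi(received):
            metrics, paths = {0: 0}, {0: []}    # per-state metric and survivor
            for r in received:                  # r is a pair of received bits
                new_m, new_p = {}, {}
                for s, m in metrics.items():
                    for bit in (0, 1):          # add...
                        out, ns = encode_bit(s, bit)
                        cost = m + sum(a != b for a, b in zip(out, r))
                        if ns not in new_m or cost < new_m[ns]:  # compare-select
                            new_m[ns], new_p[ns] = cost, paths[s] + [bit]
                metrics, paths = new_m, new_p
            return paths[min(metrics, key=metrics.get)]

        msg = [1, 0, 1, 1, 0, 0]                # includes two flush zeros
        state, coded = 0, []
        for b in msg:
            out, state = encode_bit(state, b)
            coded.append(out)
        coded[2] = (coded[2][0] ^ 1, coded[2][1])   # inject one channel error
        print(viterbi(coded))                   # recovers [1, 0, 1, 1, 0, 0]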

  12. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE PAGES

    O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  13. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
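
    The problem class TEA solves, Gibbs free-energy minimization under element-balance constraints, can be illustrated with a toy sketch (not TEA's own iterative Lagrangian scheme); the species list, free energies, and element totals below are invented.

        # Illustrative sketch of Gibbs free-energy minimization with
        # element-balance constraints, using a generic SLSQP solver.
        import numpy as np
        from scipy.optimize import minimize

        species = ["H2", "O2", "H2O"]
        g_rt = np.array([-10.0, -12.0, -35.0])     # dimensionless g/RT (toy)
        A = np.array([[2, 0, 2],                   # H atoms per species
                      [0, 2, 1]])                  # O atoms per species
        b = np.array([2.0, 1.0])                   # total H and O (toy)

        def gibbs(n):
            n = np.clip(n, 1e-12, None)            # keep logs finite
            return float(np.sum(n * (g_rt + np.log(n / n.sum()))))

        res = minimize(gibbs, x0=np.full(3, 0.3),
                       constraints={"type": "eq", "fun": lambda n: A @ n - b},
                       bounds=[(1e-12, None)] * 3, method="SLSQP")
        print(dict(zip(species, res.x.round(4))))  # mostly H2O at these g values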

  14. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  15. Targeted quantification of low ng/mL level proteins in human serum without immunoaffinity depletion

    PubMed Central

    Shi, Tujin; Sun, Xuefei; Gao, Yuqian; Fillmore, Thomas L.; Schepmoes, Athena A.; Zhao, Rui; He, Jintang; Moore, Ronald J.; Kagan, Jacob; Rodland, Karin D.; Liu, Tao; Liu, Alvin Y.; Smith, Richard D.; Tang, Keqi; Camp, David G.; Qian, Wei-Jun

    2013-01-01

    We recently reported an antibody-free targeted protein quantification strategy, termed high-pressure, high-resolution separations with intelligent selection and multiplexing (PRISM) for achieving significantly enhanced sensitivity using selected reaction monitoring (SRM) mass spectrometry. Integrating PRISM with front-end IgY14 immunoaffinity depletion, sensitive detection of targeted proteins at 50–100 pg/mL levels in human blood plasma/serum was demonstrated. However, immunoaffinity depletion is often associated with undesired losses of target proteins of interest. Herein we report further evaluation of PRISM-SRM quantification of low-abundance serum proteins without immunoaffinity depletion. Limits of quantification (LOQ) at low ng/mL levels with a median coefficient of variation (CV) of ~12% were achieved for proteins spiked into human female serum. PRISM-SRM provided >100-fold improvement in the LOQ when compared to conventional LC-SRM measurements. PRISM-SRM was then applied to measure several low-abundance endogenous serum proteins, including prostate-specific antigen (PSA), in clinical prostate cancer patient sera. PRISM-SRM enabled confident detection of all target endogenous serum proteins except the low pg/mL-level cardiac troponin T. A correlation coefficient >0.99 was observed for PSA between the results from PRISM-SRM and immunoassays. Our results demonstrate that PRISM-SRM can successfully quantify low ng/mL proteins in human plasma or serum without depletion. We anticipate broad applications for PRISM-SRM quantification of low-abundance proteins in candidate biomarker verification and systems biology studies. PMID:23763644

  16. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  17. Psychometric challenges and proposed solutions when scoring facial emotion expression codes.

    PubMed

    Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver

    2014-12-01

    Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are recommended methods of data treatment. When scoring facial emotion expression ability, maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
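
    The recommended treatments can be sketched on an invented intensity time series: loess smoothing, baseline correction against a pre-stimulus window, then the maximum score as the ability measure; the window size and smoothing fraction below are assumptions.

        # Sketch of the recommended treatments on an invented time series
        # of software-coded expression intensities.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        t = np.linspace(0, 10, 200)
        raw = (0.1 + 0.6 * np.exp(-((t - 5) ** 2))
               + 0.05 * np.random.default_rng(1).normal(size=t.size))

        smoothed = lowess(raw, t, frac=0.15, return_sorted=False)  # loess
        baseline = smoothed[:20].mean()   # pre-stimulus baseline estimate
        corrected = smoothed - baseline   # control for resting expression
        print(round(corrected.max(), 3))  # maximum score, roughly 0.6 here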

  18. Plasma Theory and Simulation.

    DTIC Science & Technology

    1978-07-01

    (1) A paper titled "Particle-Fluid Hybrid Codes Applied to Beam-Plasma, Ring-Plasma Instabilities" was presented at Monterey (see Section V, "Particle-Fluid Hybrid Codes Applied to Beam-Plasma, Ring-Plasma Instabilities"). (2) A. Peiravi and C. K. Birdsall, "Self-Heating of … Thermal …"

  19. 76 FR 53497 - Florida Power and Light Company; St. Lucie Plant, Units 1 and 2; Environmental Assessment and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... Appendix G to the Code for calculating K IM factors, and instead applies FEM [finite element modeling..., Units 1 and 2 are calculated using the CE NSSS finite element modeling methods. The Need for the... Society of Mechanical Engineers (ASME) Code, Section XI, Appendix G) or determined by applying finite...

  20. Optimization of a Precolumn OPA Derivatization HPLC Assay for Monitoring of l-Asparagine Depletion in Serum during l-Asparaginase Therapy.

    PubMed

    Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui

    2018-06-06

    A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of the analytes, several important factors including the precipitant reagent, derivatization conditions and detection wavelengths were optimized. The recovery of the analytes in biological matrix was highest when 4% sulfosalicylic acid (1:1, v/v) was used as the precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of the analytes was highest when the reagent ET and borate buffer of pH 9.9 were used in the derivatization solution, and the corresponding derivative products were stable for up to 19 h. The validated method has been successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine and l-glutamic acid levels in pediatric patients during l-asparaginase therapy.

  1. Conjunctive-management models for sustained yield of stream-aquifer systems

    USGS Publications Warehouse

    Barlow, P.M.; Ahlfeld, D.P.; Dickerman, D.C.

    2003-01-01

    Conjunctive-management models that couple numerical simulation with linear optimization were developed to evaluate trade-offs between groundwater withdrawals and streamflow depletions for alluvial-valley stream-aquifer systems representative of those of the northeastern United States. A conjunctive-management model developed for a hypothetical stream-aquifer system was used to assess the effect of interannual hydrologic variability on minimum monthly streamflow requirements. The conjunctive-management model was applied to the Hunt-Annaquatucket-Pettaquamscutt stream-aquifer system of central Rhode Island. Results show that it is possible to increase the amount of current withdrawal from the aquifer by as much as 50% by modifying current withdrawal schedules, modifying the number and configuration of wells in the supply-well network, or allowing increased streamflow depletion in the Annaquatucket and Pettaquamscutt rivers. Alternatively, it is possible to reduce current rates of streamflow depletion in the Hunt River by as much as 35% during the summer, but such reductions would require decreases in groundwater withdrawals.
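
    The coupling of simulated streamflow-depletion responses with linear optimization can be illustrated with a toy linear program; the response coefficients, well capacities, and depletion cap below are invented.

        # Toy LP in the spirit of the conjunctive-management models:
        # maximize total withdrawal from three hypothetical wells subject
        # to a cap on the streamflow depletion they cause.
        from scipy.optimize import linprog

        # depletion[i] = fraction of well i's pumping that appears as
        # streamflow depletion (from a simulation model in the real work)
        depletion = [0.8, 0.5, 0.2]
        capacity = [(0, 1.0), (0, 1.5), (0, 0.7)]   # well capacities (Mgal/d)
        max_depletion = 1.0                         # allowed depletion (Mgal/d)

        res = linprog(c=[-1, -1, -1],               # maximize sum = min -sum
                      A_ub=[depletion], b_ub=[max_depletion],
                      bounds=capacity, method="highs")
        print(res.x, -res.fun)   # optimal pumping per well, total withdrawal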

  2. Ran-dependent TPX2 activation promotes acentrosomal microtubule nucleation in neurons

    PubMed Central

    Chen, Wen-Shin; Chen, Yi-Ju; Huang, Yung-An; Hsieh, Bing-Yuan; Chiu, Ho-Chieh; Kao, Pei-Ying; Chao, Chih-Yuan; Hwang, Eric

    2017-01-01

    The microtubule (MT) cytoskeleton is essential for the formation of morphologically appropriate neurons. The existence of the acentrosomal MT organizing center in neurons has been proposed but its identity remained elusive. Here we provide evidence showing that TPX2 is an important component of this acentrosomal MT organizing center. First, neurite elongation is compromised in TPX2-depleted neurons. In addition, TPX2 localizes to the centrosome and along the neurite shaft bound to MTs. Depleting TPX2 decreases MT formation frequency specifically at the tip and the base of the neurite, and these correlate precisely with the regions where active GTP-bound Ran proteins are enriched. Furthermore, overexpressing the downstream effector of Ran, importin, compromises MT formation and neuronal morphogenesis. Finally, applying a Ran-importin signaling interfering compound phenocopies the effect of TPX2 depletion on MT dynamics. Together, these data suggest a model in which Ran-dependent TPX2 activation promotes acentrosomal MT nucleation in neurons. PMID:28205572

  3. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
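
    A piece-wise linear, elevation-dependent bias correction of the kind estimated in the study can be sketched as simple node interpolation; the node values and pseudorange below are invented, and the real model adds per-satellite-group terms and a near-field grid.

        # Hedged sketch of an elevation-dependent, piece-wise linear code
        # bias correction; node values are toy numbers.
        import numpy as np

        elev_nodes = np.array([0, 15, 30, 45, 60, 75, 90])      # degrees
        bias_nodes = np.array([0.0, -0.2, -0.4, -0.5, -0.45,    # metres (toy)
                               -0.3, -0.1])

        def code_bias_correction(elevation_deg):
            """Linear interpolation between the estimated node values."""
            return np.interp(elevation_deg, elev_nodes, bias_nodes)

        pseudorange = 21_532_861.44      # raw code observation (m, invented)
        corrected = pseudorange - code_bias_correction(37.0)
        print(round(code_bias_correction(37.0), 3), corrected)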

  4. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements

    PubMed Central

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-01-01

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm. PMID:29076998

  5. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
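
    A minimal Huffman construction makes the short-codeword-for-frequent-symbol behavior concrete; the transmission-protocol issues the paper addresses (buffering, synchronization, error recovery) are out of scope for this sketch.

        # Minimal Huffman code construction with heapq, showing how
        # frequent symbols receive short codewords.
        import heapq
        from collections import Counter

        def huffman_code(symbol_freqs):
            heap = [[f, i, {s: ""}]
                    for i, (s, f) in enumerate(symbol_freqs.items())]
            heapq.heapify(heap)
            tiebreak = len(heap)            # unique int so dicts never compare
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + w for s, w in c1.items()}
                merged.update({s: "1" + w for s, w in c2.items()})
                heapq.heappush(heap, [f1 + f2, tiebreak, merged])
                tiebreak += 1
            return heap[0][2]

        freqs = Counter("aaaaabbbccd")      # skewed source: short 'a' code
        code = huffman_code(freqs)
        print(code)                         # e.g. {'a': '0', 'b': '10', ...}
        encoded = "".join(code[s] for s in "aaaaabbbccd")
        print(len(encoded), "bits vs", 2 * 11, "for fixed 2-bit codes")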

  6. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations, with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
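
    The speed-ups and slow-downs reported above are governed by the cost of exchanging subdomain boundary ("halo") layers between GPUs at each time step. The toy single-process sketch below illustrates that decomposition-and-exchange pattern in Python with NumPy; it is a conceptual illustration, not SMAUG+'s actual GPU/communication implementation.

        import numpy as np

        def split_with_halo(field, n_tiles):
            # Split the global grid into tiles (one per device) and pad each
            # tile with a one-cell halo layer.
            tiles = np.array_split(field, n_tiles, axis=0)
            return [np.pad(t, ((1, 1), (0, 0)), mode="edge") for t in tiles]

        def exchange_halos(tiles):
            # The per-step communication: copy boundary rows between
            # neighbouring tiles so stencil updates see current data.
            for i in range(len(tiles) - 1):
                tiles[i][-1, :] = tiles[i + 1][1, :]   # from lower neighbour
                tiles[i + 1][0, :] = tiles[i][-2, :]   # from upper neighbour

        tiles = split_with_halo(np.random.rand(64, 64), 4)
        exchange_halos(tiles)   # would precede every MHD stencil update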

  7. NR-code: Nonlinear reconstruction code

    NASA Astrophysics Data System (ADS)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  8. 77 FR 8736 - Pasteuria nishizawae

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-15

    ... (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This... for residues of Pasteuria nishizawae--Pn1 in or on all food commodities when applied as a nematicide... petition to EPA under the Federal Food, Drug, and Cosmetic Act (FFDCA), requesting an exemption from the...

  9. E/M coding problems plague physicians, coders.

    PubMed

    King, Mitchell S; Lipsky, Martin S; Sharp, Lisa

    2002-01-01

    As the government turns its high beams on fraudulent billing, physician E/M coding is raising questions. With several studies spotlighting the difficulty physicians have in applying CPT E/M codes, the authors wanted to know if credentialed coders had the same problem. Here's what they found.

  10. Karyopherin-mediated nuclear import of the homing endonuclease VMA1-derived endonuclease is required for self-propagation of the coding region.

    PubMed

    Nagai, Yuri; Nogami, Satoru; Kumagai-Sano, Fumi; Ohya, Yoshikazu

    2003-03-01

    VMA1-derived endonuclease (VDE), a site-specific endonuclease in Saccharomyces cerevisiae, enters the nucleus to generate a double-strand break in the VDE-negative allelic locus, mediating the self-propagating gene conversion called homing. Although VDE is excluded from the nucleus in mitotic cells, it relocalizes at premeiosis, becoming localized in both the nucleus and the cytoplasm in meiosis. The nuclear localization of VDE is induced by inactivation of TOR kinases, which constitute central regulators of cell differentiation in S. cerevisiae, and by nutrient depletion. A functional genomic approach revealed that at least two karyopherins, Srp1p and Kap142p, are required for the nuclear localization pattern. Genetic and physical interactions between Srp1p and VDE imply direct involvement of karyopherin-mediated nuclear transport in this process. Inactivation of TOR signaling or acquisition of an extra nuclear localization signal in the VDE coding region leads to artificial nuclear localization of VDE and thereby induces homing even during mitosis. These results serve as evidence that VDE utilizes the host systems of nutrient signal transduction and nucleocytoplasmic transport to ensure the propagation of its coding region.

  11. Karyopherin-Mediated Nuclear Import of the Homing Endonuclease VMA1-Derived Endonuclease Is Required for Self-Propagation of the Coding Region

    PubMed Central

    Nagai, Yuri; Nogami, Satoru; Kumagai-Sano, Fumi; Ohya, Yoshikazu

    2003-01-01

    VMA1-derived endonuclease (VDE), a site-specific endonuclease in Saccharomyces cerevisiae, enters the nucleus to generate a double-strand break in the VDE-negative allelic locus, mediating the self-propagating gene conversion called homing. Although VDE is excluded from the nucleus in mitotic cells, it relocalizes at premeiosis, becoming localized in both the nucleus and the cytoplasm in meiosis. The nuclear localization of VDE is induced by inactivation of TOR kinases, which constitute central regulators of cell differentiation in S. cerevisiae, and by nutrient depletion. A functional genomic approach revealed that at least two karyopherins, Srp1p and Kap142p, are required for the nuclear localization pattern. Genetic and physical interactions between Srp1p and VDE imply direct involvement of karyopherin-mediated nuclear transport in this process. Inactivation of TOR signaling or acquisition of an extra nuclear localization signal in the VDE coding region leads to artificial nuclear localization of VDE and thereby induces homing even during mitosis. These results serve as evidence that VDE utilizes the host systems of nutrient signal transduction and nucleocytoplasmic transport to ensure the propagation of its coding region. PMID:12588991

  12. Experimental and code simulation of a station blackout scenario for APR1400 with test facility ATLAS and MARS code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, X. G.; Kim, Y. S.; Choi, K. Y.

    2012-07-01

    An SBO (station blackout) experiment named SBO-01 was performed at the full-pressure IET (Integral Effect Test) facility ATLAS (Advanced Test Loop for Accident Simulation), which is scaled down from the APR1400 (Advanced Power Reactor 1400 MWe). In this study, the SBO-01 transient is discussed and subdivided into three phases: the SG fluid loss phase, the RCS fluid loss phase, and the core coolant depletion and core heatup phase. In addition, the typical phenomena in the SBO-01 test - SG dryout, natural circulation, core coolant boiling, the PRZ becoming full, core heat-up - are identified. Furthermore, the SBO-01 test is reproduced by a MARS code calculation with the ATLAS model, which represents the ATLAS test facility. The experimental and calculated transients are then compared and discussed. The comparison reveals equipment malfunctions: SG leakage through an SG MSSV and a measurement error in the loop flow meter. As the ATLAS model is validated against the experimental results, it can be further employed to investigate other possible SBO scenarios and to study scaling distortions in the ATLAS. (authors)

  13. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
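
    A minimal sketch of this kind of Newton step in Python: the Hessian is shifted until a Cholesky factorization succeeds (a simple diagonal-shift stand-in for the modified Cholesky factorization used in FOCUS), and the factors are then used to solve H p = -g. Function names and the shift strategy are illustrative assumptions.

        import numpy as np

        def modified_newton_step(grad, hess, tau0=1e-3):
            # Shift the Hessian until it is positive definite, then solve
            # (H + tau I) p = -g through the Cholesky factors.
            n = grad.size
            tau = 0.0
            while True:
                try:
                    L = np.linalg.cholesky(hess + tau * np.eye(n))
                    break
                except np.linalg.LinAlgError:
                    tau = max(2.0 * tau, tau0)
            y = np.linalg.solve(L, -grad)     # two triangular solves replace
            return np.linalg.solve(L.T, y)    # an explicit matrix inverse

        # One damped-Newton step on a toy quadratic objective:
        H = np.array([[2.0, 0.5], [0.5, 1.0]])
        g = np.array([1.0, -2.0])
        step = modified_newton_step(g, H)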

  14. Software ``Best'' Practices: Agile Deconstructed

    NASA Astrophysics Data System (ADS)

    Fraser, Steven

    This workshop will explore the intersection of agility and software development in a world of legacy code-bases and large teams. Organizations with hundreds of developers and code-bases exceeding a million or tens of millions of lines of code are seeking new ways to expedite development while retaining and attracting staff who desire to apply “agile” methods. This is a situation where specific agile practices may be embraced outside of their usual zone of applicability. Here is where practitioners must understand both what “best practices” already exist in the organization - and how they might be improved or modified by applying “agile” approaches.

  15. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  16. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  17. Core follow calculation with the nTRACER numerical reactor and verification using power reactor measurement data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Y. S.; Joo, H. G.; Yoon, J. I.

    The nTRACER direct whole-core transport code, employing a planar-MOC-solution-based 3-D calculation method, the subgroup method for resonance treatment, the Krylov matrix exponential method for depletion, and a subchannel thermal/hydraulic calculation solver, was developed for practical high-fidelity simulation of power reactors. Its accuracy and performance are verified by comparison with the measurement data obtained for three pressurized water reactor cores. It is demonstrated that accurate and detailed multi-physics simulation of power reactors is practically realizable without any prior calculations or adjustments. (authors)
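
    The depletion step mentioned above advances nuclide densities through the exponential of the burnup matrix. Below is a toy Python sketch for a two-nuclide chain, using a dense matrix exponential as a stand-in for the Krylov approximation employed in nTRACER; all rates and densities are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm

        # Two-nuclide chain A -> B -> (removed): dN/dt = M N.
        lam_a, lam_b = 1.0e-5, 3.0e-6        # effective removal rates [1/s]
        M = np.array([[-lam_a, 0.0],
                      [ lam_a, -lam_b]])     # Bateman/burnup matrix

        N0 = np.array([1.0e24, 0.0])         # initial nuclide densities
        dt = 30.0 * 24.0 * 3600.0            # one-month depletion step [s]
        N1 = expm(M * dt) @ N0               # matrix-exponential solution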

  18. Solar powered multipurpose remotely powered aircraft

    NASA Technical Reports Server (NTRS)

    Alexandrou, A. N.; Durgin, W. W.; Cohn, R. F.; Olinger, D. J.; Cody, Charlotte K.; Chan, Agnes; Cheung, Kwok-Hung; Conley, Kristin; Crivelli, Paul M.; Javorski, Christian T.

    1992-01-01

    Increasing energy demands coupled with the rapid depletion of natural energy resources have made solar energy an attractive alternative source of power. The focus was to design and construct a solar-powered, remotely piloted vehicle to demonstrate the feasibility of solar energy as an effective, alternate source of power. The final design centered on minimizing the power requirements and maximizing the strength-to-weight and lift-to-drag ratios. Given the design constraints, Surya (the code-name given to the aircraft) is a lightweight aircraft primarily built from composite materials and capable of achieving level flight powered entirely by solar energy.

  19. Space-Based Three-Dimensional Imaging of Equatorial Plasma Bubbles: Advancing the Understanding of Ionospheric Density Depletions and Scintillation

    DTIC Science & Technology

    2012-03-28

    Comberiate, Joseph M. A tomographic reconstruction technique was modified and applied to SSUSI data to reconstruct three-dimensional cubes of ionospheric electron density. These data cubes allowed for 3-D imaging of equatorial plasma bubbles and studies of bubble climatology.

  20. Progress of IRSN R&D on ITER Safety Assessment

    NASA Astrophysics Data System (ADS)

    Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.

    2012-08-01

    The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support to the French "Autorité de Sûreté Nucléaire", is analysing the safety of ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixtures explosion; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influent factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks have been used in 2011 for the analysis of the ITER safety file. In the near future, this R&D global programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.

  1. Simulation of stimulated Brillouin scattering and stimulated Raman scattering in shock ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, L.; Li, J.; Liu, W. D.

    2016-04-15

    We study stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS) in shock ignition by comparing fluid and particle-in-cell (PIC) simulations. Under typical parameters for the OMEGA experiments [Theobald et al., Phys. Plasmas 19, 102706 (2012)], a series of 1D fluid simulations with laser intensities ranging between 2 × 10^15 and 2 × 10^16 W/cm^2 finds that SBS is the dominant instability, which increases significantly with the incident intensity. Strong pump depletion caused by SBS and SRS limits the transmitted intensity at 0.17n_c to be less than 3.5 × 10^15 W/cm^2. The PIC simulations show similar physics but with higher saturation levels for SBS and SRS convective modes and stronger pump depletion due to higher seed levels for the electromagnetic fields in PIC codes. Plasma flow profiles are found to be important in proper modeling of SBS and limiting its reflectivity in both the fluid and PIC simulations.

  2. Long Non-coding RNA, PANDA, Contributes to the Stabilization of p53 Tumor Suppressor Protein.

    PubMed

    Kotake, Yojiro; Kitagawa, Kyoko; Ohhata, Tatsuya; Sakai, Satoshi; Uchida, Chiharu; Niida, Hiroyuki; Naemura, Madoka; Kitagawa, Masatoshi

    2016-04-01

    P21-associated noncoding RNA DNA damage-activated (PANDA) is induced in response to DNA damage and represses apoptosis by inhibiting the function of nuclear transcription factor Y subunit alpha (NF-YA) transcription factor. Herein, we report that PANDA affects regulation of p53 tumor-suppressor protein. U2OS cells were transfected with PANDA siRNAs. At 72 h post-transfection, cells were subjected to immunoblotting and quantitative reverse transcription-polymerase chain reaction. Depletion of PANDA was associated with decreased levels of p53 protein, but not p53 mRNA. The stability of p53 protein was markedly reduced by PANDA silencing. Degradation of p53 protein by silencing PANDA was prevented by treatment of MG132, a proteasome inhibitor. Moreover, depletion of PANDA prevented accumulation of p53 protein, as a result of DNA damage, induced by the genotoxic agent etoposide. These results suggest that PANDA stabilizes p53 protein in response to DNA damage, and provide new insight into the regulatory mechanisms of p53. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  3. Simple Model of Macroscopic Instability in XeCl Discharge Pumped Lasers

    NASA Astrophysics Data System (ADS)

    Ahmed, Belasri; Zoheir, Harrache

    2003-10-01

    The aim of this work is to study the development of macroscopic non-uniformity of the electron density in high-pressure discharges for excimer lasers, and eventually its propagation due to the kinetic phenomena of the medium. The study uses a transverse one-dimensional model in which the plasma is represented by a set of resistances in parallel. The model is implemented in a numerical code comprising three strongly coupled parts: the electric circuit equations, the electron Boltzmann equation, and the kinetics equations (chemical kinetics model). The time variations of the electron density in each plasma element are obtained by solving a set of ordinary differential equations describing the plasma kinetics and the external circuit. The present model allows a good understanding of the halogen depletion phenomenon, which is the principal cause of laser termination, and permits a simple study of large-scale non-uniformity in the preionization density and its effects on the electrical and chemical plasma properties. The obtained results indicate clearly that about 50% of the halogen is consumed at the end of the pulse. KEY WORDS: Excimer laser, XeCl, Modeling, Cold plasma, Kinetics, Halogen depletion, Macroscopic instability.

  4. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. Transcriptional analysis of the multicopy hao gene coding for hydroxylamine oxidoreductase in Nitrosomonas sp. strain ENI-11.

    PubMed

    Hirota, Ryuichi; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao; Kato, Junichi

    2006-08-01

    The nitrifying bacterium Nitrosomonas sp. strain ENI-11 has three copies of the gene encoding hydroxylamine oxidoreductase (hao(1), hao(2), and hao(3)) on its genome. Broad-host-range reporter plasmids containing transcriptional fusion genes between hao copies and lacZ were constructed to analyze the expression of each hydroxylamine oxidoreductase gene (hao) copy individually and quantitatively. beta-Galactosidase assays of ENI-11 harboring reporter plasmids revealed that all hao copies were transcribed in the wild-type strain. Promoter analysis of hao copies revealed that transcription of hao(3) was highest among the hao copies. Expression levels of hao(1) and hao(2) were 40% and 62% of that of hao(3), respectively. Transcription of hao(1) was negatively regulated, whereas a portion of hao(3) transcription was read-through transcription from the rpsT promoter. When energy-depleted cells were incubated in the growth medium, only hao(3) expression increased. This result suggests that it is hao(3) that is responsible for recovery from energy-depleted conditions in Nitrosomonas sp. strain ENI-11.

  6. An RCM-E simulation of a steady magnetospheric convection event

    NASA Astrophysics Data System (ADS)

    Yang, J.; Toffoletto, F.; Wolf, R.; Song, Y.

    2009-12-01

    We present simulation results of an idealized steady magnetospheric convection (SMC) event using the Rice Convection Model coupled with an equilibrium magnetic field solver (RCM-E). The event is modeled by placing a plasma distribution with a substantially depleted entropy parameter PV^5/3 on the RCM's high-latitude boundary. The calculated magnetic field shows a highly depressed configuration due to the enhanced westward current around geosynchronous orbit, where the resulting partial ring current is stronger and more symmetric than in a typical substorm growth phase. The magnitude of the BZ component in the mid-plasma sheet is large compared to empirical magnetic field models. Contrary to some previous results, there is no deep BZ minimum in the near-Earth plasma sheet. This suggests that the magnetosphere could transition into a strong adiabatic earthward convection mode without significant stretching of the plasma-sheet magnetic field, when flux tubes with depleted plasma content continuously enter the inner magnetosphere from the mid-tail. Virtual AU/AL and Dst indices are also calculated using a synthetic magnetogram code and are compared to typical features in published observations.

  7. Requirement for CD4 T Cell Help in Generating Functional CD8 T Cell Memory

    NASA Astrophysics Data System (ADS)

    Shedlock, Devon J.; Shen, Hao

    2003-04-01

    Although primary CD8 responses to acute infections are independent of CD4 help, it is unknown whether a similar situation applies to secondary responses. We show that depletion of CD4 cells during the recall response has minimal effect, whereas depletion during the priming phase leads to reduced responses by memory CD8 cells to reinfection. Memory CD8 cells generated in CD4+/+ mice responded normally when transferred into CD4-/- hosts, whereas memory CD8 cells generated in CD4-/- mice mounted defective recall responses in CD4+/+ adoptive hosts. These results demonstrate a previously undescribed role for CD4 help in the development of functional CD8 memory.

  8. Does UV CETI Suffer from the MAD Syndrome?

    NASA Technical Reports Server (NTRS)

    Drake, Jeremy

    2000-01-01

    Data have been reduced and partially analyzed and models have been fitted. ASCA data indicate a metal-poor corona, with metals down by a factor of 3 or more relative to the photospheric values. EUVE data show a FIP effect, which is expected if the metals are enhanced rather than depleted. An absolute measure of the metal abundance has not yet been performed for the EUVE data. Either the FIP effect is in operation in the presence of a global depletion of metals, or the ASCA analysis is giving the wrong answer. The latter could be the case if the plasma models applied are incomplete. Further investigation into this is warranted prior to publication.

  9. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods are presented for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
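
    A sketch of the circulant lifting stage described above: each edge of the base protograph is replaced by a Z x Z cyclically shifted identity block, yielding a quasi-cyclic parity-check matrix. The base matrix, lifting factor, and random shifts below are illustrative, not the patented designs.

        import numpy as np

        def lift_protograph(B, Z, seed=0):
            # Replace every protograph edge with a Z x Z circulant
            # permutation; parallel edges XOR together.
            rng = np.random.default_rng(seed)
            m, n = B.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    for _ in range(int(B[i, j])):
                        s = int(rng.integers(Z))
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, s, axis=1)
            return H

        B = np.array([[1, 1, 1, 0],      # tiny illustrative base matrix
                      [0, 1, 1, 1]])
        H = lift_protograph(B, Z=8)      # 16 x 32 parity-check matrix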

  10. Information quality measurement of medical encoding support based on usability.

    PubMed

    Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel

    2013-12-01

    Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes, in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of codes lists as a quality indicator. Such measurements do not give the value of codes lists for practical medical encoding, and cannot be used to globally compare the quality of multiple codes lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring the quality of medical codes lists. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates codes Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated codes lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a codes list is fully represented by a single point within a suitably constrained feature space. Using a single scheme, our approach reliably measures and compares the information quality of hundreds of codes lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... code sets inherent to a transaction, and not related to the format of the transaction. Data elements... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  12. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... code sets inherent to a transaction, and not related to the format of the transaction. Data elements... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  13. 78 FR 35143 - 1,3-Propanediol; Exemptions From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-12

    ... rabbits. Dermal sensitization studies on guinea pigs showed that 1,3-propanediol is not a sensitizer. In a... whether this document applies to them. Potentially affected entities may include: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  14. Shifting Codes: Education or Regulation? Trainee Teachers and the Code of Conduct and Practice in England

    ERIC Educational Resources Information Center

    Spendlove, David; Barton, Amanda; Hallett, Fiona; Shortt, Damien

    2012-01-01

    In 2009, the General Teaching Council for England (GTCE) introduced a revised Code of Conduct and Practice (2009) for registered teachers. The code also applies to all trainee teachers who are provisionally registered with the GTCE and who could be liable to a charge of misconduct during their periods of teaching practice. This paper presents the…

  15. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
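
    For a regular LDPC ensemble on the binary erasure channel, the density evolution referenced above reduces to a scalar recursion on the erasure probability; protograph density evolution tracks one such quantity per edge type instead. A minimal sketch:

        def density_evolution_bec(dv, dc, eps, iters=500):
            # x <- eps * (1 - (1 - x)**(dc-1))**(dv-1): erasure probability
            # of a variable-to-check message after each iteration.
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            return x

        # The regular (3,6) ensemble has threshold ~0.4294 on the BEC:
        print(density_evolution_bec(3, 6, 0.42))  # -> ~0, decoding succeeds
        print(density_evolution_bec(3, 6, 0.44))  # -> stalls at a fixed point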

  16. Basolateral cholesterol depletion alters Aquaporin-2 post-translational modifications and disrupts apical plasma membrane targeting.

    PubMed

    Moeller, Hanne B; Fuglsang, Cecilia Hvitfeldt; Pedersen, Cecilie Nøhr; Fenton, Robert A

    2018-01-01

    Apical plasma membrane accumulation of the water channel Aquaporin-2 (AQP2) in kidney collecting duct principal cells is critical for body water homeostasis. Posttranslational modification (PTM) of AQP2 is important for regulating AQP2 trafficking. The aim of this study was to determine the role of cholesterol in regulation of AQP2 PTM and in apical plasma membrane targeting of AQP2. Cholesterol depletion from the basolateral plasma membrane of a collecting duct cell line (mpkCCD14) using methyl-beta-cyclodextrin (MBCD) increased AQP2 ubiquitylation. Forskolin-, cAMP- or dDAVP-mediated AQP2 phosphorylation at Ser269 (pS269-AQP2) was prevented by cholesterol depletion from the basolateral membrane. None of these effects on pS269-AQP2 were observed when cholesterol was depleted from the apical side of cells, or when MBCD was applied subsequent to dDAVP stimulation. Basolateral, but not apical, MBCD application prevented cAMP-induced apical plasma membrane accumulation of AQP2. These studies indicate that manipulation of the cholesterol content of the basolateral plasma membrane interferes with AQP2 PTM and with the subsequent regulated apical plasma membrane targeting of AQP2. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Spontaneous Analogy by Piggybacking on a Perceptual System

    DTIC Science & Technology

    2013-08-01

    (1992). High-level Perception, Representation, and Analogy: A Critique of Artificial Intelligence Methodology. J. Exp. Theor. Artif. Intell., 4(3). David W. Aha, Navy Center for Applied Research in Artificial Intelligence, Naval Research Laboratory (Code 5510), Washington, DC 20375.

  18. Background Perchlorate Source Identification Technical Guidance

    DTIC Science & Technology

    2013-12-01

    Sciences Branch (Code 71752) of the Advanced Systems and Applied Sciences Division (Code 71700), Space and Naval Warfare Systems Center (SSC Pacific), San Diego. EXECUTIVE SUMMARY: The objective of this document is to outline the approach, tools, and...

  19. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulics modeling capability of the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures for the cross-section modeling in the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for modeling the control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its function, and outputs a set of data. TRIGSIMS-TH manages the data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes that can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a single system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data is also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide to assess the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code.
This was needed because the previous data were not generated with thermal hydraulic feedback and the all-rods-out (ARO) position was used as the critical rod position. The B4C data were re-evaluated for this update, and the data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given the flexibility to allow for various changes within the core, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed, which allows the new code to be used as a modeling tool for the design and analysis of future core loadings.

  20. Two-dimensional numerical simulation of O-mode to Z-mode conversion in the ionosphere

    NASA Astrophysics Data System (ADS)

    Cannon, P. D.; Honary, F.; Borisov, N.

    2016-03-01

    Experiments in the illumination of the F region of the ionosphere via radio frequency waves polarized in the ordinary mode (O-mode) have revealed that the magnitude of artificial heating-induced effects depends strongly on the inclination angle of the pump beam, with a greater modification to the plasma observed when the heating beam is directed close to or along the magnetic zenith direction. Numerical simulations performed using a recently developed finite-difference time-domain (FDTD) code are used to investigate the contribution of the O-mode to Z-mode conversion process to this effect. The aspect angle dependence and angular size of the radio window for which conversion of an O-mode pump wave to the Z-mode occurs is simulated for a variety of plasma density profiles including 2-D linear gradients representative of large-scale plasma depletions, density-depleted plasma ducts, and periodic field-aligned irregularities. The angular shape of the conversion window is found to be strongly influenced by the background plasma profile. If the Z-mode wave is reflected, it can propagate back toward the O-mode reflection region leading to resonant enhancement of the electric field in this region. Simulation results presented in this paper demonstrate that this process can make a significant contribution to the magnitude of electron density depletion and temperature enhancement around the resonance height and contributes to a strong dependence of the magnitude of plasma perturbation with the direction of the pump wave.

  1. Trypanosoma brucei RAP1 maintains telomere and subtelomere integrity by suppressing TERRA and telomeric RNA:DNA hybrids.

    PubMed

    Nanavaty, Vishal; Sandhu, Ranjodh; Jehi, Sanaa E; Pandya, Unnati M; Li, Bibo

    2017-06-02

    Trypanosoma brucei causes human African trypanosomiasis and regularly switches its major surface antigen, VSG, thereby evading the host's immune response. VSGs are monoallelically expressed from subtelomeric expression sites (ESs), and VSG switching exploits subtelomere plasticity. However, subtelomere integrity is essential for T. brucei viability. The telomeric transcript, TERRA, was detected in T. brucei previously. We now show that the active ES-adjacent telomere is transcribed. We find that TbRAP1, a telomere protein essential for VSG silencing, suppresses VSG gene conversion-mediated switching. Importantly, TbRAP1 depletion increases the TERRA level, which appears to result from longer read-through into the telomere downstream of the active ES. Depletion of TbRAP1 also results in more telomeric RNA:DNA hybrids and more double strand breaks (DSBs) at telomeres and subtelomeres. In TbRAP1-depleted cells, expression of excessive TbRNaseH1, which cleaves the RNA strand of the RNA:DNA hybrid, brought telomeric RNA:DNA hybrids, telomeric/subtelomeric DSBs and VSG switching frequency back to WT levels. Therefore, TbRAP1-regulated appropriate levels of TERRA and telomeric RNA:DNA hybrid are fundamental to subtelomere/telomere integrity. Our study revealed for the first time an important role of a long, non-coding RNA in antigenic variation and demonstrated a link between telomeric silencing and subtelomere/telomere integrity through TbRAP1-regulated telomere transcription. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Nutrient depletion from rhizosphere solution by maize grown in soil with long-term compost amendment

    USDA-ARS?s Scientific Manuscript database

    Improved understanding of rhizosphere chemistry will enhance our ability to model nutrient dynamics and on a broader scale, to develop effective management strategies for applied plant nutrients. With a controlled-climate study, we evaluated in situ changes in macro-nutrient concentrations in the rh...

  3. 40 CFR 86.1816-18 - Emission standards for heavy-duty vehicles.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... as specified in this section. (4) Measure emissions from hybrid electric vehicles (including plug-in hybrid electric vehicles) as described in 40 CFR part 1066, subpart F, except that these procedures do not apply for plug-in hybrid electric vehicles during charge-depleting operation. (b) Tier 3 exhaust...

  4. Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Derek Elswick

    This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
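
    A compact sketch of the first conversion step described in the report, written here in Python rather than MATLAB: apodize the interferogram, then Fourier transform it to an uncalibrated magnitude spectrum. The apodization window is a generic assumption, and the Mertz correction is only noted in a comment rather than implemented.

        import numpy as np

        def interferogram_to_spectrum(igram):
            # Apodize to suppress ringing from the finite scan length, then
            # FFT to an uncalibrated magnitude spectrum. The Mertz phase
            # correction in the report would additionally use the phase of a
            # short double-sided segment around the zero-path-difference point.
            w = np.hanning(len(igram))          # one common apodization window
            return np.abs(np.fft.rfft(igram * w))

        spectrum = interferogram_to_spectrum(np.random.randn(4096))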

  5. 3D Encoding of Musical Score Information and the Playback Method Used by the Cellular Phone

    NASA Astrophysics Data System (ADS)

    Kubo, Hitoshi; Sugiura, Akihiko

    Recently, 3G cellular phones that can record movies have spread as their digital camera functions have improved. 2D codes, which offer accurate readout and high operability, have likewise spread as a means of information transmission. However, the symbol grows larger and more complicated as the information stored in a 2D code increases. To address this, 3D codes were proposed, but they require special readout equipment and are specialized for augmented-reality technology, making them difficult to apply to cellular phones. We therefore propose a 3D code that can be recognized through the movie-shooting function of a cellular phone, and we use it to encode musical score information. We apply Gray code to properties of the music to encode it, and the effectiveness of the approach was verified.
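
    The binary-reflected Gray code applied to the score data has the property that consecutive values differ in exactly one bit, which keeps a single-bit readout error from shifting a symbol by more than one step. A standard encode/decode pair:

        def to_gray(n):
            # Adjacent integers map to codewords differing in one bit.
            return n ^ (n >> 1)

        def from_gray(g):
            # Invert by folding the running XOR back down.
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        assert all(from_gray(to_gray(i)) == i for i in range(256))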

  6. Determining water storage depletion within Iran by assimilating GRACE data into the W3RA hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Forootan, E.; Kuhn, M.; Awange, J.; van Dijk, A. I. J. M.; Schumacher, M.; Sharifi, M. A.

    2018-04-01

    Groundwater depletion, due to both unsustainable water use and a decrease in precipitation, has been reported in many parts of Iran. In order to analyze these changes during the recent decade, in this study, we assimilate Terrestrial Water Storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) into the World-Wide Water Resources Assessment (W3RA) model. This assimilation improves model derived water storage simulations by introducing missing trends and correcting the amplitude and phase of seasonal water storage variations. The Ensemble Square-Root Filter (EnSRF) technique is applied, which showed stable performance in propagating errors during the assimilation period (2002-2012). Our focus is on sub-surface water storage changes including groundwater and soil moisture variations within six major drainage divisions covering the whole Iran including its eastern part (East), Caspian Sea, Centre, Sarakhs, Persian Gulf and Oman Sea, and Lake Urmia. Results indicate an average of -8.9 mm/year groundwater reduction within Iran during the period 2002 to 2012. A similar decrease is also observed in soil moisture storage especially after 2005. We further apply the canonical correlation analysis (CCA) technique to relate sub-surface water storage changes to climate (e.g., precipitation) and anthropogenic (e.g., farming) impacts. Results indicate an average correlation of 0.81 between rainfall and groundwater variations and also a large impact of anthropogenic activities (mainly for irrigations) on Iran's water storage depletions.
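
    The EnSRF used above updates the ensemble mean with the full Kalman gain and the ensemble perturbations with a reduced gain, avoiding the sampling noise of perturbed observations. Below is a minimal Python sketch of a serial (one observation at a time) update with a linear observation operator; this is the generic textbook form, not the study's exact configuration.

        import numpy as np

        def ensrf_update(X, y_obs, H, r):
            # X: (n_state, n_ens) ensemble; H: (n_state,) linear observation
            # operator; r: error variance of one scalar observation.
            n_ens = X.shape[1]
            x_mean = X.mean(axis=1)
            Xp = X - x_mean[:, None]                  # ensemble perturbations
            hx = H @ X                                # observed ensemble
            hxp = hx - hx.mean()
            phb = Xp @ hxp / (n_ens - 1)              # cov(state, observation)
            hph = hxp @ hxp / (n_ens - 1)             # observation-space variance
            K = phb / (hph + r)                       # Kalman gain
            alpha = 1.0 / (1.0 + np.sqrt(r / (hph + r)))
            x_mean = x_mean + K * (y_obs - hx.mean()) # mean: full gain
            Xp = Xp - alpha * np.outer(K, hxp)        # spread: reduced gain
            return x_mean[:, None] + Xp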

  7. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  8. Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems

    DTIC Science & Technology

    2011-01-01

    reliability, e.g., Turbo Codes [2] and Low Density Parity Check ( LDPC ) codes [3]. The challenge to apply both MIMO and ECC into wireless systems is on...REPORT Fixed-point Design of theLattice-reduction-aided Iterative Detection andDecoding Receiver for Coded MIMO Systems 14. ABSTRACT 16. SECURITY...illustrates the performance of coded LR aided detectors. 1. REPORT DATE (DD-MM-YYYY) 4. TITLE AND SUBTITLE 13. SUPPLEMENTARY NOTES The views, opinions

  9. QR codes: next level of social media.

    PubMed

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR (quick response) code system was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  10. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, this paper presents the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  11. EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.

    2014-04-01

    The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.

  12. Doclet To Synthesize UML

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.

  13. Targeted quantification of low ng/mL level proteins in human serum without immunoaffinity depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Tujin; Sun, Xuefei; Gao, Yuqian

    2013-07-05

    We recently reported an antibody-free targeted protein quantification strategy, termed high-pressure, high-resolution separations with intelligent selection and multiplexing (PRISM) for achieving significantly enhanced sensitivity using selected reaction monitoring (SRM) mass spectrometry. Integrating PRISM with front-end IgY14 immunoaffinity depletion, sensitive detection of targeted proteins at 50-100 pg/mL levels in human blood plasma/serum was demonstrated. However, immunoaffinity depletion is often associated with undesired losses of target proteins of interest. Herein we report further evaluation of PRISM-SRM quantification of low-abundance serum proteins without immunoaffinity depletion and the multiplexing potential of this technique. Limits of quantification (LOQs) at low ng/mL levels with a median CV of ~12% were achieved for proteins spiked into human female serum using as little as 2 µL serum. PRISM-SRM provided up to ~1000-fold improvement in the LOQ when compared to conventional SRM measurements. Multiplexing capability of PRISM-SRM was also evaluated by two sets of serum samples with 6 and 21 target peptides spiked at the low attomole/µL levels. The results from SRM measurements for pooled or post-concatenated samples were comparable to those obtained from individual peptide fractions in terms of signal-to-noise ratios and SRM peak area ratios of light to heavy peptides. PRISM-SRM was applied to measure several ng/mL-level endogenous plasma proteins, including prostate-specific antigen, in clinical patient sera where correlation coefficients > 0.99 were observed between the results from PRISM-SRM and ELISA assays. Our results demonstrate that PRISM-SRM can be successfully used for quantification of low-abundance endogenous proteins in highly complex samples. Moderate throughput (50 samples/week) can be achieved by applying the post-concatenation or fraction multiplexing strategies. We anticipate broad applications for targeted PRISM-SRM quantification of low-abundance cellular proteins in systems biology studies as well as candidate biomarkers in biofluids.

  14. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
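
    Both post-classification coders named in the record are easy to sketch: run-length coding collapses the long runs of identical class labels that classified map rows contain, and zlib's LZ77-based compressor serves here as a rough stand-in for Lempel-Ziv coding.

        import zlib

        def run_length_encode(data):
            # Collapse runs of identical byte values into (value, count) pairs.
            runs, i = [], 0
            while i < len(data):
                j = i
                while j < len(data) and data[j] == data[i] and j - i < 255:
                    j += 1
                runs.append((data[i], j - i))
                i = j
            return runs

        row = bytes([7] * 500 + [3] * 20 + [7] * 100)   # classified map row
        print(len(run_length_encode(row)), len(zlib.compress(row)))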

  15. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
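
    In the spirit of the closing exercise, the sketch below is a minimal student-style Monte Carlo: mono-energetic particles normally incident on a 1-D slab, with analog capture and isotropic scattering. The cross sections and slab thickness are illustrative assumptions, not values from the class.

```python
# A minimal "student exercise" Monte Carlo: neutrons normally incident on a
# 1-D slab, isotropic scattering, analog capture. Cross sections are assumed.
import math, random

SIGMA_T, SIGMA_S = 1.0, 0.6   # total / scattering macroscopic XS (1/cm), assumed
THICKNESS = 5.0               # slab thickness (cm), assumed

def history(rng):
    x, mu = 0.0, 1.0                                      # position, direction cosine
    while True:
        # Sample flight path from exponential distribution (1 - u avoids log(0)).
        x += mu * (-math.log(1.0 - rng.random()) / SIGMA_T)
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if rng.random() < SIGMA_S / SIGMA_T:              # scatter vs capture
            mu = 2.0 * rng.random() - 1.0                 # isotropic in mu
        else:
            return "absorbed"

rng = random.Random(12345)
N = 100_000
tallies = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(N):
    tallies[history(rng)] += 1
print({k: v / N for k, v in tallies.items()})
```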

  16. Alternative Fuels Data Center

    Science.gov Websites

    Texas Commission on Environmental Quality (TCEQ). Exemptions apply for the following: vehicles with a idling. (Reference Texas Statutes, Health and Safety Code 382.0191; and Texas Administrative Code

  17. 26 CFR 1.6042-3 - Dividends subject to reporting.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... documentation of foreign status and definition of U.S. payor and non-U.S. payor) shall apply. The provisions of... the Internal Revenue Code (Code). (iv) Distributions or payments from sources outside the United States (as determined under the provisions of part I, subchapter N, chapter 1 of the Code and the...

  18. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to those codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
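
    A minimal sketch of stack-algorithm sequential decoding is given below, assuming a rate-1/2, constraint-length-3 convolutional code (octal generators 7, 5) and a binary symmetric channel; the code, channel parameters, and the standard BSC form of the Fano metric are illustrative choices, not those of the paper.

```python
# Sketch of stack-algorithm sequential decoding for a rate-1/2,
# constraint-length-3 convolutional code (octal generators 7, 5) over a BSC,
# ranked by the Fano metric. Illustrative, not the paper's construction.
import heapq, math

P = 0.05                            # assumed BSC crossover probability
R = 0.5                             # code rate
GOOD = math.log2(2 * (1 - P)) - R   # Fano metric increment, bit agrees
BAD  = math.log2(2 * P) - R         # Fano metric increment, bit disagrees

def encode_step(u, state):
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s2), (u, s1)   # (c1, c2), next state

def stack_decode(received, n_bits):
    """received: list of (r1, r2) channel pairs; returns decoded info bits."""
    heap = [(0.0, (), (0, 0))]               # (-metric, info bits, state)
    while heap:
        neg_m, bits, state = heapq.heappop(heap)   # best path so far
        if len(bits) == n_bits:
            return list(bits)
        for u in (0, 1):                      # extend by both input bits
            out, nxt = encode_step(u, state)
            bm = sum(GOOD if o == r else BAD
                     for o, r in zip(out, received[len(bits)]))
            heapq.heappush(heap, (neg_m - bm, bits + (u,), nxt))

# Encode a message, flip one channel bit, decode.
msg, state, chan = [1, 0, 1, 1, 0, 0], (0, 0), []
for u in msg:
    out, state = encode_step(u, state)
    chan.append(out)
chan[2] = (chan[2][0] ^ 1, chan[2][1])        # one channel error
assert stack_decode(chan, len(msg)) == msg
```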

  19. 75 FR 44184 - Aluminum tris(O-ethylphosphonate), Butylate, Chlorethoxyfos, Clethodim, et al.; Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code... contamination of food, feed, or food-contact/feed-contact surfaces. Compliance with the tolerance level... Apply to Me? You may be potentially affected by this action if you are an agricultural producer, food...

  20. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR GLOBAL CODING FOR SCANNED FORMS (UA-D-31.1)

    EPA Science Inventory

    The purpose of this SOP is to define the strategy for the Global Coding of Scanned Forms. This procedure applies to the Arizona NHEXAS project and the "Border" study. Keywords: Coding; scannable forms.

    The National Human Exposure Assessment Survey (NHEXAS) is a federal interag...

  1. Content Analysis Coding Schemes for Online Asynchronous Discussion

    ERIC Educational Resources Information Center

    Weltzer-Ward, Lisa

    2011-01-01

    Purpose: Researchers commonly utilize coding-based analysis of classroom asynchronous discussion contributions as part of studies of online learning and instruction. However, this analysis is inconsistent from study to study with over 50 coding schemes and procedures applied in the last eight years. The aim of this article is to provide a basis…

  2. Master standard data quantity food production code. Macro elements for synthesizing production labor time.

    PubMed

    Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C

    1978-06-01

    Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
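
    The time synthesis itself is straightforward arithmetic, sketched below with hypothetical macro elements and TMU values; only the TMU conversion (1 TMU = 0.00001 hour = 0.036 s, the MTM standard) is taken as given.

```python
# Illustrative synthesis of production time from macro elements, in the
# spirit of the MSD approach. Element names and TMU values are hypothetical;
# the TMU conversion (1 TMU = 0.00001 h = 0.036 s) is the MTM standard.
TMU_PER_HOUR = 100_000

macro_elements = {            # hypothetical macro elements for one entree
    "weigh_dry_ingredients": 900,
    "combine_and_mix":       2500,
    "portion_into_pans":     4200,
    "load_oven":             600,
}

total_tmu = sum(macro_elements.values())
hours_per_batch = total_tmu / TMU_PER_HOUR
print(f"{total_tmu} TMU = {hours_per_batch:.3f} labor-hours per 100 portions")
```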

  3. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

    A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  4. The ceramide-enriched trans-Golgi compartments reorganize together with other parts of the Golgi apparatus in response to ATP-depletion.

    PubMed

    Meisslitzer-Ruppitsch, Claudia; Röhrl, Clemens; Ranftler, Carmen; Neumüller, Josef; Vetterlein, Monika; Ellinger, Adolf; Pavelka, Margit

    2011-02-01

    In this study, the ceramide-enriched trans-Golgi compartments representing sites of synthesis of sphingomyelin and higher organized lipids were visualized in control and ATP-depleted hepatoma and endothelial cells using internalization of BODIPY-ceramide and the diaminobenzidine photooxidation method for combined light-electron microscopical exploration. Metabolic stress induced by lowering the cellular ATP-levels leads to reorganizations of the Golgi apparatus and the appearance of tubulo-glomerular bodies and networks. The results obtained with three different protocols, in which BODIPY-ceramide either was applied prior to, concomitantly with, or after ATP-depletion, revealed that the ceramide-enriched compartments reorganize together with other parts of the Golgi apparatus under these conditions. They were found closely associated with and integrated in the tubulo-glomerular bodies formed in response to ATP-depletion. This is in line with the changes of the staining patterns obtained with the Helix pomatia lectin and the GM130 and TGN46 immuno-reactions occurring in response to ATP-depletion and is confirmed by 3D electron tomography. The 3D reconstructions underlined the glomerular character of the reorganized Golgi apparatus and demonstrated continuities of ceramide positive and negative parts. Most interestingly, BODIPY-ceramide becomes concentrated in compartments of the tubulo-glomerular Golgi bodies, even though the reorganization took place before BODIPY-ceramide administration. This indicates maintained functionalities although the regular Golgi stack organization is abolished; the results provide novel insights into Golgi structure-function relationships, which might be relevant for cells affected by metabolic stress.

  5. Uncovering the Chemistry of Earth-like Planets

    NASA Astrophysics Data System (ADS)

    Zeng, Li; Sasselov, Dimitar; Jacobsen, Stein

    2015-08-01

    We propose to use the evidence from our solar system to understand exoplanets, and in particular, to predict their surface chemistry and thereby the possibility of life. An Earth-like planet, born from the same nebula as its host star, is composed primarily of silicate rocks and an iron-nickel metal core, and depleted in volatile content in a systematic manner. The more volatile (easier to vaporize or dissociate into gas form) an element is in an Earth-like planet, the more depleted the element is compared to its host star. After depletion, an Earth-like planet would go through the process of core formation due to heat from radioactive decay and collisions. Core formation depletes a planet’s rocky mantle of siderophile (iron-loving) elements, in addition to the volatile depletion. After that, Earth-like planets likely accrete some volatile-rich materials, called “late veneer”. The late veneer could be essential to the origins of life on Earth and Earth-like planets, as it also delivers the volatiles such as nitrogen, sulfur, carbon and water to the planet’s surface, which are crucial for life to occur. Here we build an integrative model of Earth-like planets from the bottom up. Thus the chemical compositions of Earth-like planets could be inferred from their mass-radius relations and their host stars’ elemental abundances, and the origins of volatile contents (especially water) on their surfaces could be understood, and thereby shed light on the origins of life on them. This elemental abundance model could be applied to other rocky exoplanets in exoplanet systems.

  6. Uncovering the Chemistry of Earth-like Planets

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Jacobsen, S. B.; Sasselov, D. D.

    2015-12-01

    We propose to use the evidence from our solar system to understand exoplanets, and in particular, to predict their surface chemistry and thereby the possibility of life. An Earth-like planet, born from the same nebula as its host star, is composed primarily of silicate rocks and an iron-nickel metal core, and depleted in volatile content in a systematic manner. The more volatile (easier to vaporize or dissociate into gas form) an element is in an Earth-like planet, the more depleted the element is compared to its host star. After depletion, an Earth-like planet would go through the process of core formation due to heat from radioactive decay and collisions. Core formation depletes a planet's rocky mantle of siderophile (iron-loving) elements, in addition to the volatile depletion. After that, Earth-like planets likely accrete some volatile-rich materials, called "late veneer". The late veneer could be essential to the origins of life on Earth and Earth-like planets, as it also delivers the volatiles such as nitrogen, sulfur, carbon and water to the planet's surface, which are crucial for life to occur. Here we build an integrative model of Earth-like planets from the bottom up. Thus the chemical compositions of Earth-like planets could be inferred from their mass-radius relations and their host stars' elemental abundances, and the origins of volatile contents (especially water) on their surfaces could be understood, and thereby shed light on the origins of life on them. This elemental abundance model could be applied to other rocky exoplanets in exoplanet systems.

  7. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  8. Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Advanced transverse integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approx. 30% more outer iterations per keff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD, multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.
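
    The flavor of a diffusion solve with space-dependent cross sections can be conveyed by a much simpler calculation. The sketch below is a 1-D, one-group finite-difference eigenvalue solve with a burnup-tilted absorption cross section, solved by power iteration; all material data are illustrative, and this is plain finite differencing, not the transverse-integrated nodal scheme of ILLICO-VX.

```python
# 1-D, one-group finite-difference diffusion eigenvalue solve (power
# iteration) with a spatially varying absorption cross section, as a toy
# analogue of burnup-induced heterogeneity. Data are illustrative.
import numpy as np

N, H = 100, 300.0                       # mesh cells, core width (cm)
dx = H / N
D = np.full(N, 1.2)                     # diffusion coefficient (cm), uniform
sig_a = 0.012 + 0.002 * np.linspace(0.0, 1.0, N)   # burnup-tilted absorption
nu_sig_f = np.full(N, 0.015)            # nu * fission cross section

# Loss operator A: -D d2/dx2 + sig_a with zero-flux boundaries.
# (Uniform D, so simple differencing needs no interface averaging.)
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2.0 * D[i] / dx**2 + sig_a[i]
    if i > 0:
        A[i, i - 1] = -D[i] / dx**2
    if i < N - 1:
        A[i, i + 1] = -D[i] / dx**2

phi, k = np.ones(N), 1.0
for _ in range(500):                    # fixed iteration count for simplicity
    phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
    k = k * (nu_sig_f * phi_new).sum() / (nu_sig_f * phi).sum()
    phi = phi_new / phi_new.max()       # renormalize flux each outer iteration
print(f"k_eff ~= {k:.5f}")
```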

  9. lncRNA requirements for mouse acute myeloid leukemia and normal differentiation

    PubMed Central

    Knott, Simon RV; Munera Maravilla, Ester; Jackson, Benjamin T; Wild, Sophia A; Kovacevic, Tatjana; Stork, Eva Maria; Zhou, Meng; Erard, Nicolas; Lee, Emily; Kelley, David R; Roth, Mareike; Barbosa, Inês AM; Zuber, Johannes; Rinn, John L

    2017-01-01

    A substantial fraction of the genome is transcribed in a cell-type-specific manner, producing long non-coding RNAs (lncRNAs) rather than protein-coding transcripts. Here, we systematically characterize transcriptional dynamics during hematopoiesis and in hematological malignancies. Our analysis of annotated and de novo assembled lncRNAs showed that many are regulated during differentiation and mis-regulated in disease. We assessed lncRNA function via an in vivo RNAi screen in a model of acute myeloid leukemia. This identified several lncRNAs essential for leukemia maintenance and showed that a number act by promoting leukemia stem cell signatures. Leukemia blasts showed a myeloid differentiation phenotype when these lncRNAs were depleted, and our data indicate that this effect is mediated via effects on the MYC oncogene. Bone marrow reconstitutions showed that a lncRNA expressed across all progenitors was required for the myeloid lineage, whereas the other leukemia-induced lncRNAs were dispensable in the normal setting. PMID:28875933

  10. lncRNA requirements for mouse acute myeloid leukemia and normal differentiation.

    PubMed

    Delás, M Joaquina; Sabin, Leah R; Dolzhenko, Egor; Knott, Simon Rv; Munera Maravilla, Ester; Jackson, Benjamin T; Wild, Sophia A; Kovacevic, Tatjana; Stork, Eva Maria; Zhou, Meng; Erard, Nicolas; Lee, Emily; Kelley, David R; Roth, Mareike; Barbosa, Inês Am; Zuber, Johannes; Rinn, John L; Smith, Andrew D; Hannon, Gregory J

    2017-09-06

    A substantial fraction of the genome is transcribed in a cell-type-specific manner, producing long non-coding RNAs (lncRNAs) rather than protein-coding transcripts. Here, we systematically characterize transcriptional dynamics during hematopoiesis and in hematological malignancies. Our analysis of annotated and de novo assembled lncRNAs showed that many are regulated during differentiation and mis-regulated in disease. We assessed lncRNA function via an in vivo RNAi screen in a model of acute myeloid leukemia. This identified several lncRNAs essential for leukemia maintenance and showed that a number act by promoting leukemia stem cell signatures. Leukemia blasts showed a myeloid differentiation phenotype when these lncRNAs were depleted, and our data indicate that this effect is mediated via effects on the MYC oncogene. Bone marrow reconstitutions showed that a lncRNA expressed across all progenitors was required for the myeloid lineage, whereas the other leukemia-induced lncRNAs were dispensable in the normal setting.

  11. Blanket activation and afterheat for the Compact Reversed-Field Pinch Reactor

    NASA Astrophysics Data System (ADS)

    Davidson, J. W.; Battat, M. E.

    A detailed assessment has been made of the activation and afterheat for a Compact Reversed-Field Pinch Reactor (CRFPR) blanket using a two-dimensional model that included the limiter, the vacuum ducts, and the manifolds and headers for cooling the limiter and the first and second walls. Region-averaged, multigroup fluxes and prompt gamma-ray/neutron heating rates were calculated using the two-dimensional, discrete-ordinates code TRISM. Activation and depletion calculations were performed with the code FORIG using one-group cross sections generated with the TRISM region-averaged fluxes. Afterheat calculations were performed for regions near the plasma, i.e., the limiter, first wall, etc., assuming a 10-day irradiation. Decay heats were computed for decay periods up to 100 minutes. For the activation calculations, the irradiation period was taken to be one year, and blanket activity inventories were computed for decay times to 4 x 10 years. These activities were also calculated as the toxicity-weighted biological hazard potential (BHP).
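
    The single-nuclide analogue of such an activation/afterheat calculation is compact enough to sketch. Below, a constant production rate during a 10-day irradiation is balanced against decay, and the afterheat is followed over decay periods like those quoted above; the production rate, half-life, and decay energy are illustrative stand-ins, not the report's FORIG chains.

```python
# Back-of-envelope activation/afterheat sketch for a single activation
# product (FORIG handles full chains; this is the one-nuclide analogue).
# All physical values below are illustrative assumptions.
import math

R = 1.0e12                    # production rate (atoms/s) = flux * sigma * N_target
HALF_LIFE = 5.27 * 3.156e7    # seconds (a Co-60-like half-life, assumed)
E_DECAY = 2.6 * 1.602e-13     # J per decay (~2.6 MeV, assumed)
lam = math.log(2) / HALF_LIFE

t_irr = 10 * 86400.0          # 10-day irradiation, as in the report
# Saturation buildup during irradiation: dN/dt = R - lam*N.
N0 = (R / lam) * (1.0 - math.exp(-lam * t_irr))

for t_min in (0, 10, 100):    # decay times after shutdown (minutes)
    N = N0 * math.exp(-lam * t_min * 60.0)
    print(f"t = {t_min:3d} min: activity = {lam*N:.3e} Bq, "
          f"afterheat = {lam*N*E_DECAY:.3e} W")
```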

  12. Multi-processing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    The MIMD concept is applied, through multitasking, with relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. An existing single processor algorithm is mapped without the need for developing a new algorithm. The procedure of designing a code utilizing this approach is automated with the Unix stream editor. A Multiple Processor Multiple Grid (MPMG) code is developed as a demonstration of this approach. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. This solver is applied to a generic, oblique-wing aircraft problem on a four-processor computer using one process for data management and nonparallel computations and three processes for pseudotime advance on three different grid systems.
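
    A minimal sketch of that process layout, using Python's multiprocessing module rather than the paper's C-FORTRAN-Unix interface, is shown below: one parent process plays the data-management role while three workers advance placeholder solutions on three grids.

```python
# Sketch of the described decomposition: one process manages data while
# three worker processes advance the solution on three different grids.
# The "advance" function is a trivial stand-in for the implicit flow solver.
import multiprocessing as mp

def advance_grid(grid_id, steps, out_q):
    u = float(grid_id)                  # placeholder for this grid's state
    for _ in range(steps):
        u = 0.5 * (u + grid_id)         # stand-in for one pseudotime step
    out_q.put((grid_id, u))

if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=advance_grid, args=(g, 100, q))
               for g in (1, 2, 3)]      # three grid systems
    for w in workers:
        w.start()
    # The parent plays the data-management role: gather partial results.
    results = dict(q.get() for _ in workers)
    for w in workers:
        w.join()
    print(results)
```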

  13. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    DTIC Science & Technology

    2014-06-01

    Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), Generalized time-reversed space-time block codes (GTR-STBC) 16... Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency selective fading, it is not a viable... Calderbank, "Finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels," IEEE Transactions on

  14. Coupled petrological-geodynamical modeling of a compositionally heterogeneous mantle plume

    NASA Astrophysics Data System (ADS)

    Rummel, Lisa; Kaus, Boris J. P.; White, Richard W.; Mertz, Dieter F.; Yang, Jianfeng; Baumann, Tobias S.

    2018-01-01

    Self-consistent geodynamic modeling that includes melting is challenging, as the chemistry of the source rocks continuously changes as a result of melt extraction. Here, we describe a new method to study the interaction between physical and chemical processes in an uprising heterogeneous mantle plume by combining a geodynamic code with a thermodynamic modeling approach for magma generation and evolution. We pre-computed hundreds of phase diagrams, each of them for a different chemical system. After melt is extracted, the phase diagram with the closest bulk rock chemistry to the depleted source rock is updated locally. The petrological evolution of rocks is tracked via the evolving chemical compositions of source rocks and extracted melts using twelve oxide compositional parameters. As a result, a wide variety of newly generated magmatic rocks can in principle be produced from mantle rocks with different degrees of depletion. The results show that a variable geothermal gradient, the amount of extracted melt, and the plume excess temperature affect the magma production and chemistry by influencing decompression melting and the depletion of rocks. Decompression melting is facilitated by a shallower lithosphere-asthenosphere boundary, and an increase in the amount of extracted magma is induced by a lower critical melt fraction for melt extraction and/or higher plume temperatures. Increasing the critical melt fraction activates the extraction of melts triggered by decompression at a later stage and slows down the depletion process from the metasomatized mantle. Melt compositional trends are used to determine melting-related processes by focusing on the K2O/Na2O ratio as an indicator of the rock type that has been melted. Thus, a step-like profile in K2O/Na2O might be explained by a transition between melting of metasomatized and pyrolitic mantle components, reproducible through numerical modeling of a heterogeneous asthenospheric mantle source. A potential application of the developed method is shown for the West Eifel volcanic field.
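
    The phase-diagram lookup step can be sketched compactly: given a library of pre-computed diagrams indexed by their 12-oxide bulk compositions, the depleted source rock is matched to the nearest diagram. The library contents, compositions, and Euclidean distance metric below are all illustrative assumptions.

```python
# Sketch of the lookup step described above: after melt extraction, find the
# pre-computed phase diagram whose bulk composition (12 oxide parameters) is
# closest to the depleted source rock. Compositions are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_diagrams, n_oxides = 500, 12
library = rng.uniform(0, 1, (n_diagrams, n_oxides))   # pre-computed table
library /= library.sum(axis=1, keepdims=True)         # normalize to sum to 1

def closest_diagram(bulk):
    """Index of the phase diagram nearest in oxide space (Euclidean)."""
    return int(np.argmin(np.linalg.norm(library - bulk, axis=1)))

depleted_rock = library[42] + rng.normal(0, 0.01, n_oxides)  # perturbed case
print("use phase diagram", closest_diagram(depleted_rock))
```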

  15. Nitrous oxide emissions from fertilized soil: Can we manage it?

    USDA-ARS?s Scientific Manuscript database

    Cropped fields in the upper Midwest have the potential to emit nitrous oxide (N2O) and nitric oxide (NO) gases resulting from soil transformation of nitrogen (N) fertilizers applied to crops such as corn and potatoes. Nitrous oxide is a potent greenhouse gas and also an important ozone-depleting che...

  16. Non-Destructive Testing of Semiconductors Using Surface Acoustic Wave.

    DTIC Science & Technology

    1983-12-31

    are thin-film fingers (1 μm) alternately connected to bus pads as shown in fig. 1.1b. An RF voltage applied to the transducer creates an... inversion layer sets in (the deep depletion regime). This timing arrangement is not difficult to attain, due to the long minority carrier response

  17. Domestic Work and the Wage Penalty for Motherhood in West Germany

    ERIC Educational Resources Information Center

    Kuhhirt, Michael; Ludwig, Volker

    2012-01-01

    Previous research suggests that household tasks prevent women from realizing their full earning potential by depleting their work effort and limiting their time flexibility. The present study investigated whether this relationship can explain the wage gap between mothers and nonmothers in West Germany. The empirical analysis applied fixed-effects…

  18. 77 FR 58081 - Protection of Stratospheric Ozone: Listing of Substitutes for Ozone-Depleting Substances-Fire...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ... protection options including new, improved technology for early warning and smoke detection. Thus, EPA is... require state, local, or tribal governments to change their regulations. Thus, Executive Order 13132 does... governments to change their regulations. Thus, Executive Order 13175 does not apply to this action. G...

  19. A geoscientist in the State Department

    NASA Astrophysics Data System (ADS)

    Prather, Michael J.

    2006-12-01

    It must have been in a fit of idealism, à la Jimmy Stewart, that I applied to be a Jefferson Science Fellow (JSF) at the U.S. Department of State in the summer of 2004. The flyer was appealing, offering an opportunity to become "directly involved with the State Department, applying current knowledge of science and technology in support of the development of U.S. international policy. The Jefferson Science Fellowships enable academic scientists and engineers to act as consultants to the State Department on matters of science, technology, and engineering as they affect foreign policy." My own science—relating to ozone depletion, climate change, and aviation environmental impacts—often has been at the science-policy interface. As a result, I have attended governmental and intergovernmental meetings, particularly the international assessments on climate change and ozone depletion. I had even come to know the State Department team on climate negotiations, although I had never been inside the State Department. The appeal of working on the inside of negotiations within the United Nations Framework Convention on Climate Change was strong—if only to find out what an 'interlocutor' was.

  20. Reed-Solomon error-correction as a software patch mechanism.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendley, Kevin D.

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
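
    A minimal sketch of the idea follows, using the third-party reedsolo package as a stand-in Reed-Solomon implementation (an assumption; any RS encode/decode library would serve). Parity computed over the new file is shipped as the "patch"; the receiver appends it to its old file and lets RS decoding correct the changed bytes, which works as long as the number of changed bytes stays within the code's correction capability.

```python
# Sketch of the patch idea using the third-party `reedsolo` package (an
# assumption; any Reed-Solomon library with encode/decode would do).
# Parity over the *new* file is shipped; the receiver appends it to the
# *old* file and RS decoding "corrects" the old bytes into the new ones.
from reedsolo import RSCodec

NSYM = 16                      # parity symbols -> corrects up to 8 byte changes
rsc = RSCodec(NSYM)

new = bytearray(b"def f(x):\n    return x * 2  # fixed\n")
old = bytearray(b"def f(x):\n    return x + 2  # buggy\n")  # same length here

parity = bytes(rsc.encode(new)[-NSYM:])   # ship only the parity "patch"

# Receiver side: treat old-file bytes + parity as a corrupted codeword.
# (In reedsolo >= 1.0, decode returns a tuple; element 0 is the message.)
recovered = rsc.decode(bytes(old) + parity)[0]
assert bytes(recovered) == bytes(new)
```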

  1. Simulation of groundwater conditions and streamflow depletion to evaluate water availability in a Freeport, Maine, watershed

    USGS Publications Warehouse

    Nielsen, Martha G.; Locke, Daniel B.

    2012-01-01

    In order to evaluate water availability in the State of Maine, the U.S. Geological Survey (USGS) and the Maine Geological Survey began a cooperative investigation to provide the first rigorous evaluation of watersheds deemed "at risk" because of the combination of instream flow requirements and proportionally large water withdrawals. The study area for this investigation includes the Harvey and Merrill Brook watersheds and the Freeport aquifer in the towns of Freeport, Pownal, and Yarmouth, Maine. A numerical groundwater-flow model was used to evaluate groundwater withdrawals, groundwater-surface-water interactions, and the effect of water-management practices on streamflow. The water budget illustrates the effect that groundwater withdrawals have on streamflow and the movement of water within the system. Streamflow measurements were made following standard USGS techniques, from May through September 2009 at one site in the Merrill Brook watershed and four sites in the Harvey Brook watershed. A record-extension technique was applied to estimate long-term monthly streamflows at each of the five sites. The conceptual model of the groundwater system consists of a deep, confined aquifer (the Freeport aquifer) in a buried valley that trends through the middle of the study area, covered by a discontinuous confining unit, and topped by a thin upper saturated zone that is a mixture of sandy units, till, and weathered clay. Harvey and Merrill Brooks flow southward through the study area, and receive groundwater discharge from the upper saturated zone and from the deep aquifer through previously unknown discontinuities in the confining unit. The Freeport aquifer gets most of its recharge from local seepage around the edges of the confining unit; the remainder is received as inflow from the north within the buried valley. Groundwater withdrawals from the Freeport aquifer in the study area were obtained from the local water utility and estimated for other categories. Overall, the public-supply withdrawals (105.5 million gallons per year (Mgal/yr)) were much greater than those for any other category, being almost 7 times greater than all domestic well withdrawals (15.3 Mgal/yr). Industrial withdrawals in the study area (2.0 Mgal/yr) are mostly by a company that withdraws from an aquifer at the edge of the Merrill Brook watershed. Commercial withdrawals are very small (1.0 Mgal/yr), and no irrigation or other agricultural withdrawals were identified in this study area. A three-dimensional, steady-state groundwater-flow model was developed to evaluate stream-aquifer interactions and streamflow depletion from pumping, to help refine the conceptual model, and to predict changes in streamflow resulting from changes in pumping and recharge. Groundwater levels and flow in the Freeport aquifer study area were simulated with the three-dimensional, finite-difference groundwater-flow modeling code, MODFLOW-2005. Study area hydrology was simulated with a 3-layer model, under steady-state conditions. The groundwater model was used to evaluate changes that could occur in the water budgets of three parts of the local hydrologic system (the Harvey Brook watershed, the Merrill Brook watershed, and the buried aquifer from which pumping occurs) under several different climatic and pumping scenarios.
The scenarios were (1) no pumping well withdrawals; (2) current (2009) pumping, but simulated drought conditions (20-percent reduction in recharge); (3) current (2009) recharge, but a 50-percent increase in pumping well withdrawals for public supply; and (4) drought conditions and increased pumping combined. In simulated drought situations, the overall recharge to the buried valley is about 15 percent less and the total amount of streamflow in the model area is reduced by about 19 percent. Without pumping, infiltration to the buried valley aquifer around the confining unit decreased by a small amount (0.05 million gallons per day (Mgal/d)), and discharge to the streams increased by about 8 percent (0.3 Mgal/d). A 50-percent increase in pumping resulted in a simulated decrease in streamflow discharge of about 4 percent (0.14 Mgal/d). Streamflow depletion in Harvey Brook was evaluated by use of the numerical groundwater-flow model and an analytical model. The analytical model estimated negligible depletion from Harvey Brook under current (2009) pumping conditions, whereas the numerical model estimated that flow to Harvey Brook decreased 0.38 cubic feet per second (ft3/s) because of the pumping well withdrawals. A sensitivity analysis of the analytical-model method showed that a cursory evaluation of streamflow depletion using available information may produce a very wide range of results, depending on how well the hydraulic conductivity variables and aquifer geometry of the system are known, and how well the aquifer fits the assumptions of the model. Using the analytical model to evaluate streamflow depletion with an incomplete understanding of the hydrologic system gave results that seem unlikely to reflect actual streamflow depletion in the Freeport aquifer study area. In contrast, the groundwater-flow model was a more robust method of evaluating the amount of streamflow depletion that results from withdrawals in the Freeport aquifer, and could be used to evaluate streamflow depletion in both streams. Simulations of streamflow without pumping for each measurement site were compared to the calibrated-model streamflow (with pumping), the difference in the total being streamflow depletion. Simulations without pumping resulted in a simulated increase in the steady-state flow rate of 0.38 ft3/s in Harvey Brook and 0.01 ft3/s in Merrill Brook. This translates into a streamflow-depletion amount equal to about 8.5 percent of the steady-state base flow in Harvey Brook, and an unmeasurable amount of depletion in Merrill Brook. If pumping were increased by 50 percent and recharge reduced by 20 percent, the amount of streamflow depletion in Harvey Brook could reach 1.41 ft3/s.
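
    For context, a classical analytical streamflow-depletion model of the kind contrasted with the numerical model above is the Glover-Balmer solution, in which the fraction of the pumping rate captured from the stream is erfc(sqrt(S d^2 / (4 T t))). The sketch below evaluates it with illustrative parameters, not the Freeport aquifer values.

```python
# A classical analytical streamflow-depletion estimate (Glover-Balmer type),
# of the kind the report contrasts with the numerical model. Parameter
# values are illustrative, not the Freeport aquifer values.
from math import sqrt
from scipy.special import erfc

def depletion_fraction(d, T, S, t):
    """Fraction of pumping rate supplied by stream capture at time t.
    d: well-to-stream distance (ft); T: transmissivity (ft^2/d);
    S: storativity (-); t: time since pumping began (d)."""
    return erfc(sqrt(S * d**2 / (4.0 * T * t)))

Qw = 0.5                                  # pumping rate (ft^3/s), assumed
for t in (30, 365, 3650):                 # days of continuous pumping
    frac = depletion_fraction(d=2000.0, T=5000.0, S=1e-4, t=t)
    print(f"t = {t:5d} d: depletion = {frac*Qw:.3f} ft^3/s ({frac:.1%})")
```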

  2. Serotonin depletion induces pessimistic-like behavior in a cognitive bias paradigm in pigs.

    PubMed

    Stracke, Jenny; Otten, Winfried; Tuchscherer, Armin; Puppe, Birger; Düpjan, Sandra

    2017-05-15

    Cognitive and affective processes are highly interrelated. This has implications for neuropsychiatric disorders such as major depressive disorder in humans, but also for the welfare of non-human animals. The brain serotonergic system might play a key role in mediating the relationship between cognitive functions and affective regulation. The aim of our study was to examine the influence of serotonin depletion on the affective state and cognitive processing in pigs, an important farm animal species but also a potential model species for biomedical research in humans. For this purpose, we modified a serotonin depletion model using para-chlorophenylalanine (pCPA) to decrease serotonin levels in brain areas involved in cognitive and affective processing (part 1). The consequences of serotonin depletion were then measured in two behavioral tests (part 2): the spatial judgement task (SJT), providing information about the effects of the affective state on cognitive processing, and the open field/novel object (OFNO) test, which measures behavioral reactions to novelty that are assumed to reflect affective state. In part 1, 40 pigs were treated with either pCPA or saline for six consecutive days. Serotonin levels were assessed in seven different brain regions 4, 5, 6, 11 and 13 days after the first injection. Serotonin was significantly depleted in all analyzed brain regions up to 13 days after the first application. In part 2, the pCPA model was applied to 48 animals in behavioral testing. Behavioral tests, the OFNO test and the SJT, were conducted both before and after pCPA/saline injections. While results from the OFNO tests were inconclusive, an effect of treatment as well as an effect of phase (before and after treatment) was observed in the SJT. Animals treated with pCPA showed more pessimistic-like behavior, suggesting a more negative affective state due to serotonin depletion. Thus, our results confirm that the serotonergic system is a key player in cognitive-emotional processing. Hence, the serotonin depletion model and the spatial judgement task can increase our understanding of the basic mechanisms underlying both human neuropsychiatric disorders and animal welfare. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Top and Split Gating Control of the Electrical Characteristics of a Two-dimensional Electron Gas in a LaAlO3/SrTiO3 Perovskite

    NASA Astrophysics Data System (ADS)

    Kwak, Yongsu; Song, Jonghyun; Kim, Jihwan; Kim, Jinhee

    2018-04-01

    A top gate field effect transistor was fabricated using polymethyl methacrylate (PMMA) as a gate insulator on a LaAlO3 (LAO)/SrTiO3 (STO) hetero-interface. It showed n-type behavior, and a depletion mode was observed at low temperature. The electronic properties of the 2-dimensional electron gas at the LAO/STO hetero-interface were not changed by covering the LAO with PMMA followed by the Au top gate electrode. A split gate device was also fabricated to realize a depletion mode by using a narrow constriction in the LAO/STO conducting interface. The depletion mode, as well as the superconducting critical current, could be controlled by applying a split gate voltage. Notably, the superconducting critical current tended to decrease with decreasing split gate voltage and finally became zero. These results indicate that a weak-linked Josephson junction can be constructed and destroyed by split gating. This observation opens the possibility of gate-voltage-adjustable quantum devices.

  4. Quantitative phosphoproteomics reveals new roles for the protein phosphatase PP6 in mitotic cells.

    PubMed

    Rusin, Scott F; Schlosser, Kate A; Adamo, Mark E; Kettenbach, Arminja N

    2015-10-13

    Protein phosphorylation is an important regulatory mechanism controlling mitotic progression. Protein phosphatase 6 (PP6) is an essential enzyme with conserved roles in chromosome segregation and spindle assembly from yeast to humans. We applied a baculovirus-mediated gene silencing approach to deplete HeLa cells of the catalytic subunit of PP6 (PP6c) and analyzed changes in the phosphoproteome and proteome in mitotic cells by quantitative mass spectrometry-based proteomics. We identified 408 phosphopeptides on 272 proteins that increased and 298 phosphopeptides on 220 proteins that decreased in phosphorylation upon PP6c depletion in mitotic cells. Motif analysis of the phosphorylated sites combined with bioinformatics pathway analysis revealed previously unknown PP6c-dependent regulatory pathways. Biochemical assays demonstrated that PP6c opposed casein kinase 2-dependent phosphorylation of the condensin I subunit NCAP-G, and cellular analysis showed that depletion of PP6c resulted in defects in chromosome condensation and segregation in anaphase, consistent with dysregulation of condensin I function in the absence of PP6 activity. Copyright © 2015, American Association for the Advancement of Science.

  5. Quantitative phosphoproteomics reveals new roles for the protein phosphatase PP6 in mitotic cells

    PubMed Central

    Rusin, Scott F.; Schlosser, Kate A.; Adamo, Mark E.; Kettenbach, Arminja N.

    2017-01-01

    Protein phosphorylation is an important regulatory mechanism controlling mitotic progression. Protein phosphatase 6 (PP6) is an essential enzyme with conserved roles in chromosome segregation and spindle assembly from yeast to humans. We applied a baculovirus-mediated gene silencing approach to deplete HeLa cells of the catalytic subunit of PP6 (PP6c) and analyzed changes in the phosphoproteome and proteome in mitotic cells by quantitative mass spectrometry–based proteomics. We identified 408 phosphopeptides on 272 proteins that increased and 298 phosphopeptides on 220 proteins that decreased in phosphorylation upon PP6c depletion in mitotic cells. Motif analysis of the phosphorylated sites combined with bioinformatics pathway analysis revealed previously unknown PP6c–dependent regulatory pathways. Biochemical assays demonstrated that PP6c opposed casein kinase 2–dependent phosphorylation of the condensin I subunit NCAP-G, and cellular analysis showed that depletion of PP6c resulted in defects in chromosome condensation and segregation in anaphase, consistent with dysregulation of condensin I function in the absence of PP6 activity. PMID:26462736

  6. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the degrading probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, are equally exceptional qualities presented by the code, in contrast to the Optical Orthogonal Codes (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  7. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  8. How Sustainable is Groundwater Abstraction? A Global Assessment.

    NASA Astrophysics Data System (ADS)

    de Graaf, I.; Van Beek, R.; Gleeson, T. P.; Sutanudjaja, E.; Wada, Y.; Bierkens, M. F.

    2016-12-01

    Groundwater is the world's largest accessible freshwater resource and is of critical importance for irrigation, and thus for global food security. For regions with high demands, groundwater abstractions often exceed recharge and persistent groundwater depletion occurs. The direct effects of depletion are falling groundwater levels, increased pumping costs, land subsidence, and reduced baseflows to rivers. Water demands are expected to increase further due to growing population, economic development, and climate change, posing the urgent question of how sustainable current water abstractions are worldwide and where and when these abstractions approach conceivable economic and environmental limits. In this study we estimated trends over 1960-2100 in groundwater levels, resulting from changes in demand and climate. We explored the limits of groundwater abstraction by predicting where and when groundwater levels drop so deep that groundwater becomes unattainable for abstraction (economic limit) or groundwater baseflows to rivers drop below environmental requirements (environmental limit). We used a global hydrological model coupled to a groundwater model, meaning lateral groundwater flows, river infiltration and drainage, and infiltration and capillary rise are simulated dynamically. Historical data and projections are used to prescribe water demands and climate forcing to the model. For the near future we used RCP8.5 and applied the globally driest, average, and wettest GCMs to test climate sensitivity. Results show that in general environmental limits are reached before economic limits, for example starting as early as the 1970s, compared to the 1980s for economic limits, in the upper Ganges basin. Economic limits are mostly related to regions with depletion, while environmental limits are also reached in regions where groundwater and surface water withdrawals are significant but depletion is not taking place (yet), for example in Spain and Portugal. In the near future, more regions will reach their limits, current depletion regions will expand, and new regions experiencing depletion will develop. Regionally, the increasing level of groundwater stress, economic and environmental, will be an important factor in future economic development and could lead to socio-economic tension.

  9. How Sustainable is Groundwater Abstraction? A Global Assessment.

    NASA Astrophysics Data System (ADS)

    de Graaf, I.; Van Beek, R.; Gleeson, T. P.; Sutanudjaja, E.; Wada, Y.; Bierkens, M. F.

    2017-12-01

    Groundwater is the world's largest accessible freshwater resource and is of critical importance for irrigation, and thus for global food security. For regions with high demands, groundwater abstractions often exceed recharge and persistent groundwater depletion occurs. The direct effects of depletion are falling groundwater levels, increased pumping costs, land subsidence, and reduced baseflows to rivers. Water demands are expected to increase further due to growing population, economic development, and climate change, posing the urgent question of how sustainable current water abstractions are worldwide and where and when these abstractions approach conceivable economic and environmental limits. In this study we estimated trends over 1960-2100 in groundwater levels, resulting from changes in demand and climate. We explored the limits of groundwater abstraction by predicting where and when groundwater levels drop so deep that groundwater becomes unattainable for abstraction (economic limit) or groundwater baseflows to rivers drop below environmental requirements (environmental limit). We used a global hydrological model coupled to a groundwater model, meaning lateral groundwater flows, river infiltration and drainage, and infiltration and capillary rise are simulated dynamically. Historical data and projections are used to prescribe water demands and climate forcing to the model. For the near future we used RCP8.5 and applied the globally driest, average, and wettest GCMs to test climate sensitivity. Results show that in general environmental limits are reached before economic limits, for example starting as early as the 1970s, compared to the 1980s for economic limits, in the upper Ganges basin. Economic limits are mostly related to regions with depletion, while environmental limits are also reached in regions where groundwater and surface water withdrawals are significant but depletion is not taking place (yet), for example in Spain and Portugal. In the near future, more regions will reach their limits, current depletion regions will expand, and new regions experiencing depletion will develop. Regionally, the increasing level of groundwater stress, economic and environmental, will be an important factor in future economic development and could lead to socio-economic tension.

  10. Environmental comparison of alternative treatments for sewage sludge: An Italian case study.

    PubMed

    Lombardi, Lidia; Nocita, Cristina; Bettazzi, Elena; Fibbi, Donatella; Carnevale, Ennio

    2017-11-01

    A Life Cycle Assessment (LCA) was applied to compare different alternatives for sewage sludge treatment: land spreading, composting, incineration, landfill, and wet oxidation. The LCA system boundaries include mechanical dewatering, the alternative treatment, transport, and final disposal/recovery of residues. Cases of recovered materials produced as outputs from the systems were resolved by expanding the system boundaries to include avoided primary productions. The impact assessment was calculated using the CML-IA baseline method. Results showed that incineration of sewage sludge with electricity production and solid residue recovery has the lowest impact indicator values in the categories human toxicity, fresh water aquatic ecotoxicity, acidification, and eutrophication, while it has the highest values for the categories global warming and ozone layer depletion. Land spreading has the lowest values for the categories abiotic depletion, fossil fuel depletion, global warming, ozone layer depletion, and photochemical oxidation, while it has the highest values for terrestrial ecotoxicity and eutrophication. Wet oxidation has just one of the best indicators (terrestrial ecotoxicity) and three of the worst ones (abiotic depletion, human toxicity, and fresh water aquatic ecotoxicity). The composting process shows intermediate results. Landfill has the worst performances in global warming, photochemical oxidation, and acidification. Results indicate that if the aim is to reduce the effect of the common practice of sludge land spreading on human and ecosystem toxicity, on acidification, and on eutrophication, incineration with energy recovery would clearly improve the environmental performance of those indicators, but an increase in resource depletion and global warming is unavoidable. However, these conclusions are strictly linked to the effective recovery of solid residues from incineration, as the results are shown to be very sensitive with respect to this assumption. Similarly, the quality of the wet oxidation process residues plays an important role in defining the impact of this treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
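
    Each CML-style category indicator is an inventory-weighted sum: emission masses multiplied by characterization factors and summed. The sketch below illustrates this for global warming with a hypothetical sludge-treatment inventory; the CH4 and N2O factors are standard 100-year GWP values.

```python
# Sketch of how a CML-style category indicator is aggregated: each emission
# mass is multiplied by its characterization factor and summed. The GWP100
# factors are standard IPCC-style values; the inventory is hypothetical.
gwp_factors = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}   # kg CO2-eq per kg

inventory = {  # hypothetical life-cycle emissions per tonne of dry sludge
    "CO2": 410.0,   # kg
    "CH4": 1.8,     # kg
    "N2O": 0.25,    # kg
}

gwp = sum(inventory[g] * gwp_factors[g] for g in inventory)
print(f"global warming indicator = {gwp:.1f} kg CO2-eq per tonne sludge")
```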

  11. An oscillating tragedy of the commons in replicator dynamics with game-environment feedback.

    PubMed

    Weitz, Joshua S; Eksin, Ceyhun; Paarporn, Keith; Brown, Sam P; Ratcliff, William C

    2016-11-22

    A tragedy of the commons occurs when individuals take actions to maximize their payoffs even as their combined payoff is less than the global maximum had the players coordinated. The originating example is that of overgrazing of common pasture lands. In game-theoretic treatments of this example, there is rarely consideration of how individual behavior subsequently modifies the commons and associated payoffs. Here, we generalize evolutionary game theory by proposing a class of replicator dynamics with feedback-evolving games in which environment-dependent payoffs and strategies coevolve. We initially apply our formulation to a system in which the payoffs favor unilateral defection and cooperation, given replete and depleted environments, respectively. Using this approach, we identify and characterize a class of dynamics: an oscillatory tragedy of the commons in which the system cycles between deplete and replete environmental states and cooperation and defection behavior states. We generalize the approach to consider outcomes given all possible rational choices of individual behavior in the depleted state when defection is favored in the replete state. In so doing, we find that incentivizing cooperation when others defect in the depleted state is necessary to avert the tragedy of the commons. In closing, we propose directions for the study of control and influence in games in which individual actions exert a substantive effect on the environmental state.
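
    One minimal realization of such feedback-evolving dynamics is sketched below: the cooperator fraction x follows replicator dynamics with a payoff gap that interpolates linearly between the replete (n = 1) and depleted (n = 0) environments, while the environment is enhanced by cooperators and degraded by defectors. The functional forms and parameter values are illustrative choices, not the paper's exact payoff matrices.

```python
# A hedged, minimal realization of replicator dynamics with game-environment
# feedback: cooperator fraction x and environment state n (1 = replete,
# 0 = depleted) coevolve. Functional forms and parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

eps, theta = 0.5, 2.0   # environment timescale and enhancement ratio (assumed)

def rhs(t, y):
    x, n = y
    # Defection favored when replete (n = 1), cooperation when depleted (n = 0).
    fitness_gap = n * (-1.0) + (1.0 - n) * (+1.0)        # f_C - f_D
    dx = x * (1.0 - x) * fitness_gap                     # replicator equation
    dn = eps * n * (1.0 - n) * (theta * x - (1.0 - x))   # environment feedback
    return [dx, dn]

sol = solve_ivp(rhs, (0.0, 200.0), [0.6, 0.9], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 200.0, 5)
print(np.round(sol.sol(t).T, 3))   # (x, n) samples over time
```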

  12. The Effects of Dynamical Rates on Species Coexistence in a Variable Environment: The Paradox of the Plankton Revisited.

    PubMed

    Li, Lina; Chesson, Peter

    2016-08-01

    Hutchinson's famous hypothesis for the "paradox of the plankton" has been widely accepted, but critical aspects have remained unchallenged. Hutchinson argued that environmental fluctuations would promote coexistence when the timescale for environmental change is comparable to the timescale for competitive exclusion. Using a consumer-resource model, we do find that timescales of processes are important. However, it is not the time to exclusion that must be compared with the time for environmental change but the time for resource depletion. Fast resource depletion, when resource consumption is favored for different species at different times, strongly promotes coexistence. The time for exclusion is independent of the rate of resource depletion. Therefore, the widely believed predictions of Hutchinson are misleading. Fast resource depletion, as determined by environmental conditions, ensures strong coupling of environmental processes and competition, which leads to enhancement over time of intraspecific competition relative to interspecific competition as environmental shifts favor different species at different times. This critical coupling is measured by the covariance between environment and competition. Changes in this quantity as densities change determine the stability of coexistence and provide the key to rigorous analysis, both theoretically and empirically, of coexistence in a variable environment. These ideas apply broadly to diversity maintenance in variable environments whether the issue is species diversity or genetic diversity and competition or apparent competition.

  13. Activation of TRPV1 channels inhibits mechanosensitive Piezo channel activity by depleting membrane phosphoinositides

    PubMed Central

    Borbiro, Istvan; Badheka, Doreen; Rohacs, Tibor

    2015-01-01

    Capsaicin is an activator of the heat-sensitive TRPV1 (transient receptor potential vanilloid 1) ion channels and has been used as a local analgesic. We found that activation of TRPV1 channels with capsaicin either in dorsal root ganglion neurons or in a heterologous expression system inhibited the mechanosensitive Piezo1 and Piezo2 channels by depleting phosphatidylinositol 4,5-bisphosphate [PI(4,5)P2] and its precursor PI(4)P from the plasma membrane through Ca2+-induced phospholipase Cδ (PLCδ) activation. Experiments with chemically inducible phosphoinositide phosphatases and receptor-induced activation of PLCβ indicated that inhibition of Piezo channels required depletion of both PI(4)P and PI(4,5)P2. The mechanically activated current amplitudes decreased substantially in the excised inside-out configuration, where the membrane patch containing Piezo1 channels is removed from the cell. PI(4,5)P2 and PI(4)P applied to these excised patches inhibited this decrease. Thus, we concluded that Piezo channel activity requires the presence of phosphoinositides, and the combined depletion of PI(4,5)P2 or PI(4)P reduces channel activity. In addition to revealing a role for distinct membrane lipids in mechanosensitive ion channel regulation, these data suggest that inhibition of Piezo2 channels may contribute to the analgesic effect of capsaicin. PMID:25670203

  14. Characterisation of a novel reverse-biased PPD CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using the 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size, and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.
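
    The photon transfer curve analysis mentioned above rests on a simple statistic: for shot-noise-limited flat fields, signal variance grows linearly with the mean, and the slope fixes the conversion gain. The sketch below demonstrates this on synthetic frames; the gain and illumination levels are fabricated for illustration.

```python
# Sketch of the photon transfer curve (PTC) analysis used to characterize
# pixel response: under shot-noise-limited illumination, signal variance is
# proportional to mean, and the slope gives the conversion gain. Synthetic
# data stand in for real flat-field frames.
import numpy as np

rng = np.random.default_rng(1)
gain_e_per_dn = 2.0                          # "true" gain used to fake data

means, variances = [], []
for electrons in (200, 500, 1000, 2000, 5000, 10000):
    pix = rng.poisson(electrons, 100_000) / gain_e_per_dn   # signal in DN
    means.append(pix.mean())
    variances.append(pix.var())

slope = np.polyfit(means, variances, 1)[0]   # var = mean / gain
print(f"estimated conversion gain = {1.0 / slope:.2f} e-/DN")
```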

  15. An oscillating tragedy of the commons in replicator dynamics with game-environment feedback

    PubMed Central

    Weitz, Joshua S.; Eksin, Ceyhun; Paarporn, Keith; Brown, Sam P.; Ratcliff, William C.

    2016-01-01

    A tragedy of the commons occurs when individuals take actions to maximize their payoffs even as their combined payoff is less than the global maximum had the players coordinated. The originating example is that of overgrazing of common pasture lands. In game-theoretic treatments of this example, there is rarely consideration of how individual behavior subsequently modifies the commons and associated payoffs. Here, we generalize evolutionary game theory by proposing a class of replicator dynamics with feedback-evolving games in which environment-dependent payoffs and strategies coevolve. We initially apply our formulation to a system in which the payoffs favor unilateral defection and cooperation, given replete and depleted environments, respectively. Using this approach, we identify and characterize a class of dynamics: an oscillatory tragedy of the commons in which the system cycles between deplete and replete environmental states and cooperation and defection behavior states. We generalize the approach to consider outcomes given all possible rational choices of individual behavior in the depleted state when defection is favored in the replete state. In so doing, we find that incentivizing cooperation when others defect in the depleted state is necessary to avert the tragedy of the commons. In closing, we propose directions for the study of control and influence in games in which individual actions exert a substantive effect on the environmental state. PMID:27830651
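
    A schematic version of replicator dynamics with game-environment feedback can be written in a few lines. The sketch below is a caricature of the model class described in the record; the payoff term and the feedback parameters theta and eps are illustrative assumptions, not the authors' exact equations.

      import numpy as np

      def feedback_game(x0=0.6, n0=0.9, theta=2.0, eps=0.1,
                        dt=0.01, steps=200000):
          """Replicator dynamics with environmental feedback:
          x = cooperator fraction, n = environmental state in [0, 1]."""
          x, n = x0, n0
          traj = np.empty((steps, 2))
          for k in range(steps):
              # defection pays in a replete environment (n near 1),
              # cooperation pays in a depleted one (n near 0)
              gain = 2 * n - 1                 # payoff advantage of defection
              dx = x * (1 - x) * (-gain)       # replicator equation
              dn = eps * n * (1 - n) * (theta * x - (1 - x))
              x = min(max(x + dt * dx, 0.0), 1.0)
              n = min(max(n + dt * dn, 0.0), 1.0)
              traj[k] = (x, n)
          return traj

    For suitable parameters this toy system cycles between depleted/cooperative and replete/defecting states, the oscillating tragedy of the commons described in the abstract.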

  16. Inversion of Zeeman polarization for solar magnetic field diagnostics

    NASA Astrophysics Data System (ADS)

    Derouich, M.

    2017-05-01

    The topic of magnetic field diagnostics with the Zeeman effect is currently vividly discussed. Several testable inversion codes are available to the spectropolarimetry community, and their application has allowed for a better understanding of the magnetism of the solar atmosphere. In this context, we propose an inversion technique associated with a new numerical code. The inversion procedure is promising and particularly successful for interpreting the Stokes profiles in a quick and sufficiently precise way. In our inversion, we fit a part of each Stokes profile around a target wavelength, and then determine the magnetic field as a function of wavelength, which is equivalent to obtaining the magnetic field as a function of the height of line formation. To test the performance of the new numerical code, we employed a "hare and hound" approach, comparing an exact solution (called the input) with the solution obtained by the code (called the output). The precision of the code was also checked by comparing our results to those obtained with the HAO MERLIN code. The inversion code has been applied to synthetic Stokes profiles of the Na D1 line available in the literature. We investigated the limitations in recovering the input field in the case of noisy data. As an application, we applied our inversion code to the polarization profiles of the Fe I λ6302.5 Å line observed at IRSOL in Locarno.
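
    For context, the simplest quantitative link between Stokes V and the line-of-sight field is the weak-field approximation, in which V is proportional to the wavelength derivative of Stokes I. The sketch below implements that standard estimator; it is not the inversion code described in the record, which fits profile segments around a target wavelength.

      import numpy as np

      def blos_weak_field(wl, stokes_i, stokes_v, g_eff, lambda0):
          """Line-of-sight field from the weak-field approximation:
          V(lam) = -C * g_eff * lambda0^2 * B_los * dI/dlam,
          with C = 4.6686e-13 for wavelengths in Angstrom and B in Gauss."""
          C = 4.6686e-13
          didl = np.gradient(stokes_i, wl)
          x = -C * g_eff * lambda0**2 * didl
          return np.dot(x, stokes_v) / np.dot(x, x)   # least-squares slope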

  17. Genome-wide identification and functional prediction of nitrogen-responsive intergenic and intronic long non-coding RNAs in maize (Zea mays L.).

    PubMed

    Lv, Yuanda; Liang, Zhikai; Ge, Min; Qi, Weicong; Zhang, Tifu; Lin, Feng; Peng, Zhaohua; Zhao, Han

    2016-05-11

    Nitrogen (N) is an essential and often limiting nutrient to plant growth and development. Previous studies have shown that the mRNA expressions of numerous genes are regulated by nitrogen supplies; however, little is known about the expressed non-coding elements, for example long non-coding RNAs (lncRNAs) that control the response of maize (Zea mays L.) to nitrogen. LncRNAs are a class of non-coding RNAs larger than 200 bp, which have emerged as key regulators in gene expression. In this study, we surveyed the intergenic/intronic lncRNAs in maize B73 leaves at the V7 stage under conditions of N-deficiency and N-sufficiency using ribosomal RNA depletion and ultra-deep total RNA sequencing approaches. By integration with mRNA expression profiles and physiological evaluations, 7245 lncRNAs and 637 nitrogen-responsive lncRNAs were identified that exhibited unique expression patterns. Co-expression network analysis showed that the nitrogen-responsive lncRNAs were enriched mainly in one of the three co-expressed modules. The genes in the enriched module are mainly involved in NADH dehydrogenase activity, oxidative phosphorylation and the nitrogen compounds metabolic process. We identified a large number of lncRNAs in maize and illustrated their potential regulatory roles in response to N stress. The results lay the foundation for further in-depth understanding of the molecular mechanisms of lncRNAs' role in response to nitrogen stresses.

  18. The Intolerance of Regulatory Sequence to Genetic Variation Predicts Gene Dosage Sensitivity

    PubMed Central

    Wang, Quanli; Halvorsen, Matt; Han, Yujun; Weir, William H.; Allen, Andrew S.; Goldstein, David B.

    2015-01-01

    Noncoding sequence contains pathogenic mutations. Yet, compared with mutations in protein-coding sequence, pathogenic regulatory mutations are notoriously difficult to recognize. Most fundamentally, we are not yet adept at recognizing the sequence stretches in the human genome that are most important in regulating the expression of genes. For this reason, it is difficult to apply to the regulatory regions the same kinds of analytical paradigms that are being successfully applied to identify mutations among protein-coding regions that influence risk. To determine whether dosage sensitive genes have distinct patterns among their noncoding sequence, we present two primary approaches that focus solely on a gene’s proximal noncoding regulatory sequence. The first approach is a regulatory sequence analogue of the recently introduced residual variation intolerance score (RVIS), termed noncoding RVIS, or ncRVIS. The ncRVIS compares observed and predicted levels of standing variation in the regulatory sequence of human genes. The second approach, termed ncGERP, reflects the phylogenetic conservation of a gene’s regulatory sequence using GERP++. We assess how well these two approaches correlate with four gene lists that use different ways to identify genes known or likely to cause disease through changes in expression: 1) genes that are known to cause disease through haploinsufficiency, 2) genes curated as dosage sensitive in ClinGen’s Genome Dosage Map, 3) genes judged likely to be under purifying selection for mutations that change expression levels because they are statistically depleted of loss-of-function variants in the general population, and 4) genes judged unlikely to cause disease based on the presence of copy number variants in the general population. We find that both noncoding scores are highly predictive of dosage sensitivity using any of these criteria. In a similar way to ncGERP, we assess two ensemble-based predictors of regional noncoding importance, ncCADD and ncGWAVA, and find both scores are significantly predictive of human dosage sensitive genes and appear to carry information beyond conservation, as assessed by ncGERP. These results highlight that the intolerance of noncoding sequence stretches in the human genome can provide a critical complementary tool to other genome annotation approaches to help identify the parts of the human genome increasingly likely to harbor mutations that influence risk of disease. PMID:26332131
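
    The ncRVIS construction described above is essentially a studentized-residual score from a regression of observed standing variation on expected variation. A minimal sketch of that idea follows, with hypothetical input arrays (per-gene counts of total and common functional regulatory variants); it is not the published scoring pipeline.

      import numpy as np

      def ncrvis(total_variants, common_functional_variants):
          """Studentized residuals of observed common functional variation
          in each gene's regulatory sequence against its total variation;
          strongly negative scores flag intolerant genes."""
          x = np.asarray(total_variants, dtype=float)
          y = np.asarray(common_functional_variants, dtype=float)
          X = np.column_stack([np.ones_like(x), x])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return resid / resid.std(ddof=2)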

  19. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
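
    As background, envelope solvers of this kind advance a slowly varying complex amplitude rather than the full oscillating laser field. The sketch below is a generic 1D split-step Fourier integrator for a Schrodinger-type envelope equation; it is only a schematic stand-in, not the INF&RNO solver, whose discretization and plasma response are considerably more sophisticated.

      import numpy as np

      def split_step(a0, x, dz, steps, k0=1.0, chi=0.1):
          """Advance a complex envelope a(x) for the schematic equation
          i da/dz = -(1/2 k0) d2a/dx2 - chi |a|^2 a  (toy plasma response)."""
          kx = 2 * np.pi * np.fft.fftfreq(x.size, x[1] - x[0])
          lin = np.exp(-1j * kx**2 * dz / (2 * k0))   # linear (diffraction) step
          a = a0.astype(complex)
          for _ in range(steps):
              a = np.fft.ifft(lin * np.fft.fft(a))    # apply in Fourier space
              a *= np.exp(1j * chi * np.abs(a)**2 * dz)  # nonlinear phase step
          return a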

  20. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2.

  1. RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, G.; Epiney, A. S.

    2012-07-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2. (authors)

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite moderated gas cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and the depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach - where the amount of fissile material in a set configuration is slowly altered until criticality is attained - in estimating the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that the methods within the CRPE tool that predicted the critical position consistently well also agreed with one another in their predictions of power densities and of uranium and plutonium isotopics. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
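
    The linear-fit variant described above is simple to state: evaluate k_eff at a few rod positions and solve the fitted line for k_eff = 1. A minimal sketch follows, where run_keff stands in for a full transport criticality calculation (a hypothetical callable, not part of the CRPE tool's actual interface).

      import numpy as np

      def critical_position(run_keff, positions):
          """Estimate the critical rod position from k_eff computed at a
          few trial positions; run_keff(z) represents a transport run
          (e.g. an MCNP KCODE calculation) at rod position z."""
          keff = np.array([run_keff(z) for z in positions])
          slope, intercept = np.polyfit(positions, keff, 1)
          return (1.0 - intercept) / slope   # position where fitted k_eff = 1

    The second-order variant replaces the degree-1 fit with np.polyfit(positions, keff, 2) and selects the physically meaningful root of the fitted quadratic.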

  3. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF & RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF & RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  4. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  5. Fast Flows in the Magnetotail and Energetic Particle Transport: Multiscale Coupling in the Magnetosphere

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, X.; Fok, M. C. H.; Buzulukova, N.; Perez, J. D.; Chen, L. J.

    2017-12-01

    The interaction between the Earth's inner and outer magnetospheric regions associated with the tail fast flows is calculated by coupling the Auburn 3-D global hybrid simulation code (ANGIE3D) to the Comprehensive Inner Magnetosphere/Ionosphere (CIMI) model. The global hybrid code solves fully kinetic equations governing the ions and a fluid model for electrons in the self-consistent electromagnetic field of the dayside and night side outer magnetosphere. In the integrated computation model, the hybrid simulation provides the CIMI model with field data in the CIMI 3-D domain and particle data at its boundary, and the transport in the inner magnetosphere is calculated by the CIMI model. By joining the two existing codes, effects of the solar wind on particle transport through the outer magnetosphere into the inner magnetosphere are investigated. Our simulation shows that fast flows and flux ropes are localized transients in the magnetotail plasma sheet and their overall structures have a dawn-dusk asymmetry. Strong perpendicular ion heating is found at the fast flow braking, which affects the earthward transport of entropy-depleted bubbles. We report on the impacts from the temperature anisotropy and non-Maxwellian ion distributions associated with the fast flows on the ring current and the convection electric field.

  6. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  7. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    NASA Astrophysics Data System (ADS)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    Traditional bar code scanners are constrained by the length of the data line, while the maximum range of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee was designed to meet market demands. The scan system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or incorrect readings can result from image contamination, interference, poor imaging conditions, signal noise, or unstable system voltage; we therefore put forward a method using matrix evaluation and Reed-Solomon error correction to address these problems. In order to construct the complete wireless optical bar code system and ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the circuit diagram of the wireless transmitting/receiving CC2430 module is established. By porting the embedded Linux operating system to the MCU, a practical wireless CMOS optical bar code scanner with multi-tasking support is constructed. Finally, communication performance is tested with the evaluation software SmartRF. In open space, every ZigBee node can achieve 50 m transmission with high reliability, and adding more ZigBee nodes can extend the transmission distance to several thousand meters.

  8. ORIGEN-based Nuclear Fuel Inventory Module for Fuel Cycle Assessment: Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skutnik, Steven E.

    The goal of this project, “ORIGEN-based Nuclear Fuel Depletion Module for Fuel Cycle Assessment”, is to create a physics-based reactor depletion and decay module for the Cyclus nuclear fuel cycle simulator in order to assess nuclear fuel inventories over a broad space of reactor operating conditions. The overall goal of this approach is to facilitate evaluations of nuclear fuel inventories for a broad space of scenarios, including extended used nuclear fuel storage and cascading impacts on fuel cycle options such as actinide recovery in used nuclear fuel, particularly for multiple recycle scenarios. The advantage of a physics-based approach (compared to the recipe-based approach typically employed by fuel cycle simulators) lies in its inherent flexibility; such an approach can more readily accommodate the broad space of potential isotopic vectors that may be encountered under advanced fuel cycle options. In order to develop this flexible reactor analysis capability, we are leveraging the Origen nuclear fuel depletion and decay module from SCALE to produce a standalone “depletion engine” which will serve as the kernel of a Cyclus-based reactor analysis module. The ORIGEN depletion module is a rigorously benchmarked and extensively validated tool for nuclear fuel analysis, and thus its incorporation into the Cyclus framework can bring these capabilities to bear on the problem of evaluating long-term impacts of fuel cycle option choices on relevant metrics of interest, including materials inventories and availability (for multiple recycle scenarios), long-term waste management and repository impacts, etc. Developing this Origen-based analysis capability for Cyclus requires the refinement of the Origen analysis sequence to the point where it can reasonably be compiled as a standalone sequence outside of SCALE; i.e., wherein all of the computational aspects of Origen (including reactor cross-section library processing and interpolation, input and output processing, and depletion/decay solvers) can be self-contained in a single executable sequence. Further, embedding this capability into other software environments (such as the Cyclus fuel cycle simulator) requires that Origen’s capabilities be encapsulated into a portable, self-contained library which other codes can then call directly through function calls, thereby directly accessing the solver and data processing capabilities of Origen. Additional components relevant to this work include modernization of the reactor data libraries used by Origen for conducting nuclear fuel depletion calculations. This work has included the development of new fuel assembly lattices not previously available (such as for CANDU heavy-water reactor assemblies) as well as validation of updated lattices for light-water reactors employing modern nuclear data evaluations. The CyBORG reactor analysis module as developed under this workscope is fully capable of dynamic calculation of depleted fuel compositions for all commercial U.S. reactor assembly types as well as a number of international fuel types, including MOX, VVER, MAGNOX, and PHWR CANDU fuel assemblies. In addition, the Origen-based depletion engine allows CyBORG to evaluate novel fuel assembly and reactor design types via creation of Origen reactor data libraries in SCALE.
    The establishment of this new modeling capability affords fuel cycle modelers a substantially improved ability to model dynamically changing fuel cycle and reactor conditions, including recycled fuel compositions from fuel cycle scenarios involving material recycle into thermal-spectrum systems.

  9. 17 CFR 160.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... a number or code in an encrypted form, as long as you do not provide the recipient with a means to... form of access number or access code for a consumer's credit card account, deposit account or... apply if you disclose an account number or similar form of access number or access code: (1) To your...

  10. 17 CFR 160.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... a number or code in an encrypted form, as long as you do not provide the recipient with a means to... form of access number or access code for a consumer's credit card account, deposit account or... apply if you disclose an account number or similar form of access number or access code: (1) To your...

  11. 17 CFR 160.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... a number or code in an encrypted form, as long as you do not provide the recipient with a means to... similar form of access number or access code for a consumer's credit card account, deposit account or... apply if you disclose an account number or similar form of access number or access code: (1) To your...

  12. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR GLOBAL CODING FOR SCANNED FORMS (UA-D-31.1)

    EPA Science Inventory

    The purpose of this SOP is to define the strategy for the global coding of scanned forms. This procedure applies to the Arizona NHEXAS project and the Border study. Keywords: Coding; scannable forms.

    The U.S.-Mexico Border Program is sponsored by the Environmental Health Workg...

  13. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR GLOBAL CODING USED BY NHEXAS ARIZONA (HAND ENTRY) (UA-D-5.0)

    EPA Science Inventory

    The purpose of this SOP is to define the global coding scheme to used in the working and master databases. This procedure applies to all of the databases used during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; databases.

    The National Human Exposu...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; McPherson, Brian J.; Grigg, Reid B.

    Numerical simulation is an invaluable analytical tool for scientists and engineers in making predictions about the fate of carbon dioxide injected into deep geologic formations for long-term storage. Current numerical simulators for assessing storage in deep saline formations have capabilities for modeling strongly coupled processes involving multifluid flow, heat transfer, chemistry, and rock mechanics in geologic media. Except for moderate pressure conditions, numerical simulators for deep saline formations only require the tracking of two immiscible phases and a limited number of phase components, beyond those comprising the geochemical reactive system. The requirements for numerically simulating the utilization and storage of carbon dioxide in partially depleted petroleum reservoirs are more numerous than those for deep saline formations. The minimum number of immiscible phases increases to three, the number of phase components may easily increase fourfold, and the coupled processes of heat transfer, geochemistry, and geomechanics remain. Public and scientific confidence in the ability of numerical simulators used for carbon dioxide sequestration in deep saline formations has advanced via a natural progression of the simulators being proven against benchmark problems, code comparisons, laboratory-scale experiments, pilot-scale injections, and commercial-scale injections. This paper describes a new numerical simulator for the scientific investigation of carbon dioxide utilization and storage in partially depleted petroleum reservoirs, with an emphasis on its unique features for scientific investigations. It documents the numerical simulation of the utilization of carbon dioxide for enhanced oil recovery in the western section of the Farnsworth Unit, representing an early stage in the progression of numerical simulators for carbon utilization and storage in depleted oil reservoirs.

  15. Human SNM1B is required for normal cellular response to both DNA interstrand crosslink-inducing agents and ionizing radiation.

    PubMed

    Demuth, Ilja; Digweed, Martin; Concannon, Patrick

    2004-11-11

    DNA interstrand crosslinks (ICLs) are critical lesions for the mammalian cell since they affect both DNA strands and block transcription and replication. The repair of ICLs in the mammalian cell involves components of different repair pathways such as nucleotide-excision repair and the double-strand break/homologous recombination repair pathways. However, the mechanistic details of mammalian ICL repair have not been fully delineated. We describe here the complete coding sequence and the genomic organization of hSNM1B, one of at least three human homologs of the Saccharomyces cerevisiae PSO2 gene. Depletion of hSNM1B by RNA interference rendered cells hypersensitive to ICL-inducing agents. This requirement for hSNM1B in the cellular response to ICL has been hypothesized before but never experimentally verified. In addition, siRNA knockdown of hSNM1B rendered cells sensitive to ionizing radiation, suggesting the possibility of hSNM1B involvement in homologous recombination repair of double-strand breaks arising as intermediates of ICL repair. Monoubiquitination of FANCD2, a key step in the FANC/BRCA pathway, is not affected in hSNM1B-depleted HeLa cells, indicating that hSNM1B is probably not a part of the Fanconi anemia core complex. Nonetheless, similarities in the phenotype of hSNM1B-depleted cells and cultured cells from patients suffering from Fanconi anemia make hSNM1B a candidate for one of the as yet unidentified Fanconi anemia genes not involved in monoubiquitination of FANCD2.

  16. Student perception of travel service learning experience in Morocco.

    PubMed

    Puri, Aditi; Kaddoura, Mahmoud; Dominick, Christine

    2013-08-01

    This study explores the perceptions of health profession students participating in academic service learning in Morocco with respect to adapting health care practices to cultural diversity. Authors utilized semi-structured, open-ended interviews to explore the perceptions of health profession students. Nine dental hygiene and nursing students who traveled to Morocco to provide oral and general health services were interviewed. After interviews were recorded, they were transcribed verbatim to ascertain descriptive validity and to generate inductive and deductive codes that constitute the major themes of the data analysis. Thereafter, NVIVO 8 was used to rapidly determine the frequency of applied codes. The authors compared the codes and themes to establish interpretive validity. Codes and themes were initially determined independently by co-authors and applied to the data subsequently. The authors compared the applied codes to establish intra-rater reliability. International service learning experiences led to perceptions of growth as a health care provider among students. The application of knowledge and skills learned in academic programs and service learning settings were found to help in bridging the theory-practice gap. The specific experience enabled students to gain an understanding of diverse health care and cultural practices in Morocco. Students perceived that the experience gained in international service learning can heighten awareness of diverse cultural and health care practices to foster professional growth of health professionals.

  17. Beyond a code of ethics: phenomenological ethics for everyday practice.

    PubMed

    Greenfield, Bruce; Jensen, Gail M

    2010-06-01

    Physical therapy, like all health-care professions, governs itself through a code of ethics that defines its obligations of professional behaviours. The code of ethics provides professions with a consistent and common moral language and principled guidelines for ethical actions. Yet, and as argued in this paper, professional codes of ethics have limits applied to ethical decision-making in the presence of ethical dilemmas. Part of the limitations of the codes of ethics is that there is no particular hierarchy of principles that govern in all situations. Instead, the exigencies of clinical practice, the particularities of individual patient's illness experiences and the transformative nature of chronic illnesses and disabilities often obscure the ethical concerns and issues embedded in concrete situations. Consistent with models of expert practice, and with contemporary models of patient-centred care, we advocate and describe in this paper a type of interpretative and narrative approach to moral practice and ethical decision-making based on phenomenology. The tools of phenomenology that are well defined in research are applied and examined in a case that illustrates their use in uncovering the values and ethical concerns of a patient. Based on the deconstruction of this case on a phenomenologist approach, we illustrate how such approaches for ethical understanding can help assist clinicians and educators in applying principles within the context and needs of each patient. (c) 2010 John Wiley & Sons, Ltd.

  18. Sulphur limitation and early sulphur deficiency responses in poplar: significance of gene expression, metabolites, and plant hormones

    PubMed Central

    Honsel, Anne; Kojima, Mikiko; Haas, Richard; Frank, Wolfgang; Sakakibara, Hitoshi; Herschbach, Cornelia; Rennenberg, Heinz

    2012-01-01

    The influence of sulphur (S) depletion on the expression of genes related to S metabolism, and on metabolite and plant hormone contents was analysed in young and mature leaves, fine roots, xylem sap, and phloem exudates of poplar (Populus tremula×Populus alba) with special focus on early consequences. S depletion was applied by a gradual decrease of sulphate availability. The observed changes were correlated with sulphate contents. Based on the decrease in sulphate contents, two phases of S depletion could be distinguished that were denominated as ‘S limitation’ and ‘early S deficiency’. S limitation was characterized by improved sulphate uptake (enhanced root-specific sulphate transporter PtaSULTR1;2 expression) and reduction capacities (enhanced adenosine 5′-phosphosulphate (APS) reductase expression) and by enhanced remobilization of sulphate from the vacuole (enhanced putative vacuolar sulphate transporter PtaSULTR4;2 expression). During early S deficiency, whole plant distribution of S was impacted, as indicated by increasing expression of the phloem-localized sulphate transporter PtaSULTR1;1 and by decreasing glutathione contents in fine roots, young leaves, mature leaves, and phloem exudates. Furthermore, at ‘early S deficiency’, expression of microRNA395 (miR395), which targets transcripts of PtaATPS3/4 (ATP sulphurylase) for cleavage, increased. Changes in plant hormone contents were observed at ‘early S deficiency’ only. Thus, S depletion affects S and plant hormone metabolism of poplar during ‘S limitation’ and ‘early S deficiency’ in a time series of events. Despite these consequences, the impact of S depletion on growth of poplar plants appears to be less severe than in Brassicaceae such as Arabidopsis thaliana or Brassica sp. PMID:22162873

  19. Effects of acute tryptophan depletion on central processing of CT-targeted and discriminatory touch in humans.

    PubMed

    Trotter, Paula Diane; McGlone, Francis; McKie, Shane; McFarquhar, Martyn; Elliott, Rebecca; Walker, Susannah Claire; Deakin, John Francis William

    2016-08-01

    C-tactile afferents (CTs) are slowly conducting nerve fibres, present only in hairy skin. They are optimally activated by slow, gentle stroking touch, such as those experienced during a caress. CT stimulation activates affective processing brain regions, alluding to their role in affective touch perception. We tested a theory that CT-activating touch engages the pro-social functions of serotonin, by determining whether reducing serotonin, through acute tryptophan depletion, diminishes subjective pleasantness and affective brain responses to gentle touch. A tryptophan depleting amino acid drink was administered to 16 healthy females, with a further 14 receiving a control drink. After 4 h, participants underwent an fMRI scan, during which time CT-innervated forearm skin and CT non-innervated finger skin was stroked with three brushes of differing texture, at CT-optimal force and velocity. Pleasantness ratings were obtained post scanning. The control group showed a greater response in ipsilateral orbitofrontal cortex to CT-activating forearm touch compared to touch to the finger where CTs are absent. This differential response was not present in the tryptophan depleted group. This interaction effect was significant. In addition, control participants showed a differential primary somatosensory cortex response to brush texture applied to the finger, a purely discriminatory touch response, which was not observed in the tryptophan depleted group. This interaction effect was also significant. Pleasantness ratings were similar across treatment groups. These results implicate serotonin in the differentiation between CT-activating and purely discriminatory touch responses. Such effects could contribute to some of the social abnormalities seen in psychiatric disorders associated with abnormal serotonin function. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. AAV-CRISPR/Cas9-Mediated Depletion of VEGFR2 Blocks Angiogenesis In Vitro.

    PubMed

    Wu, Wenyi; Duan, Yajian; Ma, Gaoen; Zhou, Guohong; Park-Windhol, Cindy; D'Amore, Patricia A; Lei, Hetian

    2017-12-01

    Pathologic angiogenesis is a component of many diseases, including neovascular age-related macular degeneration, proliferative diabetic retinopathy, as well as tumor growth and metastasis. The purpose of this project was to examine whether the system of adeno-associated viral (AAV)-mediated CRISPR (clustered regularly interspaced short palindromic repeats)-associated endonuclease (Cas)9 can be used to deplete expression of VEGF receptor 2 (VEGFR2) in human vascular endothelial cells in vitro and thus suppress its downstream signaling events. The dual AAV system of CRISPR/Cas9 from Streptococcus pyogenes (AAV-SpGuide and -SpCas9) was adapted to edit genomic VEGFR2 in primary human retinal microvascular endothelial cells (HRECs). In this system, the endothelial-specific promoter for intercellular adhesion molecule 2 (ICAM2) was cloned into the dual AAV vectors of SpGuide and SpCas9 for driving expression of green fluorescence protein (GFP) and SpCas9, respectively. These two AAV vectors were applied to production of recombinant AAV serotype 5 (rAAV5), which were used to infect HRECs for depletion of VEGFR2. Protein expression was determined by Western blot; and cell proliferation, migration, as well as tube formation were examined. AAV5 effectively infected vascular endothelial cells (ECs) and retinal pigment epithelial (RPE) cells; the ICAM2 promoter drove expression of GFP and SpCas9 in HRECs, but not in RPE cells. The results showed that the rAAV5-CRISPR/Cas9 depleted VEGFR2 by 80% and completely blocked VEGF-induced activation of Akt, and proliferation, migration as well as tube formation of HRECs. AAV-CRISPR/Cas9-mediated depletion of VEGFR2 is a potential therapeutic strategy for pathologic angiogenesis.

  1. Sulphur limitation and early sulphur deficiency responses in poplar: significance of gene expression, metabolites, and plant hormones.

    PubMed

    Honsel, Anne; Kojima, Mikiko; Haas, Richard; Frank, Wolfgang; Sakakibara, Hitoshi; Herschbach, Cornelia; Rennenberg, Heinz

    2012-03-01

    The influence of sulphur (S) depletion on the expression of genes related to S metabolism, and on metabolite and plant hormone contents was analysed in young and mature leaves, fine roots, xylem sap, and phloem exudates of poplar (Populus tremula×Populus alba) with special focus on early consequences. S depletion was applied by a gradual decrease of sulphate availability. The observed changes were correlated with sulphate contents. Based on the decrease in sulphate contents, two phases of S depletion could be distinguished that were denominated as 'S limitation' and 'early S deficiency'. S limitation was characterized by improved sulphate uptake (enhanced root-specific sulphate transporter PtaSULTR1;2 expression) and reduction capacities (enhanced adenosine 5'-phosphosulphate (APS) reductase expression) and by enhanced remobilization of sulphate from the vacuole (enhanced putative vacuolar sulphate transporter PtaSULTR4;2 expression). During early S deficiency, whole plant distribution of S was impacted, as indicated by increasing expression of the phloem-localized sulphate transporter PtaSULTR1;1 and by decreasing glutathione contents in fine roots, young leaves, mature leaves, and phloem exudates. Furthermore, at 'early S deficiency', expression of microRNA395 (miR395), which targets transcripts of PtaATPS3/4 (ATP sulphurylase) for cleavage, increased. Changes in plant hormone contents were observed at 'early S deficiency' only. Thus, S depletion affects S and plant hormone metabolism of poplar during 'S limitation' and 'early S deficiency' in a time series of events. Despite these consequences, the impact of S depletion on growth of poplar plants appears to be less severe than in Brassicaceae such as Arabidopsis thaliana or Brassica sp.

  2. Using hidden Markov models and observed evolution to annotate viral genomes.

    PubMed

    McCauley, Stephen; Hein, Jotun

    2006-06-01

    ssRNA (single stranded) viral genomes are generally constrained in length and utilize overlapping reading frames to maximally exploit the coding potential within the genome length restrictions. This overlapping coding phenomenon leads to complex evolutionary constraints operating on the genome. In regions which code for more than one protein, silent mutations in one reading frame generally have a protein coding effect in another. To maximize coding flexibility in all reading frames, overlapping regions are often compositionally biased towards amino acids which are 6-fold degenerate with respect to the 64 codon alphabet. Previous methodologies have used this fact in an ad hoc manner to look for overlapping genes by motif matching. In this paper, differentiated nucleotide compositional patterns in overlapping regions are incorporated into a probabilistic hidden Markov model (HMM) framework which is used to annotate ssRNA viral genomes. This work focuses on single sequence annotation and applies an HMM framework to ssRNA viral annotation. A description is given of how the HMM is parameterized while annotating within a missing-data framework. A Phylogenetic HMM (Phylo-HMM) extension, as applied to 14 aligned HIV2 sequences, is also presented. This evolutionary extension serves as an illustration of the potential of the Phylo-HMM framework for ssRNA viral genomic annotation. The single sequence annotation (SSA) procedure is applied to 14 different strains of the HIV2 virus. Further results on alternative ssRNA viral genomes are presented to illustrate more generally the performance of the method. The results of the SSA method are encouraging; however, there is still room for improvement, and since there is overwhelming evidence to indicate that comparative methods can improve coding sequence (CDS) annotation, the SSA method is extended to a Phylo-HMM to incorporate evolutionary information. The Phylo-HMM extension is applied to the same set of 14 HIV2 sequences, which are pre-aligned. The performance improvement that results from including the evolutionary information in the analysis is illustrated.
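
    The decoding step in such an annotation HMM is typically the Viterbi algorithm. Below is a compact, generic implementation over log-probabilities; this is a textbook sketch, not the authors' parameterization, which additionally handles overlapping reading frames and missing data.

      import numpy as np

      def viterbi(obs, log_start, log_trans, log_emit):
          """Most likely state path (e.g. coding vs. noncoding) for a toy
          HMM; obs is a sequence of symbol indices, all inputs are logs."""
          dp = log_start + log_emit[:, obs[0]]
          back = np.zeros((len(obs), log_start.size), dtype=int)
          for t in range(1, len(obs)):
              scores = dp[:, None] + log_trans     # previous state x next state
              back[t] = scores.argmax(axis=0)
              dp = scores.max(axis=0) + log_emit[:, obs[t]]
          path = [int(dp.argmax())]
          for t in range(len(obs) - 1, 0, -1):     # backtrack
              path.append(back[t, path[-1]])
          return path[::-1]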

  3. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
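
    Forward-mode automatic differentiation of the kind ADIFOR performs on Fortran source can be illustrated with dual numbers, which propagate a value and its derivative together. The toy Python sketch below is illustrative only; ADIFOR works by source transformation, not operator overloading.

      class Dual:
          """Minimal forward-mode AD value: carries f and df together."""
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.dot * o.val + self.val * o.dot)  # product rule
          __rmul__ = __mul__

      # d/dx of x*x + 3x at x = 2  ->  2x + 3 = 7
      x = Dual(2.0, 1.0)
      y = x * x + 3 * x
      print(y.val, y.dot)   # 10.0 7.0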

  4. Integrated system for production of neutronics and photonics calculational constants. Program SIGMA1 (Version 77-1): Doppler broaden evaluated cross sections in the Evaluated Nuclear Data File/Version B (ENDF/B) format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.

    1977-01-12

    A code, SIGMA1, has been designed to Doppler broaden evaluated cross sections in the ENDF/B format. The code can only be applied to tabulated data that vary linearly in energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code.
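
    As a rough illustration of the effect, the sketch below broadens a tabulated cross section with a Gaussian kernel whose width is the free-gas Doppler width. This is the high-energy approximation only, with assumed inputs (mass number A, temperature T_K); SIGMA1 itself evaluates the exact free-gas broadening kernel on linearly interpolable data.

      import numpy as np

      def doppler_broaden(E, sigma, T_K, A):
          """Approximate Doppler broadening of a tabulated cross section
          sigma(E) by a Gaussian kernel in energy (E in eV)."""
          kB = 8.617333e-5                             # Boltzmann, eV/K
          out = np.empty_like(sigma)
          for i, E0 in enumerate(E):
              width = np.sqrt(4.0 * E0 * kB * T_K / A)  # Doppler width (eV)
              w = np.exp(-((E - E0) / width) ** 2)
              out[i] = np.trapz(w * sigma, E) / np.trapz(w, E)
          return out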

  5. Computer model predictions of the local effects of large, solid-fuel rocket motors on stratospheric ozone. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zittel, P.F.

    1994-09-10

    The solid-fuel rocket motors of large space launch vehicles release gases and particles that may significantly affect stratospheric ozone densities along the vehicle's path. In this study, standard rocket nozzle and flowfield computer codes have been used to characterize the exhaust gases and particles through the afterburning region of the solid-fuel motors of the Titan IV launch vehicle. The models predict that a large fraction of the HCl gas exhausted by the motors is converted to Cl and Cl2 in the plume afterburning region. Estimates of the subsequent chemistry suggest that on expansion into the ambient daytime stratosphere, the highly reactive chlorine may significantly deplete ozone in a cylinder around the vehicle track that ranges from 1 to 5 km in diameter over the altitude range of 15 to 40 km. The initial ozone depletion is estimated to occur on a time scale of less than 1 hour. After the initial effects, the dominant chemistry of the problem changes, and new models are needed to follow the further expansion, or closure, of the ozone hole on a longer time scale.

  6. Summary Staging Manual 2000 - SEER

    Cancer.gov

    Access this manual of codes and coding instructions for the summary stage field for cases diagnosed 2001-2017. 2000 version applies to every anatomic site. It uses all information in the medical record. Also called General Staging, California Staging, and SEER Staging.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Kathleen; Lopez, Hugo; Cairns, Julie

    An overview of the main North American codes and standards associated with hydrogen safety sensors is provided. The distinction between a code and a standard is defined, and the relationship between standards and codes is clarified, especially for those circumstances where a standard or a certification requirement is explicitly referenced within a code. The report identifies three main types of standards commonly applied to hydrogen sensors (interface and controls standards, shock and hazard standards, and performance-based standards). The certification process and a list and description of the main standards and model codes associated with the use of hydrogen safety sensors in hydrogen infrastructure are presented.

  8. The Canadian Medical Association Code of Ethics 1868 to 1996: a primer for medical educators.

    PubMed

    Brownell, A Keith W; Brownell, Elizabeth

    2002-06-01

    The Canadian Medical Association's (CMA) Code of Ethics applies to all physicians, residents, and medical students in Canada. Learning about the code must be a part of every physician's education, and keeping current with it must be a part of every physician's continuing medical education. This article, based on a review of the 19 CMA codes of ethics issued from 1868 to 1996, shows how deeply the Code of Ethics is tied to the past, highlights those topics that have been part of every version, and demonstrates how the code changed over time. This article should assist medical educators as they develop teaching material on codes of medical ethics, and would be of interest to practising physicians.

  9. RNAcode: Robust discrimination of coding and noncoding regions in comparative sequence data

    PubMed Central

    Washietl, Stefan; Findeiß, Sven; Müller, Stephan A.; Kalkhof, Stefan; von Bergen, Martin; Hofacker, Ivo L.; Stadler, Peter F.; Goldman, Nick

    2011-01-01

    With the availability of genome-wide transcription data and massive comparative sequencing, the discrimination of coding from noncoding RNAs and the assessment of coding potential in evolutionarily conserved regions arose as a core analysis task. Here we present RNAcode, a program to detect coding regions in multiple sequence alignments that is optimized for emerging applications not covered by current protein gene-finding software. Our algorithm combines information from nucleotide substitution and gap patterns in a unified framework and also deals with real-life issues such as alignment and sequencing errors. It uses an explicit statistical model with no machine learning component and can therefore be applied “out of the box,” without any training, to data from all domains of life. We describe the RNAcode method and apply it in combination with mass spectrometry experiments to predict and confirm seven novel short peptides in Escherichia coli and to analyze the coding potential of RNAs previously annotated as “noncoding.” RNAcode is open source software and available for all major platforms at http://wash.github.com/rnacode. PMID:21357752

  10. RNAcode: robust discrimination of coding and noncoding regions in comparative sequence data.

    PubMed

    Washietl, Stefan; Findeiss, Sven; Müller, Stephan A; Kalkhof, Stefan; von Bergen, Martin; Hofacker, Ivo L; Stadler, Peter F; Goldman, Nick

    2011-04-01

    With the availability of genome-wide transcription data and massive comparative sequencing, the discrimination of coding from noncoding RNAs and the assessment of coding potential in evolutionarily conserved regions arose as a core analysis task. Here we present RNAcode, a program to detect coding regions in multiple sequence alignments that is optimized for emerging applications not covered by current protein gene-finding software. Our algorithm combines information from nucleotide substitution and gap patterns in a unified framework and also deals with real-life issues such as alignment and sequencing errors. It uses an explicit statistical model with no machine learning component and can therefore be applied "out of the box," without any training, to data from all domains of life. We describe the RNAcode method and apply it in combination with mass spectrometry experiments to predict and confirm seven novel short peptides in Escherichia coli and to analyze the coding potential of RNAs previously annotated as "noncoding." RNAcode is open source software and available for all major platforms at http://wash.github.com/rnacode.

  11. Asymptotic Analysis of Time-Dependent Neutron Transport Coupled with Isotopic Depletion and Radioactive Decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brantley, P S

    2006-09-27

    We describe an asymptotic analysis of the coupled nonlinear system of equations describing time-dependent three-dimensional monoenergetic neutron transport and isotopic depletion and radioactive decay. The classic asymptotic diffusion scaling of Larsen and Keller [1], along with a consistent small scaling of the terms describing the radioactive decay of isotopes, is applied to this coupled nonlinear system of equations in a medium of specified initial isotopic composition. The analysis demonstrates that to leading order the neutron transport equation limits to the standard time-dependent neutron diffusion equation with macroscopic cross sections whose number densities are determined by the standard system of ordinary differential equations, the so-called Bateman equations, describing the temporal evolution of the nuclide number densities.
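
    The Bateman system referred to above is the linear ODE system dN/dt = A N for the nuclide number densities. A minimal sketch of one depletion step via a dense matrix exponential follows; the production-matrix layout is an assumption for illustration, and production codes such as ORIGEN or CINDER90 use specialized solvers (e.g. CRAM or series expansions) rather than a dense expm.

      import numpy as np
      from scipy.linalg import expm

      def bateman_step(n0, lam, phi_sig, prod, dt):
          """One step of dN/dt = A N.  lam: decay constants, phi_sig:
          one-group flux*cross-section removal terms, prod[j, i]: fraction
          of removals of nuclide i producing nuclide j (zero diagonal)."""
          removal = lam + phi_sig
          A = prod * removal[None, :]            # off-diagonal production
          A[np.diag_indices_from(A)] = -removal  # total loss on the diagonal
          return expm(A * dt) @ n0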

  12. Self-adaptive multimethod optimization applied to a tailored heating forging process

    NASA Astrophysics Data System (ADS)

    Baldan, M.; Steinberg, T.; Baake, E.

    2018-05-01

    The presented paper describes an innovative self-adaptive multi-objective optimization code. The investigation aims to demonstrate the superiority of this code over NSGA-II and to apply it to an inductor design case study addressed to a “tailored” heating forging application. The choice of the frequency and the heating time is followed by the determination of the number of turns and their positions. Finally, a straightforward optimization is performed in order to minimize energy consumption using “optimal control”.

  13. Traffic Pattern Detection Using the Hough Transformation for Anomaly Detection to Improve Maritime Domain Awareness

    DTIC Science & Technology

    2013-12-01

    Programming code in the Python language used in AIS data preprocessing is contained in Appendix A. The MATLAB programming code used to apply the Hough...described in Chapter III is applied to archived AIS data in this chapter. The implementation of the method, including programming techniques used, is...is contained in the second. To provide a proof of concept for the algorithm described in Chapter III, the PYTHON programming language was used for
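
    The core of the Hough approach is an accumulator over (rho, theta) line parameters; straight shipping lanes in the AIS positions appear as peaks in the accumulator. A minimal NumPy sketch follows (illustrative only; the thesis implementation is in MATLAB, and the binning choices here are assumptions).

      import numpy as np

      def hough_lines(points, n_theta=180, n_rho=200):
          """Accumulate (x, y) track points into a (rho, theta) array;
          peaks correspond to dominant straight-line traffic patterns."""
          x, y = points[:, 0], points[:, 1]
          thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
          rho_max = np.hypot(x, y).max()
          acc = np.zeros((n_rho, n_theta), dtype=int)
          for th_idx, th in enumerate(thetas):
              rho = x * np.cos(th) + y * np.sin(th)
              idx = np.round((rho + rho_max) / (2 * rho_max)
                             * (n_rho - 1)).astype(int)
              np.add.at(acc, (idx, th_idx), 1)   # vote for each point
          return acc, thetas, rho_max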

  14. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ...resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image
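
    For reference, the dictionary-based scheme referred to as Lempel-Ziv can be illustrated with a compact LZ78 encoder; this is a textbook sketch, not the report's implementation.

      def lz78_encode(data):
          """Plain LZ78: emit (dictionary index, next symbol) pairs,
          growing the phrase dictionary as the input is scanned."""
          dictionary, out, w = {}, [], ""
          for ch in data:
              if w + ch in dictionary:
                  w += ch                       # extend the current phrase
              else:
                  out.append((dictionary.get(w, 0), ch))
                  dictionary[w + ch] = len(dictionary) + 1
                  w = ""
          if w:                                 # flush any trailing phrase
              out.append((dictionary[w], ""))
          return out

    For example, lz78_encode("abab") yields [(0, 'a'), (0, 'b'), (1, 'b')], where index 0 denotes the empty phrase.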

  15. Considerations of MCNP Monte Carlo code to be used as a radiotherapy treatment planning tool.

    PubMed

    Juste, B; Miro, R; Gallardo, S; Verdu, G; Santos, A

    2005-01-01

    The present work has simulated the photon and electron transport in a Theratron 780® (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle). This project mainly explains the different methodologies carried out to speed up calculations in order to apply this code efficiently in radiotherapy treatment planning.

  16. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    PubMed

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) seem to be one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective searching for the potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes costs of amino acid replacement regarding their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, the simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure minimize the costs about 2.7 times better than the canonical genetic code. Interestingly, the optimal codes are dominated by amino acids characterized by polarity close to its average value for all amino acids. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
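
    Position-based crossover of the kind adapted in the paper keeps a random subset of positions from one parent and fills the remainder in the other parent's relative order. A minimal sketch for permutation-style encodings follows; the genetic code models in the paper impose additional structure that this toy version ignores.

      import random

      def position_based_crossover(parent_a, parent_b, keep=0.5):
          """Keep a random subset of positions from parent_a and fill the
          remaining positions with the missing genes in parent_b's order.
          Assumes a permutation encoding (each gene appears exactly once)."""
          n = len(parent_a)
          kept = set(random.sample(range(n), int(keep * n)))
          kept_genes = {parent_a[i] for i in kept}
          fill = iter(g for g in parent_b if g not in kept_genes)
          return [parent_a[i] if i in kept else next(fill) for i in range(n)]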

  17. Hypersonic code efficiency and validation studies

    NASA Technical Reports Server (NTRS)

    Bennett, Bradford C.

    1992-01-01

    Renewed interest in hypersonic and supersonic flows spurred the development of the Compressible Navier-Stokes (CNS) code. Originally developed for external flows, CNS was modified to enable it to also be applied to internal high speed flows. In the initial phase of this study CNS was applied to both internal flow applications and fellow researchers were taught to run CNS. The second phase of this research was the development of surface grids over various aircraft configurations for the High Speed Research Program (HSRP). The complex nature of these configurations required the development of improved surface grid generation techniques. A significant portion of the grid generation effort was devoted to testing and recommending modifications to early versions of the S3D surface grid generation code.

  18. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close geometry gamma spectroscopy. It included MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated by a deterministic method using ETNA code for both p-type HPGe detectors. It showed that a good agreement for both codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The results of the radioactivity measurement with both detectors using the advanced analytical procedure were received the ''Accepted'' statuses following the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Summary Stage 2018 - SEER

    Cancer.gov

    Access this manual of codes and coding instructions for the summary stage field for cases diagnosed January 1, 2018 and forward. 2018 version applies to every site and/or histology combination, including lymphomas and leukemias. Historically, also called General Staging, California Staging, and SEER Staging.

  20. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  1. The impact of speciated VOCs on regional ozone increment derived from measurements at the UK EMEP supersites between 1999 and 2012

    NASA Astrophysics Data System (ADS)

    Malley, C. S.; Braban, C. F.; Dumitrean, P.; Cape, J. N.; Heal, M. R.

    2015-03-01

    The impact of 27 volatile organic compounds (VOC) on the regional O3 increment was investigated using measurements made at the UK EMEP supersites Harwell (1999-2001 and 2010-2012) and Auchencorth (2012). Ozone at these sites is representative of rural O3 in south-east England and northern UK, respectively. Monthly-diurnal regional O3 increment was defined as the difference between the regional and hemispheric background O3 concentrations, respectively derived from oxidant vs. NOx correlation plots, and cluster analysis of back trajectories arriving at Mace Head, Ireland. At Harwell, which had substantially greater regional ozone increments than at Auchencorth, variation in the regional O3 increment mirrored afternoon depletion of VOCs due to photochemistry (after accounting for diurnal changes in boundary layer mixing depth, and weighting VOC concentrations according to their photochemical ozone creation potential). A positive regional O3 increment occurred consistently during the summer, during which time afternoon photochemical depletion was calculated for the majority of measured VOCs, and to the greatest extent for ethene and m + p-xylene. This indicates that, of the measured VOCs, ethene and m + p-xylene emissions reduction would be most effective in reducing the regional O3 increment, but that reductions in a larger number of VOCs would be required for further improvement. The VOC diurnal photochemical depletion was linked to the sources of the VOC emissions through the integration of gridded VOC emissions estimates over 96 h air-mass back trajectories. This demonstrated that the effectiveness of VOC gridded emissions for use in measurement and modelling studies is limited by the highly aggregated nature of the 11 SNAP source sectors in which they are reported, as monthly variation in speciated VOC trajectory emissions did not reflect monthly changes in individual VOC diurnal photochemical depletion. Additionally, the major VOC emission source sectors during elevated regional O3 increment at Harwell were more narrowly defined through disaggregation of the SNAP emissions to 91 NFR codes (i.e. sectors 3D2 (domestic solvent use), 3D3 (other product use) and 2D2 (food and drink)). However, spatial variation in the contribution of NFR sectors to parent SNAP emissions could only be accounted for at the country level. Hence, the future reporting of gridded VOC emissions in source sectors more highly disaggregated than currently (e.g. to NFR codes) would facilitate a more precise identification of those VOC sources most important for mitigation of the impact of VOCs on O3 formation. In summary, this work presents a clear methodology for achieving a coherent VOC regional-O3-impact chemical climate using measurement data and explores the effect of limited emission and measurement species on the understanding of the regional VOC contribution to O3 concentrations.

  2. The impact of speciated VOCs on regional ozone increment derived from measurements at the UK EMEP supersites between 1999 and 2012

    NASA Astrophysics Data System (ADS)

    Malley, C. S.; Braban, C. F.; Dumitrean, P.; Cape, J. N.; Heal, M. R.

    2015-07-01

    The impact of 27 volatile organic compounds (VOCs) on the regional O3 increment was investigated using measurements made at the UK EMEP supersites Harwell (1999-2001 and 2010-2012) and Auchencorth (2012). Ozone at these sites is representative of rural O3 in south-east England and northern UK, respectively. The monthly-diurnal regional O3 increment was defined as the difference between the regional and hemispheric background O3 concentrations, respectively, derived from oxidant vs. NOx correlation plots, and cluster analysis of back trajectories arriving at Mace Head, Ireland. At Harwell, which had substantially greater regional O3 increments than Auchencorth, variation in the regional O3 increment mirrored afternoon depletion of anthropogenic VOCs due to photochemistry (after accounting for diurnal changes in boundary layer mixing depth, and weighting VOC concentrations according to their photochemical ozone creation potential). A positive regional O3 increment occurred consistently during the summer, during which time afternoon photochemical depletion was calculated for the majority of measured VOCs, and to the greatest extent for ethene and m+p-xylene. This indicates that, of the measured VOCs, ethene and m+p-xylene emissions reduction would be most effective in reducing the regional O3 increment but that reductions in a larger number of VOCs would be required for further improvement. The VOC diurnal photochemical depletion was linked to anthropogenic sources of the VOC emissions through the integration of gridded anthropogenic VOC emission estimates over 96 h air-mass back trajectories. This demonstrated that one factor limiting the effectiveness of VOC gridded emissions for use in measurement and modelling studies is the highly aggregated nature of the 11 SNAP (Selected Nomenclature for Air Pollution) source sectors in which they are reported, as monthly variation in speciated VOC trajectory emissions did not reflect monthly changes in individual VOC diurnal photochemical depletion. Additionally, the major VOC emission source sectors during elevated regional O3 increment at Harwell were more narrowly defined through disaggregation of the SNAP emissions to 91 NFR (Nomenclature for Reporting) codes (i.e. sectors 3D2 (domestic solvent use), 3D3 (other product use) and 2D2 (food and drink)). However, spatial variation in the contribution of NFR sectors to parent SNAP emissions could only be accounted for at the country level. Hence, the future reporting of gridded VOC emissions in source sectors more highly disaggregated than currently (e.g. to NFR codes) would facilitate a more precise identification of those VOC sources most important for mitigation of the impact of VOCs on O3 formation. In summary, this work presents a clear methodology for achieving a coherent VOC, regional-O3-impact chemical climate using measurement data and explores the effect of limited emission and measurement species on the understanding of the regional VOC contribution to O3 concentrations.

  3. SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Amit R.; Ganesan, S.; Trkov, A.

    2005-05-24

    A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been developed for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures have been applied to the SEFOR fast reactor benchmark problem (a sodium-cooled fast reactor benchmark described in the ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE computed results. The SIGACE code and the recipe are further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments as reported by Mosteller and Eisenhart. The SIGACE code, which has been tested with several FENDL/MC files, will be available, free of cost, upon request, from the Nuclear Data Section of the IAEA.

  4. Task representation in individual and joint settings

    PubMed Central

    Prinz, Wolfgang

    2015-01-01

    This paper outlines a framework for task representation and discusses applications to interference tasks in individual and joint settings. The framework is derived from the Theory of Event Coding (TEC). This theory regards task sets as transient assemblies of event codes in which stimulus and response codes interact and shape each other in particular ways. On the one hand, stimulus and response codes compete with each other within their respective subsets (horizontal interactions). On the other hand, stimulus and response codes cooperate with each other (vertical interactions). Code interactions instantiating competition and cooperation apply to two time scales: on-line performance (i.e., doing the task) and off-line implementation (i.e., setting the task). Interference arises when stimulus and response codes overlap in features that are irrelevant for stimulus identification, but relevant for response selection. To resolve this dilemma, the feature profiles of event codes may become restructured in various ways. The framework is applied to three kinds of interference paradigms. Special emphasis is given to joint settings where tasks are shared between two participants. Major conclusions derived from these applications include: (1) Response competition is the chief driver of interference. Likewise, different modes of response competition give rise to different patterns of interference; (2) The type of features in which stimulus and response codes overlap is also a crucial factor. Different types of such features likewise give rise to different patterns of interference; and (3) Task sets for joint settings conflate intraindividual conflicts between responses (what), with interindividual conflicts between responding agents (whom). Features of response codes may, therefore, not only address responses, but also responding agents (both physically and socially). PMID:26029085

  5. A Bioinformatics-Based Alternative mRNA Splicing Code that May Explain Some Disease Mutations Is Conserved in Animals.

    PubMed

    Qu, Wen; Cingolani, Pablo; Zeeberg, Barry R; Ruden, Douglas M

    2017-01-01

    Deep sequencing of cDNAs made from spliced mRNAs indicates that most coding genes in many animals and plants have pre-mRNA transcripts that are alternatively spliced. In pre-mRNAs, in addition to invariant exons that are present in almost all mature mRNA products, there are at least 6 additional types of exons, such as exons from alternative promoters or with alternative polyA sites, mutually exclusive exons, skipped exons, or exons with alternative 5' or 3' splice sites. Our bioinformatics-based hypothesis is that, in analogy to the genetic code, there is an "alternative-splicing code" in introns and flanking exon sequences that directs alternative splicing of many of the 36 types of introns. In humans, we identified 42 different consensus sequences that are each present in at least 100 human introns. 37 of the 42 top consensus sequences are significantly enriched or depleted in at least one of the 36 types of introns. We further supported our hypothesis by showing that 96 out of 96 analyzed human disease mutations that affect RNA splicing, and change alternative splicing from one class to another, can be partially explained by a mutation altering a consensus sequence from one type of intron to that of another type of intron. Some of the alternative splicing consensus sequences, and presumably their small-RNA or protein targets, are evolutionarily conserved across 50 species ranging from plants to animals. We also noticed that the introns within a gene usually share the same splicing codes, thus arguing that one sub-type of spliceosome might process all (or most) of the introns in a given gene. Our work sheds new light on a possible mechanism for generating the tremendous diversity in protein structure by alternative splicing of pre-mRNAs.
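
    As a rough illustration of the kind of enrichment counting such a hypothesis invites, the Python sketch below tallies how often candidate consensus motifs occur in introns of one class relative to all introns. The motifs, sequences, and class label are invented placeholders, not the 42 consensus sequences identified in the paper, and a real analysis would apply a proper statistical test.

      from collections import Counter

      def motif_counts(introns, motifs):
          counts = Counter()
          for seq in introns:
              for m in motifs:
                  counts[m] += seq.count(m)
          return counts

      all_introns = ["GTAAGTCTGACCTGTTTCAG", "GTGAGTTTGACTTTTTACAG"]
      skipped = [all_introns[0]]          # toy class: introns flanking skipped exons
      motifs = ["GTAAGT", "CTGAC", "TTTCAG"]

      bg, fg = motif_counts(all_introns, motifs), motif_counts(skipped, motifs)
      for m in motifs:
          # Naive per-intron frequency ratio (no significance test).
          ratio = (fg[m] / len(skipped)) / (bg[m] / len(all_introns)) if bg[m] else float("inf")
          print(m, round(ratio, 2))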

  6. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sublet, J.-Ch., E-mail: jean-christophe.sublet@ukaea.uk; Eastwood, J.W.; Morgan, J.G.

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.

  7. FISPACT-II: An Advanced Simulation System for Activation, Transmutation and Material Modelling

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.; Gilbert, M. R.; Fleming, M.; Arter, W.

    2017-01-01

    Fispact-II is a code system and library database for modelling activation-transmutation processes, depletion-burn-up, time dependent inventory and radiation damage source terms caused by nuclear reactions and decays. The Fispact-II code, written in object-style Fortran, follows the evolution of material irradiated by neutrons, alphas, gammas, protons, or deuterons, and provides a wide range of derived radiological output quantities to satisfy most needs for nuclear applications. It can be used with any ENDF-compliant group library data for nuclear reactions, particle-induced and spontaneous fission yields, and radioactive decay (including but not limited to TENDL-2015, ENDF/B-VII.1, JEFF-3.2, JENDL-4.0u, CENDL-3.1 processed into fine-group-structure files, GEFY-5.2 and UKDD-16), as well as resolved and unresolved resonance range probability tables for self-shielding corrections and updated radiological hazard indices. The code has many novel features including: extension of the energy range up to 1 GeV; additional neutron physics including self-shielding effects, temperature dependence, thin and thick target yields; pathway analysis; and sensitivity and uncertainty quantification and propagation using full covariance data. The latest ENDF libraries such as TENDL encompass thousands of target isotopes. Nuclear data libraries for Fispact-II are prepared from these using processing codes PREPRO, NJOY and CALENDF. These data include resonance parameters, cross sections with covariances, probability tables in the resonance ranges, PKA spectra, kerma, dpa, gas and radionuclide production and energy-dependent fission yields, supplemented with all 27 decay types. All such data for the five most important incident particles are provided in evaluated data tables. The Fispact-II simulation software is described in detail in this paper, together with the nuclear data libraries. The Fispact-II system also includes several utility programs for code-use optimisation, visualisation and production of secondary radiological quantities. Included in the paper are summaries of results from the suite of verification and validation reports available with the code.

  8. Cookbook Recipe to Simulate Seawater Intrusion with Standard MODFLOW

    NASA Astrophysics Data System (ADS)

    Schaars, F.; Bakker, M.

    2012-12-01

    We developed a cookbook recipe to simulate steady interface flow in multi-layer coastal aquifers with regular groundwater codes such as standard MODFLOW. The main step in the recipe is a simple transformation of the hydraulic conductivities and thicknesses of the aquifers. Standard groundwater codes may be applied to compute the head distribution in the aquifer using the transformed parameters. For example, for flow in a single unconfined aquifer, the hydraulic conductivity needs to be multiplied by 41 and the base of the aquifer needs to be set to mean sea level (for a relative seawater density of 1.025). Once the head distribution is obtained, the Ghijben-Herzberg relationship is applied to compute the depth of the interface. The recipe may be applied to quite general settings, including spatially variable aquifer properties. Any standard groundwater code may be used, as long as it can simulate unconfined flow where the transmissivity is a linear function of the head. The proposed recipe is benchmarked successfully against a number of analytic and numerical solutions.
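
    A worked example of the transformation for a single unconfined aquifer, in Python, assuming a relative seawater density of 1.025 (all numbers illustrative):

      # Recipe sketch: transform parameters, run a standard code such as
      # MODFLOW on them, then apply Ghijben-Herzberg to the computed head.
      rho_f, rho_s = 1000.0, 1025.0      # freshwater / seawater density [kg/m3]
      alpha = rho_f / (rho_s - rho_f)    # = 40 for these densities

      k = 10.0                           # true hydraulic conductivity [m/d]
      k_transformed = k * (1 + alpha)    # = 410 m/d, used in the model run
      base_elevation = 0.0               # aquifer base reset to mean sea level

      # Suppose the transformed model returns a head of 0.5 m above sea level
      # at some cell; Ghijben-Herzberg gives the interface depth below it:
      head = 0.5
      interface_depth = alpha * head     # = 20 m below mean sea level
      print(k_transformed, interface_depth)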

  9. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building a cadastral database. After analyzing the course of cadastral change, especially the parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships, corresponding to the cadastral change, is put forward, and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and got a favorable response. This verifies the feasibility and effectiveness of the coding rules to some extent.
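
    A hypothetical sketch of how such a five-part code could be composed; the digit widths chosen here are assumptions for illustration, not the rules implemented in ReGIS:

      def cadastral_code(street, block, father, child=0, grandchild=0):
          # Hypothetical field widths: 3/3/4/3/3 digits.
          return f"{street:03d}{block:03d}{father:04d}{child:03d}{grandchild:03d}"

      # A parcel split keeps the father parcel code and receives new child
      # codes, so the kinship relationship stays recoverable from the code.
      original = cadastral_code(12, 7, 1024)
      split_a = cadastral_code(12, 7, 1024, child=1)
      split_b = cadastral_code(12, 7, 1024, child=2)
      print(original, split_a, split_b)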

  10. Fate of avermectin B1a on citrus fruits. 1. Distribution and magnitude of the avermectin B1a and ¹⁴C residue on citrus fruits from a field study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maynard, M.S.; Iwata, Y.; Wislocki, P.G.

    An 8 µg/mL solution of (¹⁴C)avermectin B1a, the approximate field application rate, was applied to oranges, lemons, and grapefruit; a 10-fold higher rate was also applied to oranges. Immediately postapplication, ¹⁴C residues were 20-38 ng/g for the fruit treated at the field rate. Most of the residue was recovered in the surface solvent rinse at less than 2 weeks postapplication; however, after this time more of the residue was recovered from the rind fraction. The total recoveries of applied radioactivity were 61-90% and 33-50% at 1 and 12 weeks postapplication, respectively. The level of unextractable rind ¹⁴C residue from oranges treated at the 10× rate and harvested at 12 weeks (a worse case) was 4.9% of the applied dose (<2 ppb at the field rate). The inner pulp samples for all treatments had ¹⁴C residue levels below the detection limit of 0.4-0.8 ppb. The initial depletion half-life of avermectin B1a was <1 week, with losses occurring within 30-40 min. For the 1-12-week postapplication period, the avermectin B1a and ¹⁴C residue depletion half-lives were 20-38 and 56-98 days, respectively. Differences in the rate of dissipation of avermectin B1a due to fruit type and application rate were observed.

  11. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, which makes faraway anchors have smaller influence on the current data, and 2) flexibility, which balances between the reconstruction of the current data and locality. In this paper, we address the problem from the theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
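
    The two baseline schemes the analysis starts from can be viewed as kernel weightings of anchor points. The following schematic, with illustrative normalization, contrasts a Gaussian kernel with a heavier-tailed student-type kernel; the paper's exact formulations may differ.

      import numpy as np

      def coding_weights(x, anchors, sigma=1.0, scheme="gaussian"):
          d2 = np.sum((anchors - x) ** 2, axis=1)    # squared distances to anchors
          if scheme == "gaussian":
              w = np.exp(-d2 / sigma**2)             # locality: fast decay
          else:
              w = 1.0 / (1.0 + d2 / sigma**2)        # "student": heavier tails
          return w / w.sum()                         # normalize to sum to one

      anchors = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
      x = np.array([0.4, 0.1])
      print(coding_weights(x, anchors, scheme="gaussian"))
      print(coding_weights(x, anchors, scheme="student"))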

  12. The Exchange Data Communication System based on Centralized Database for the Meat Industry

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuichi; Taniguchi, Yoji; Terada, Shuji; Komoda, Norihisa

    We propose applying an EDI system that is based on a centralized database and supports conversion of code data to the meat industry. This system makes it possible to share exchange data on beef between enterprises, from producers to retailers, by using Web EDI technology. In order to convert codes efficiently, direct conversion of a sender's code to a receiver's code using a code map is used. The system implementing this function went into operation in September 2004. Twelve enterprises, including retailers, processing traders, and wholesalers, were using the system as of June 2005. In this system, the number of code maps, which determines the introduction cost of the code conversion function, was lower than the theoretical value and close to the case in which a standard code mediates the exchange.
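
    At its core, the code-map approach is a keyed lookup from (sender, sender code) pairs to receiver codes, instead of translation through a shared standard code. A minimal Python sketch with invented identifiers:

      code_map = {
          # (sender, sender_code) -> receiver_code; entries are illustrative
          ("wholesaler_A", "BEEF-001"): "W2-77812",
          ("wholesaler_A", "BEEF-002"): "W2-77813",
      }

      def convert(sender, sender_code):
          try:
              return code_map[(sender, sender_code)]
          except KeyError:
              raise ValueError(f"no mapping for {sender}/{sender_code}")

      print(convert("wholesaler_A", "BEEF-001"))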

  13. Evaluation of Persons of Varying Ages.

    ERIC Educational Resources Information Center

    Stolte, John F.

    1996-01-01

    Reviews two experiments that strongly support dual coding theory. Dual coding theory holds that communicating concretely (through tactile, auditory, or visual stimuli) affects evaluative thinking more strongly than communicating abstractly through words and numbers. The experiments applied this theory to the realm of age and evaluation. (MJP)

  14. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). With this purpose, we first generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure and then derive a general affine model, which describes all integer factors available in the expressions of those codes. Using this model and the known integer factors present in the mentioned expressions (they define the left-hand side of the model), we find the unknown integers in this model for each integer factor available at the same position in the fixed tiled code and replace the expressions including integer factors with those including parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, in a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in parallel tiled code implementing Nussinov's RNA folding. Experimental results, obtained on modern Intel multi-core processors, demonstrate that this code outperforms known closely related implementations when the length of the RNA strands is greater than 2500.
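
    For reference, here is a plain (untiled) Python implementation of Nussinov's maximum base-pairing recurrence, showing the affine control loops over a triangular iteration space that the tiling techniques above operate on; the scoring and minimum loop length are the common textbook choices, not necessarily the paper's exact configuration.

      def nussinov(seq, min_loop=1):
          pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):       # diagonal-by-diagonal sweep
              for i in range(n - span):
                  j = i + span
                  best = dp[i + 1][j]               # base i left unpaired
                  if (seq[i], seq[j]) in pairs:
                      best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
                  for k in range(i + 1, j):         # bifurcation point
                      best = max(best, dp[i][k] + dp[k + 1][j])
                  dp[i][j] = best
          return dp[0][n - 1]

      print(nussinov("GGGAAAUCC"))   # -> 3 base pairs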

  15. Highly Depleted Ethane and Mildly Depleted Methanol in Comet 21P/Giacobini-Zinner: Application of a New Empirical ν2 Band Model for CH3OH Near 50 K

    NASA Technical Reports Server (NTRS)

    DiSanti, M. A.; Bonev, B. P.; Villanueva, G. L.; Mumma, M. J.

    2012-01-01

    Infrared spectra of Comet 21P/Giacobini-Zinner (hereafter 21P/GZ) that simultaneously measured H2O, C2H6, and CH3OH were obtained using NIRSPEC at Keck II on UT 2005 June 03, approximately one month before perihelion. For H2O, the production rate of 3.8 × 10^28 molecules/s was consistent with that measured during other apparitions of 21P/GZ retrieved from optical, infrared, and mm-wavelength observations. The water analysis also provided values for the rotational temperature (T_rot = 55 +3/-2 K) and the abundance ratio of ortho- and para-water (3.00 +/- 0.15, implying a spin temperature exceeding 50 K). Six Q-branches in the ν7 band of C2H6 provided a production rate (5.27 +/- 0.90 × 10^25 /s) that corresponded to an abundance ratio of 0.139 +/- 0.024% relative to H2O, confirming the previously reported strong depletion of C2H6 from IR observations during the 1998 apparition, and in qualitative agreement with the depletion in C2 known from optical studies. For CH3OH, we applied our recently published ab initio model for the ν3 band to obtain a rotational temperature (48 +10/-7 K) consistent with that obtained for H2O. In addition we applied a newly developed empirical model for the CH3OH ν2 band, and obtained a production rate consistent with that obtained from the ν3 band. Combining results from both ν2 and ν3 bands provided a production rate (47.5 +/- 4.4 × 10^25 /s) that corresponded to an abundance ratio of 1.25 +/- 0.12% relative to H2O in 21P/GZ. Our study provides the first measure of primary volatile production rates for any Jupiter-family comet over multiple apparitions using high-resolution IR spectroscopy.

  16. Simulated Cytoskeletal Collapse via Tau Degradation

    PubMed Central

    Sendek, Austin; Fuller, Henry R.; Hayre, N. Robert; Singh, Rajiv R. P.; Cox, Daniel L.

    2014-01-01

    We present a coarse-grained two-dimensional mechanical model for the microtubule-tau bundles in neuronal axons in which we remove taus, as can happen in various neurodegenerative conditions such as Alzheimer's disease, tauopathies, and chronic traumatic encephalopathy. Our simplified model includes (i) taus modeled as entropic springs between microtubules, (ii) removal of taus from the bundles due to phosphorylation, and (iii) a possible depletion force between microtubules due to these dissociated phosphorylated taus. We equilibrate upon tau removal using steepest descent relaxation. In the absence of the depletion force, the transverse rigidity to radial compression of the bundles falls to zero at about 60% tau occupancy, in agreement with standard percolation theory results. However, with the attractive depletion force, spring removal leads to a first order collapse of the bundles over a wide range of tau occupancies for physiologically realizable conditions. While our simplest calculations assume a constant concentration of microtubule intercalants to mediate the depletion force, including a dependence that is linear in the detached taus yields the same collapse. Applying percolation theory to removal of taus at microtubule tips, which are likely to be the protective sites against dynamic instability, we argue that the microtubule instability can only obtain at low tau occupancy, from 0.06-0.30 depending upon the tau coordination at the microtubule tips. Hence, the collapse we discover is likely to be more robust over a wide range of tau occupancies than the dynamic instability. We suggest in vitro tests of our predicted collapse. PMID:25162587

  17. Spatial quantification of groundwater abstraction in the irrigated Indus basin.

    PubMed

    Cheema, M J M; Immerzeel, W W; Bastiaanssen, W G M

    2014-01-01

    Groundwater abstraction and depletion were assessed at a 1-km resolution in the irrigated areas of the Indus Basin using remotely sensed evapotranspiration (ET) and precipitation, a process-based hydrological model, and spatial information on canal water supplies. A calibrated Soil and Water Assessment Tool (SWAT) model was used to derive total annual irrigation applied in the irrigated areas of the basin during the year 2007. The SWAT model was parameterized by station-corrected precipitation data (R) from the Tropical Rainfall Monitoring Mission, land use, soil type, and outlet locations. The model was calibrated using a new approach based on spatially distributed ET fields derived from different satellite sensors. The calibration results were satisfactory and strong improvements were obtained in the Nash-Sutcliffe criterion (0.52 to 0.93), bias (-17.3% to -0.4%), and the Pearson correlation coefficient (0.78 to 0.93). Satellite information on R and ET was then combined with model results of surface runoff, drainage, and percolation to derive groundwater abstraction and depletion at a nominal resolution of 1 km. It was estimated that in 2007, 68 km³ (262 mm) of groundwater was abstracted in the Indus Basin while 31 km³ (121 mm) was depleted. The mean error was 41 mm/year and 62 mm/year at 50% and 70% probability of exceedance, respectively. Pakistani and Indian Punjab and Haryana were the most vulnerable areas to groundwater depletion and strong measures are required to maintain aquifer sustainability. © 2013, National Ground Water Association.

  18. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  19. Irreducible normalizer operators and thresholds for degenerate quantum codes with sublinear distances

    NASA Astrophysics Data System (ADS)

    Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.

    2015-03-01

    We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.

  20. Edge equilibrium code for tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xujing; Zakharov, Leonid E.; Drozdov, Vladimir V.

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
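
    For reference, the axisymmetric equilibrium equation solved by such codes, in its standard form from the literature (the notation below is not taken from the paper itself):

      \Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}} = -\mu_{0} R^{2}\, p'(\psi) - F(\psi)\, F'(\psi)

    where psi is the poloidal flux, p the plasma pressure, and F = R B_phi the poloidal current function.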

  1. Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, C.E.

    Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.

  2. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  3. Methodology of decreasing software complexity using ontology

    NASA Astrophysics Data System (ADS)

    Dąbrowska-Kubik, Katarzyna

    2015-09-01

    In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of the source code using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings on the software maintenance costs of web applications will be possible.

  4. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    PubMed Central

    Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
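
    A toy sketch in the spirit of this scheme: a private key seeds the randomized assignment of unique DNA words to the 256 extended-ASCII characters. Unlike HyDEn's custom quaternary Hamming codes, the 4-nucleotide words below carry no error-correction capability; everything here is illustrative only.

      import itertools, random

      def build_codebook(private_key):
          # 4**4 = 256 words, one per extended-ASCII character.
          words = ["".join(w) for w in itertools.product("ACGT", repeat=4)]
          random.Random(private_key).shuffle(words)   # key-dependent assignment
          return {chr(i): w for i, w in enumerate(words)}

      def encode(message, private_key):
          book = build_codebook(private_key)
          return "".join(book[c] for c in message)

      def decode(dna, private_key):
          inverse = {w: c for c, w in build_codebook(private_key).items()}
          return "".join(inverse[dna[i:i + 4]] for i in range(0, len(dna), 4))

      ciphertext = encode("hi", private_key=42)
      assert decode(ciphertext, private_key=42) == "hi"
      print(ciphertext)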

  5. Space shuttle main engine numerical modeling code modifications and analysis

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.

    1988-01-01

    The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).

  6. Integrated system for production of neutronics and photonics calculational constants. Volume 17, Part B, Rev. 1. Program SIGMA 1 (Version 78-1): Doppler broadened evaluated cross sections in the evaluated nuclear data file/Version B (ENDF/B) format. [For CDC-7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.

    1978-07-04

    The code SIGMA1 Doppler broadens evaluated cross sections in the ENDF/B format. The code can be applied only to data that vary linearly in energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code. 6 figures, 2 tables.
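
    For context, the free-gas broadening integral that SIGMA1-type codes evaluate exactly over lin-lin tabulated data is usually written as follows (standard form from the literature, not quoted from this report):

      \bar{\sigma}(v,T) = \frac{1}{v^{2}\,\beta\sqrt{\pi}} \int_{0}^{\infty} v'^{2}\,\sigma(v',0) \left[ e^{-(v-v')^{2}/\beta^{2}} - e^{-(v+v')^{2}/\beta^{2}} \right] dv', \qquad \beta = \sqrt{2kT/M},

    where v is the neutron speed and M the target mass; broadening between two nonzero temperatures uses the temperature difference in beta.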

  7. Development of an activated carbon-based electrode for the capture and rapid electrolytic reductive debromination of methyl bromide from post-harvest fumigations

    USDA-ARS?s Scientific Manuscript database

    Due to concerns surrounding its ozone depletion potential, there is a need for technologies to capture and destroy methyl bromide (CH3Br) emissions from post-harvest fumigations applied to control agricultural pests. Previously we described a system in which CH3Br fumes vented from fumigation chambe...

  8. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  9. Current and anticipated uses of thermal-hydraulic codes in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  10. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models with a particular emphasis on the hadronic and nuclear sector.

  11. Crop rotations in the sea: Increasing returns and reducing risk of collapse in sea cucumber fisheries

    PubMed Central

    Skewes, Timothy; Murphy, Nicole; Pascual, Ricardo; Fischer, Mibu

    2015-01-01

    Rotational harvesting is one of the oldest management strategies applied to terrestrial and marine natural resources, with crop rotations dating back to the time of the Roman Empire. The efficacy of this strategy for sessile marine species is of considerable interest given that these resources are vital to underpin food security and maintain the social and economic wellbeing of small-scale and commercial fishers globally. We modeled the rotational zone strategy applied to the multispecies sea cucumber fishery in Australia’s Great Barrier Reef Marine Park and show a substantial reduction in the risk of localized depletion, higher long-term yields, and improved economic performance. We evaluated the performance of rotation cycles of different length and show an improvement in biological and economic performance with increasing time between harvests up to 6 y. As sea cucumber fisheries throughout the world succumb to overexploitation driven by rising demand, there has been an increasing demand for robust assessments of fishery sustainability and a need to address local depletion concerns. Our results provide motivation for increased use of relatively low-information, low-cost, comanagement rotational harvest approaches in coastal and reef systems globally. PMID:25964357

  12. Overview of the Capstone Depleted Uranium Study of Aerosols from Impact with Armored Vehicles: Test Setup and Aerosol Generation, Characterization, and Application in Assessing Dose and Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, MaryAnn; Guilmette, Raymond A.

    2009-03-01

    The Capstone Depleted Uranium (DU) Aerosol Characterization and Risk Assessment Study was conducted to generate data about DU aerosols generated during the perforation of armored combat vehicles with large-caliber DU penetrators, and to apply the data in assessments of human health risks to personnel exposed to these aerosols, primarily through inhalation, during the 1991 Gulf War or in future military operations. The Capstone study consisted of two components: 1) generating, sampling and characterizing DU aerosols by firing at and perforating combat vehicles and 2) applying the source-term quantities and characteristics of the aerosols to the evaluation of doses and risks. This paper reviews the background of the study including the bases for the study, previous reviews of DU particles and health assessments from DU used by the U.S. military, the objectives of the study components, the participants and oversight teams, and the types of exposures it was intended to evaluate. It then discusses exposure scenarios used in the dose and risk assessment and provides an overview of how the field tests and dose and risk assessments were conducted.

  13. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics.

    PubMed

    Strozzi, D J; Bailey, D S; Michel, P; Divol, L; Sepke, S M; Kerbel, G D; Thomas, C A; Ralph, J E; Moody, J D; Schneider, M B

    2017-01-13

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI (specifically, stimulated Raman scatter and crossed-beam energy transfer, CBET) mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. This model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  14. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strozzi, D. J.; Bailey, D. S.; Michel, P.

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated in this work via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI (specifically, stimulated Raman scatter and crossed-beam energy transfer, CBET) mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. In conclusion, this model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  15. A collision probability analysis of the double-heterogeneity problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hebert, A.

    1993-10-01

    A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.

  16. Dynamic quantitative photothermal monitoring of cell death of individual human red blood cells upon glucose depletion

    NASA Astrophysics Data System (ADS)

    Vasudevan, Srivathsan; Chen, George Chung Kit; Andika, Marta; Agarwal, Shuchi; Chen, Peng; Olivo, Malini

    2010-09-01

    Red blood cells (RBCs) have been found to undergo "programmed cell death," or eryptosis, and understanding this process can provide more information about apoptosis of nucleated cells. Photothermal (PT) response, a label-free, noninvasive photothermal technique, is proposed as a tool to monitor the cell death process of living human RBCs upon glucose depletion. Since the physiological status of the dying cells is highly sensitive to photothermal parameters (e.g., thermal diffusivity, absorption, etc.), we applied the linear PT response to continuously monitor the death mechanism of RBCs when depleted of glucose. The kinetics of the assay, in which the cell's PT response transforms from the linear to the nonlinear regime, are reported. In addition, quantitative monitoring was performed by extracting the relevant photothermal parameters from the PT response. A twofold increase in thermal diffusivity and a size reduction were found in the linear PT response during cell death. Our results reveal that photothermal parameters change earlier than phosphatidylserine externalization (used for fluorescent studies), allowing us to detect the initial stage of eryptosis in a quantitative manner. Hence, the proposed tool, in addition to detecting eryptosis earlier than fluorescence, could also reveal the physiological status of the cells through quantitative photothermal parameter extraction.

  17. Soft repulsive mixtures under gravity: Brazil-nut effect, depletion bubbles, boundary layering, nonequilibrium shaking

    NASA Astrophysics Data System (ADS)

    Kruppa, Tobias; Neuhaus, Tim; Messina, René; Löwen, Hartmut

    2012-04-01

    A binary mixture of particles interacting via long-ranged repulsive forces is studied in gravity by computer simulation and theory. The more repulsive A-particles create a depletion zone of less repulsive B-particles around them reminiscent of a bubble. Applying Archimedes' principle effectively to this bubble, an A-particle can be lifted in a fluid background of B-particles. This "depletion bubble" mechanism explains and predicts a brazil-nut effect where the heavier A-particles float on top of the lighter B-particles. It also implies an effective attraction of an A-particle towards a hard container bottom wall which leads to boundary layering of A-particles. Additionally, we have studied a periodic inversion of gravity causing perpetuous mutual penetration of the mixture in a slit geometry. In this nonequilibrium case of time-dependent gravity, the boundary layering persists. Our results are based on computer simulations and density functional theory of a two-dimensional binary mixture of colloidal repulsive dipoles. The predicted effects also occur for other long-ranged repulsive interactions and in three spatial dimensions. They are therefore verifiable in settling experiments on dipolar or charged colloidal mixtures as well as in charged granulates and dusty plasmas.

  18. Soft repulsive mixtures under gravity: brazil-nut effect, depletion bubbles, boundary layering, nonequilibrium shaking.

    PubMed

    Kruppa, Tobias; Neuhaus, Tim; Messina, René; Löwen, Hartmut

    2012-04-07

    A binary mixture of particles interacting via long-ranged repulsive forces is studied in gravity by computer simulation and theory. The more repulsive A-particles create a depletion zone of less repulsive B-particles around them reminiscent of a bubble. Applying Archimedes' principle effectively to this bubble, an A-particle can be lifted in a fluid background of B-particles. This "depletion bubble" mechanism explains and predicts a brazil-nut effect where the heavier A-particles float on top of the lighter B-particles. It also implies an effective attraction of an A-particle towards a hard container bottom wall which leads to boundary layering of A-particles. Additionally, we have studied a periodic inversion of gravity causing perpetuous mutual penetration of the mixture in a slit geometry. In this nonequilibrium case of time-dependent gravity, the boundary layering persists. Our results are based on computer simulations and density functional theory of a two-dimensional binary mixture of colloidal repulsive dipoles. The predicted effects also occur for other long-ranged repulsive interactions and in three spatial dimensions. They are therefore verifiable in settling experiments on dipolar or charged colloidal mixtures as well as in charged granulates and dusty plasmas.

  19. Effect of gamma radiation on growth and survival of common seed-borne fungi in India

    NASA Astrophysics Data System (ADS)

    Maity, J. P.; Chakraborty, A.; Chanda, S.; Santra, S. C.

    2008-07-01

    The present work describes radiation-induced effects on major seeds, such as Oryza sativa Cv-2233, Oryza sativa Cv-Shankar, and Cicer arietinum Cv-local, and on seed-borne fungi, such as Alternaria sp., Aspergillus sp., Trichoderma sp. and Curvularia sp. A 60Co gamma source emitting gamma rays at 1173 and 1332 keV was used for irradiation at 25 °C. Gamma irradiation doses up to 3 kGy (0.12 kGy/h) were applied to expose the seeds and fungal spores. Significant depletion of the fungal population was noted with irradiation at 1-2 kGy, whereas the germination potential of the treated grain did not change significantly. However, significant differential radiation responses in delayed seed germination, colony formation of the fungal spores, and depletion of their growth were noticed in a dose-dependent manner. The depletion of fungal viability (germination) was noted within the irradiation dose range of 1-2 kGy for Alternaria sp. and Aspergillus sp., and within 0.5-1 kGy for Trichoderma sp. and Curvularia sp. However, complete inhibition of all the selected fungi was observed above 2.5 kGy.

  20. Electric-field distribution in Au-semi-insulating GaAs contact investigated by positron-lifetime technique

    NASA Astrophysics Data System (ADS)

    Ling, C. C.; Shek, Y. F.; Huang, A. P.; Fung, S.; Beling, C. D.

    1999-02-01

    Positron-lifetime spectroscopy has been used to investigate the electric-field distribution occurring at the Au-semi-insulating GaAs interface. Positrons implanted from a 22Na source and drifted back to the interface are detected through their characteristic lifetime at interface traps. The relative intensity of this fraction of interface-trapped positrons reveals that the field strength in the depletion region saturates at applied biases above 50 V, an observation that cannot be reconciled with a simple depletion approximation model. The data are, however, shown to be fully consistent with recent direct electric-field measurements and the theoretical model proposed by McGregor et al. [J. Appl. Phys. 75, 7910 (1994)] of an enhanced EL2+ electron-capture cross section above a critical electric field that causes a dramatic reduction of the depletion region's net charge density. Two theoretically derived electric field profiles, together with an experimentally based profile, are used to estimate a positron mobility of ~95+/-35 cm2 V-1 s-1 under the saturation field. This value is higher than previous experiments would suggest, and reasons for this effect are discussed.

  1. Estimating initial contaminant mass based on fitting mass-depletion functions to contaminant mass discharge data: Testing method efficacy with SVE operations data

    NASA Astrophysics Data System (ADS)

    Mainhagu, J.; Brusseau, M. L.

    2016-09-01

    The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
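
    A minimal sketch of the fitting idea described above, assuming synthetic discharge data and the exponential mass-depletion model (function names and numbers are illustrative, not the study's data): for C(t) = C0 exp(-kt), the initial mass follows as M0 = C0/k, the integral of the discharge curve.

      # A minimal sketch (not the authors' code) of estimating initial contaminant
      # mass by fitting an exponential mass-depletion function to contaminant mass
      # discharge (CMD) data. The synthetic data below are illustrative only.
      import numpy as np
      from scipy.optimize import curve_fit

      def exp_cmd(t, c0, k):
          # Exponential mass-depletion model: discharge decays as C(t) = C0*exp(-k*t)
          return c0 * np.exp(-k * t)

      # Synthetic "early-time" CMD record: time in days, discharge in kg/day
      t = np.linspace(0.0, 300.0, 25)
      rng = np.random.default_rng(0)
      cmd = exp_cmd(t, 5.0, 0.01) * rng.normal(1.0, 0.05, t.size)

      (c0, k), _ = curve_fit(exp_cmd, t, cmd, p0=(cmd[0], 0.005))

      # For the exponential model the initial mass is the integral of C(t)
      # from 0 to infinity, i.e. M0 = C0 / k.
      m0 = c0 / k
      print(f"fitted C0={c0:.2f} kg/d, k={k:.4f} 1/d, initial mass ~ {m0:.0f} kg")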

  2. 32 CFR 935.130 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Applicability. 935.130 Section 935.130 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE TERRITORIAL AND INSULAR REGULATIONS WAKE ISLAND CODE Motor Vehicle Code § 935.130 Applicability. This subpart applies to self-propelled...

  3. 32 CFR 935.130 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Applicability. 935.130 Section 935.130 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE TERRITORIAL AND INSULAR REGULATIONS WAKE ISLAND CODE Motor Vehicle Code § 935.130 Applicability. This subpart applies to self-propelled...

  4. 32 CFR 935.130 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Applicability. 935.130 Section 935.130 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE TERRITORIAL AND INSULAR REGULATIONS WAKE ISLAND CODE Motor Vehicle Code § 935.130 Applicability. This subpart applies to self-propelled...

  5. 32 CFR 935.130 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Applicability. 935.130 Section 935.130 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE TERRITORIAL AND INSULAR REGULATIONS WAKE ISLAND CODE Motor Vehicle Code § 935.130 Applicability. This subpart applies to self-propelled...

  6. 32 CFR 935.130 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Applicability. 935.130 Section 935.130 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE TERRITORIAL AND INSULAR REGULATIONS WAKE ISLAND CODE Motor Vehicle Code § 935.130 Applicability. This subpart applies to self-propelled...

  7. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  8. Code of Sustainable Practice in Occupational and Environmental Health and Safety for Corporations.

    PubMed

    Castleman, Barry; Allen, Barbara; Barca, Stefania; Bohme, Susanna Rankin; Henry, Emmanuel; Kaur, Amarjit; Massard-Guilbaud, Genvieve; Melling, Joseph; Menendez-Navarro, Alfredo; Renfrew, Daniel; Santiago, Myrna; Sellers, Christopher; Tweedale, Geoffrey; Zalik, Anna; Zavestoski, Stephen

    2008-01-01

    At a conference held at Stony Brook University in December 2007, "Dangerous Trade: Histories of Industrial Hazard across a Globalizing World," participants endorsed a Code of Sustainable Practice in Occupational and Environmental Health and Safety for Corporations. The Code outlines practices that would ensure corporations enact the highest health and environmentally protective measures in all the locations in which they operate. Corporations should observe international guidelines on occupational exposure to air contaminants, plant safety, air and water pollutant releases, hazardous waste disposal practices, remediation of polluted sites, public disclosure of toxic releases, product hazard labeling, sale of products for specific uses, storage and transport of toxic intermediates and products, corporate safety and health auditing, and corporate environmental auditing. Protective measures in all locations should be consonant with the most protective measures applied anywhere in the world, and should apply to the corporations' subsidiaries, contractors, suppliers, distributors, and licensees of technology. Key words: corporations, sustainability, environmental protection, occupational health, code of practice.

  9. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or for sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
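
    The chain-rule mechanics that ADIFOR automates can be illustrated with a toy forward-mode differentiator based on dual numbers. This is a conceptual sketch only, not ADIFOR's FORTRAN source transformation; all names are illustrative.

      # Toy forward-mode automatic differentiation with dual numbers, illustrating
      # the chain-rule propagation that tools like ADIFOR apply to whole programs.
      import math

      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              # Product rule: (uv)' = u'v + uv'
              return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def dsin(x):
          # Chain rule: d/dt sin(u) = cos(u) * u'
          return Dual(math.sin(x.val), math.cos(x.val) * x.der)

      # f(x) = x*sin(x) + 3x; seeding der=1 marks x as the independent variable
      x = Dual(2.0, 1.0)
      f = x * dsin(x) + 3 * x
      print(f.val, f.der)  # exact derivative: sin(x) + x*cos(x) + 3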

  10. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  11. Optimization techniques using MODFLOW-GWM

    USGS Publications Warehouse

    Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.

    2015-01-01

    An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
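
    The kind of formulation solved by optimization codes such as MODFLOW-GWM can be sketched, under simplifying assumptions, as a linear program built on a response matrix (drawdown per unit pumping rate). The matrix and bounds below are hypothetical placeholders; a real application derives the responses from repeated model runs.

      # Sketch of a linear response-matrix formulation: maximize total pumping
      # subject to drawdown limits at constraint locations. All values are
      # hypothetical; MODFLOW-GWM builds the responses from MODFLOW simulations.
      import numpy as np
      from scipy.optimize import linprog

      R = np.array([[0.8, 0.3, 0.1],    # drawdown (m) at site 1 per unit rate at wells 1..3
                    [0.2, 0.9, 0.4],    # ... at site 2
                    [0.1, 0.2, 0.7]])   # ... at site 3
      s_max = np.array([2.0, 2.5, 1.5]) # allowable drawdown (m) at each site
      q_max = 5.0                       # per-well capacity (arbitrary units)

      # linprog minimizes, so negate the objective to maximize total withdrawal
      res = linprog(c=-np.ones(3), A_ub=R, b_ub=s_max, bounds=[(0, q_max)] * 3)
      print("optimal rates:", res.x, "total:", res.x.sum())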

  12. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  13. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  14. Density Convection near Radiating ICRF Antennas and its Effect on the Coupling of Lower Hybrid Waves

    NASA Astrophysics Data System (ADS)

    Ekedahl, A.; Colas, L.; Mayoral, M.-L.; Beaumont, B.; Bibet, Ph.; Brémond, S.; Kazarian, F.; Mailloux, J.; Noterdaeme, J.-M.; Efda-Jet Contributors

    2003-12-01

    Combined operation of Lower Hybrid (LH) and Ion Cyclotron Resonance Frequency (ICRF) waves can result in a degradation of the LH wave coupling, as observed both in the Tore Supra and JET tokamaks. The reflection coefficient on the part of the LH launcher magnetically connected to the powered ICRF antenna increases, suggesting a local decrease in the electron density in the connecting flux tubes. This has been confirmed by Langmuir probe measurements on the LH launchers in the latest Tore Supra experiments. Moreover, recent experiments in JET indicate that the LH coupling degradation depends on the ICRF power and its launched k//-spectrum. The 2D density distribution around the Tore Supra ICRF antennas has been modelled with the CELLS-code, balancing parallel losses with diffusive transport and sheath induced E×B convection, obtained from RF field mapping using the ICANT-code. The calculations are in qualitative agreement with the experimental observations, i.e. density depletion is obtained, localised mainly in the antenna shadow, and dependent on ICRF power and antenna spectrum.

  15. Control of Fur synthesis by the non-coding RNA RyhB and iron-responsive decoding.

    PubMed

    Vecerek, Branislav; Moll, Isabella; Bläsi, Udo

    2007-02-21

    The Fe2+-dependent Fur protein serves as a negative regulator of iron uptake in bacteria. As only metallo-Fur acts as an autogenous repressor, Fe2+ scarcity would direct fur expression when continued supply is not obviously required. We show that in Escherichia coli post-transcriptional regulatory mechanisms ensure that Fur synthesis remains steady in iron limitation. Our studies revealed that fur translation is coupled to that of an upstream open reading frame (uof), translation of which is downregulated by the non-coding RNA (ncRNA) RyhB. As RyhB transcription is negatively controlled by metallo-Fur, iron depletion creates a negative feedback loop. RyhB-mediated regulation of uof-fur provides the first example of indirect translational regulation by a trans-encoded ncRNA. In addition, we present evidence for an iron-responsive decoding mechanism of the uof-fur entity. It could serve as a backup mechanism for the RyhB circuitry, and represents the first link between iron availability and synthesis of an iron-containing protein.

  16. ASTRORAY: General relativistic polarized radiative transfer code

    NASA Astrophysics Data System (ADS)

    Shcherbakov, Roman V.

    2014-07-01

    ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular those performed with variations of the HARM code, serve as input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model Faraday rotation measure and circular polarization fraction in jets.

  17. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates above 10-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that error rates of 8 × 10-2 can be corrected; furthermore, it works effectively and shows good error correctability.

  18. The algebraic decoding of the (41, 21, 9) quadratic residue code

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei

    1992-01-01

    A new algebraic approach for decoding the quadratic residue (QR) codes, in particular the (41, 21, 9) QR code, is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and next a method for determining error locations by solving certain quadratic, cubic and quartic equations over GF(2^m) in a new way which uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can apply generally to other codes such as the BCH and Reed-Solomon codes.
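
    The Zech-logarithm arithmetic mentioned above can be sketched for a small field. The snippet below, an illustration rather than the decoder itself, builds the Zech table for GF(2^4) with primitive polynomial x^4 + x + 1 and uses it to add field elements held in exponent form.

      # Minimal sketch of Zech-logarithm arithmetic in GF(2^m); illustrative only.
      M = 4
      PRIM = 0b10011            # primitive polynomial x^4 + x + 1
      N = (1 << M) - 1          # multiplicative group order, 15

      # Tabulate powers of the primitive element alpha = x
      power = [1]
      for _ in range(1, N):
          v = power[-1] << 1
          if v & (1 << M):
              v ^= PRIM
          power.append(v)
      log = {v: i for i, v in enumerate(power)}

      # Zech table: zech[n] = Z(n), defined by alpha^Z(n) = 1 + alpha^n
      zech = {n: log[1 ^ power[n]] for n in range(1, N)}

      def add_exp(i, j):
          # alpha^i + alpha^j = alpha^(i + Z(j - i)) in GF(2^m), for i != j
          return (i + zech[(j - i) % N]) % N

      i, j = 3, 7
      assert power[add_exp(i, j)] == power[i] ^ power[j]
      print("alpha^3 + alpha^7 = alpha^%d" % add_exp(i, j))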

  19. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    PubMed

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  20. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    PubMed Central

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion. PMID:25009523

  1. Bar code, good for industry and trade--how does it benefit the dentist?

    PubMed

    Oehlmann, H

    2001-10-01

    Every dentist who attentively follows the change in product labelling can easily see that the HIBC bar code is on the increase. In fact, according to information from FIDE/VDDI and ADE/BVD, the dental industry and trade are firmly resolved to apply the HIBC bar code to all products used internationally in dental practices. Why? Indeed, at first it looks like extra expense to additionally print a bar code on the packages. Good reasons can only lie in advantages which manufacturers and the trade expect from the HIBC bar code. Indications in dental technician circles are that the HIBC bar code is coming. If there are advantages, what are these, and can the dentist also profit from them? What does the HIBC bar code mean and what items of interest does it include? What does a bar code cost, and does only one code exist? This is explained briefly, concentrating on the benefits bar codes can bring for different users.

  2. Normative lessons: codes of conduct, self-regulation and the law.

    PubMed

    Parker, Malcolm H

    2010-06-07

    Good medical practice: a code of conduct for doctors in Australia provides uniform standards to be applied in relation to complaints about doctors to the new Medical Board of Australia. The draft Code was criticised for being prescriptive. The final Code employs apparently less authoritative wording than the draft Code, but the implicit obligations it contains are no less prescriptive. Although the draft Code was thought to potentially undermine trust in doctors, and stifle professional judgement in relation to individual patients, its general obligations always allowed for flexibility of application, depending on the circumstances of individual patients. Professional codes may contain some aspirational statements, but they always contain authoritative ones, and they share this feature with legal codes. In successfully diluting the apparent prescriptivity of the draft Code, the profession has lost an opportunity to demonstrate its commitment to the raison d'etre of self-regulation - the protection of patients. Professional codes are not opportunities for reflection, consideration and debate, but are outcomes of these activities.

  3. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  4. Quantifying circular RNA expression from RNA-seq data using model-based framework.

    PubMed

    Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun

    2017-07-15

    Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, cell type- and tissue-specific circRNA expression has implicated their crucial functions in many biological processes. Hence, quantifying circRNA expression from high-throughput RNA-seq data is becoming increasingly important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we propose a novel strategy that transforms circular transcripts into pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, affected the quantification performance for circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, considering circular transcripts in expression quantification from rRNA-depleted RNA-seq data substantially increased the accuracy of linear transcript expression estimates. Our proposed strategy was implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
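
    The pseudo-linearization step described above can be sketched as follows; the overhang length and function name are assumptions for illustration, not Sailfish-cir's actual implementation.

      # Hedged sketch of the "pseudo-linear transcript" idea: a circular
      # transcript is unrolled into a linear sequence whose head is re-appended
      # at the tail, so reads or k-mers spanning the back-splice junction align
      # to an ordinary linear reference that a quantifier like Sailfish can index.
      def to_pseudo_linear(circ_seq: str, overhang: int) -> str:
          # Append the first `overhang` bases (e.g. read length - 1) so every
          # junction-spanning subsequence of that length appears contiguously.
          if overhang >= len(circ_seq):
              raise ValueError("overhang must be shorter than the transcript")
          return circ_seq + circ_seq[:overhang]

      circ = "ACGTTGCAACGGT"            # toy back-spliced transcript
      print(to_pseudo_linear(circ, 4))  # ACGTTGCAACGGT + ACGT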

  5. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Super-giant magnetoresistance at room-temperature in copper nanowires due to magnetic field modulation of potential barrier heights at nanowire-contact interfaces

    NASA Astrophysics Data System (ADS)

    Hossain, Md I.; Maksud, M.; Palapati, N. K. R.; Subramanian, A.; Atulasimha, J.; Bandyopadhyay, S.

    2016-07-01

    We have observed a super-giant (∼10 000 000%) negative magnetoresistance at 39 mT field in Cu nanowires contacted with Au contact pads. In these nanowires, potential barriers form at the two Cu/Au interfaces because of Cu oxidation that results in an ultrathin copper oxide layer forming between Cu and Au. Current flows when electrons tunnel through, and/or thermionically emit over, these barriers. A magnetic field applied transverse to the direction of current flow along the wire deflects electrons toward one edge of the wire because of the Lorentz force, causing electron accumulation at that edge and depletion at the other. This lowers the potential barrier at the accumulated edge and raises it at the depleted edge, causing a super-giant magnetoresistance at room temperature.

  7. Excitation and trapping of lower hybrid waves in striations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borisov, N.; Institute of Terrestrial Magnetism, Ionosphere and Radio Waves Propagation; Honary, F.

    2008-12-15

    The theory of lower hybrid (LH) waves trapped in striations in warm ionospheric plasma in the three-dimensional case is presented. A specific mechanism of trapping associated with the linear transformation of waves is discussed. It is shown analytically that such trapping can take place in elongated plasma depletions with the frequencies below and above the lower hybrid resonance frequency of the ambient plasma. The theory is applied mainly to striations generated artificially in ionospheric modification experiments and partly to natural plasma depletions in the auroral upper ionosphere. Typical amplitudes and transverse scales of the trapped LH waves excited in ionospheric modification experiments are estimated. It is shown that such waves possibly can be detected by backscattering at oblique sounding in very high frequency (VHF) and ultra high frequency (UHF) ranges.

  8. Super-giant magnetoresistance at room-temperature in copper nanowires due to magnetic field modulation of potential barrier heights at nanowire-contact interfaces.

    PubMed

    Hossain, Md I; Maksud, M; Palapati, N K R; Subramanian, A; Atulasimha, J; Bandyopadhyay, S

    2016-07-29

    We have observed a super-giant (∼10 000 000%) negative magnetoresistance at 39 mT field in Cu nanowires contacted with Au contact pads. In these nanowires, potential barriers form at the two Cu/Au interfaces because of Cu oxidation that results in an ultrathin copper oxide layer forming between Cu and Au. Current flows when electrons tunnel through, and/or thermionically emit over, these barriers. A magnetic field applied transverse to the direction of current flow along the wire deflects electrons toward one edge of the wire because of the Lorentz force, causing electron accumulation at that edge and depletion at the other. This lowers the potential barrier at the accumulated edge and raises it at the depleted edge, causing a super-giant magnetoresistance at room temperature.

  9. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
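
    The "verification file" style of sequential regression testing described above can be sketched as follows. The file format and variable names are illustrative assumptions, not RELAP5-3D's internals: key solution arrays are reduced to double-precision sums, stored for a trusted baseline run, and compared after each code update.

      # Hedged sketch of checksum-style regression verification; illustrative only.
      import json
      import numpy as np

      def key_sums(state: dict) -> dict:
          # Double-precision sums of each tracked solution array
          return {name: float(np.sum(arr, dtype=np.float64)) for name, arr in state.items()}

      def write_verification(path, state):
          with open(path, "w") as f:
              json.dump(key_sums(state), f)

      def check_regression(path, state, rtol=0.0):
          # rtol=0.0 demands identical sums; loosen for compiler or library changes
          with open(path) as f:
              baseline = json.load(f)
          current = key_sums(state)
          return {k: (baseline[k], current[k])
                  for k in baseline if not np.isclose(baseline[k], current[k], rtol=rtol, atol=0.0)}

      state = {"pressure": np.linspace(1e5, 2e5, 100), "void_fraction": np.zeros(100)}
      write_verification("baseline.json", state)
      print("diffs:", check_regression("baseline.json", state))  # {} -> no regression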

  10. Developing a method for specifying the components of behavior change interventions in practice: the example of smoking cessation.

    PubMed

    Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan

    2013-06-01

    There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.

  11. Anisotropic Effective Moduli of Microcrack Damaged Media

    DTIC Science & Technology

    2010-01-01

    In this case, applying L'Hôpital's rule to Eq. (18) as h2 → h1 yields a limiting expression relating the effective stiffness constants C44 and C55.

  12. Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.

    1972-01-01

    A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small: approximately four minutes on the CDC 6600 computer.

  13. Impacts of DNAPL Source Treatment: Experimental and Modeling Assessment of the Benefits of Partial DNAPL Source Removal

    DTIC Science & Technology

    2009-09-01

    nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models

  14. Edge Equilibrium Code (EEC) For Tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xujling

    2014-02-24

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  15. Present state of HDTV coding in Japan and future prospect

    NASA Astrophysics Data System (ADS)

    Murakami, Hitomi

    The development status of HDTV digital codecs in Japan is evaluated; several bit-rate-reduction codecs have been developed for 1125-line/60-field HDTV, and performance trials have been conducted through satellite and optical fiber links. Prospective development efforts will attempt to achieve more efficient coding schemes able to reduce the bit rate to as little as 45 Mbps, as well as to apply coding schemes to asynchronous transfer mode (ATM) networks.

  16. Identification of coding and non-coding mutational hotspots in cancer genomes.

    PubMed

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so-called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g. promoter regions), and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from likely passenger regions susceptible to somatic mutation.

  17. Method for the prediction of the installation aerodynamics of a propfan at subsonic speeds: User manual

    NASA Technical Reports Server (NTRS)

    Chandrasekaran, B.

    1986-01-01

    This document is the user's guide for the method developed earlier for predicting the slipstream-wing interaction at subsonic speeds. The analysis involves a subsonic panel code (HESS code) modified to handle the propeller onset flow. The propfan slipstream effects are superimposed on the normal-flow boundary condition and are applied over the surface washed by the slipstream. The effects of the propeller slipstream are an increase in the axial induced velocity and tangential velocity, and a total pressure rise in the wake of the propeller. Principles based on blade performance theory, momentum theory, and vortex theory were used to evaluate the slipstream effects. The code can be applied to any arbitrary three-dimensional geometry expressed in the HESS input format. The code can handle a propeller-alone configuration or a propeller/nacelle/airframe configuration, operating up to high subcritical Mach numbers over a range of angles of attack. Inclusion of viscous modelling is briefly outlined. Wind tunnel results/theory comparisons are included as examples of the application of the code to a generic supercritical wing/overwing nacelle with a powered propfan. A sample input/output listing is provided.

  18. Transonic Navier-Stokes wing solutions using a zonal approach. Part 2: High angle-of-attack simulation

    NASA Technical Reports Server (NTRS)

    Chaderjian, N. M.

    1986-01-01

    A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.

  19. Maintaining a Critical Spectra within Monteburns for a Gas-Cooled Reactor Array by Way of Control Rod Manipulation

    DOE PAGES

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.; ...

    2016-10-01

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite-moderated gas-cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach - where the amount of fissile material in a set configuration is slowly altered until criticality is attained - in estimating the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that the methods that predicted the critical position consistently well also agreed with one another in their predictions of power densities and of uranium and plutonium isotopics. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
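
    The linear-fit criticality search can be sketched as follows, with a mock k-eff function standing in for an MCNP criticality calculation; all numbers are illustrative.

      # Hedged sketch of a linear-fit critical-rod search: evaluate k-eff at a
      # few rod positions, fit a line, and solve for the position where k-eff = 1.
      import numpy as np

      def mock_keff(z):
          # Stand-in for a transport calculation: k-eff rises as the rod withdraws
          return 0.95 + 0.08 * z + np.random.normal(0.0, 2e-4)  # ~20 pcm noise

      positions = np.array([0.2, 0.4, 0.6, 0.8])   # fractional rod withdrawal
      keffs = np.array([mock_keff(z) for z in positions])

      slope, intercept = np.polyfit(positions, keffs, 1)
      z_crit = (1.0 - intercept) / slope
      print(f"estimated critical withdrawal: {z_crit:.3f} "
            f"(fitted k-eff there ~ {slope * z_crit + intercept:.5f})")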

  20. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.
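
    The shaping step at the heart of PAS can be sketched as follows: constellation amplitudes are drawn from a Maxwell-Boltzmann distribution, which minimizes average energy for a given entropy. The rate parameter below is an arbitrary illustration, not a designed operating point.

      # Hedged sketch of Maxwell-Boltzmann amplitude shaping, P(a) ~ exp(-nu*a^2)
      import numpy as np

      amplitudes = np.array([1, 3, 5, 7])   # 8-PAM amplitude levels |x|
      nu = 0.05                             # illustrative rate parameter
      p = np.exp(-nu * amplitudes**2)
      p /= p.sum()

      entropy = -np.sum(p * np.log2(p))     # bits carried per shaped amplitude
      energy = np.sum(p * amplitudes**2)    # average energy under the shaping
      print("P(a):", np.round(p, 3), "H =", round(entropy, 3), "E =", round(energy, 2))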

  1. Towards measuring the semantic capacity of a physical medium demonstrated with elementary cellular automata.

    PubMed

    Dittrich, Peter

    2018-02-01

    The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information to quantify contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. Concluding, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying this to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is for implementing a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.
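
    For readers unfamiliar with the formal medium used here, a minimal elementary cellular automaton update is sketched below; the rule number and lattice are arbitrary examples, not the paper's experimental setup.

      # Minimal elementary cellular automaton (ECA) update with periodic boundaries
      def eca_step(cells, rule):
          n = len(cells)
          out = []
          for i in range(n):
              # Read the 3-cell neighborhood as a 3-bit index into the rule table
              idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
              out.append((rule >> idx) & 1)
          return out

      row = [0] * 15 + [1] + [0] * 15
      for _ in range(8):
          print("".join(".#"[c] for c in row))
          row = eca_step(row, 110)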

  2. Edinburgh Working Papers in Applied Linguistics. Number 5.

    ERIC Educational Resources Information Center

    Davies, Alan, Ed.; Parkinson, Brian, Ed.

    1994-01-01

    The eight papers in this volume, prepared by staff and students of the Institute for Applied Language Studies of Edinburgh University, address a variety of issues in applied linguistics. The papers include: (1) "A Coding System for Analyzing a Spoken Database" (Joan Cutting); (2) "L2 Perceptual Acquisition: The Effect of…

  3. Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Liu, Bing

    This document lays out the U.S. Department of Energy's (DOE's) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
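
    The life-cycle cost test in step 3 can be sketched as follows; all economic inputs below are hypothetical placeholders rather than DOE's published parameters. A change is cost-effective when the present value of its annual energy cost savings over the study period exceeds its incremental first cost.

      # Hedged sketch of a life-cycle cost (LCC) comparison; numbers illustrative.
      def present_value(annual_saving, rate, years):
          # Uniform-series present-worth factor: PV = A * (1 - (1+r)^-n) / r
          return annual_saving * (1 - (1 + rate) ** -years) / rate

      incremental_cost = 1200.0   # $ added construction cost of the code change
      annual_saving = 150.0       # $ energy cost saved per year
      pv = present_value(annual_saving, rate=0.03, years=30)
      print(f"PV of savings = ${pv:,.0f}; net LCC saving = ${pv - incremental_cost:,.0f}")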

  4. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    NASA Astrophysics Data System (ADS)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon number splitting (PNS) attack in systems of quantum cryptography with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack on phase encoding, differs physically from the analogous attack applied to polarization coding. As far as we know, all experimental data processing to date has been done for phase coding but using formulas for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of the results, especially where the criterion of secrecy in quantum cryptography is concerned.

  5. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
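
    A toy illustration of the adaptive selection idea (not the paper's protocol, which uses syndromes of an error-correcting code): per block, choose the cheaper of a reference-delta encoding and a raw fallback depending on the observed variation.

      # Toy adaptive encoder selection for reference-based compression.
      # Real distributed source coding transmits code syndromes rather than
      # explicit mismatch lists; this only illustrates the adaptivity.
      def encode_block(src: str, ref: str, threshold: int = 4):
          mismatches = [(i, b) for i, (a, b) in enumerate(zip(ref, src)) if a != b]
          if len(mismatches) <= threshold:
              return ("delta", mismatches)   # close to reference: cheap encoding
          return ("raw", src)                # too divergent: fall back to raw

      ref = "ACGTACGTACGTACGT"
      src = "ACGTACGAACGTACGT"               # one substitution
      print(encode_block(src, ref))          # ('delta', [(7, 'A')])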

  6. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.
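
    The burst-mitigation role of interleaving can be sketched as follows: FEC codewords are written row-wise and transmitted column-wise, so a burst of consecutive channel errors is spread across many codewords, each staying within its correction radius. Block sizes here are arbitrary illustrations.

      # Hedged sketch of block interleaving of FEC codewords
      def interleave(codewords):
          # codewords: list of equal-length lists; transmit column by column
          return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

      def deinterleave(stream, n_codewords):
          return [stream[i::n_codewords] for i in range(n_codewords)]

      cws = [[f"c{r}{c}" for c in range(4)] for r in range(3)]
      tx = interleave(cws)
      assert deinterleave(tx, 3) == cws
      # A 3-symbol burst in tx hits 3 *different* codewords after deinterleaving
      print(tx[:6])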

  7. Protograph LDPC Codes for the Erasure Channel

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
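
    The copy-and-permute (lifting) operation can be sketched as follows, ignoring parallel edges for simplicity: each nonzero entry of a small base matrix is replaced by an N x N permutation matrix, producing a derived graph N times larger. The base matrix and lifting factor are arbitrary examples, not a designed code.

      # Hedged sketch of protograph lifting ("copy-and-permute"); illustrative only
      import numpy as np

      rng = np.random.default_rng(1)
      base = np.array([[1, 1, 0],
                       [1, 1, 1]])   # protograph: 2 check nodes, 3 variable nodes
      N = 4                          # lifting (copy) factor

      blocks = []
      for row in base:
          block_row = []
          for e in row:
              if e:
                  P = np.eye(N, dtype=int)[rng.permutation(N)]  # random permutation
              else:
                  P = np.zeros((N, N), dtype=int)
              block_row.append(P)
          blocks.append(block_row)

      H = np.block(blocks)           # lifted (2N x 3N) parity-check matrix
      print(H.shape)                 # (8, 12)
      print(H)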

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection was applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  9. Revisiting Code-Switching Practice in TESOL: A Critical Perspective

    ERIC Educational Resources Information Center

    Wang, Hao; Mansouri, Behzad

    2017-01-01

    In academic circles, the "English Only" view and "Balanced view" have established their grounds after volumes of work on the topic of code-switching in TESOL. With recent development in Critical Applied Linguistics, poststructural theory, postmodern theory, and the emergence of multilingualism, scholars have begun to view ELT…

  10. Naval Law Review. Volume 48

    DTIC Science & Technology

    2001-01-01

    from airports and hotels, Internet cafes, libraries, and even cellular phones. This unmatched versatility has made e-mail the preferred method of...’ contention, however, was not that the Franchise Tax Board applied the wrong section of the code; it was that the code “unfairly taxed the wife’s

  11. 42 CFR 414.46 - Additional rules for payment of anesthesia services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (a) Definitions. For purposes of this section, the following definitions apply: (1) Base unit means the value for each anesthesia code that reflects all activities other than anesthesia time. These... furnishes the carrier with the base units for each anesthesia procedure code. The base units are derived...

  12. 42 CFR 414.46 - Additional rules for payment of anesthesia services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (a) Definitions. For purposes of this section, the following definitions apply: (1) Base unit means the value for each anesthesia code that reflects all activities other than anesthesia time. These... furnishes the carrier with the base units for each anesthesia procedure code. The base units are derived...

  13. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  14. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...

  15. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  16. 45 CFR 162.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... definitions apply: Code set means any set of codes used to encode data elements, such as tables of terms... sets inherent to a transaction, and not related to the format of the transaction. Data elements that... information in a transaction. Data set means a semantically meaningful unit of information exchanged between...

  17. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...

  18. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...

  19. 40 CFR 51.50 - What definitions apply to this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...

  20. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.

  1. Response of highbush blueberry to nitrogen fertilizer during field establishment. I. Accumulation and allocation of fertilizer nitrogen and biomass

    USDA-ARS?s Scientific Manuscript database

    The effects of N fertilizer rate on plant growth, N uptake, and biomass and N partitioning were studied in highbush blueberry during the first 2 years after planting. Plants were grown without N fertilizer or with either 50, 100, or 150 kg/ha N applied each year using 15N-depleted ammonium sulfate t...

  2. Recycling/Disposal Alternatives for Depleted Uranium Wastes

    DTIC Science & Technology

    1981-01-01

    could pass before new sites are available. Recent experience with attempts to dispose of wastes generated by cleanup of the Three Mile Island...commercial sector. Nonordnance uses include counterweights, ballast, shielding, and special applications machinery. Although the purity requirements...Reference 11). Since the activity of the tailings is higher than allowable for unrestricted access, large earth-dam retention systems, known as

  3. Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, A.L.; Wilson, J.H.; Arwood, P.C.

    The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, "pretest" calculations (for code-to-code comparisons) and "posttest" calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.

  4. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.
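
    The rate arithmetic behind shortening is easy to sketch. The mother-code parameters below are invented; they only illustrate how fixing s information bits to known values moves the rate from k/n to (k-s)/(n-s), sweeping the overhead upward from 25%.

      # Illustrative rate adaptation by shortening: s known bits are encoded but
      # not transmitted, turning an (n, k) mother code into an (n-s, k-s) code.
      def shortened_rate(n, k, s):
          return (k - s) / (n - s)

      n, k = 8000, 6400                # invented mother code, rate 0.8 (25% overhead)
      for s in (0, 800, 1600):
          r = shortened_rate(n, k, s)
          print(f"s={s:4d}  rate={r:.3f}  overhead={(1 - r) / r:.1%}")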

  5. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
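
    The entropy-guided selection idea can be sketched in a few lines. The predictors below (identity and a horizontal DPCM stand-in) and the synthetic image are invented; the rule is simply to keep the remapping whose residuals have the lowest first-order entropy.

      # Sketch: pick the remapping that minimizes first-order entropy of residuals.
      import numpy as np

      def entropy(x):
          _, counts = np.unique(x, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def dpcm(img):
          x = img.astype(np.int32)
          res = x.copy()
          res[:, 1:] = x[:, 1:] - x[:, :-1]   # horizontal predictive residual
          return res

      rng = np.random.default_rng(1)
      img = np.clip(np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1), 0, 255)
      candidates = {"raw": img, "dpcm": dpcm(img)}
      best = min(candidates, key=lambda k: entropy(candidates[k]))
      print({k: round(entropy(v), 2) for k, v in candidates.items()}, "->", best)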

  6. Effect of hypolimnetic oxygenation on oxygen depletion rates in two water-supply reservoirs.

    PubMed

    Gantzer, Paul A; Bryant, Lee D; Little, John C

    2009-04-01

    Oxygenation systems, such as bubble-plume diffusers, are used to improve water quality by replenishing dissolved oxygen (DO) in the hypolimnia of water-supply reservoirs. The diffusers induce circulation and mixing, which helps distribute DO throughout the hypolimnion. Mixing, however, has also been observed to increase hypolimnetic oxygen demand (HOD) during system operation, thus accelerating oxygen depletion. Two water-supply reservoirs (Spring Hollow Reservoir (SHR) and Carvins Cove Reservoir (CCR)) that employ linear bubble-plume diffusers were studied to quantify diffuser effects on HOD. A recently validated plume model was used to predict oxygen addition rates. The results were used together with observed oxygen accumulation rates to evaluate HOD over a wide range of applied gas flow rates. Plume-induced mixing correlated well with applied gas flow rate and was observed to increase HOD. Linear relationships between applied gas flow rate and HOD were found for both SHR and CCR. HOD was also observed to be independent of bulk hypolimnion oxygen concentration, indicating that HOD is controlled by induced mixing. Despite transient increases in HOD, oxygenation caused an overall decrease in background HOD, as well as a decrease in induced HOD during diffuser operation, over several years. This suggests that the residual or background oxygen demand decreases from one year to the next. Despite diffuser-induced increases in HOD, hypolimnetic oxygenation remains a viable method for replenishing DO in thermally-stratified water-supply reservoirs such as SHR and CCR.
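
    The reported linear relationship invites a one-line least-squares fit; the data points below are invented purely to illustrate the form of the regression.

      # Illustrative least-squares fit of HOD against applied gas flow rate.
      import numpy as np

      flow = np.array([10.0, 20.0, 30.0, 40.0])     # applied gas flow rate (arbitrary units)
      hod = np.array([0.11, 0.19, 0.32, 0.41])      # observed HOD (invented values)
      slope, intercept = np.polyfit(flow, hod, 1)
      print(f"HOD ~= {slope:.4f} * flow + {intercept:.3f}")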

  7. Interaction of Aquifer and River-Canal Network near Well Field.

    PubMed

    Ghosh, Narayan C; Mishra, Govinda C; Sandhu, Cornelius S S; Grischek, Thomas; Singh, Vikrant V

    2015-01-01

    The article presents semi-analytical mathematical models to assess (1) enhancement of seepage from a canal and (2) induced flow from a partially penetrating river in an unconfined aquifer consequent to groundwater withdrawal in a well field in the vicinity of the river and canal. The nonlinear exponential relation between seepage from a canal reach and hydraulic head in the aquifer beneath the canal reach is used for quantifying seepage from the canal reach. Hantush's (1967) basic solution for water table rise due to recharge from a rectangular spreading basin in the absence of a pumping well is used for generating unit pulse response function coefficients for water table rise in the aquifer. Duhamel's convolution theory and the method of superposition are applied to obtain the water table position due to pumping and recharge from different canal reaches. Hunt's (1999) basic solution for river depletion due to constant pumping from a well in the vicinity of a partially penetrating river is used to generate unit pulse response function coefficients. Applying the convolution technique and superposition, and treating the recharge from canal reaches as recharge through conceptual injection wells, river depletion consequent to variable pumping and recharge is quantified. The integrated model is applied to a case study in Haridwar (India). The well field consists of 22 pumping wells located in the vicinity of a perennial river and a canal network. The river bank filtrate portion consequent to pumping is quantified. © 2014, National GroundWater Association.
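
    The convolution step is the computational core of the method. The sketch below uses made-up unit-pulse response coefficients (in the paper they come from Hunt's solution) to show how depletion under variable pumping is a discrete Duhamel convolution.

      # Discrete Duhamel convolution of a pumping history with unit-pulse
      # response coefficients; all numbers are invented for illustration.
      import numpy as np

      unit_response = np.array([0.05, 0.12, 0.18, 0.15, 0.10])  # illustrative coefficients
      pumping = np.array([100.0, 120.0, 90.0, 110.0])           # pumping per stress period

      depletion = np.convolve(pumping, unit_response)[: len(pumping)]
      print(depletion)   # river depletion at the end of each stress period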

  8. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful, albeit still poorly understood, tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward recurrent neural network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
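
    A minimal sketch (assuming PyTorch) of the supervised setup: a recurrent network reads a sequence of repeated stabilizer measurements and emits logits over candidate corrections. The layer sizes and the toy syndrome/correction dimensions are invented, and training is omitted.

      # Toy recurrent decoder skeleton: syndromes in, correction logits out.
      import torch
      import torch.nn as nn

      class SyndromeDecoder(nn.Module):
          def __init__(self, n_syndromes=2, n_corrections=4, hidden=32):
              super().__init__()
              self.rnn = nn.GRU(n_syndromes, hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_corrections)   # logits over correction ops

          def forward(self, syndromes):                      # (batch, rounds, n_syndromes)
              _, h = self.rnn(syndromes)
              return self.head(h[-1])

      model = SyndromeDecoder()
      x = torch.randint(0, 2, (8, 5, 2)).float()             # 8 samples, 5 measurement rounds
      print(model(x).shape)                                  # torch.Size([8, 4])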

  9. Applying graphics user interface to group technology classification and coding at the Boeing Aerospace Company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly line flow cost efficiencies for small batch production. The systematic method devised for the identification of similarities in component geometry and processing steps is a coding and classification scheme implemented by interactive CAD/CAM systems. This coding and classification scheme has led to significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.
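
    Retrieval keyed on such a classification code amounts to prefix matching over digit strings; the 30-digit codes and part names below are invented for illustration.

      # Toy family search over a group-technology classification code.
      parts = {
          "301240987654321098765432109876": "bracket-A",
          "301240999999999999999999999999": "bracket-B",
          "779900000000000000000000000000": "turbine-seal",
      }

      def family(prefix):
          """Return parts whose classification code shares the given prefix."""
          return [name for code, name in parts.items() if code.startswith(prefix)]

      print(family("30124"))    # -> parts with similar geometry/processing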

  10. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
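
    The iterative part can be illustrated with a toy response matrix; plain van Cittert iteration below is an assumed stand-in for CUGEL's actual procedure, and the matrix and spectrum are invented.

      # Toy response-matrix unfolding: recover the incident spectrum from a
      # measured pulse-height spectrum by fixed-point (van Cittert) iteration.
      import numpy as np

      def unfold(R, measured, iters=50):
          est = measured.copy()
          for _ in range(iters):
              est = est + (measured - R @ est)   # van Cittert update
              est = np.clip(est, 0.0, None)      # keep the spectrum non-negative
          return est

      n = 5
      R = np.triu(np.full((n, n), 0.1)) + 0.6 * np.eye(n)   # full-energy peak + partial deposition
      true = np.array([0.0, 4.0, 0.0, 2.0, 1.0])
      print(unfold(R, R @ true))                            # recovers ~[0, 4, 0, 2, 1]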

  11. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure and, hence, their higher pel correlation leads to a greater removal of image redundancy.

  12. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  13. Trellis Coding of Non-coherent Multiple Symbol Full Response M-ary CPFSK with Modulation Index 1/M

    NASA Technical Reports Server (NTRS)

    Lee, H.; Divsalar, D.; Weber, C.

    1994-01-01

    This paper introduces a trellis-coded modulation (TCM) scheme for non-coherent multiple-symbol full response M-ary CPFSK with modulation index 1/M. A proper branch metric for the trellis decoder is obtained by employing a simple approximation of the modified Bessel function for large signal-to-noise ratio (SNR). The pairwise error probability of coded sequences is evaluated by applying a linear approximation to the Rician random variable.
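
    The approximation behind the branch metric is easy to check numerically: for large x, ln I0(x) grows like x, so maximizing the metric reduces to maximizing the envelope itself. The sketch assumes SciPy is available.

      # Numerical check that ln I0(x) ~ x for large x.
      import numpy as np
      from scipy.special import i0e   # exp(-x) * I0(x), numerically stable

      for x in (1.0, 10.0, 100.0):
          log_i0 = np.log(i0e(x)) + x   # ln I0(x) without overflow
          print(f"x={x:6.1f}  ln I0(x)={log_i0:9.3f}  ratio to x = {log_i0 / x:.3f}")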

  14. 26 CFR 7.48-3 - Election to apply the amendments made by sections 804 (a) and (b) of the Tax Reform Act of 1976...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the Act to movie and television films that are property described in section 50(a) of the Code and... sections 804 (a) and (b) of the Tax Reform Act of 1976 to property described in section 50(a) of the Code... described in section 50(a) of the Code. (a) General rule. Under section 804(e)(2) of the Tax Reform Act of...

  15. TacSat-4 COMMx, Advanced SATCOM Experiment

    DTIC Science & Technology

    2009-01-01

    Schein, M. T. Marley, C. T. Apland, R. E. Lee, B. D. Williams, E. D. Schaefer, S. R. Vernon, P. D. Schwartz, B. L. Kantsiper, E. J. Finnegan; The...Lee, B. D. Williams, E. D. Schaefer, P. D. Schwartz, R. Denissen, B. Kantsiper, E. J. Finnegan; The Johns Hopkins University Applied Physics...Mission Ops Lead, NRL Code 8233 Bob Kuzma, TacSat-4 Payload Controller, NRL Code 8242 Bob Skalitzky, TacSat-4 Power Systems, NRL Code 8244 Doug Bentz

  16. Health claims in the labelling and marketing of food products:

    PubMed Central

    Asp, Nils-Georg; Bryngelsson, Susanne

    2007-01-01

    Since 1990 certain health claims in the labelling and marketing of food products have been allowed in Sweden within the food sector's Code of Practice. The rules were developed in close dialogue with the authorities. The legal basis was a decision by the authorities not to apply the medicinal products’ legislation to “foods normally found on the dinner table” provided the rules defined in the Code were followed. The Code of Practice lists nine well-established diet–health relationships eligible for generic disease risk reduction claims in two steps and general rules regarding nutrient function claims. Since 2001, there has also been the possibility for using “product-specific physiological claims (PFP)”, subject to premarketing evaluation of the scientific dossier supporting the claim. The scientific documentation has been approved for 10 products with PFP, and another 15 products have been found to fulfil the Code's criteria for “low glycaemic index”. In the third edition of the Code, active since 2004, conditions in terms of nutritional composition were set, i.e. “nutrient profiles”, with a general reference to the Swedish National Food Administration's regulation on the use of a particular symbol, i.e. the keyhole symbol. Applying the Swedish Code of practice has provided experience useful in the implementation of the European Regulation on nutrition and health claims made on foods, effective from 2007.

  17. Laminar Heating Validation of the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Dries, Kevin M.

    2005-01-01

    OVERFLOW, a structured finite-difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities over several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. Configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were done to give recommendations for the correct number of off-body points to be applied to generic problems and for wall-spacing values to capture heat transfer and skin friction. Numerical results show good comparison to the test data for all the configurations.

  18. Knowledge base methodology: Methodology for first Engineering Script Language (ESL) knowledge base

    NASA Technical Reports Server (NTRS)

    Peeris, Kumar; Izygon, Michel E.

    1992-01-01

    The primary goal of reusing software components is that software can be developed faster, cheaper, and with higher quality. Reuse, though, is not automatic and cannot just happen; it has to be carefully engineered. For example, a component needs to be easily understandable in order to be reused, and it also has to be malleable enough to fit into different applications. In fact, the software development process is deeply affected when reuse is being applied. During component development, a serious effort has to be directed toward making these components reusable. This implies defining reuse coding-style guidelines and applying them to any new component being created as well as to any old component being modified. These guidelines should point out the favorable reuse features and may apply to naming conventions, module size and cohesion, internal documentation, etc. During application development, effort is shifted from writing new code toward finding and eventually modifying existing pieces of code, then assembling them together. We see here that reuse is not free and therefore has to be carefully managed.

  19. Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.

    PubMed

    Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G

    2006-08-01

    The shield performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. To determine the neutron and gamma-ray source intensities and their energy spectra, we applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 microA beam of 28.5 MeV protons. The MCNP/4C code was applied with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code was then applied with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and the equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. The deviation between calculated and measured dose values along the maze is found to be less than 20%.

  20. The low information content of Neurospora splicing signals: implications for RNA splicing and intron origin.

    PubMed

    Collins, Richard A; Stajich, Jason E; Field, Deborah J; Olive, Joan E; DeAbreu, Diane M

    2015-05-01

    When we expressed a small (0.9 kb) nonprotein-coding transcript derived from the mitochondrial VS plasmid in the nucleus of Neurospora we found that it was efficiently spliced at one or more of eight 5' splice sites and ten 3' splice sites, which are present apparently by chance in the sequence. Further experimental and bioinformatic analyses of other mitochondrial plasmids, random sequences, and natural nuclear genes in Neurospora and other fungi indicate that fungal spliceosomes recognize a wide range of 5' splice site and branchpoint sequences and predict introns to be present at high frequency in random sequence. In contrast, analysis of intronless fungal nuclear genes indicates that branchpoint, 5' splice site and 3' splice site consensus sequences are underrepresented compared with random sequences. This underrepresentation of splicing signals is sufficient to deplete the nuclear genome of splice sites at locations that do not comprise biologically relevant introns. Thus, the splicing machinery can recognize a wide range of splicing signal sequences, but splicing still occurs with great accuracy, not because the splicing machinery distinguishes correct from incorrect introns, but because incorrect introns are substantially depleted from the genome. © 2015 Collins et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
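
    The expected-frequency argument can be illustrated with a back-of-envelope calculation. The degenerate motif below is a generic 5' splice-site-like pattern, not the exact Neurospora consensus.

      # Expected occurrences of a short degenerate motif in random sequence.
      import re
      import random

      random.seed(0)
      seq = "".join(random.choice("ACGT") for _ in range(900))   # 0.9 kb random "transcript"

      five_prime = re.compile(r"GT[AG]AG")       # degenerate 5' splice-site-like motif
      hits = five_prime.findall(seq)
      expected = 900 * (1 / 4) ** 4 * (2 / 4)    # per-position match probability x length
      print(f"found {len(hits)} motifs; ~{expected:.1f} expected by chance")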
