Fission yields data generation and benchmarks of decay heat estimation of a nuclear fuel
NASA Astrophysics Data System (ADS)
Gil, Choong-Sup; Kim, Do Heon; Yoo, Jae Kwon; Lee, Jounghwa
2017-09-01
Incident-neutron-energy-dependent fission yield data in ENDF-6 format have been generated for 235U, 239Pu, and several other actinides using the GEF code. In addition, fission yield data libraries for the ORIGEN-S and ORIGEN-ARP modules of the SCALE code system have been generated from the new data. For validation, decay heats computed by ORIGEN-S with the new fission yield data have been compared with measured data in this study. ORIGEN-S fission yield libraries based on ENDF/B-VII.1, JEFF-3.1.1, and JENDL/FPY-2011 have also been generated, and the corresponding decay heats were calculated for analysis and comparison.
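A decay heat benchmark of this kind ultimately reduces to summing nuclide decay powers over the inventory predicted with each fission yield library. The following minimal sketch (not the ORIGEN-S implementation; the nuclide data shown are illustrative placeholders, not evaluated data from ENDF/B-VII.1, JEFF-3.1.1, or JENDL/FPY-2011) shows the summation that such comparisons rest on.

```python
# Minimal decay-heat summation sketch: P(t) = sum_i lambda_i * Q_i * N_i(t).
# The nuclide list and numbers below are illustrative placeholders only.
import math

# nuclide: (decay constant [1/s], recoverable energy per decay [MeV], initial atoms)
inventory = {
    "La-140": (4.78e-6, 2.8, 1.0e18),
    "Cs-137": (7.3e-10, 0.66, 5.0e18),
}

MEV_TO_J = 1.602e-13

def decay_heat(t_seconds):
    """Total decay power [W] at time t, ignoring ingrowth from precursors."""
    power = 0.0
    for lam, q_mev, n0 in inventory.values():
        n_t = n0 * math.exp(-lam * t_seconds)
        power += lam * n_t * q_mev * MEV_TO_J
    return power

print(decay_heat(3600.0))  # decay heat one hour after shutdown [W]
```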
Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, S.; Havloej, F.; Lago, D.
2013-07-01
The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major refactoring of the ORIGEN code architecture, carried out as part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper presents an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Ross, Kyle W.; Smith, James Dean
2010-04-01
The Oak Ridge National Laboratory computer code, ORIGEN2.2 (CCC-371, 2002), was used to obtain the elemental composition of irradiated low-enriched uranium (LEU)/mixed-oxide (MOX) pressurized-water reactor fuel assemblies. Described in this report are the input parameters for the ORIGEN2.2 calculations. The rationale for performing the ORIGEN2.2 calculation was to generate inventories to be used to populate MELCOR radionuclide classes. Therefore, the ORIGEN2.2 output was subsequently manipulated. The procedures performed in this data reduction process are also described herein. A listing of the ORIGEN2.2 input deck for two-cycle MOX is provided in the appendix. The final output from this data reduction process was three tables containing the radionuclide inventories for LEU/MOX in elemental form. Masses, thermal powers, and activities were reported for each category.
Status Report on NEAMS PROTEUS/ORIGEN Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A
2016-02-18
The US Department of Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program has contributed significantly to the development of the PROTEUS neutron transport code at Argonne National Laboratory and to the Oak Ridge Isotope Generation and Depletion (ORIGEN) depletion/decay code at Oak Ridge National Laboratory. PROTEUS’s key capability is the efficient and scalable (up to hundreds of thousands of cores) neutron transport solver on general, unstructured, three-dimensional finite-element-type meshes. The scalability and mesh generality enable the transfer of neutron and power distributions to other codes in the NEAMS toolkit for advanced multiphysics analysis. Recently, ORIGEN has received considerable modernization to provide the high-performance depletion/decay capability within the NEAMS toolkit. This work presents a description of the initial integration of ORIGEN in PROTEUS, mainly performed during FY 2015, with minor updates in FY 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S.; Abi, M. J. B.
ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total fuel weight. Each fuel is irradiated for 365 days with 50-day time steps. We obtain results for the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences in results between the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of the major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To reduce these errors, the irradiation power used in ORIGEN could be increased slightly so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code system altogether to simulate burnup, such as DRAGON and ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
Development of ORIGEN Libraries for Mixed Oxide (MOX) Fuel Assembly Designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mertyurek, Ugur; Gauld, Ian C.
2015-12-24
In this research, ORIGEN cross section libraries for reactor-grade mixed oxide (MOX) fuel assembly designs have been developed to provide fast and accurate depletion calculations to predict nuclide inventories, radiation sources and thermal decay heat information needed in safety evaluations and safeguards verification measurements of spent nuclear fuel. These ORIGEN libraries are generated using two-dimensional lattice physics assembly models that include enrichment zoning and cross section data based on ENDF/B-VII.0 evaluations. Using the SCALE depletion sequence, burnup-dependent cross sections are created for selected commercial reactor assembly designs and a representative range of reactor operating conditions, fuel enrichments, and fuel burnup. The burnup-dependent cross sections are then interpolated to provide problem-dependent cross sections for ORIGEN, avoiding the need for time-consuming lattice physics calculations. The ORIGEN libraries for MOX assembly designs are validated against destructive radiochemical assay measurements of MOX fuel from the MALIBU international experimental program. This program included measurements of MOX fuel from a 15 × 15 pressurized water reactor assembly and a 9 × 9 boiling water reactor assembly. The ORIGEN MOX libraries are also compared against detailed assembly calculations from the Phase IV-B numerical MOX fuel burnup credit benchmark coordinated by the Nuclear Energy Agency within the Organization for Economic Cooperation and Development. Finally, the nuclide compositions calculated by ORIGEN using the MOX libraries are shown to be in good agreement with other physics codes and with experimental data.
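The key operational idea is that pre-tabulated, burnup-dependent cross sections are interpolated to the problem conditions instead of rerunning lattice physics. A minimal sketch of that interpolation step, with an entirely illustrative burnup grid and cross-section values (not data from the actual ORIGEN MOX libraries), might look like this:

```python
# Interpolate a one-group cross section to the requested burnup, so a depletion
# solver can run without a new lattice-physics calculation.
# Grid points and values below are illustrative placeholders only.
import numpy as np

burnup_grid = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])        # GWd/MTU
sigma_abs_pu239 = np.array([95.0, 92.0, 88.0, 85.0, 83.0, 81.0])   # barns (placeholder)

def interpolate_xs(burnup_gwd_mtu):
    """Problem-dependent cross section at the requested burnup (linear interpolation)."""
    return float(np.interp(burnup_gwd_mtu, burnup_grid, sigma_abs_pu239))

print(interpolate_xs(27.5))  # cross section at 27.5 GWd/MTU
```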
Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grogan, Brandon R; Richards, Scott
2017-01-01
A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.
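INDEPTH itself is a dedicated inverse-depletion tool; the sketch below only illustrates the general shape of such a reconstruction, fitting enrichment, burnup, and cooling time so that a forward inventory model reproduces simulated measurements. The forward model here is a toy stand-in, not the ORIGEN physics or the INDEPTH algorithm.

```python
# Illustrative inverse-depletion fit: adjust (enrichment, burnup, cooling time)
# until a stand-in forward model reproduces "measured" isotope concentrations.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params):
    enrichment, burnup, cooling = params
    # Toy responses loosely mimicking burnup/cooling-time indicators.
    cs137 = 0.04 * burnup * np.exp(-0.023 * cooling)         # long-lived burnup indicator
    cs134_ratio = 0.002 * burnup * np.exp(-0.336 * cooling)  # shorter-lived indicator
    u235 = enrichment * np.exp(-0.03 * burnup)               # residual enrichment
    return np.array([cs137, cs134_ratio, u235])

measured = forward_model(np.array([4.0, 45.0, 5.0]))  # synthetic truth: 4 wt%, 45 GWd/MTU, 5 y

fit = least_squares(lambda p: forward_model(p) - measured,
                    x0=[3.0, 30.0, 2.0],
                    bounds=([0.5, 1.0, 0.0], [6.0, 70.0, 50.0]))
print(fit.x)  # reconstructed enrichment, burnup, cooling time
```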
ORIGEN-based Nuclear Fuel Inventory Module for Fuel Cycle Assessment: Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, Steven E.
The goal of this project, “ORIGEN-based Nuclear Fuel Depletion Module for Fuel Cycle Assessment,” is to create a physics-based reactor depletion and decay module for the Cyclus nuclear fuel cycle simulator in order to assess nuclear fuel inventories over a broad space of reactor operating conditions. The overall goal of this approach is to facilitate evaluations of nuclear fuel inventories for a broad space of scenarios, including extended used nuclear fuel storage and cascading impacts on fuel cycle options such as actinide recovery from used nuclear fuel, particularly for multiple-recycle scenarios. The advantage of a physics-based approach (compared to the recipe-based approach typically employed by fuel cycle simulators) is its inherent flexibility; such an approach can more readily accommodate the broad space of potential isotopic vectors that may be encountered under advanced fuel cycle options. In order to develop this flexible reactor analysis capability, we are leveraging the ORIGEN nuclear fuel depletion and decay module from SCALE to produce a standalone “depletion engine” which serves as the kernel of a Cyclus-based reactor analysis module. The ORIGEN depletion module is a rigorously benchmarked and extensively validated tool for nuclear fuel analysis, and thus its incorporation into the Cyclus framework can bring these capabilities to bear on the problem of evaluating long-term impacts of fuel cycle option choices on relevant metrics of interest, including materials inventories and availability (for multiple-recycle scenarios), long-term waste management and repository impacts, etc. Developing this ORIGEN-based analysis capability for Cyclus requires refining the ORIGEN analysis sequence to the point where it can be compiled as a standalone sequence outside of SCALE, i.e., where all of the computational aspects of ORIGEN (including reactor cross-section library processing and interpolation, input and output processing, and depletion/decay solvers) are self-contained in a single executable sequence. Further, embedding this capability in other software environments (such as the Cyclus fuel cycle simulator) requires that ORIGEN’s capabilities be encapsulated in a portable, self-contained library which other codes can call directly through function calls, thereby directly accessing the solver and data processing capabilities of ORIGEN. Additional components relevant to this work include modernization of the reactor data libraries used by ORIGEN for nuclear fuel depletion calculations. This work has included the development of new fuel assembly lattices not previously available (such as CANDU heavy-water reactor assemblies) as well as validation of updated light-water reactor lattices employing modern nuclear data evaluations. The CyBORG reactor analysis module developed under this work scope is fully capable of dynamic calculation of depleted fuel compositions for all commercial U.S. reactor assembly types as well as a number of international fuel types, including MOX, VVER, MAGNOX, and PHWR CANDU fuel assemblies. In addition, the ORIGEN-based depletion engine allows CyBORG to evaluate novel fuel assembly and reactor design types via creation of ORIGEN reactor data libraries with SCALE.
The establishment of this new modeling capability affords fuel cycle modelers a substantially improved ability to model dynamically changing fuel cycle and reactor conditions, including recycled fuel compositions from fuel cycle scenarios involving material recycle into thermal-spectrum systems.
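The core of any such depletion engine is a solution of the Bateman equations dN/dt = A N, where the matrix A contains decay constants and flux-weighted transmutation rates. The sketch below advances a two-nuclide chain with a dense matrix exponential; it is a generic illustration under assumed rate constants, not the ORIGEN solver or its API (ORIGEN itself uses methods such as CRAM for large, stiff systems).

```python
# Generic Bateman depletion step: N(t) = expm(A * t) @ N(0).
# Two-nuclide toy chain (parent -> daughter) with assumed rate constants;
# not ORIGEN's solver, which handles thousands of nuclides with CRAM.
import numpy as np
from scipy.linalg import expm

lam_parent = 1.0e-6    # effective removal rate of parent [1/s] (decay + transmutation)
lam_daughter = 5.0e-7  # decay rate of daughter [1/s]

A = np.array([[-lam_parent, 0.0],
              [ lam_parent, -lam_daughter]])

n0 = np.array([1.0e20, 0.0])   # initial atom inventories
dt = 30 * 24 * 3600.0          # one 30-day depletion step [s]
n_end = expm(A * dt) @ n0
print(n_end)                   # end-of-step nuclide inventories
```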
NASA Astrophysics Data System (ADS)
Husnayani, I.; Udiyani, P. M.; Bakhri, S.; Sunaryo, G. R.
2018-02-01
Pebble Bed Reactor (PBR) is a high temperature gas-cooled reactor which employs graphite as a moderator and helium as a coolant. In a multi-pass PBR, the burnup of each fuel pebble must be measured online in each cycle in order to determine whether the fuel pebble should be reloaded into the core for another cycle or moved out of the core into spent fuel storage. One of the well-known methods for measuring burnup is based on the activity of radionuclides decaying inside the fuel pebble. In this work, the activity and gamma emission of Kr-85m were studied in order to investigate the feasibility of Kr-85m as a burnup measurement indicator in a PBR. The activity and gamma emission of Kr-85m were estimated using the ORIGEN2.1 computer code. The parameters of HTR-10 were taken as a case study in performing the ORIGEN2.1 simulation. The results show that the activity evolution of Kr-85m has a good relationship with the burnup of the fuel pebble in each cycle. The Kr-85m activity reduction in each burnup step, in the range of 12% to 4%, is considered sufficient to indicate the burnup level in each cycle. The gamma emission of Kr-85m is also sufficiently high, on the order of 10^10 photons per second. From these results, it can be concluded that Kr-85m is suitable for use as a burnup measurement indicator in a pebble bed reactor.
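The measurement concept is a calibration between the Kr-85m activity observed after each pass and the accumulated burnup of the pebble. A minimal sketch of that lookup, with a purely illustrative calibration table (not the HTR-10/ORIGEN2.1 results of the paper), is given below.

```python
# Map a measured Kr-85m activity to pebble burnup via a monotone calibration table.
# The table values are illustrative placeholders, not the HTR-10/ORIGEN2.1 results.
import numpy as np

burnup_pct_fima = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])                    # accumulated burnup
kr85m_activity = np.array([9.5e11, 8.9e11, 8.4e11, 8.0e11, 7.7e11, 7.4e11])   # Bq, decreasing

def burnup_from_activity(measured_bq):
    """Interpolate burnup from a measured Kr-85m activity (table must be monotone)."""
    # np.interp needs ascending x, so interpolate on the reversed arrays.
    return float(np.interp(measured_bq, kr85m_activity[::-1], burnup_pct_fima[::-1]))

print(burnup_from_activity(8.2e11))  # inferred burnup for a measured activity
```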
NASA Astrophysics Data System (ADS)
Tanaka, Ken-ichi
2016-06-01
We performed a benchmark calculation of the radioactivity activated in the Primary Containment Vessel (PCV) of a Boiling Water Reactor (BWR) using the MAXS library, which was developed by collapsing with the neutron energy spectra in the PCV of the BWR. Radioactivities due to neutron irradiation were measured using gold (Au) and nickel (Ni) activation foils at thirty locations in the PCV. As the benchmark calculation, we performed activation calculations of the foils with the SCALE5.1/ORIGEN-S code using the irradiation conditions at each foil location. We compared calculations and measurements to assess the effectiveness of the MAXS library.
NASA Astrophysics Data System (ADS)
Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.
2014-06-01
Knowledge of the decay heat and its associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT, and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, which is a CEA uncertainty analysis platform. As preliminary applications, single thermal fission pulses of uranium-235 and plutonium-239 and a PWR UOx spent fuel cell are investigated.
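The probabilistic approach amounts to sampling the uncertain decay data, re-evaluating the decay heat for each sample, and reporting the spread. The following sketch propagates an assumed relative uncertainty on a decay constant through the single-nuclide decay heat expression; it illustrates the sampling idea only, not the MENDEL/URANIE implementation.

```python
# Monte Carlo propagation of decay-data uncertainty to decay heat for one nuclide.
# P(t) = lambda * N0 * exp(-lambda * t) * Q; lambda is sampled from an assumed
# normal distribution. Numbers are illustrative, not evaluated nuclear data.
import numpy as np

rng = np.random.default_rng(seed=1)

lam_mean = 4.78e-6          # nominal decay constant [1/s]
lam_rel_unc = 0.02          # assumed 2% relative (1-sigma) uncertainty
q_joule = 2.8 * 1.602e-13   # recoverable energy per decay [J]
n0, t = 1.0e18, 3600.0      # initial atoms and time after the fission pulse [s]

samples = rng.normal(lam_mean, lam_rel_unc * lam_mean, size=10_000)
decay_heat = samples * n0 * np.exp(-samples * t) * q_joule

print(decay_heat.mean(), decay_heat.std())  # mean decay heat [W] and 1-sigma spread
```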
Methodology for the nuclear design validation of an Alternate Emergency Management Centre (CAGE)
NASA Astrophysics Data System (ADS)
Hueso, César; Fabbri, Marco; de la Fuente, Cristina; Janés, Albert; Massuet, Joan; Zamora, Imanol; Gasca, Cristina; Hernández, Héctor; Vega, J. Ángel
2017-09-01
The methodology is devised by coupling different codes. The study of weather conditions, as part of the site data, determines the relative concentrations of radionuclides in the air using ARCON96. The activity in the air is characterized, as a function of the source and release sequence specified in NUREG-1465, with the RADTRAD code, which provides the inner-cloud source term contribution. Once the activities are known, energy spectra are inferred using ORIGEN-S and are used as input for the models of the outer cloud, filters, and containment generated with MCNP5. The sum of the different contributions must meet the habitability conditions specified by the CSN (Spanish Nuclear Regulatory Body) (TEDE < 50 mSv and equivalent dose to the thyroid < 500 mSv within 30 days following the accident), so the dose is optimized by varying parameters such as the CAGE location, filtered flow, the need for recirculation, and the thicknesses and compositions of the walls. The results for the most penalizing area meet the established criteria, and therefore the CAGE building design based on the presented methodology is radiologically validated.
Monte Carlo simulations of safeguards neutron counter for oxide reduction process feed material
NASA Astrophysics Data System (ADS)
Seo, Hee; Lee, Chaehun; Oh, Jong-Myeong; An, Su Jung; Ahn, Seong-Kyu; Park, Se-Hwan; Ku, Jeong-Hoe
2016-10-01
One of the options for spent-fuel management in Korea is pyroprocessing, whose main process flow is the head-end process followed by oxide reduction, electrorefining, and electrowinning. In the present study, a well-type passive neutron coincidence counter, namely the ACP (Advanced spent fuel Conditioning Process) safeguards neutron counter (ASNC), was redesigned for safeguards of a hot-cell facility related to the oxide reduction process. To this end, first, the isotopic composition, gamma/neutron emission yield, and energy spectrum of the feed material (i.e., the UO2 porous pellet) were calculated using the OrigenARP code. Then, the proper thickness of the gamma-ray shield was determined, both by irradiation testing at a standard dosimetry laboratory and by MCNP6 simulations using the parameters obtained from the OrigenARP calculation. Finally, the neutron coincidence counter's calibration curve for 100- to 1000-g porous pellets, in consideration of the process batch size, was determined through simulations. Based on these simulation results, the neutron counter is currently under construction. In the near future, it will be installed in a hot cell and tested with spent fuel materials.
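A passive neutron coincidence counter of this kind is used by fitting a calibration curve of coincidence (doubles) count rate versus uranium mass and then inverting it for an unknown item. The sketch below fits and inverts a linear calibration from simulated points; the numbers are invented for illustration, not the ASNC MCNP6 results.

```python
# Fit a linear calibration (doubles rate vs. U mass) and invert it for an assay.
# The "simulated" calibration points below are illustrative placeholders.
import numpy as np

mass_g = np.array([100.0, 250.0, 500.0, 750.0, 1000.0])   # porous-pellet mass [g]
doubles_cps = np.array([2.1, 5.3, 10.4, 15.8, 21.0])       # coincidence rate [counts/s]

slope, intercept = np.polyfit(mass_g, doubles_cps, deg=1)

def assay_mass(measured_doubles_cps):
    """Invert the calibration to estimate item mass from a measured doubles rate."""
    return (measured_doubles_cps - intercept) / slope

print(assay_mass(12.5))  # estimated mass [g] for a measured coincidence rate
```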
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippayakul, C.; Ivanov, K.; Misu, S.
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k-infinity, fission rate distributions, and isotopic contents. (authors)
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
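The automation problem described here is essentially bookkeeping: turning a table of per-assembly operating histories into thousands of individual input files and collecting the results. The sketch below generates one templated input file per assembly from an in-memory record list; the file format and field names are invented for illustration and are not the actual ORIGAMI input syntax.

```python
# Generate one templated depletion input file per assembly from tabulated records.
# The template fields and file layout are illustrative, not real ORIGAMI syntax.
from pathlib import Path

assemblies = [
    {"id": "A01", "enrichment_wt_pct": 4.2, "burnup_gwd_mtu": 41.5, "cooling_years": 8.0},
    {"id": "A02", "enrichment_wt_pct": 3.6, "burnup_gwd_mtu": 35.0, "cooling_years": 12.5},
]

TEMPLATE = """! hypothetical per-assembly depletion input
assembly_id = {id}
enrichment  = {enrichment_wt_pct}   ! wt% U-235
burnup      = {burnup_gwd_mtu}      ! GWd/MTU
cooling     = {cooling_years}       ! years in the spent fuel pool
"""

out_dir = Path("inputs")
out_dir.mkdir(exist_ok=True)
for rec in assemblies:
    (out_dir / f"{rec['id']}.inp").write_text(TEMPLATE.format(**rec))

print(f"wrote {len(assemblies)} input files to {out_dir}/")
```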
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term disposal. Most of those outputs are functions of the isotopic vector densities, which are estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN, or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate, by two different methods, the uncertainty from nuclear data inputs to isotopic concentrations and decay heat. This paper shows comparisons between those two codes on a uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1, and JENDL-2011) is also examined. All results show good agreement between the two codes and methods, ensuring the reliability of both approaches for a given evaluation.
NASA Astrophysics Data System (ADS)
Karriem, Veronica V.
Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores them for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulics modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures for the cross-section modeling in the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs its own set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no off-the-shelf codes that can model this design in its entirety. In particular, the PSBR has an open core design which is cooled by natural convection. Combining several codes into a unique system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code.
This was needed because the previous data were not generated with thermal hydraulic feedback and the all-rods-out (ARO) position was used as the critical rod position. The B4C was re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model is given flexibility to allow for various changes within the core model, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed. This allows the new code to be used as a modeling tool for design and analyses of future core loadings.
Monte Carlo Shielding Comparative Analysis Applied to TRIGA HEU and LEU Spent Fuel Transport
NASA Astrophysics Data System (ADS)
Margeanu, C. A.; Margeanu, S.; Barbos, D.; Iorgulis, C.
2010-12-01
The paper is a comparative study of LEU and HEU fuel utilization effects in the shielding analysis for spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were considered according to the approved NAC-LWT Cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface and in air at 1 m and 2 m, respectively, from the cask, by means of the 3D Monte Carlo code MORSE-SGC. Before loading into the shipping cask, TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. The actinide contribution to total fuel radioactivity is very low in the HEU spent fuel case, becoming 10 times greater in the LEU spent fuel case. Dose rates for both HEU and LEU fuel contents are below regulatory limits, with LEU spent fuel photon dose rates being greater than the HEU ones. Comparison between HEU spent fuel theoretical and measured dose rates at selected measuring points shows good agreement, with calculated values greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m from the cask surface (about 15% relative difference).
NASA Astrophysics Data System (ADS)
Schlömer, Luc; Phlippen, Peter-W.; Lukas, Bernard
2017-09-01
The decommissioning of a light water reactor (LWR), which is licensed under § 7 of the German Atomic Energy Act, following the post-operational phase requires a comprehensive licensing procedure covering in particular radiation protection aspects and possible impacts on the environment. Decommissioning includes essential changes in requirements for the systems and components and will mainly lead to direct dismantling. In this context, neutron-induced activation calculations for the structural components have to be carried out to predict activities in structures and to estimate future costs for conditioning and packaging. To avoid an overestimation of the radioactive inventory and to calculate the expenses for decommissioning as accurately as possible, modern state-of-the-art Monte Carlo techniques (MCNP™) are applied and coupled with present-day activation and decay codes (ORIGEN-S). In this context, ADVANTG is used as a weight window generator for MCNP™, i.e., as a variance reduction tool to speed up the calculation in deep penetration problems. In this paper the calculation procedure is described, and the obtained results are presented together with a validation against activities and photon dose rates measured in the post-operational phase. The validation shows that the applied calculation procedure is suitable for the determination of the radioactive inventory of a nuclear power plant. Even the measured gamma dose rates in the post-operational phase at different positions in the reactor building agree within a factor of 2 to 3 with the calculation results. The obtained results are accurate and suitable to effectively support the decommissioning planning process.
NASA Astrophysics Data System (ADS)
Barber, Duncan Henry
During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired; gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A benchmark calculation demonstrates the improved agreement of the total inventory of those chemical elements included in the RMC fuel model with an ORIGEN-S calculation. ORIGEN-S is the Oak Ridge isotope generation and depletion computer program. The Gibbs energy minimizer requires a chemical database containing coefficients from which the Gibbs energy of pure compounds, gas and liquid mixtures, and solid solutions can be calculated. The RMC model of irradiated uranium dioxide fuel has been converted into the required format. The Gibbs energy minimizer has been incorporated into a new model of fission-product vaporization from the fuel surface. Calculated release fractions using the new code have been compared to results calculated with SOURCE IST 2.0P11 and to results of tests used in the validation of SOURCE 2.0. The new code shows improved agreement with experimental releases for a number of nuclides. Of particular significance is the better agreement between experimental and calculated release fractions for 140La. The improved agreement reflects the inclusion in the RMC model of the solubility of lanthanum (III) oxide (La2O3) in the fuel matrix. Calculated lanthanide release fractions from earlier computer programs were a challenge to environmental qualification analysis of equipment for some accident scenarios. The new prototype computer program would alleviate this concern.
Keywords: Nuclear Engineering; Material Science; Thermodynamics; Radioactive Material; Gibbs Energy Minimization; Actinide Generation and Depletion; Fission-Product Generation and Depletion.
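The thermodynamic core of the new code is a Gibbs energy minimization: find the species amounts n that minimize G(T, P, n) subject to conservation of each chemical element. The sketch below minimizes the Gibbs energy of a small ideal-gas system with an element-balance constraint; the species set and chemical potentials are invented placeholders, not the RMC irradiated-fuel model.

```python
# Minimal Gibbs energy minimization at fixed T, P for an ideal-gas Cs-I system.
# Species set and dimensionless standard potentials (mu0/RT) are placeholders,
# not the RMC irradiated-fuel thermochemical model.
import numpy as np
from scipy.optimize import minimize

species = ["Cs(g)", "I(g)", "CsI(g)"]
mu0_RT = np.array([-5.0, -4.0, -20.0])   # assumed dimensionless standard potentials

# Element-abundance matrix (rows: Cs, I) and total element inventory [mol].
A = np.array([[1, 0, 1],
              [0, 1, 1]])
b = np.array([1.0, 1.2])

def gibbs(n):
    n = np.maximum(n, 1e-12)             # keep the log argument positive
    return float(np.sum(n * (mu0_RT + np.log(n / n.sum()))))

res = minimize(gibbs,
               x0=np.array([0.5, 0.7, 0.5]),            # element-balanced starting guess
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")

print(dict(zip(species, res.x)))         # equilibrium species amounts [mol]
```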
Nuclide Depletion Capabilities in the Shift Monte Carlo Code
Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...
2017-12-21
A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.
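Monte Carlo depletion of this kind is an operator-split loop: a transport solve produces reaction rates, a depletion solve advances the inventory, and the updated composition feeds the next transport solve. The skeleton below shows that loop structure with stand-in solver functions; it is not the Shift/ORIGEN interface.

```python
# Skeleton of an operator-split Monte Carlo depletion loop with stand-in solvers.
# transport_solve and depletion_solve are placeholders, not Shift or ORIGEN calls.
import numpy as np
from scipy.linalg import expm

def transport_solve(number_densities):
    """Stand-in transport step: return one-group reaction rates per nuclide."""
    sigma = np.array([1.0e-24, 0.5e-24])      # assumed microscopic cross sections [cm^2]
    flux = 1.0e14                              # assumed scalar flux [n/cm^2/s]
    return sigma * flux * number_densities     # reaction rates [1/cm^3/s]

def depletion_solve(number_densities, reaction_rates, dt):
    """Stand-in depletion step: advance a simple parent -> daughter chain."""
    rate = np.divide(reaction_rates, number_densities,
                     out=np.zeros_like(number_densities),
                     where=number_densities > 0)
    A = np.array([[-rate[0], 0.0],
                  [ rate[0], -rate[1]]])
    return expm(A * dt) @ number_densities

n = np.array([1.0e22, 0.0])                    # initial nuclide densities [1/cm^3]
for step in range(4):                          # four 90-day depletion steps
    rr = transport_solve(n)
    n = depletion_solve(n, rr, dt=90 * 24 * 3600.0)
    print(step, n)
```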
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and a criticality safety analysis were carried out for a general material testing reactor (MTR) research reactor interim storage facility and the relevant transportation cask. During these processes, three major terms were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport limits and meet the specified standards.
ORIGEN2 calculations supporting TRIGA irradiated fuel data package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.A.
ORIGEN2 calculations were performed for TRIGA spent fuel elements from the Hanford Neutron Radiography Facility. The calculations support storage and disposal, and the results include mass, activity, and decay heat. Comparisons with underwater dose-rate measurements were used to confirm and adjust the calculations.
Modeling of Radioxenon Production and Release Pathways
2010-09-01
MCNP is utilized to model the neutron transport, while ORIGEN 2.2 is utilized to calculate the production and decay of fission products, activation...products, and transuranics resulting from the calculated neutron flux profile. MONTEBURNS is a Perl script that couples the MCNP and ORIGEN 2.2...core enriched to 93.80% 239Pu was used. MCNP was used to determine the thickness of soil or rock necessary to accurately model the attenuation and
Computational Age Dating of Special Nuclear Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2012-06-30
This slide show presented an overview of the Constrained Progressive Reversal (CPR) method for computing decays, age dating, and spoof detection. The CPR method is: capable of temporally profiling an SNM sample; precise (compared with known decay codes such as ORIGEN); and easy (for computer implementation and analysis). We have illustrated the use of CPR for age dating and spoof detection with real SNM data. If the SNM is pure, CPR may be used to derive its age. If the SNM is mixed, CPR will indicate that it is mixed or spoofed.
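Computational age dating of this kind rests on decay chronometry: the in-growth of a daughter relative to its parent fixes the time since the material was last purified. The CPR algorithm itself is not described here; the sketch below only shows the standard parent/daughter chronometer relation, illustrated with an assumed 241Pu/241Am pair and rounded half-lives.

```python
# Parent/daughter decay chronometer: infer time since last purification from the
# measured daughter-to-parent atom ratio. Illustrated with the 241Pu -> 241Am pair
# using rounded half-lives; this is a textbook relation, not the CPR algorithm.
import math

T_HALF_PU241 = 14.3    # years (parent)
T_HALF_AM241 = 432.6   # years (daughter)
lam_p = math.log(2) / T_HALF_PU241
lam_d = math.log(2) / T_HALF_AM241

def daughter_parent_ratio(t_years):
    """Atom ratio N(241Am)/N(241Pu), assuming no 241Am at t = 0."""
    return lam_p / (lam_d - lam_p) * (1.0 - math.exp(-(lam_d - lam_p) * t_years))

def age_from_ratio(ratio):
    """Invert the chronometer relation for the material age in years."""
    return -math.log(1.0 - ratio * (lam_d - lam_p) / lam_p) / (lam_d - lam_p)

r = daughter_parent_ratio(10.0)   # synthetic measurement for a 10-year-old sample
print(r, age_from_ratio(r))       # ratio and the recovered age (~10 years)
```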
NASA Astrophysics Data System (ADS)
Sobolev, V.; Uyttenhove, W.; Thetford, R.; Maschek, W.
2011-07-01
The neutronic and thermomechanical performances of two composite fuel systems, CERCER with (Pu,Np,Am,Cm)O2-x fuel particles in a ceramic MgO matrix and CERMET with a metallic Mo matrix, selected for transmutation of minor actinides in the European Facility for Industrial Transmutation (EFIT), were analysed with the aim of optimisation. The ALEPH burnup code system, based on the MCNPX and ORIGEN codes and the JEFF-3.1 nuclear data library, and the modern version of the fuel rod performance code TRAFIC were used for this analysis. Because experimental data on the properties of the mixed minor-actinide oxides are scarce, and the in-reactor behaviour of the T91 steel chosen as cladding, as well as of the corrosion protective layer, is still not well known, a set of "best estimates" provided the properties used in the codes. The obtained results indicate that both fuel candidates, CERCER and CERMET, can satisfy the fuel design and safety criteria of EFIT. The residence time for both types of fuel elements can reach about 5 years with a reactivity swing within ±1000 pcm, and about 22% of the loaded MA is transmuted during this period. However, the fuel centreline temperature in the hottest CERCER fuel rod is close to the temperature above which the MgO matrix becomes chemically unstable. Moreover, weak PCMI can appear after about 3 years of operation. The CERMET fuel can provide larger safety margins: the fuel temperature is more than 1000 K below the permitted level of 2380 K and the pellet-cladding gap remains open until the end of operation.
Comparative analysis of LWR and FBR spent fuels for nuclear forensics evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permana, Sidik; Suzuki, Mitsutoshi; Su'ud, Zaki
2012-06-06
Some interesting issues are attributed to the nuclide compositions of spent fuels from thermal reactors as well as fast reactors, such as the potential to reuse them as recycled fuel and the possible capability to be managed as a fuel for destructive devices. In addition, analysis of nuclear forensics, which is related to spent fuel compositions, has become one of the interesting topics for evaluating the origin and composition of spent fuels from their footprints. Spent fuel compositions of different fuel types give typical spent fuel footprints, from which the origin of those spent fuel compositions can be estimated. Some techniques or methods have been developed based on scientific and technological capabilities, including experimental and modeling or theoretical aspects of analysis. Nuclear forensic footprints identify typical information about spent fuel compositions such as enrichment, burnup or irradiation time, reactor type, and cooling time, which is related to the age of the spent fuel. This paper intends to evaluate the typical spent fuel compositions of light water reactors (LWR) and fast breeder reactors (FBR) from the viewpoint of nuclear forensic footprints. The established depletion code ORIGEN is adopted to analyze LWR spent fuel (SF) for several burnup levels and decay times. For analyzing FBR spent fuel compositions, coupled codes such as SLAROM, JOINT, and CITATION have been adopted, with JFS-3-J-3.2R as the nuclear data library. Enriched uranium oxide fuel is used as the LWR fresh fuel and mixed oxide fuel (MOX) as the FBR fresh fuel; the FBR MOX fuel comes from LWR spent fuel. Typical spent fuels from both the LWR and the FBR are compared to distinguish typical SF footprints based on nuclear forensic analysis.
Determination of the NPP Krško spent fuel decay heat
NASA Astrophysics Data System (ADS)
Kromar, Marjan; Kurinčič, Bojan
2017-07-01
Nuclear fuel is designed to support the fission process in a reactor core. Some of the isotopes formed during fission decay and produce decay heat and radiation. Accurate knowledge of the nuclide inventory producing decay heat is important after reactor shutdown, during fuel storage, and during subsequent reprocessing or disposal. In this paper, the possibility of calculating the fuel isotopic composition and determining the fuel decay heat with the Serpent code is investigated. Serpent is a well-known Monte Carlo code used primarily for the calculation of neutron transport in a reactor. It has been validated for burn-up calculations. In the calculation of the fuel decay heat, a different set of isotopes is important than in the neutron transport case. A comparison with the Origen code is performed to verify that Serpent takes into account all isotopes important for assessing the fuel decay heat. After the code validation, a sensitivity study is carried out. The influence of several factors such as enrichment, fuel temperature, moderator temperature (density), soluble boron concentration, average power, burnable absorbers, and burnup is analyzed.
NASA Astrophysics Data System (ADS)
Harkness, Ira; Zhu, Ting; Liang, Yinong; Rauch, Eric; Enqvist, Andreas; Jordan, Kelly A.
2018-01-01
Demand for spent nuclear fuel dry casks as an interim storage solution has increased globally and the IAEA has expressed a need for robust safeguards and verification technologies for ensuring the continuity of knowledge and the integrity of radioactive materials inside spent fuel casks. Existing research has been focusing on "fingerprinting" casks based on count rate statistics to represent radiation emission signatures. The current research aims to expand to include neutron energy spectral information as part of the fuel characteristics. First, spent fuel composition data are taken from the Next Generation Safeguards Initiative Spent Fuel Libraries, representative for Westinghouse 17×17 PWR assemblies. The ORIGEN-S code then calculates the spontaneous fission and (α,n) emissions for individual fuel rods, followed by detailed MCNP simulations of neutrons transported through the fuel assemblies. A comprehensive database of neutron energy spectral profiles is to be constructed, with different enrichment, burn-up, and cooling time conditions. The end goal is to utilize the computational spent fuel library, predictive algorithm, and a pressurized 4He scintillator to verify the spent fuel assemblies inside a cask. This work identifies neutron spectral signatures that correlate with the cooling time of spent fuel. Both the total and relative contributions from spontaneous fission and (α,n) change noticeably with respect to cooling time, due to the relatively short half-life (18 years) of the major neutron source 244Cm. Identification of this and other neutron spectral signatures allows the characterization of spent nuclear fuels in dry cask storage.
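The cooling-time signature discussed here comes largely from the decay of 244Cm, whose spontaneous fission dominates the passive neutron source of LWR spent fuel. A simple sketch of how the spontaneous fission contribution declines with cooling time, using the ~18-year half-life quoted above and an assumed initial source split, is shown below.

```python
# Decline of the spontaneous-fission (SF) neutron source with cooling time, driven
# by 244Cm decay (half-life ~18 y as quoted in the abstract). The initial split
# between SF and (alpha,n) neutrons is an assumed illustration, not library data;
# the (alpha,n) term is held constant for simplicity, though in reality it also evolves.
import math

T_HALF_CM244 = 18.1                      # years
lam = math.log(2) / T_HALF_CM244

sf_neutrons_0 = 1.0e8                    # assumed SF source at discharge [n/s]
alpha_n_0 = 5.0e6                        # assumed (alpha,n) source at discharge [n/s]

for cooling in (0, 5, 10, 20, 40):       # years of cooling
    sf = sf_neutrons_0 * math.exp(-lam * cooling)
    frac_sf = sf / (sf + alpha_n_0)
    print(f"{cooling:3d} y: SF = {sf:.2e} n/s, SF fraction = {frac_sf:.3f}")
```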
Reference commercial high-level waste glass and canister definition
NASA Astrophysics Data System (ADS)
Slate, S. C.; Ross, W. A.; Partain, W. L.
1981-09-01
Technical data and performance characteristics of a high level waste glass and canister intended for use in the design of a complete waste encapsulation package suitable for disposal in a geologic repository are presented. The borosilicate glass contained in the stainless steel canister represents the probable type of high level waste product that is produced in a commercial nuclear-fuel reprocessing plant. Development history is summarized for high level liquid waste compositions, waste glass composition and characteristics, and canister design. The decay histories of the fission products and actinides (plus daughters) calculated by the ORIGEN-II code are presented.
NASA Astrophysics Data System (ADS)
Shi, Xue-Ming; Peng, Xian-Jue
2016-09-01
Fusion science and technology has made progress in the last decades. However, commercialization of fusion reactors still faces challenges relating to higher fusion energy gain, irradiation-resistant materials, and tritium self-sufficiency. Fusion Fission Hybrid Reactors (FFHR) can be introduced to accelerate the early application of fusion energy. Traditionally, FFHRs have been classified as either breeders or transmuters. Both need partitioning of plutonium from spent fuel, which poses nuclear proliferation risks. A conceptual design of a Fusion Fission Hybrid Reactor for Energy (FFHR-E), which can make full use of natural uranium with lower nuclear proliferation risk, is presented. The fusion core parameters are similar to those of the International Thermonuclear Experimental Reactor. An alloy of natural uranium and zirconium is adopted in the fission blanket, which is cooled by light water. In order to model blanket burnup problems, a linkage code, MCORGS, which couples MCNP4B and ORIGEN-S, has been developed and validated through several typical benchmarks. The average blanket energy multiplication and tritium breeding ratio can be maintained at 10 and 1.15, respectively, over tens of years of continuous irradiation. If simple reprocessing without separation of plutonium from uranium is adopted every few years, FFHR-E can achieve better neutronic performance. MCORGS has also been used to analyze the ultra-deep burnup model of Laser Inertial Confinement Fusion Fission Energy (LIFE) from LLNL, and a new blanket design that uses Pb instead of Be as the neutron multiplier is proposed. In addition, MCORGS has been used to simulate the fluid transmuter model of the In-Zinerator from Sandia. A brief comparison of LIFE, the In-Zinerator, and FFHR-E is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessee, Matthew Anderson
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features.
New capabilities include:
• ENDF/B-VII.1 nuclear data libraries (CE and MG) with enhanced group structures,
• Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
• Covariance data for fission product yields and decay constants,
• Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
• Parallel calculations with KENO,
• Problem-dependent temperature corrections for CE calculations,
• CE shielding and criticality accident alarm system analysis with MAVRIC,
• CE depletion with TRITON (T5-DEPL/T6-DEPL),
• CE sensitivity/uncertainty analysis with TSUNAMI-3D,
• Simplified and efficient LWR lattice physics with Polaris,
• Large-scale detailed spent fuel characterization with ORIGAMI and ORIGAMI Automator,
• Advanced fission source convergence acceleration capabilities with Sourcerer,
• Nuclear data library generation with AMPX, and
• Integrated user interface with Fulcrum.
Enhanced capabilities include:
• Accurate and efficient CE Monte Carlo methods for eigenvalue and fixed-source calculations,
• Improved MG resonance self-shielding methodologies and data,
• Resonance self-shielding with modernized and efficient XSProc integrated into most sequences,
• Accelerated calculations with TRITON/NEWT (generally 4x faster than SCALE 6.1),
• Spent fuel characterization with 1470 new reactor-specific libraries for ORIGEN,
• Modernization of ORIGEN (Chebyshev Rational Approximation Method [CRAM] solver, API for high-performance depletion, new keyword input format),
• Extension of the maximum mixture number to values well beyond the previous limit of 2147 (to ~2 billion),
• Nuclear data formats enabling the use of more than 999 energy groups,
• Updated standard composition library to provide more accurate use of natural abundances, and
• Numerous other enhancements for improved usability and stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomofumi Sakuragi; Hiromi Tanabe; Emiko Hirose
2013-07-01
Hull and end-piece wastes generated from reprocessing plant operations are expected to be disposed of in a deep underground repository as Group 2 TRU wastes under the Japanese classification system. The activated metals that compose the spent fuel assemblies, such as Zircaloy claddings and stainless steel nozzles, are mixed and compressed after fuel dissolution, and then stuffed into stainless steel canisters. Carbon-14 is a typical activation product in the hulls and end-pieces and is mainly generated by the 14N(n,p)14C reaction. In the previous safety assessment of TRU waste in Japan, the radionuclide inventory was calculated by the ORIGEN-2 code. Some conservative assumptions and preliminary estimates were used in this calculation. For example, the total radionuclides generated from a single type of fuel assembly (45 GWd/tU for a PWR unit) and the thickness of the Zircaloy oxide film on the hulls (80 μm) were both overestimated. The second assumption in particular has a large effect on the exposure dose evaluation. Therefore, it is essential to have a realistic source term evaluation regarding such items as the C-14 inventory and its distribution to waste parts. In the present study, a C-14 inventory of the hull and end-piece wastes from the operation of a commercial reprocessing plant in Japan corresponding to 32,000 tU (16,000 tU each for BWR and PWR) was calculated. Analysis using individual irradiation conditions and fuel characteristics was conducted on 6 types of fuel assemblies for BWRs and 12 types for PWRs (4 pile types × 3 burnup limits). The oxide film thickness data for each fuel type cladding were obtained from the published literature. Activation calculations were performed using the ORIGEN-2 code. For the amount of spent assembly and other waste characteristics, representative values were assumed based on the published literature. As a preliminary experiment, C-14 in irradiated BWR claddings was measured and found to be consistent with the calculated activation. The total C-14 inventory was estimated as 4.46×10^14 Bq, consisting of 2.58×10^14 Bq for BWRs and 1.87×10^14 Bq for PWRs, and is consistent with the safety assessment value of 4.4×10^14 Bq. However, the distribution of the C-14 inventory to the hull oxide, which was estimated under the assumption of instantaneous radionuclide release in the safety assessment, decreased from 5.72×10^13 Bq (13% of the total) in the previous assessment to 1.30×10^13 Bq (2.9% of the total; consisting of 1.48×10^12 Bq for BWRs and 1.15×10^13 Bq for PWRs). In other words, the exposure dose peak is reduced to approximately 25% of its previous value due to the use of detailed oxide film data showing that the BWR cladding has a thin oxide film. Other instantaneous release components for C-14, such as the fuel residual, were negligible. (authors)
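The inventory calculation described here is, at its core, a neutron activation balance: C-14 builds in from the 14N(n,p)14C reaction on nitrogen impurities and decays with its 5730-year half-life. The sketch below evaluates that single-reaction activation expression for assumed nitrogen content, flux, and cross section; it illustrates the governing relation only, not the fuel-assembly-specific ORIGEN-2 calculations of the paper.

```python
# Single-reaction activation estimate for C-14 from 14N(n,p)14C in cladding.
# A(t) = N_N14 * sigma * phi * (1 - exp(-lambda * t)), neglecting nitrogen burnout.
# Nitrogen content, flux, and cross section are assumed values for illustration.
import math

N_AVOGADRO = 6.022e23
mass_cladding_g = 100.0e3          # assumed Zircaloy mass per assembly [g]
nitrogen_ppm = 40.0                # assumed nitrogen impurity [wt ppm]
sigma_np_cm2 = 1.8e-24             # assumed effective 14N(n,p) cross section [cm^2]
phi = 5.0e13                       # assumed neutron flux [n/cm^2/s]
t_irr_s = 4.0 * 365 * 24 * 3600.0  # four years of irradiation [s]
lam_c14 = math.log(2) / (5730.0 * 365 * 24 * 3600.0)  # C-14 decay constant [1/s]

n_n14 = mass_cladding_g * nitrogen_ppm * 1e-6 / 14.0 * N_AVOGADRO  # N-14 atoms
activity_bq = n_n14 * sigma_np_cm2 * phi * (1.0 - math.exp(-lam_c14 * t_irr_s))
print(f"C-14 activity per assembly: {activity_bq:.2e} Bq")
```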
Studies of Plutonium-238 Production at the High Flux Isotope Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lastres, Oscar; Chandler, David; Jarrell, Joshua J
The High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) is a versatile 85 MW{sub th}, pressurized, light water-cooled and -moderated research reactor. The core consists of two fuel elements, an inner fuel element (IFE) and an outer fuel element (OFE), each constructed of involute fuel plates containing high-enriched-uranium (HEU) fuel ({approx}93 wt% {sup 235}U/U) in the form of U{sub 3}O{sub 8} in an Al matrix and encapsulated in Al-6061 cladding. An over-moderated flux trap is located in the center of the core, a large beryllium reflector is located on the outside of the core, and two control elements (CE) are located between the fuel and the reflector. The flux trap and reflector house numerous experimental facilities which are used for isotope production, material irradiation, and cold/thermal neutron scattering. Over the past five decades, the US Department of Energy (DOE) and its agencies have been producing radioisotope power systems used by the National Aeronautics and Space Administration (NASA) for unmanned, long-term space exploration missions. Plutonium-238 is used to power Radioisotope Thermoelectric Generators (RTG) because it has a very long half-life (t{sub 1/2} {approx} 89 yr.) and it generates about 0.5 watts/gram when it decays via alpha emission. Due to the recent shortage and uncertainty of future production, the DOE has proposed a plan to the US Congress to produce {sup 238}Pu by irradiating {sup 237}Np as early as fiscal year 2011. An annual production rate of 1.5 to 2.0 kg of {sup 238}Pu is expected to satisfy these needs and could be produced in existing national nuclear facilities such as HFIR and the Advanced Test Reactor (ATR) at the Idaho National Laboratory (INL). Reactors at the Savannah River Site were used in the past for {sup 238}Pu production but were shut down after the last production in 1988. The nation's {sup 237}Np inventory is currently stored at INL. A plan for producing {sup 238}Pu at US research reactor facilities such as the High Flux Isotope Reactor at ORNL has been initiated by the US DOE and NASA for space exploration needs. Two Monte Carlo-based depletion codes, TRITON (ORNL) and VESTA (IRSN), were used to study the {sup 238}Pu production rates with varying target configurations in a typical HFIR fuel cycle. Preliminary studies have shown that approximately 11 grams and 15 to 17 grams of {sup 238}Pu could be produced in the first irradiation cycle in one small and one large VXF facility, respectively, when irradiating fresh target arrays such as those described herein. Importantly, this study found that small differences in modeling assumptions can affect the calculated {sup 238}Pu production rates. The exact flux at a specific target location can have a significant impact on production, so any differences in how the control elements are modeled as a function of exposure will also cause differences in production rates. In fact, the surface plot of {sup 238}Pu production in the large VXF targets shown in Figure 3 illustrates that the pins closest to the core can have production rates as high as 3 times those of pins farther from the core, implying that a cycle-to-cycle rotation of the targets may be well advised. A methodology for generating spatially dependent, multi-group self-shielded cross sections and flux files with the KENO and CENTRM codes has been created so that standalone ORIGEN-S inputs can be quickly constructed to examine a variety of {sup 238}Pu production scenarios, i.e., combinations of the number of arrays loaded and the number of irradiation cycles. The studies presented here with VESTA and TRITON/KENO will be used to benchmark the standalone ORIGEN calculations.
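To make the production chain concrete, the sketch below integrates the {sup 237}Np(n,γ){sup 238}Np → {sup 238}Pu chain under a constant one-group flux. It is not the TRITON/VESTA or ORIGEN-S calculation described above; the flux level, one-group cross sections, and cycle length are illustrative placeholders only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-group constants (placeholders, not HFIR VXF values)
PHI         = 2.0e14                 # neutron flux [n/cm^2/s], assumed
SIG_CAP_NP  = 170e-24                # 237Np (n,gamma) cross section [cm^2], assumed
SIG_FIS_NP8 = 2000e-24               # 238Np fission/absorption loss [cm^2], assumed
SIG_ABS_PU  = 500e-24                # 238Pu absorption loss [cm^2], assumed
LAM_NP238   = np.log(2) / (2.117 * 86400)   # 238Np beta decay, t1/2 ~ 2.1 d

def chain(t, n):
    np237, np238, pu238 = n
    d237 = -SIG_CAP_NP * PHI * np237
    d238 = SIG_CAP_NP * PHI * np237 - (LAM_NP238 + SIG_FIS_NP8 * PHI) * np238
    dpu  = LAM_NP238 * np238 - SIG_ABS_PU * PHI * pu238
    return [d237, d238, dpu]

N_A = 6.022e23
n0 = [10.0 / 237.0 * N_A, 0.0, 0.0]        # start from 10 g of 237Np target
cycle = 26 * 86400                         # one ~26-day irradiation cycle, assumed
sol = solve_ivp(chain, (0.0, cycle), n0, rtol=1e-8)
pu238_g = sol.y[2, -1] * 238.0 / N_A
print(f"238Pu at end of cycle: {pu238_g:.2f} g (illustrative numbers only)")
```

With placeholder data the answer is only order-of-magnitude; the gram quantities quoted in the abstract come from the full 3-D transport/depletion models with explicit control element positions and target self-shielding.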
Comparison of Fission Product Yields and Their Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Harrison
2006-02-01
This memorandum describes the Naval Reactors Prime Contractor Team (NRPCT) Space Nuclear Power Program (SNPP) interest in determining the expected fission product yields from a Prometheus-type reactor and assessing the impact of these species on materials found in the fuel element and balance of plant. Theoretical yield calculations using ORIGEN-S and RACER computer models are included in graphical and tabular form in the Attachment, with a focus on the desired fast neutron spectrum data. The known fission product interaction concerns are the corrosive attack of iron- and nickel-based alloys by volatile fission products, such as cesium, tellurium, and iodine, and the radiological transmutation of krypton-85 in the coolant to rubidium-85, a potentially corrosive agent to the coolant system metal piping.
PCP METHODOLOGY FOR DETERMINING DOSE RATES FOR SMALL GRAM QUANTITIES IN SHIPPING PACKAGINGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathan, S.
The Small Gram Quantity (SGQ) concept is based on the understanding that small amounts of hazardous materials, in this case radioactive materials, are significantly less hazardous than large amounts of the same materials. This study describes a methodology designed to estimate an SGQ for several neutron- and gamma-emitting isotopes that can be shipped in a package compliant with the 10 CFR Part 71 external radiation level limits. These regulations require that packaging for the shipment of radioactive materials perform, under both normal and accident conditions, the essential functions of material containment, subcriticality, and maintenance of external radiation levels within regulatory limits. 10 CFR 71.33(b)(1), (2), and (3) state that radioactive and fissile materials must be identified and that their maximum quantity and chemical and physical forms be included in an application. Furthermore, the U.S. Federal Regulations require that the application contain an evaluation demonstrating that the package (i.e., the packaging and its contents) satisfies the external radiation standards for all packages (10 CFR 71.31(2), 71.35(a), and 71.47). By placing the contents in a He leak-tight containment vessel and limiting the mass to ensure subcriticality, the first two essential functions are readily met. Some isotopes emit sufficiently strong photon radiation that small amounts of material can yield a large external dose rate. Quantifying the dose rate for a proposed content is a challenging issue for the SGQ approach. It is essential to quantify the external radiation levels from several common gamma and neutron sources that can be safely placed in a specific packaging, to ensure compliance with federal regulations. The Packaging Certification Program (PCP) Methodology for Determining Dose Rate for Small Gram Quantities in Shipping Packagings described in this report provides bounding mass limits for a set of proposed SGQ isotopes. Methodology calculations were performed to estimate external radiation levels for the 9977 shipping package using the MCNP radiation transport code to develop a set of response multipliers (Green's functions) for 'dose per particle' for each neutron and photon spectral group. The source spectrum for each isotope, generated using the ORIGEN-S and RASTA computer codes, was folded with the response multipliers to generate the dose rate per gram of each isotope in the 9977 shipping package and its associated shielded containers. The maximum amount of a single isotope that could be shipped within the regulatory limits contained in 10 CFR 71.47 for dose rate at the surface of the package is determined. If a package contains a mixture of isotopes, the acceptability for shipment can be determined by a sum-of-fractions approach. Furthermore, the results of this analysis can be easily extended to additional radioisotopes by simply evaluating the neutron and/or photon spectra of those isotopes and folding the spectral data with the Green's functions provided.
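The folding and sum-of-fractions logic described above can be sketched as follows. The group structure, response multipliers, per-gram source spectra, and the surface dose-rate limit used here are illustrative placeholders, not values from the report.

```python
import numpy as np

# Hypothetical response multipliers ("Green's functions"): dose rate at the
# package surface per emitted particle, one entry per spectral group.
R_neutron = np.array([2.0e-10, 3.5e-10, 5.0e-10])   # mSv/h per (n/s), assumed
R_photon  = np.array([1.0e-11, 4.0e-11, 9.0e-11])   # mSv/h per (gamma/s), assumed

def dose_rate_per_gram(neutron_spec, photon_spec):
    """Fold per-gram emission spectra (particles/s/g, per group) with the
    response multipliers to get mSv/h per gram."""
    return float(neutron_spec @ R_neutron + photon_spec @ R_photon)

# Hypothetical per-gram spectra for two isotopes
isotopes = {
    "IsotopeA": (np.array([1e6, 5e5, 1e5]), np.array([1e8, 2e7, 5e6])),
    "IsotopeB": (np.array([0.0, 0.0, 0.0]), np.array([5e9, 1e9, 1e8])),
}

SURFACE_LIMIT = 2.0  # mSv/h surface dose-rate limit used for illustration

# Bounding single-isotope mass limits
limits = {name: SURFACE_LIMIT / dose_rate_per_gram(n, g)
          for name, (n, g) in isotopes.items()}
print({k: f"{v:.3g} g" for k, v in limits.items()})

# Sum-of-fractions check for a mixture (grams of each isotope)
mixture = {"IsotopeA": 0.5, "IsotopeB": 1.0}
fraction_sum = sum(mass / limits[name] for name, mass in mixture.items())
print("Mixture acceptable:", fraction_sum <= 1.0)
```

The same pattern extends to additional isotopes by supplying their neutron and photon emission spectra and reusing the response multipliers.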
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwantes, Jon M.
Kelly Fitzgerald assisted with laboratory testing for an ongoing R&D project known as Electrochemically Modulated Separation (EMS) for on-line rapid preseparations of actinides prior to mass spectrometry analysis. Ryne Burgess used SCALE 5.1 ORIGEN-ARP to predict isotope libraries for the Unit 1, 2, and 3 reactors and the Unit 4 spent fuel pool, for comparison against measurements of environmental samples collected at the site in order to identify the source terms of the accident. Comparison of the cesium 134/137 and cesium 136/137 ratios observed in environmental samples with the ORIGEN-ARP predictions indicated that the Unit 4 spent fuel pool did not significantly contribute to radionuclide release during the Fukushima Daiichi accident.
Analysis of radiation safety for Small Modular Reactor (SMR) on PWR-100 MWe type
NASA Astrophysics Data System (ADS)
Udiyani, P. M.; Husnayani, I.; Deswandri; Sunaryo, G. R.
2018-02-01
Indonesia, an archipelago of large, medium, and small islands, is well suited to the construction of small modular reactors (SMRs). A preliminary technology assessment of various SMRs has been started; SMRs can be grouped by technology into light water reactors, gas-cooled reactors, and solid-cooled reactors, and by siting into land-based and water-based reactors. The Fukushima accident raised doubts about the safety of nuclear power plants (NPPs) and affected public perception of their safety. This paper describes the assessment of safety and on-site radiation consequences for normal operation and a Design Basis Accident postulation of an SMR based on a PWR-100 MWe on Bangka Island. Radiation consequences for normal operation were simulated for three SMR units. The source term was generated from an inventory calculated with the ORIGEN-2 software, and the consequences of routine releases were calculated with PC-CREAM and those of accidents with PC Cosyma. The methodology was based on site-specific meteorological and spatial data. According to the PC-CREAM 08 calculation, the highest individual dose for adults in the site area is 5.34E-02 mSv/y, in the ESE direction at 1 km from the stack. The calculated public doses for normal operation are below 1 mSv/y. According to the PC Cosyma calculation, the highest individual dose is 1.92E+00 mSv, in the ESE direction at 1 km from the stack. The total collective dose (all pathways) is 3.39E-01 manSv, dominated by the cloud pathway. The results show that no evacuation countermeasures would be required under the emergency regulations.
Disconnection Events in the Plasma Tail of Comet P/Halley
NASA Astrophysics Data System (ADS)
Voelzke, M. R.; Fahr, H. J.
2001-08-01
Cometary and solar wind observations are compared in order to determine the solar wind conditions associated with the disconnection events (DEs) observed in cometary plasma tails. The cometary data come from The International Halley Watch Atlas of Large-Scale Phenomena. A systematic visual analysis of the atlas images revealed, among other morphological structures, 47 DEs along the plasma tail of P/Halley. These 47 DEs, recorded in 47 distinct images, allowed the identification of 19 DE onsets, i.e., the times at which the disconnections began were calculated. The solar wind data come from in situ measurements by the IMP-8 spacecraft, which were used to derive the variation of the solar wind speed, density, and dynamic pressure during the analyzed interval. The present work compares the current competing theories, based on the proposed formation mechanisms, that attempt to explain the cyclic phenomenon of DEs: ion production effects, pressure effects, and magnetic reconnection effects are analyzed. For each of the 19 DE onsets, the density was compared with the corresponding solar wind speed in order to determine a possible correlation between these onsets and dynamic pressure effects. For 6 of the DE onsets IMP-8 made no measurements; in the other 13 cases, 10 onsets (77%) showed an anticorrelation between speed and density, and only 3 (23%) showed a similar trend in speed and density. Therefore, the initial analysis demonstrates a weak correlation between DE onsets and pressure effects.
Isotalo, Aarno E.; Wieselquist, William A.
2015-05-15
A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster because it does not require the same substepping as the original one.
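As a point of reference for what such a solver computes, the sketch below solves the depletion balance dN/dt = AN + f for a constant external feed with a plain dense matrix exponential (the augmented-matrix form). ORIGEN's solver uses CRAM on the full sparse transition matrix and, in the cited work, feed with polynomial time dependence; the three-nuclide chain and rates here are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-nuclide chain: 1 -> 2 -> 3, with decay constants [1/s] (invented)
lam = np.array([1e-4, 5e-5, 0.0])
A = np.array([[-lam[0],     0.0,  0.0],
              [ lam[0], -lam[1],  0.0],
              [    0.0,  lam[1],  0.0]])

N0 = np.array([1.0e20, 0.0, 0.0])   # initial atoms
f  = np.array([1.0e14, 0.0, 0.0])   # constant feed rate [atoms/s] of nuclide 1
t  = 3600.0 * 24                    # one day

# Constant feed can be folded into an augmented matrix:
#   d/dt [N; 1] = [[A, f], [0, 0]] [N; 1]
M = np.zeros((4, 4))
M[:3, :3] = A
M[:3,  3] = f
N = (expm(M * t) @ np.append(N0, 1.0))[:3]
print("End-of-step inventories:", N)
```

CRAM replaces the dense exponential with a low-order rational approximation that stays accurate for the extremely stiff matrices arising from full nuclide chains, which is why it can avoid the substepping mentioned above.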
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.
SCALE's general depletion, activation, and spent fuel source terms analysis capabilities are enabled through a family of modules related to the main ORIGEN depletion/irradiation/decay solver. The nuclide tracking in ORIGEN is based on the principle of explicitly modeling all available nuclides and transitions in the current fundamental nuclear data for decay and neutron-induced transmutation and relies on fundamental cross section and decay data in ENDF/B-VII. Cross section data for materials and reaction processes not available in ENDF/B-VII are obtained from the JEFF-3.0/A special purpose European activation library containing 774 materials and 23 reaction channels with 12,617 neutron-induced reactions below 20 MeV. Resonance cross section corrections in the resolved and unresolved range are performed using a continuous-energy treatment by data modules in SCALE. All nuclear decay data, fission product yields, and gamma-ray emission data are developed from ENDF/B-VII.1 evaluations. Decay data include all ground and metastable state nuclides with half-lives greater than 1 millisecond. Using these data sources, ORIGEN currently tracks 174 actinides, 1149 fission products, and 974 activation products. The purpose of this chapter is to describe the stand-alone capabilities and underlying methodology of ORIGEN, as opposed to the integrated depletion capability it provides in all coupled neutron transport/depletion sequences in SCALE, as described in other chapters.
Integrated Data Base Program: a status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Notz, K.J.; Klein, J.A.
1984-06-01
The Integrated Data Base (IDB) Program provides official Department of Energy (DOE) data on spent fuel and radioactive waste inventories, projections, and characteristics. The accomplishments of FY 1983 are summarized for three broad areas: (1) upgrading and issuing the annual report on spent fuel and radioactive waste inventories, projections, and characteristics, including ORIGEN2 applications and a quality assurance plan; (2) creation of a summary data file in user-friendly format for use on a personal computer and enhancement of user access to program data; and (3) optimization and documentation of the data handling methodology used by the IDB Program and provision of direct support to other DOE programs and sites in data handling. Plans for future work in these three areas are outlined. 23 references, 11 figures.
Cai, Yao; Hu, Huasi; Lu, Shuangying; Jia, Qinggang
2018-05-01
To minimize the size and weight of a vehicle-mounted accelerator-driven D-T neutron source and to protect workers from unnecessary irradiation after equipment shutdown, a method was developed to optimize radiation shielding materials for compactness, light weight, and low activation under fast neutrons. The method employs a genetic algorithm combining the MCNP and ORIGEN codes. A series of composite shielding material samples was obtained by the method step by step. The volume and weight needed to build a shield (assumed to be a coaxial tapered cylinder) were used to compare the performance of the materials visually and conveniently. The results showed that the optimized materials have excellent performance in comparison with conventional materials. The "MCNP6-ACT" method and the "rigorous two steps" (R2S) method were used to verify the activation grade of the shield irradiated by D-T neutrons. The types of radionuclides, the energy spectra of the corresponding decay gamma sources, and the variation in decay gamma dose rate were also computed. Copyright © 2018 Elsevier Ltd. All rights reserved.
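A minimal sketch of the optimization loop described above is shown below. The fitness function here is a made-up surrogate (a penalty combining a fast-neutron transmission proxy, mass, and an activation proxy); in the actual method each candidate mixture would be scored by coupled MCNP transport and ORIGEN activation calculations.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MAT, POP, GENS = 4, 30, 40   # 4 candidate constituents (illustrative), population, generations

def surrogate_fitness(x):
    """Placeholder objective standing in for coupled MCNP/ORIGEN scoring.
    x are volume fractions of the constituents (sum to 1); lower is better."""
    attenuation = np.array([0.9, 0.7, 0.5, 0.6])    # fast-neutron transmission proxies, invented
    density     = np.array([0.95, 2.5, 19.3, 7.9])  # g/cm^3
    activation  = np.array([0.01, 0.05, 0.8, 0.4])  # activation proxy, invented
    return np.prod(attenuation ** x) * 10.0 + x @ density * 0.1 + x @ activation

def normalize(pop):
    return pop / pop.sum(axis=1, keepdims=True)

pop = normalize(rng.random((POP, N_MAT)))
for _ in range(GENS):
    fit = np.array([surrogate_fitness(x) for x in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]                  # truncation selection
    alpha = rng.random((POP // 2, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
    children += rng.normal(0.0, 0.02, children.shape)          # mutation
    pop = normalize(np.clip(np.vstack([parents, children]), 1e-6, None))

best = pop[np.argmin([surrogate_fitness(x) for x in pop])]
print("Best volume fractions (illustrative):", np.round(best, 3))
```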
Hardening neutron spectrum for advanced actinide transmutation experiments in the ATR.
Chang, G S; Ambrosek, R G
2005-01-01
The most effective method for transmuting long-lived isotopes contained in spent nuclear fuel into shorter-lived fission products is in a fast neutron spectrum reactor. In the absence of a fast test reactor in the United States, initial irradiation testing of candidate fuels can be performed in a thermal test reactor that has been modified to produce a test region with a hardened neutron spectrum. Such a test facility, with a spectrum similar to but somewhat softer than that of the liquid-metal fast breeder reactor (LMFBR), has been constructed in the INEEL's Advanced Test Reactor (ATR). The radial fission power distribution of the actinide fuel pin, an important parameter in fission gas release modelling, needs to be accurately predicted, and the hardened neutron spectrum in the ATR is compared with the LMFBR fast neutron spectrum. The comparison analyses in this study are performed using MCWO, a well-developed tool that couples the Monte Carlo transport code MCNP with the isotope depletion and build-up code ORIGEN-2. MCWO analysis yields time-dependent and neutron-spectrum-dependent minor actinide and Pu concentrations and detailed radial fission power profile calculations for a typical fast reactor (LMFBR) neutron spectrum and for the hardened neutron spectrum test region in the ATR. The MCWO-calculated results indicate that the cadmium basket used in the advanced fuel test assembly in the ATR can effectively depress the linear heat generation rate in the experimental fuels and harden the neutron spectrum in the test region.
Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix
Pigni, M. T.; Croft, S.; Gauld, I. C.
2016-04-25
Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties in 1) the 17,18O(α,n) reaction cross sections and 2) the uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix that is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information for the evaluated 17,18O(α,n) reaction cross sections, as well as the uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
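The perturbation-and-propagation step can be sketched as follows: sample correlated cross-section perturbation factors from a covariance matrix, recompute a thick-target (α,n) yield for each sample, and report the mean and standard deviation. The group structure, nominal cross sections, stopping powers, and covariance below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Coarse alpha-energy grid [MeV] and invented nominal data
E        = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sigma_an = np.array([0.0, 0.5, 2.0, 6.0, 12.0]) * 1e-27   # (alpha,n) cross section [cm^2], invented
stop_pow = np.array([400., 300., 240., 200., 180.])       # stopping power [MeV cm^2/g], invented
n_target = 1.0e21                                         # target atoms per gram, invented

def thick_target_yield(sig):
    """Neutrons per alpha for a thick target: integral of N*sigma(E)/(dE/dx) dE."""
    return np.trapz(n_target * sig / stop_pow, E)

# Invented relative covariance: 10% uncertainty with strong group-to-group correlation
rel_unc = 0.10
cov = rel_unc**2 * (0.8 + 0.2 * np.eye(len(E)))

samples = []
for _ in range(2000):
    pert = rng.multivariate_normal(np.ones(len(E)), cov)   # perturbation factors
    samples.append(thick_target_yield(sigma_an * pert))
samples = np.array(samples)
print(f"yield = {samples.mean():.3e} +/- {samples.std():.3e} n/alpha (illustrative)")
```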
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... securities and assumptions of liability. Any person desiring to intervene or to protest should file with the... with Internet access who will eFile a document and/or be listed as a contact for an intervenor must create and validate an eRegistration account using the eRegistration link. Select the eFiling link to log...
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Characterization of Filters Loaded With Reactor Strontium Carbonate - 13203
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Walter S.; Steen, Franciska H.
A collection of three highly radioactive filters containing reactor strontium carbonate was being prepared for disposal. All three filters were approximately characterized at the time of manufacture by gravimetric methods. The first filter had been partially emptied, and the quantity of residual activity was uncertain. Dose rate to activity modeling using the Monte Carlo N-Particle (MCNP) code was selected to confirm the gravimetric characterization of the full filters, and to fully characterize the partially emptied filter. Although dose rate to activity modeling using MCNP is a common technique, it is not often used for Bremsstrahlung-dominant materials such as reactor strontium. As a result, different MCNP modeling options were compared to determine the optimum approach. This comparison indicated that the accuracy of the results was heavily dependent on the MCNP modeling details and the location of the dose rate measurement point. The optimum model utilized a photon spectrum generated by the Oak Ridge Isotope Generation and Depletion (ORIGEN) code and dose rates measured at 30 cm. Results from the optimum model agreed with the gravimetric estimates within 15%. It was demonstrated that dose rate to activity modeling can be successful for Bremsstrahlung-dominant radioactive materials. However, the degree of success is heavily dependent on the choice of modeling techniques. (authors)
The Air Force Nuclear Engineering Center Structural Activation and Integrity Evaluation
1990-03-01
[Front-matter excerpt: the report's list of figures includes views inside the Piqua Nuclear Power Facility containment building on top of the entombed reactor core, the predicted activity percentage of individual materials in the AFNEC, and the predicted radioisotope activity percentage of the total radioisotopic inventory within the entombment at 20 years after shutdown; the list of tables begins with an ORIGEN2 table.]
Konrad Adenauer’s Military Advisors
1989-02-13
Edition. Hans-Peter Schwarz and Rudolf Morsey, eds. Vol. 1, Briefe 1945-1947, ed. Hans Peter Mensing. Berlin: Siedler Verlag, 1983. Vol. 2, Briefe 1949... Dietrich, Rudolf Morsey and Hans-Peter Schwarz, eds. Quellen zur Geschichte des Parlamentarismus und der politischen Parteien. Bd. 3, Auftakt zur Ära... New York: Penguin, 1982. Steiner, Jürg. European Democracies. New York: Longman, 1986. Taylor, A.J.P. The Origens of the Second World War. 2d ed. New
Imoto, Junpei; Ochiai, Asumi; Furuki, Genki; Suetake, Mizuki; Ikehara, Ryohei; Horie, Kenji; Takehara, Mami; Yamasaki, Shinya; Nanba, Kenji; Ohnuki, Toshihiko; Law, Gareth T W; Grambow, Bernd; Ewing, Rodney C; Utsunomiya, Satoshi
2017-07-14
Highly radioactive cesium-rich microparticles (CsMPs) released from the Fukushima Daiichi Nuclear Power Plant (FDNPP) provide nano-scale chemical fingerprints of the 2011 tragedy. U, Cs, Ba, Rb, K, and Ca isotopic ratios were determined on three CsMPs (3.79-780 Bq) collected within ~10 km of the FDNPP to determine the CsMPs' origin and mechanism of formation. Apart from crystalline Fe-pollucite, CsFeSi2O6·nH2O, CsMPs are comprised mainly of Zn-Fe-oxide nanoparticles in a SiO2 glass matrix (up to ~30 wt% of Cs and ~1 wt% of U, mainly associated with the Zn-Fe-oxide). The 235U/238U values in two CsMPs, 0.030 (±0.005) and 0.029 (±0.003), are consistent with that of enriched nuclear fuel. The values are higher than the average burnup estimated by the ORIGEN code and lower than non-irradiated fuel, suggesting non-uniform volatilization of U from melted fuels with different levels of burnup, followed by sorption onto Zn-Fe-oxides. The nano-scale texture and isotopic analyses provide a partial record of the chemical reactions that occurred in the fuel during meltdown. Also, the CsMPs were an important medium of transport for the released radionuclides in a respirable form.
Methodology for worker neutron exposure evaluation in the PDCF facility design.
Scherpelz, R I; Traub, R J; Pryor, K H
2004-01-01
A project headed by Washington Group International is designing the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excess nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y(-1) for the whole body and 100 mSv y(-1) for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron-photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions, and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.
In-Field Performance Testing of the Fork Detector for Quantitative Spent Fuel Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauld, Ian C.; Hu, Jianwei; De Baere, P.
Expanding spent fuel dry storage activities worldwide are increasing demands on safeguards authorities that perform inspections. The European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) require measurements to verify declarations when spent fuel is transferred to difficult-to-access locations, such as dry storage casks and the repositories planned in Finland and Sweden. EURATOM makes routine use of the Fork detector to obtain gross gamma and total neutron measurements during spent fuel inspections. Data analysis is performed by modules in the integrated Review and Analysis Program (iRAP) software, developed jointly by EURATOM and the IAEA. Under the framework of the US Department of Energy–EURATOM cooperation agreement, a module for automated Fork detector data analysis has been developed by Oak Ridge National Laboratory (ORNL) using the ORIGEN code from the SCALE code system and implemented in iRAP. EURATOM and ORNL recently performed measurements on 30 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel (Clab), operated by the Swedish Nuclear Fuel and Waste Management Company (SKB). The measured assemblies represent a broad range of fuel characteristics. Neutron count rates for 15 measured pressurized water reactor assemblies are predicted with an average relative standard deviation of 4.6%, and gamma signals are predicted on average within 2.6% of the measurement. The 15 measured boiling water reactor assemblies exhibit slightly larger deviations of 5.2% for the gamma signals and 5.7% for the neutron count rates, compared to measurements. These findings suggest that with improved analysis of the measurement data, existing instruments can provide increased verification of operator declarations of the spent fuel and thereby also provide greater ability to confirm integrity of an assembly. These results support the application of the Fork detector as a fully quantitative spent fuel verification technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dionne, B.; Tzanos, C. P.
To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TCs) embedded in the cladding as well as probes to measure the FA power on the basis of the coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. In order to support the HEU to LEU conversion safety analyses of the BR2 reactor, RELAP simulations of a number of loss-of-flow/loss-of-pressure tests have been undertaken. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. Therefore, it was concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and to establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can be easily extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed a much better agreement when power distributions obtained with the new methodology are used.
U.S. Commercial Spent Nuclear Fuel Assembly Characteristics - 1968-2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Jianwei; Peterson, Joshua L.; Gauld, Ian C.
2016-09-01
Activities related to management of spent nuclear fuel (SNF) are increasing in the US and many other countries. Over 240,000 SNF assemblies have been discharged from US commercial reactors since the late 1960s. The enrichment and burnup of SNF have changed significantly over the past 40 years, and fuel assembly designs have also evolved. Understanding the general characteristics of SNF helps regulators and other stakeholders form overall strategies towards the final disposal of US SNF. This report documents a survey of all US commercial SNF assemblies in the GC-859 database and provides reference SNF source terms (e.g., nuclide inventories, decay heat, and neutron/photon emission) at various cooling times up to 200 years after fuel discharge. This study reviews the distribution and evolution of fuel parameters of all SNF assemblies discharged over the past 40 years. Assemblies were categorized into three groups based on discharge year, and the median burnups and enrichments of each group were used to establish representative cases. An extended burnup case was created for boiling water reactor (BWR) fuels, and another was created for pressurized water reactor (PWR) fuels. Two additional cases were developed to represent the eight mixed oxide (MOX) fuel assemblies in the database. Burnup calculations were performed for each representative case. Realistic parameters for fuel design and operations were used to model the SNF and to provide reference fuel characteristics representative of the current inventory. Burnup calculations were performed using the ORIGEN code, which is part of the SCALE nuclear modeling and simulation code system. Results include total activity, decay heat, photon emission, neutron flux, gamma heat, and plutonium content, as well as concentrations for 115 significant nuclides. These quantities are important in the design, regulation, and operations of SNF storage, transportation, and disposal systems.
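The grouping step described above can be sketched with pandas: bin assemblies by discharge year, then take the median burnup and enrichment in each bin to define representative depletion cases. The column names and the three discharge-era bins are assumptions for illustration, not the GC-859 schema.

```python
import pandas as pd

# Hypothetical slice of a GC-859-like assembly table (column names assumed)
df = pd.DataFrame({
    "assembly_id": ["A1", "A2", "A3", "A4", "A5", "A6"],
    "reactor_type": ["PWR", "PWR", "BWR", "BWR", "PWR", "BWR"],
    "discharge_year": [1978, 1992, 1988, 2005, 2011, 1975],
    "burnup_GWd_per_MTU": [28.0, 38.5, 31.0, 45.2, 52.3, 22.4],
    "enrichment_wt_pct": [2.6, 3.4, 2.9, 3.9, 4.4, 2.2],
})

# Three discharge-era groups (era boundaries assumed)
df["era"] = pd.cut(df["discharge_year"],
                   bins=[1967, 1985, 2000, 2014],
                   labels=["1968-1985", "1986-2000", "2001-2013"])

representative = (df.groupby(["reactor_type", "era"], observed=True)
                    [["burnup_GWd_per_MTU", "enrichment_wt_pct"]]
                    .median()
                    .dropna())
print(representative)  # the medians define the representative ORIGEN cases
```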
Status of a standard for neutron skyshine calculation and measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Westfall, R.M.; Wright, R.Q.; Greenborg, J.
1990-01-01
An effort has been under way for several years to prepare a draft standard, ANS-6.6.2, Calculation and Measurement of Direct and Scattered Neutron Radiation from Contained Sources Due to Nuclear Power Operations. At the outset, the work group adopted a three-phase study involving one-dimensional analyses, a measurements program, and multi-dimensional analyses. Of particular interest are the neutron radiation levels associated with dry-fuel storage at reactor sites. The need for dry storage has been investigated for various scenarios of repository and monitored retrievable storage (MRS) facility availability with the waste stream analysis model. The concern is with long-term integrated, low-level doses at long distances from a multiplicity of sources. To evaluate the conservatism associated with one-dimensional analyses, the work group has specified a series of simple problems. Sources as a function of fuel exposure were determined for a Westinghouse 17 x 17 pressurized water reactor assembly with the ORIGEN-S module of the SCALE system. The energy degradation of the 35 GWd/ton U sources was determined for two generic designs of dry-fuel storage casks.
Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne
2017-05-01
Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.
Chen, Chia-Yen; Lee, Phil H; Castro, Victor M; Minnier, Jessica; Charney, Alexander W; Stahl, Eli A; Ruderfer, Douglas M; Murphy, Shawn N; Gainer, Vivian; Cai, Tianxi; Jones, Ian; Pato, Carlos N; Pato, Michele T; Landén, Mikael; Sklar, Pamela; Perlis, Roy H; Smoller, Jordan W
2018-04-18
Bipolar disorder (BD) is a heritable mood disorder characterized by episodes of mania and depression. Although genomewide association studies (GWAS) have successfully identified genetic loci contributing to BD risk, sample size has become a rate-limiting obstacle to genetic discovery. Electronic health records (EHRs) represent a vast but relatively untapped resource for high-throughput phenotyping. As part of the International Cohort Collection for Bipolar Disorder (ICCBD), we previously validated automated EHR-based phenotyping algorithms for BD against in-person diagnostic interviews (Castro et al. Am J Psychiatry 172:363-372, 2015). Here, we establish the genetic validity of these phenotypes by determining their genetic correlation with traditionally ascertained samples. Case and control algorithms were derived from structured and narrative text in the Partners Healthcare system comprising more than 4.6 million patients over 20 years. Genomewide genotype data for 3330 BD cases and 3952 controls of European ancestry were used to estimate SNP-based heritability (h²g) and genetic correlation (rg) between EHR-based phenotype definitions and traditionally ascertained BD cases in GWAS by the ICCBD and Psychiatric Genomics Consortium (PGC) using LD score regression. We evaluated BD cases identified using 4 EHR-based algorithms: an NLP-based algorithm (95-NLP) and three rule-based algorithms using codified EHR with decreasing levels of stringency: "coded-strict", "coded-broad", and "coded-broad based on a single clinical encounter" (coded-broad-SV). The analytic sample comprised 862 95-NLP, 1968 coded-strict, 2581 coded-broad, 408 coded-broad-SV BD cases, and 3952 controls. The estimated h²g were 0.24 (p = 0.015), 0.09 (p = 0.064), 0.13 (p = 0.003), and 0.00 (p = 0.591) for 95-NLP, coded-strict, coded-broad and coded-broad-SV BD, respectively. The h²g for all EHR-based cases combined except coded-broad-SV (excluded due to 0 h²g) was 0.12 (p = 0.004). These h²g were lower than or similar to the h²g observed by the ICCBD + PGCBD (0.23, p = 3.17E-80, total N = 33,181). However, the rg between ICCBD + PGCBD and the EHR-based cases were high for 95-NLP (0.66, p = 3.69 × 10⁻⁵), coded-strict (1.00, p = 2.40 × 10⁻⁴), and coded-broad (0.74, p = 8.11 × 10⁻⁷). The rg between EHR-based BD definitions ranged from 0.90 to 0.98. These results provide the first genetic validation of automated EHR-based phenotyping for BD and suggest that this approach identifies cases that are highly genetically correlated with those ascertained through conventional methods. High throughput phenotyping using the large data resources available in EHRs represents a viable method for accelerating psychiatric genetic research.
ERIC Educational Resources Information Center
Women's Bureau (DOL), Washington, DC.
Data on Hispanic women in the labor force between 1978 and 1988 show the following: (1) 6.5 percent of the women in the work force in 1988 were of Hispanic origin (3.6 million); (2) the median age of Hispanic women was 26.1 years, 2-5 years younger than Black or White women; (3) 66 percent of Hispanic women participate in the labor force, a higher…
NASA Astrophysics Data System (ADS)
Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria
2017-07-01
In this paper, a new family of two-dimensional spectral/spatial codes, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for both the spectral and the spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as the 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. With this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains increasingly significant coding gain together with higher throughput.
Genetic Programming-based Phononic Bandgap Structure Design
2011-09-01
A limitation of derivative-based methods is that they require a good starting location to find the global minimum of a function. As can be seen from figure 2, there are many...
Formal Verification of Digital Logic
1991-12-01
The INVERT circuit was based upon VHDL code provided in the Zycad Reference Manual [32:Ch 10,73]. The other circuits were based upon VHDL code written... HALFADD.PL: /* This file implements a simple half-adder that is built from inverters and 2-input nand gates. It is based upon a Zycad VHDL file written by Capt Dave Banton, which is attached below the Prolog code. */ load..in(primitive). %h get nor2
Network Coding in Relay-based Device-to-Device Communications
Huang, Jun; Gharavi, Hamid; Yan, Huifang; Xing, Cong-cong
2018-01-01
Device-to-Device (D2D) communications has been recognized as an effective means to improve network throughput, reduce transmission latency, and extend cellular coverage in 5G systems. Network coding is a well-established technique known for its capability to reduce the number of retransmissions. In this article, we review state-of-the-art network coding in relay-based D2D communications, in terms of application scenarios and network coding techniques. We then apply two representative network coding techniques to dual-hop D2D communications and present an efficient relay node selecting mechanism as a case study. We also outline potential future research directions, according to the current research challenges. Our intention is to provide researchers and practitioners with a comprehensive overview of the current research status in this area and hope that this article may motivate more researchers to participate in developing network coding techniques for different relay-based D2D communications scenarios. PMID:29503504
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
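For readability, the two divisibility conditions referred to above can be written out explicitly; the notation below is chosen here and is not taken verbatim from the paper.

```latex
% Ordinary divisibility: every codeword w of the binary code C has weight
% divisible by 2^nu. Generalized divisibility: for a fixed coefficient
% vector c with all entries odd, every codeword has zero dot product with c
% modulo the divisor.
\mathrm{wt}(w) \equiv 0 \pmod{2^{\nu}} \quad \forall\, w \in C,
\qquad\text{generalized to}\qquad
\sum_{i} c_i\, w_i \equiv 0 \pmod{2^{\nu}} \quad \forall\, w \in C,\ c_i \ \text{odd}.
```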
NASA Astrophysics Data System (ADS)
Miyake, Yasuto; Matsuzaki, Hiroyuki; Sasa, Kimikazu; Takahashi, Tsutomu
2015-10-01
In March 2011, vast amounts of radionuclides were released into the environment due to the Fukushima Daiichi Nuclear Power Plant (F1NPP) accident. However, very little work has been done concerning accident-derived long-lived nuclides such as 129I (T1/2 = 1.57 × 10⁷ yr) and 36Cl (T1/2 = 3.01 × 10⁵ yr). 129I and 131I are both produced by 235U fission in nuclear reactors. Being isotopes of iodine, these nuclides are expected to behave similarly in the environment. This makes 129I useful for retrospective reconstruction of the 131I distribution during the initial stages of the accident. On the other hand, 36Cl is generated during reactor operation via the neutron capture reaction of 35Cl, an impurity in the coolant or reactor components. The resulting 36Cl/Cl ratio within the reactor is thus much higher than that in the environment. Similar to 129I, 36Cl is expected to have leaked out during the accident, and it is important to evaluate its effects. In this study, 129I concentrations were determined in several surface soil samples collected around F1NPP. The average 129I/131I ratio was estimated to be 26.1 ± 5.8 as of March 11, 2011, consistent with calculations using the ORIGEN2 code and other published data. 36Cl/Cl ratios in some of the soil samples were likewise measured and ranged from 1.1 × 10⁻¹² to 2.6 × 10⁻¹¹. These are higher than the ratios measured around F1NPP before the accident. A positive correlation between the 36Cl and 129I concentrations was observed.
Theoretical study of CO2: valence and "core" orbitals
NASA Astrophysics Data System (ADS)
Olalla Gutiérrez, E.
We have calculated the intensities of the E1 transitions to members of the Rydberg series originating in the "non-bonding" orbitals of carbon dioxide, a species of well-known atmospheric relevance. We have also computed the photoionization continua corresponding to the different ionization channels, representing them as the spectral density of oscillator strength versus incident photon energy; we show the df/dE results for the total photoionization of this species in the 15-60 eV range. All calculations were carried out with the Molecular formulation of the Quantum Defect Orbital Method, MQDO [1,2]. The quality of the results presented has been assessed by comparison with the experimental and theoretical data available in the literature. The agreement found is highly satisfactory.
Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom
ERIC Educational Resources Information Center
Lin, Zheng
2012-01-01
This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…
Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study
ERIC Educational Resources Information Center
Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio
2014-01-01
EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
To meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, which ensures that the obtained code has a good distance property. Simulation results show that at a bit error rate (BER) of 10⁻⁶, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
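The quasi-cyclic structure underlying such codes can be sketched as follows: a small exponent (base) matrix is expanded by replacing each entry with a cyclically shifted identity block, and each -1 entry with a zero block. The exponent matrix below is arbitrary; in the cited construction the shift values would be derived from the multiplicative group of a finite field so that length-4 cycles are avoided.

```python
import numpy as np

def expand_qc_ldpc(base, z):
    """Expand an exponent matrix into a binary parity-check matrix H.
    base[i, j] = s in 0..z-1 -> z x z identity cyclically shifted by s columns;
    base[i, j] = -1          -> z x z all-zero block."""
    rows, cols = base.shape
    H = np.zeros((rows * z, cols * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            s = base[i, j]
            if s >= 0:
                H[i * z:(i + 1) * z, j * z:(j + 1) * z] = np.roll(I, s, axis=1)
    return H

# Arbitrary small exponent matrix (not the construction of the paper)
base = np.array([[ 0,  1,  2, -1],
                 [ 2,  0, -1,  1],
                 [-1,  2,  1,  0]])
H = expand_qc_ldpc(base, z=5)
print(H.shape, "row weights:", H.sum(axis=1))   # (15, 20); each row has weight 3
```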
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding perform well, with a BER below 10⁻⁶ in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
REACTOR PHYSICS MODELING OF SPENT RESEARCH REACTOR FUEL FOR TECHNICAL NUCLEAR FORENSICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, T.; Beals, D.; Sternat, M.
2011-07-18
Technical nuclear forensics (TNF) refers to the collection, analysis and evaluation of pre- and post-detonation radiological or nuclear materials, devices, and/or debris. TNF is an integral component, complementing traditional forensics and investigative work, to help enable the attribution of discovered radiological or nuclear material. Research is needed to improve the capabilities of TNF. One research area of interest is determining the isotopic signatures of research reactors. Research reactors are a potential source of both radiological and nuclear material. Research reactors are often the least safeguarded type of reactor; they vary greatly in size, fuel type, enrichment, power, and burn-up. Many research reactors are fueled with highly-enriched uranium (HEU), up to {approx}93% {sup 235}U, which could potentially be used as weapons material. All of them have significant amounts of radiological material with which a radioactive dispersal device (RDD) could be built. Therefore, the ability to determine whether material originated from or was produced in a specific research reactor is an important tool in providing for the security of the United States. Currently there are approximately 237 operating research reactors worldwide, another 12 are in temporary shutdown, and 224 research reactors are reported as shut down. Little is currently known about the isotopic signatures of spent research reactor fuel. An effort is underway at Savannah River National Laboratory (SRNL) to analyze spent research reactor fuel to determine these signatures. Computer models, using reactor physics codes, are being compared to the measured analytes in the spent fuel. This allows for improving the reactor physics codes in modeling research reactors for the purpose of nuclear forensics. Currently the Oak Ridge Research Reactor (ORR) is being modeled and fuel samples are being analyzed for comparison. Samples of an ORR spent fuel assembly were taken by SRNL for analytical and radiochemical analysis. The fuel assembly was modeled using MONTEBURNS (MCNP5/ORIGEN2.2) and MCNPX/CINDER90. The results from the models have been compared to each other and to the measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.
In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called ''Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres'' (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the ''Robust design of artificial neural networks methodology'' and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of {sup 252}Cf, {sup 241}AmBe and {sup 239}PuBe neutron sources measured with a Bonner spheres system were used.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work a neutron spectrum unfolding code based on artificial intelligence technology is presented. The code, called "Neutron Spectrometry and Dosimetry with Artificial Neural Networks and two Bonner spheres" (NSDann2BS), was designed in a graphical user interface under the LabVIEW programming environment. The main features of this code are the use of an embedded artificial neural network architecture optimized with the "Robust design of artificial neural networks methodology" and the use of two Bonner spheres as the only piece of information. In order to build the code presented here, once the net topology was optimized and properly trained, the knowledge stored in the synaptic weights was extracted and, using a graphical framework built on the LabVIEW programming environment, the NSDann2BS code was designed. This code is friendly, intuitive and easy to use for the end user. The code is freely available upon request to the authors. To demonstrate the use of the neural net embedded in the NSDann2BS code, the count rates of 252Cf, 241AmBe and 239PuBe neutron sources measured with a Bonner spheres system were used.
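To illustrate the kind of mapping such an embedded network performs (two Bonner sphere count rates in, a coarse unfolded spectrum out), here is a toy forward pass. The layer sizes, random weights, and six-group output are placeholders only and do not represent the trained NSDann2BS network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 2 inputs (count rates from the two Bonner spheres) -> 8 hidden -> 6 groups
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # stand-ins for trained synaptic weights
W2, b2 = rng.normal(size=(6, 8)), np.zeros(6)

def unfold(count_rates):
    """Toy forward pass: returns a normalized six-group spectrum estimate."""
    h = np.tanh(W1 @ count_rates + b1)
    y = np.exp(W2 @ h + b2)          # keep outputs positive
    return y / y.sum()               # normalize to unit fluence fraction

print(unfold(np.array([120.0, 45.0])))  # counts/s from the two spheres (invented)
```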
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
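The scoring-and-assignment step can be sketched as follows: per-candidate scores from job-title, task, and industry classifiers are combined through a logistic model, and the highest-scoring SOC code is assigned, keeping the score so that low-confidence assignments can be flagged for manual review. The weights, candidate scores, and threshold below are invented, not the fitted SOCcer parameters.

```python
import math

# Invented logistic-model weights (intercept, title, task, industry)
WEIGHTS = (-3.0, 4.0, 2.5, 1.5)

def soc_score(title_s, task_s, industry_s):
    """Probability-like score that a candidate SOC code matches the job description."""
    b0, b1, b2, b3 = WEIGHTS
    z = b0 + b1 * title_s + b2 * task_s + b3 * industry_s
    return 1.0 / (1.0 + math.exp(-z))

# Candidate SOC codes with per-classifier scores for one free-text job description (invented)
candidates = {
    "47-2111": {"title": 0.92, "task": 0.70, "industry": 0.60},
    "47-2031": {"title": 0.55, "task": 0.65, "industry": 0.60},
    "11-9021": {"title": 0.20, "task": 0.30, "industry": 0.45},
}

scored = {soc: soc_score(s["title"], s["task"], s["industry"])
          for soc, s in candidates.items()}
best_soc, best_score = max(scored.items(), key=lambda kv: kv[1])
needs_review = best_score < 0.5          # review threshold, assumed
print(best_soc, round(best_score, 3), "review" if needs_review else "auto-assign")
```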
75 FR 32099 - Enterprise Duty To Serve Underserved Markets
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-07
.... Delinquencies and defaults on chattel loans typically exceed rates on mortgage loans.\\31\\ \\29\\ See Ronald A... incentive to continue to repay their loans, which may lead to increased delinquencies and defaults.'' Origen...
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral
2002-01-01
An aeroelastic analysis system for flutter and forced response analysis of turbomachines, based on a two-dimensional linearized unsteady Euler solver, has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two-dimensional aerodynamic model with a three-dimensional structural model. The code was modified to include forced response capability, and the formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D, is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. Stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE (ASTROP2 using Linearized Euler aerodynamics), is validated by comparing its predictions with those obtained using linear unsteady aerodynamic solutions.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation and a suitable bit-by-bit iterative decoding structure are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5, 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
La Ruta de los Origenes stops in Montsec: observations and other educational activities
NASA Astrophysics Data System (ADS)
Ribas, S. J.
2013-05-01
La Ruta de los Origenes is a European Cooperation Project (POCTEFA) funded by FEDER-UE. The main objective of this project is to establish a touristic route on both sides of the Pyrenees that brings the topic of Origins to the public. The project is developed by six partners, three in Catalonia and three in Midi-Pyrenees, and is focused on Astronomy, Paleontology and Archaeology. The Montsec mountains lie in the pre-Pyrenees area of the Lleida province and are a very good place for astronomy (dark skies, good seeing, good weather conditions). Parc Astronomic Montsec is the major astronomy project in that area and includes a research astronomical observatory and an outreach and education center. The Montsec mountains and their Parc Astronomic are therefore one of the stops of this route, with a substantial number of activities to bring astronomy to the public.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
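The check-splitting construction can be illustrated on a protograph base matrix. The sketch below is a toy example with assumed numbers (not the codes from the paper): one check node of a high-rate base matrix is split into two checks joined by a new degree-2 variable node, which lowers the code rate while preserving the original edge connections.

```python
import numpy as np

def split_check(base, row, left_cols):
    """Split check `row` of a protograph base matrix into two checks.

    Edges to columns in `left_cols` stay with the first new check; the rest move
    to the second. A new degree-2 variable node is added, connected once to each
    of the two new checks.
    """
    m, n = base.shape
    top = np.zeros(n, dtype=int)
    bottom = base[row].copy()
    for c in left_cols:
        top[c] = base[row, c]
        bottom[c] = 0
    new = np.vstack([base[:row], top, bottom, base[row + 1:]])
    deg2 = np.zeros((m + 1, 1), dtype=int)
    deg2[row] = deg2[row + 1] = 1          # the new variable node has degree 2
    return np.hstack([new, deg2])

# Toy high-rate protograph (2 checks x 6 variable nodes, all variable degrees >= 3),
# rate (6-2)/6 = 2/3.
B = np.array([[2, 1, 2, 1, 1, 2],
              [1, 2, 1, 2, 2, 1]])
B_low = split_check(B, row=0, left_cols=[0, 1, 2])
print(B_low)                               # 3 checks x 7 variables, rate 4/7
```

Repeating the split drives the rate down further while keeping the number of degree-2 nodes below the number of checks, in line with the condition stated in the abstract.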
Dose Rate Calculation of TRU Metal Ingot in Pyroprocessing - 12202
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Yoon Hee; Lee, Kunjai
Spent fuel management has been a main problem to be solved for the continuous utilization of nuclear energy. The spent fuel management policy of Korea is 'Wait and See', and closed fuel cycle research and development in Korea is focused on the pyro-process and the SFR (Sodium-cooled Fast Reactor). For peaceful use of nuclear facilities, proliferation resistance has to be demonstrated; it is one of the key constraints in the deployment of advanced nuclear energy systems, and non-proliferation and safeguards requirements have been strengthening internationally. Barriers to proliferation are features that reduce the desirability or attractiveness of the material as an explosive, make it difficult to gain access to the materials, or make it difficult to misuse facilities and/or technologies for weapons applications. Barriers to proliferation are classified into intrinsic and extrinsic barriers. An intrinsic barrier is an inherent quality of the reactor materials or the fuel cycle that is built into the reactor design and operation, such as material and technical barriers. Among the intrinsic measures, the radiation from the material is considered significant. Therefore the radiation of the TRU metal ingot from the pyro-process was calculated using the ORIGEN and MCNP codes. (authors)
Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M >= 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9 and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
10Gbps 2D MGC OCDMA Code over FSO Communication System
NASA Astrophysics Data System (ADS)
Bhanja, Urmila; Khuntia, Arpita; Alamasety, Swati
2017-08-01
Currently, wide-bandwidth signal dissemination with low latency is a leading requirement in various applications. Free-space optical wireless communication has been introduced as a realistic technology for bridging the gap in present high-data-rate fiber connectivity and as a provisional backbone for rapidly deployable wireless communication infrastructure. This manuscript focuses on the implementation of 10 Gbps SAC-OCDMA FSO communications using a modified two-dimensional Golomb code (2D MGC) that possesses better autocorrelation, minimum cross-correlation and high cardinality. A comparison between the pseudo-orthogonal (PSO) matrix code and the modified two-dimensional Golomb code (2D MGC) is developed in the proposed SAC-OCDMA FSO communication module, taking different parameters into account. The simulation results indicate that the communication radius is bounded by multiple access interference (MAI). In this work, a comparison is made in terms of bit error rate (BER) and quality factor (Q) between the modified two-dimensional Golomb code (2D MGC) and the PSO matrix code, and it is observed that the 2D MGC yields better results than the PSO matrix code. The simulation results are validated using OptiSystem version 14.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebe, A.; Leveling, A.; Lu, T.
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose
NASA Astrophysics Data System (ADS)
Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.
2018-01-01
The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.
Ieva, Antonio Di; Audigé, Laurent; Kellman, Robert M.; Shumrick, Kevin A.; Ringl, Helmut; Prein, Joachim; Matula, Christian
2014-01-01
The AOCMF Classification Group developed a hierarchical three-level craniomaxillofacial classification system with increasing levels of complexity and detail. The highest, level 1 system distinguishes four major anatomical units: the mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). This tutorial presents the level 2 and more detailed level 3 systems for the skull base and cranial vault units. The level 2 system describes fracture location by outlining the topographic boundaries of the anatomic regions, considering in particular the endocranial and exocranial skull base surfaces. The endocranial skull base is divided into nine regions; a central skull base and the adjoining left and right sides are each divided into the anterior, middle, and posterior skull base. The exocranial skull base surface and cranial vault are divided into regions defined by the names of the bones involved: the frontal, parietal, temporal, sphenoid, and occipital bones. The level 3 system allows fracture morphology to be assessed, described by the presence of fracture fragmentation, displacement, and bone loss. A documentation of associated intracranial diagnostic features is proposed. This tutorial is organized as a sequence of sections covering the description of the classification system, with illustrations of the topographical skull base and cranial vault regions along with rules for fracture location and coding, a series of case examples with clinical imaging, and a general discussion on the design of this classification. PMID:25489394
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate of the LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC(6561,6240) code and the BCH(127,120) code, with an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code at a bit error rate (BER) of 10^-7.
Noise suppression system of OCDMA with spectral/spatial 2D hybrid code
NASA Astrophysics Data System (ADS)
Matem, Rima; Aljunid, S. A.; Junita, M. N.; Rashidi, C. B. M.; Shihab Aqrab, Israa
2017-11-01
In this paper, we propose a novel 2D spectral/spatial hybrid code based on the 1D ZCC and 1D MD codes, both of which exhibit a zero cross-correlation property, and analyze the influence of optical noise sources such as phase-induced intensity noise (PIIN), shot noise and thermal noise. This new code is shown to effectively mitigate PIIN and suppress MAI. Using the 2D ZCC/MD code, the performance of the system can be improved and more simultaneous users can be supported compared with the 2D FCC/MDW and 2D DPDC codes.
Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R
2008-05-15
A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for Hanford Site operations. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Sinnott, Jennifer A; Cai, Fiona; Yu, Sheng; Hejblum, Boris P; Hong, Chuan; Kohane, Isaac S; Liao, Katherine P
2018-05-17
Standard approaches for large scale phenotypic screens using electronic health record (EHR) data apply thresholds, such as ≥2 diagnosis codes, to define subjects as having a phenotype. However, the variation in the accuracy of diagnosis codes can impair the power of such screens. Our objective was to develop and evaluate an approach which converts diagnosis codes into a probability of a phenotype (PheProb). We hypothesized that this alternate approach for defining phenotypes would improve power for genetic association studies. The PheProb approach employs unsupervised clustering to separate patients into 2 groups based on diagnosis codes. Subjects are assigned a probability of having the phenotype based on the number of diagnosis codes. This approach was developed using simulated EHR data and tested in a real world EHR cohort. In the latter, we tested the association between low density lipoprotein cholesterol (LDL-C) genetic risk alleles known for association with hyperlipidemia and hyperlipidemia codes (ICD-9 272.x). PheProb and thresholding approaches were compared. Among n = 1462 subjects in the real world EHR cohort, the threshold-based p-values for association between the genetic risk score (GRS) and hyperlipidemia were 0.126 (≥1 code), 0.123 (≥2 codes), and 0.142 (≥3 codes). The PheProb approach produced the expected significant association between the GRS and hyperlipidemia: p = .001. PheProb improves statistical power for association studies relative to standard thresholding approaches by leveraging information about the phenotype in the billing code counts. The PheProb approach has direct applications where efficient approaches are required, such as in Phenome-Wide Association Studies.
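A minimal sketch of the general idea, assuming a two-component Gaussian mixture over log-transformed diagnosis-code counts; the published PheProb model may use a different mixture, so the component definitions and simulated counts here are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated diagnosis-code counts: most non-cases accumulate 0-2 codes,
# cases tend to accumulate more codes over time.
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(0.5, 1000), rng.poisson(6.0, 300)])

# Cluster subjects into two groups based on code counts and convert each
# count into a probability of having the phenotype.
X = np.log1p(counts).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
case_component = int(np.argmax(gm.means_.ravel()))   # component with the larger mean = "has phenotype"
phe_prob = gm.predict_proba(X)[:, case_component]

# phe_prob can then be used as the outcome/weight in an association test
# instead of a hard >=2-codes threshold.
print(np.round(phe_prob[counts.argsort()[-5:]], 3))   # highest-count subjects -> probability near 1
```

The probabilistic outcome retains the information that a subject with ten hyperlipidemia codes is more likely a true case than one with two, which is the source of the power gain reported above.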
On origin of genetic code and tRNA before translation
2011-01-01
Background Synthesis of proteins is based on the genetic code - a nearly universal assignment of codons to amino acids (aas). A major challenge to the understanding of the origins of this assignment is the archetypal "key-lock vs. frozen accident" dilemma. Here we re-examine this dilemma in light of 1) the fundamental veto on "foresight evolution", 2) modular structures of tRNAs and aminoacyl-tRNA synthetases, and 3) the updated library of aa-binding sites in RNA aptamers successfully selected in vitro for eight amino acids. Results The aa-binding sites of arginine, isoleucine and tyrosine contain both their cognate triplets, anticodons and codons. We have noticed that these cases might be associated with palindrome-dinucleotides. For example, one-base shift to the left brings arginine codons CGN, with CG at 1-2 positions, to the respective anticodons NCG, with CG at 2-3 positions. Formally, the concomitant presence of codons and anticodons is also expected in the reverse situation, with codons containing palindrome-dinucleotides at their 2-3 positions, and anticodons exhibiting them at 1-2 positions. A closer analysis reveals that, surprisingly, RNA binding sites for Arg, Ile and Tyr "prefer" (exactly as in the actual genetic code) the anticodon(2-3)/codon(1-2) tetramers to their anticodon(1-2)/codon(2-3) counterparts, despite the seemingly perfect symmetry of the latter. However, since in vitro selection of aa-specific RNA aptamers apparently had nothing to do with translation, this striking preference provides a new strong support to the notion of the genetic code emerging before translation, in response to catalytic (and possibly other) needs of ancient RNA life. Consistently with the pre-translation origin of the code, we propose here a new model of tRNA origin by the gradual, Fibonacci process-like, elongation of a tRNA molecule from a primordial coding triplet and 5'DCCA3' quadruplet (D is a base-determinator) to the eventual 76 base-long cloverleaf-shaped molecule. Conclusion Taken together, our findings necessarily imply that primordial tRNAs, tRNA aminoacylating ribozymes, and (later) the translation machinery in general have been co-evolving to ''fit'' the (likely already defined) genetic code, rather than the opposite way around. Coding triplets in this primal pre-translational code were likely similar to the anticodons, with second and third nucleotides being more important than the less specific first one. Later, when the code was expanding in co-evolution with the translation apparatus, the importance of 2-3 nucleotides of coding triplets "transferred" to the 1-2 nucleotides of their complements, thus distinguishing anticodons from codons. This evolutionary primacy of anticodons in genetic coding makes the hypothesis of primal stereo-chemical affinity between amino acids and cognate triplets, the hypothesis of coding coenzyme handles for amino acids, the hypothesis of tRNA-like genomic 3' tags suggesting that tRNAs originated in replication, and the hypothesis of ancient ribozymes-mediated operational code of tRNA aminoacylation not mutually contradicting but rather co-existing in harmony. Reviewers This article was reviewed by Eugene V. Koonin, Wentao Ma (nominated by Juergen Brosius) and Anthony Poole. PMID:21342520
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
Paper-Based Textbooks with Audio Support for Print-Disabled Students.
Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko
2015-01-01
Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.
Case-crossover analysis of heat-coded deaths and vulnerable subpopulations: Oklahoma, 1990-2011
NASA Astrophysics Data System (ADS)
Moore, Brianna F.; Brooke Anderson, G.; Johnson, Matthew G.; Brown, Sheryll; Bradley, Kristy K.; Magzamen, Sheryl
2017-11-01
The extent of the association between temperature and heat-coded deaths, for which heat is the primary cause of death, remains largely unknown. We explored the association between temperature and heat-coded deaths and potential interactions with various demographic and environmental factors. A total of 335 heat-coded deaths that occurred in Oklahoma from 1990 through 2011 were identified using heat-related International Classification of Diseases codes, cause-of-death nomenclature, and narrative descriptions. Conditional logistic regression models examined the associations of temperature and heat index with heat-coded deaths. Interaction with demographic factors (age, sex, marital status, living alone, outdoor/heavy labor occupations) and environmental factors (ozone, PM10, PM2.5) was also explored. Temperatures ≥99 °F (the median value) were associated with approximately five times higher odds of a heat-coded death compared with temperatures <99 °F (adjusted OR = 4.9, 95% CI 3.3, 7.2). The effect estimates were attenuated when exposure to heat was characterized by heat index. The interaction results suggest that the effect of temperature on heat-coded deaths may depend on sex and occupation. For example, the odds of a heat-coded death among outdoor/heavy labor workers exposed to temperatures ≥99 °F were greater than expected based on the sum of the individual effects (observed OR = 14.0, 95% CI 2.7, 72.0; expected OR = 4.1 [2.8 + 2.3 - 1.0]). Our results highlight the extent of the association between temperature and heat-coded deaths and emphasize the need for a comprehensive, multisource definition of heat-coded deaths. Furthermore, based on the interaction results, we recommend that states implement or expand heat safety programs to protect vulnerable subpopulations, such as outdoor workers.
A novel approach of an absolute coding pattern based on Hamiltonian graph
NASA Astrophysics Data System (ADS)
Wang, Ya'nan; Wang, Huawei; Hao, Fusheng; Liu, Liqiang
2017-02-01
In this paper, a novel approach to the coding pattern of an optical absolute rotary encoder is presented. The concept is based on the principle of the absolute encoder: finding a unique sequence that ensures an unambiguous shaft position at any angle. We design a single-ring and an n-by-2 matrix absolute encoder coding pattern using variations of the Hamiltonian graph principle. Twelve encoding bits are used in the single ring, read by a linear-array CCD, to achieve a 1080-position cyclic encoding. In addition, a 2-by-2 matrix is used as a unit in the 2-track disk to achieve a 16-bit encoding pattern read by an area-array CCD sensor (as a sample). Finally, a higher resolution can be gained by electronic subdivision of the signals. Compared with the conventional Gray or binary code pattern (with a resolution of 2^n), this new pattern achieves a higher resolution (2^n*n) with fewer coding tracks, which allows a smaller encoder, an essential feature in industrial production.
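The essential property of such a single-track absolute pattern is that every window of n consecutive bits read around the ring is unique, so any angular position can be decoded unambiguously. Below is a small sketch of that check using an assumed toy sequence (not the 12-bit, 1080-position pattern from the paper).

```python
def is_absolute_pattern(ring, window):
    """Return True if every `window`-bit cyclic substring of `ring` is unique,
    i.e. reading `window` consecutive bits identifies the shaft position."""
    n = len(ring)
    seen = set()
    for i in range(n):
        word = tuple(ring[(i + k) % n] for k in range(window))
        if word in seen:
            return False
        seen.add(word)
    return True

# Toy example: a 16-bit ring read through a 4-bit window (a De Bruijn-like
# sequence); the encoder in the paper uses 12 bits per reading and 1080 positions.
ring = [0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]
print(is_absolute_pattern(ring, window=4))   # True -> every position decodes uniquely
```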
Monte Carlo-based validation of neutronic methodology for EBR-II analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, J.R.; Finck, P.J.
1993-01-01
The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory and is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC2-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user friendly interaction, contact sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed and which are presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.
Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C
Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance of documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary care center. To measure the effect of this program, we compared billing codes for 1 year before intervention (FY2012) to prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management codes (E&M) as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p < 0.01); for new patients the monthly average E&M level increased from 2.61 to 3.19 (p < 0.01). This study describes a series of educational and workflow interventions, which improved resident coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher complexity E&M coding in a stable patient population based on improved documentation and billing awareness by the residents. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
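A minimal sketch of rate-compatible puncturing, the building block of the scheme described above. The mother code, puncturing period, and patterns below are assumed for illustration (a rate-1/2 mother code with period 4); in a rate-compatible family every bit kept by a higher-rate pattern is also kept by all lower-rate patterns, so retransmissions can simply send the previously punctured bits.

```python
# Rate-compatible puncturing patterns over a puncturing period of 4 information bits.
# Each pattern lists, for the two mother-code output streams, which of the 4
# positions are transmitted (1) or punctured (0). Note the nesting: every bit
# sent at rate 4/5 is also sent at 4/6 and 4/8, which is the rate-compatibility
# condition needed for code combining in the hybrid ARQ scheme.
PATTERNS = {
    "4/8": ([1, 1, 1, 1], [1, 1, 1, 1]),   # mother code, rate 1/2
    "4/6": ([1, 1, 1, 1], [1, 0, 1, 0]),
    "4/5": ([1, 1, 1, 1], [1, 0, 0, 0]),
}

def puncture(stream0, stream1, rate):
    """Apply a puncturing pattern to the two output streams of a rate-1/2 code."""
    p0, p1 = PATTERNS[rate]
    out = []
    for i in range(len(stream0)):
        if p0[i % 4]:
            out.append(stream0[i])
        if p1[i % 4]:
            out.append(stream1[i])
    return out

# Example: 8 information bits -> two streams of 8 mother-code bits -> 10 bits at rate 4/5.
c0 = [0, 1, 1, 0, 1, 0, 0, 1]   # assumed outputs of the rate-1/2 mother encoder
c1 = [1, 1, 0, 0, 0, 1, 1, 0]
sent_first = puncture(c0, c1, "4/5")    # first transmission at the highest rate
print(len(sent_first), sent_first)      # 10 coded bits for 8 information bits
```

On a NAK, the bits removed by the "4/5" pattern but kept by "4/6" (and later "4/8") would be sent and combined at the decoder, exactly the progression the generalized type-II scheme exploits.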
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
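A small sketch of the allocation step described above, with made-up rate-distortion and channel-sensitivity numbers standing in for the experimentally obtained universal rate-distortion characteristics (the layer count, rate options, and distortion model are all assumptions): every admissible split of the total bit budget between source and channel coding in each scalable layer is evaluated, and the split with the lowest overall distortion is kept.

```python
from itertools import product

TOTAL_RATE = 400          # assumed kbps budget for all layers, source + channel bits

def layer_distortion(source_kbps, code_rate, sensitivity):
    """Toy distortion model: convex source R-D term plus a channel term that scales
    the residual error rate of the channel code by the layer's error sensitivity."""
    source_d = 1000.0 / source_kbps
    residual_error = {0.5: 1e-4, 0.66: 1e-3, 0.8: 1e-2}[code_rate]
    return source_d + sensitivity * residual_error

source_options = [50, 100, 150, 200]        # candidate source rates per layer (kbps)
code_rates = [0.5, 0.66, 0.8]               # candidate channel code rates (e.g. RCPC)
sensitivities = [5000.0, 800.0]             # base layer is far more sensitive than enhancement

best = None
for alloc in product(product(source_options, code_rates), repeat=2):
    total = sum(s / r for (s, r) in alloc)   # channel coding inflates each layer's rate
    if total > TOTAL_RATE:
        continue
    d = sum(layer_distortion(s, r, w) for (s, r), w in zip(alloc, sensitivities))
    if best is None or d < best[0]:
        best = (d, alloc)

print("best (source kbps, channel rate) per layer:", best[1])
```

The exhaustive search stands in for whatever optimization the paper uses; the point is only that the base layer ends up with stronger channel protection because its sensitivity term dominates.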
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
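A toy sketch of the idea behind a mode-dependent scan: the residual of a vertically predicted block tends to concentrate nonzero values in its first rows, so a scan that visits the block row by row (rather than the default zig-zag) pushes the zeros to the end of the scanned sequence, which is what the entropy coder prefers. The scan orders here are illustrative, not the MD-templates defined in the paper.

```python
import numpy as np

def scan(block, order):
    """Flatten a 4x4 residual block following a given scan order (list of (row, col))."""
    return [int(block[r, c]) for r, c in order]

# Two illustrative scan orders for a 4x4 block.
zigzag = [(0,0),(0,1),(1,0),(2,0),(1,1),(0,2),(0,3),(1,2),
          (2,1),(3,0),(3,1),(2,2),(1,3),(2,3),(3,2),(3,3)]
row_major = [(r, c) for r in range(4) for c in range(4)]   # "template" suited to vertical prediction

# Residual of a vertically predicted block: energy concentrated in the top rows.
residual = np.array([[5, 3, 2, 1],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])

print("zig-zag   :", scan(residual, zigzag))     # zeros interleaved with nonzeros
print("row-major :", scan(residual, row_major))  # nonzeros first, trailing zeros -> cheaper to code
```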
Magnetic Turbulence in Clusters of Galaxies
2009-01-01
... plasmas that allow us to confront theoretical concepts of the origin of the magnetic field with detailed observations. Magnetic turbulence in ... GALAXIES. T. A. Enßlin, T. Clarke, C. Vogt, A. Waelkens, A. A. Schekochihin. ABSTRACT: Galaxy clusters are large laboratories of turbulence in ... the turbulence in these clusters can be studied through the radio-synchrotron emission of the intracluster medium in the form of cluster relics and halos. ...
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Specific low temperature release of 131Xe from irradiated MOX fuel
NASA Astrophysics Data System (ADS)
Hiernaut, J.-P.; Wiss, T.; Rondinella, V. V.; Colle, J.-Y.; Sasahara, A.; Sonoda, T.; Konings, R. J. M.
2009-08-01
A particular low-temperature behaviour of the 131Xe isotope was observed during release studies of fission gases from MOX fuel samples irradiated to 44.5 GWd/tHM. A reproducible release peak, representing 2.7% of the total release of 131Xe only, was observed at ~1000 K, the rest of the release curve being essentially identical to that of all the other xenon isotopes. The integral isotopic composition of the different xenon isotopes is in very good agreement with the inventory calculated using ORIGEN-2. The presence of this particular release is explained by the relation between the thermal diffusion and decay properties of the various iodine radioisotopes, all of which decay into xenon.
Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system
NASA Astrophysics Data System (ADS)
Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.
2017-11-01
This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW (APD) system achieved a minimum effective received power (Psr) of -71 dBm at the receiver, compared with -61 dBm for the 2-D MDW (PIN) system. The results show that the 2-D MDW (APD) system has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also show that the MDW code has the capability to suppress PIIN and MAI.
Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy
2017-12-01
The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of the solution is of square-root order in the cardinality of the unauthorized set, i.e. √(2^|B|).
Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy
2018-03-01
The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system vA^T M_B(s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of the solution is of square-root order in the cardinality of the unauthorized set, i.e. √(2^|B|).
Comparison of FDNS liquid rocket engine plume computations with SPF/2
NASA Technical Reports Server (NTRS)
Kumar, G. N.; Griffith, D. O., II; Warsi, S. A.; Seaford, C. M.
1993-01-01
Prediction of a plume's shape and structure is essential to the evaluation of base region environments. The JANNAF standard plume flowfield analysis code SPF/2 predicts plumes well, but cannot analyze base regions. Full Navier-Stokes CFD codes can calculate both zones; however, before they can be used, they must be validated. The CFD code FDNS3D (Finite Difference Navier-Stokes Solver) was used to analyze the single plume of a Space Transportation Main Engine (STME) and comparisons were made with SPF/2 computations. Both frozen and finite rate chemistry models were employed as well as two turbulence models in SPF/2. The results indicate that FDNS3D plume computations agree well with SPF/2 predictions for liquid rocket engine plumes.
George, Jaiben; Newman, Jared M; Ramanathan, Deepak; Klika, Alison K; Higuera, Carlos A; Barsoum, Wael K
2017-09-01
Research using large administrative databases has substantially increased in recent years. The accuracy with which comorbidities are represented in these databases has been questioned. The purpose of this study was to evaluate the extent of errors in obesity coding and their impact on arthroplasty research. Eighteen thousand thirty primary total knee arthroplasties (TKAs) and 10,475 total hip arthroplasties (THAs) performed at a single healthcare system from 2004-2014 were included. Patients were classified as obese or nonobese using 2 methods: (1) body mass index (BMI) ≥30 kg/m2 and (2) International Classification of Diseases, 9th Revision (ICD-9) codes. Length of stay, operative time, and 90-day complications were collected. The effect of obesity on various outcomes was analyzed separately for both BMI-based and coding-based obesity. From 2004 to 2014, the prevalence of BMI-based obesity increased from 54% to 63% and from 40% to 45% in TKA and THA, respectively. The prevalence of coding-based obesity increased from 15% to 28% and from 8% to 17% in TKA and THA, respectively. Coding overestimated the growth of obesity in TKA and THA by 5.6 and 8.4 times, respectively. When obesity was defined by coding, obesity was falsely shown to be a significant risk factor for deep vein thrombosis (TKA), pulmonary embolism (THA), and longer hospital stay (TKA and THA). The growth in obesity observed in administrative databases may be an artifact of improvements in coding over the years. Obesity defined by coding can overestimate the actual effect of obesity on complications after arthroplasty. Therefore, studies using large databases should be interpreted with caution, especially when variables prone to coding errors are involved. Copyright © 2017 Elsevier Inc. All rights reserved.
Overview of the H.264/AVC video coding standard
NASA Astrophysics Data System (ADS)
Luthra, Ajay; Topiwala, Pankaj N.
2003-11-01
H.264/MPEG-4 AVC is the latest coding standard jointly developed by the Video Coding Experts Group (VCEG) of ITU-T and Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state of the art coding tools and provides enhanced coding efficiency for a wide range of applications including video telephony, video conferencing, TV, storage (DVD and/or hard disk based), streaming video, digital video creation, digital cinema and others. In this paper an overview of this standard is provided. Some comparisons with the existing standards, MPEG-2 and MPEG-4 Part 2, are also provided.
Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.L.; Wilson, J.H.; Arwood, P.C.
The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, 'pretest' calculations (for code-to-code comparisons) and 'posttest' calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.
Pseudorandom Noise Code-Based Technique for Cloud and Aerosol Discrimination Applications
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Prasad, Narasimha S.; Flood, Michael A.; Harrison, Fenton Wallace
2011-01-01
NASA Langley Research Center is working on a continuous-wave (CW) laser based remote sensing scheme for the detection of CO2 and O2 from space-based platforms suitable for the ASCENDS (Active Sensing of CO2 Emissions over Nights, Days, and Seasons) mission. ASCENDS is a future space-based mission to determine the global distribution of sources and sinks of atmospheric carbon dioxide (CO2). A unique, multi-frequency, intensity-modulated CW (IMCW) laser absorption spectrometer (LAS) operating at 1.57 micron for CO2 sensing has been developed. Effective aerosol and cloud discrimination techniques are being investigated in order to determine concentration values with accuracies better than 0.3%. In this paper, we discuss the demonstration of a PN code based technique for cloud and aerosol discrimination applications. The possibility of using maximum-length (ML) sequences for range and absorption measurements is investigated. A simple model for accomplishing this objective is formulated. Proof-of-concept experiments carried out using a SONAR-based LIDAR simulator built from simple audio hardware provided promising results for extension into optical wavelengths. Keywords: ASCENDS, CO2 sensing, O2 sensing, PN codes, CW lidar
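A small sketch of the ML-sequence ranging idea: an m-sequence generated by a linear-feedback shift register modulates the transmitted intensity, and the cyclic cross-correlation of the received signal with the code peaks at the lag corresponding to the round-trip delay. The LFSR taps and the simulated delay below are assumptions chosen only for illustration.

```python
import numpy as np

def m_sequence(taps, nbits, length):
    """Generate a maximum-length (ML) sequence from an LFSR with the given feedback taps."""
    state = [1] * nbits
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

# 7-stage LFSR with feedback taps (7, 6), a standard maximal-length configuration,
# gives a length-127 ML sequence; map {0,1} -> {-1,+1} so the periodic
# autocorrelation is two-valued (127 at zero lag, -1 elsewhere).
code = 2 * m_sequence(taps=(7, 6), nbits=7, length=127) - 1

# Simulate a return delayed by 37 chips with additive noise, then correlate to find the range bin.
delay = 37
echo = np.roll(code, delay) + 0.5 * np.random.default_rng(1).normal(size=code.size)
corr = np.array([np.dot(np.roll(code, k), echo) for k in range(code.size)])
print("estimated delay (chips):", int(np.argmax(corr)))   # expected: 37
```

The sharp correlation peak against a flat sidelobe floor is what lets the same CW measurement separate cloud/aerosol returns at different ranges from the surface return.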
Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong
2015-08-24
A novel scheme for photonic generation of a phase-coded microwave signal is proposed and its application in one-dimension distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. Besides, the generated phase-coded signal is stable and free from the DC and low-frequency backgrounds. An experiment is carried out. A 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimension distance measurement is demonstrated. The measurement accuracy is less than 1.7 centimeters within a measurement range of ~2 meters. The experimental results can verify the feasibility of the proposed phase-coded microwave signal generator and also provide strong evidence to support its practical applications.
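The pulse-compression property that makes the 13-bit Barker code attractive for such ranging experiments is its autocorrelation: a peak of 13 with sidelobes of magnitude at most 1. A short check of that property (the Barker sequence itself is standard; the snippet is only illustrative):

```python
import numpy as np

# The 13-bit Barker code used as the phase-coding sequence (0 / pi phase states map to +1 / -1).
barker13 = np.array([+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1])

# Aperiodic autocorrelation: peak of 13 at zero lag, sidelobes bounded by 1,
# which gives the compressed pulse behind centimeter-level distance estimates.
for lag in range(len(barker13)):
    r = int(np.dot(barker13[: len(barker13) - lag], barker13[lag:]))
    print(f"lag {lag:2d}: {r:+d}")
```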
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, for from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
2008-09-01
Behavioural Point Process Data; Appendix B: Matlab Code; Matlab Code Used in Chapter 2 (Porpoise Prey Capture Analysis): Click Extraction and Measurement of Click Properties, Envelope-based Click Detector; Matlab Code Used in Chapter 3 (Transmission Loss in Porpoise Habitats): Click Extraction from Data Wavefiles, Click Level Determination (Grand Manan Datasets), Click Level Determination (Danish Datasets); Matlab ...
Biosemiotics: a new understanding of life.
Barbieri, Marcello
2008-07-01
Biosemiotics is the idea that life is based on semiosis, i.e., on signs and codes. This idea has been strongly suggested by the discovery of the genetic code, but so far it has made little impact in the scientific world and is largely regarded as a philosophy rather than a science. The main reason for this is that modern biology assumes that signs and meanings do not exist at the molecular level, and that the genetic code was not followed by any other organic code for almost four billion years, which implies that it was an utterly isolated exception in the history of life. These ideas have effectively ruled out the existence of semiosis in the organic world, and yet there are experimental facts against all of them. If we look at the evidence of life without the preconditions of the present paradigm, we discover that semiosis is there, in every single cell, and that it has been there since the very beginning. This is what biosemiotics is really about. It is not a philosophy. It is a new scientific paradigm that is rigorously based on experimental facts. Biosemiotics claims that the genetic code (1) is a real code and (2) has been the first of a long series of organic codes that have shaped the history of life on our planet. The reality of the genetic code and the existence of other organic codes imply that life is based on two fundamental processes--copying and coding--and this in turn implies that evolution took place by two distinct mechanisms, i.e., by natural selection (based on copying) and by natural conventions (based on coding). It also implies that the copying of genes works on individual molecules, whereas the coding of proteins operates on collections of molecules, which means that different mechanisms of evolution exist at different levels of organization. This review intends to underline the scientific nature of biosemiotics, and to this purpose, it aims to prove (1) that the cell is a real semiotic system, (2) that the genetic code is a real code, (3) that evolution took place by natural selection and by natural conventions, and (4) that it was natural conventions, i.e., organic codes, that gave origin to the great novelties of macroevolution. Biological semiosis, in other words, is a scientific reality because the codes of life are experimental realities. The time has come, therefore, to acknowledge this fact of life, even if that means abandoning the present theoretical framework in favor of a more general one where biology and semiotics finally come together and become biosemiotics.
Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes
NASA Astrophysics Data System (ADS)
Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.
2015-01-01
Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the impact of SPE spectra transported through a 10 or 20 g/cm^2 Al shield followed by a 30 g/cm^2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes agreeing more closely with each other than with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on mean square difference analysis, the results from MCNPX and PHITS agree better for fluence, dose and dose equivalent when compared to the OLTARIS results.
The effect of client ethnicity on clinical interpretation of the MMPI-2.
Knaster, Cara A; Micucci, Joseph A
2013-02-01
Client ethnicity has been shown to affect clinicians' diagnostic impressions. However, it is not known whether interpretation of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) clinical scales is affected by ethnic bias. In this study, clinicians (82 males, 60 females) provided severity ratings for six symptoms based on three MMPI-2 profiles (representing the 27/72, 49/94, and 68/86 code-types) with the ethnicity of the client randomly assigned as either African American or Caucasian. To determine whether symptom severity ratings based on MMPI-2 profiles were affected by ethnicity, a 3 (code-type) × 2 (ethnicity) MANOVA was performed. Neither the main effect for ethnicity nor the ethnicity × code-type interaction was significant. These results indicated that the symptom severity ratings based on the MMPI-2 clinical scales were not affected by the client's identification as African American or Caucasian. Future studies are needed to explore the interpretation of profiles from clients representing other ethnic groups and for female clients.
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of the AVS 2.0[1] entropy coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] of AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] of HEVC[4]. For a fair comparison, CABAC is embedded in the AVS 2.0 reference code RD10.1, just as CBAC was embedded in HEVC in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplify the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that CABAC has a BD-rate loss of about 0.7% compared to CBAC. It therefore appears that CBAC is slightly more efficient than CABAC in AVS 2.0.
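As a rough illustration of the fractional bit-depth reduction described above, the following sketch (assumed toy probability states; not the actual AVS 2.0 or HEVC tables) builds a rate estimation table storing -log2(p) in fixed point with a configurable number of fractional bits and compares estimates made with 8 and with 2 fractional bits.

```python
import math

# Illustrative sketch (not the actual AVS 2.0 / HEVC tables): store -log2(p)
# for a set of probability states in fixed point with `frac_bits` fractional
# bits, then estimate the bits an arithmetic coder would spend on a sequence
# of coded bins.

def build_rate_table(prob_states, frac_bits=2):
    scale = 1 << frac_bits
    return [round(-math.log2(p) * scale) for p in prob_states]

def estimate_bits(bins, table, frac_bits=2):
    """Sum fixed-point table entries for a list of probability-state indices."""
    total = sum(table[s] for s in bins)
    return total / (1 << frac_bits)      # convert back to (fractional) bits

if __name__ == "__main__":
    # A toy set of probability states; real coders use many more.
    probs = [0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
    t8 = build_rate_table(probs, frac_bits=8)
    t2 = build_rate_table(probs, frac_bits=2)
    bins = [0, 1, 1, 3, 5, 2, 4]
    print(estimate_bits(bins, t8, 8), estimate_bits(bins, t2, 2))
```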
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
To address the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which degrades error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs into the parity-check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes contain no girth-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve better error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
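The block structure described above can be sketched as follows. The example (toy parameters: circulant size 7 and the (7, 3, 1) perfect cyclic difference set {1, 2, 4}, not the codes constructed in the paper) builds a weight-1 circulant permutation matrix (CPM), a weight-2 circulant (W2CM) whose one-positions come from the difference set, and tiles them with zero blocks into a small type-II parity-check matrix.

```python
import numpy as np

# Illustrative sketch (toy parameters, not the codes in the paper): construct
# circulant permutation matrices (CPMs, row weight 1) and weight-2 circulants
# (W2CMs) from a perfect cyclic difference set, then tile them into a small
# type-II QC-LDPC parity-check matrix.

def circulant(size, shifts):
    """Circulant matrix over GF(2) whose first row has ones at `shifts`."""
    row = np.zeros(size, dtype=np.uint8)
    row[list(shifts)] = 1
    return np.vstack([np.roll(row, i) for i in range(size)])

if __name__ == "__main__":
    p = 7                        # circulant size
    cds = [1, 2, 4]              # (7, 3, 1) perfect cyclic difference set
    cpm  = circulant(p, [cds[0]])            # weight-1 circulant (CPM)
    w2cm = circulant(p, cds[:2])             # weight-2 circulant (W2CM)
    zero = np.zeros((p, p), dtype=np.uint8)
    # A toy 2x3 block parity-check matrix mixing weight-0, -1 and -2 circulants.
    H = np.block([[cpm,  w2cm, zero],
                  [zero, cpm,  w2cm]])
    print(H.shape, H.sum(axis=1))            # block structure and row weights
```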
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group; it offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code gains better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
BRYNTRN: A baryon transport model
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.
1989-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
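A heavily simplified sketch of the straight-ahead idea is given below (toy cross sections and a coarse energy grid are assumed; BRYNTRN's nuclear interaction database, secondary-particle source terms, and slowing-down physics are not modeled): the boundary fluence is marched through a slab in depth steps, with each energy bin attenuated independently along the beam direction.

```python
import numpy as np

# Toy sketch of straight-ahead transport (assumed cross sections, no
# secondaries or energy loss): march a proton fluence spectrum through a slab
# in fixed depth steps, attenuating each energy bin by its cross section.

def march_slab(phi0, sigma, depth_cm, dx=1.0):
    """phi0: boundary fluence per energy bin; sigma: macroscopic cross section (1/cm) per bin."""
    phi = np.array(phi0, dtype=float)
    steps = int(depth_cm / dx)
    for _ in range(steps):
        phi = phi * np.exp(-np.asarray(sigma) * dx)   # straight-ahead attenuation
    return phi

if __name__ == "__main__":
    energies = np.array([10.0, 50.0, 100.0, 500.0])   # MeV, coarse grid
    phi0 = np.array([1e8, 5e7, 2e7, 1e6])             # protons/cm^2 per bin (made up)
    sigma = np.array([0.05, 0.02, 0.015, 0.01])       # toy values, 1/cm in water
    print(march_slab(phi0, sigma, depth_cm=30.0, dx=1.0))
```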
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code at the bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity compared with the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed novel QC-LDPC(4288, 4020) code is thus more suitable for the increasing development requirements of high-speed optical transmission systems.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
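To convey the flavor of 4th-order conservative finite differencing without reproducing LOKI itself, the following sketch (a 1-D constant-speed advection problem on an assumed periodic grid with RK4 time stepping; LOKI's 2+2-D Vlasov-Poisson discretization is far more involved) reconstructs face values with a 4th-order centered formula and updates cell averages from flux differences, so the grid total is conserved to round-off.

```python
import numpy as np

# Toy 4th-order conservative finite-difference advection (assumptions: 1-D,
# constant speed, periodic boundaries); not LOKI's algorithm, only the idea of
# flux-form conservative differencing.

def rhs(f, u, dx):
    # 4th-order centered reconstruction of the face value f_{i+1/2}
    face = (-np.roll(f, -2) + 7 * np.roll(f, -1) + 7 * f - np.roll(f, 1)) / 12.0
    flux = u * face
    # conservative update: difference of fluxes across each cell
    return -(flux - np.roll(flux, 1)) / dx

def rk4_step(f, u, dx, dt):
    k1 = rhs(f, u, dx)
    k2 = rhs(f + 0.5 * dt * k1, u, dx)
    k3 = rhs(f + 0.5 * dt * k2, u, dx)
    k4 = rhs(f + dt * k3, u, dx)
    return f + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

if __name__ == "__main__":
    nx, u = 128, 1.0
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    f = np.exp(-200.0 * (x - 0.5) ** 2)
    dx, dt = x[1] - x[0], 0.4 * (x[1] - x[0]) / u
    total0 = f.sum()
    for _ in range(500):
        f = rk4_step(f, u, dx, dt)
    print("change in total mass:", abs(f.sum() - total0))
```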
Numerical MHD codes for modeling astrophysical flows
NASA Astrophysics Data System (ADS)
Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.
2016-05-01
We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.
Inclusion of pressure and flow in a new 3D MHD equilibrium code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Fukuyama, Atsushi
2012-10-01
Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012). [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys., 81(1):102-136 (1989).
Erickson, Steven R; Workman, Paul
2014-01-01
To document the availability of selected pharmacy services and the out-of-pocket cost of medication throughout a diverse county in Michigan, and to assess possible associations between the availability of services and price of medication and the characteristics of residents of the ZIP codes in which the pharmacies were located. Cross-sectional telephone survey of pharmacies coupled with ZIP code-level census data. 503 pharmacies throughout the 63 ZIP codes of Wayne County, MI. The out-of-pocket cost for a 30 days' supply of levothyroxine 50 mcg and brand-name atorvastatin (Lipitor-Pfizer) 20 mg, availability of discount generic drug programs, home delivery of medications, hours of pharmacy operation, and availability of pharmacy-based immunization services. Census data aggregated at the ZIP code level included race, annual household income, age, and number of residents per pharmacy. The overall results per ZIP code showed that the average cost was $10.01 ± $2.29 for levothyroxine and $140.45 ± $14.70 for Lipitor. Per ZIP code, the mean (± SD) percentage of pharmacies offering discount generic drug programs was 66.9% ± 15.0%; home delivery of medications, 44.5% ± 22.7%; and immunization for influenza, 46.7% ± 24.3%. The mean (± SD) hours of operation per pharmacy per ZIP code was 67.0 ± 25.2. ZIP codes with higher household income as well as a higher percentage of white residents had lower levothyroxine prices, a greater percentage of pharmacies offering discount generic drug programs, more hours of operation per week, and more pharmacy-based immunization services. The cost of Lipitor was not associated with any ZIP code characteristic. Disparities in the cost of generic levothyroxine, the availability of services such as discount generic drug programs, hours of operation, and pharmacy-based immunization services are evident based on race and household income within this diverse metropolitan county.
Turbulence modeling for hypersonic flight
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1992-01-01
The objective of the present work is to develop, verify, and incorporate two equation turbulence models which account for the effect of compressibility at high speeds into a three dimensional Reynolds averaged Navier-Stokes code and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments is listed: (1) Four codes have been tested and evaluated against a flat plate boundary layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the K-omega two equation turbulence model has been implemented into the base code; (5) a 24 degree laminar compression corner flow has been simulated and compared to other numerical simulations; and (6) work is in progress in writing the numerical method of the base code including the turbulence model.
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.
2014-01-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when representing LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that, for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved for rate 1/2 as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
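A minimal sketch of the repeat-accumulate family is shown below (toy repetition factor and a random interleaver; the actual ARA construction uses a punctured precoding accumulator and specific protograph structures). It repeats the information bits, interleaves them, and passes them through an accumulator; the ARA variant simply accumulates the information bits first, using the accumulator as a precoder.

```python
import random

# Toy repeat-accumulate style encoders (assumed parameters; not the exact ARA
# protographs or puncturing of the paper).  The accumulator is a running XOR,
# i.e. the 1/(1+D) convolution over GF(2).

def accumulate(bits):
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ra_encode(info_bits, q=3, seed=0):
    repeated = [b for b in info_bits for _ in range(q)]   # repeat q times
    rng = random.Random(seed)
    perm = list(range(len(repeated)))
    rng.shuffle(perm)                                     # random interleaver
    interleaved = [repeated[i] for i in perm]
    return accumulate(interleaved)      # systematic bits would also be sent

def ara_encode(info_bits, q=3, seed=0):
    precoded = accumulate(info_bits)    # accumulator used as a precoder
    return ra_encode(precoded, q=q, seed=seed)

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    print(ra_encode(msg))
    print(ara_encode(msg))
```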
Polyanskiy, Mikhail N.
2015-01-01
We describe a computer code for simulating the amplification of ultrashort mid-infrared laser pulses in CO2 amplifiers and their propagation through arbitrary optical systems. This code is based on a comprehensive model that includes an accurate consideration of the CO2 active medium and a physical optics propagation algorithm, and takes into account the interaction of the laser pulse with the material of the optical elements. Finally, the application of the code for optimizing an isotopic regenerative amplifier is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giantsoudi, D; Schuemann, J; Dowdell, S
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously unrealistic due to limitations in available computing power, GPU-based applications allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully implemented proton therapy Monte Carlo code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, due to anatomical and geometrical complexity (air cavities and density heterogeneities) that makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between the TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a gamma index passing rate for the target of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully implemented proton therapy Monte Carlo code for a group of dosimetrically challenging patient cases.
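For readers unfamiliar with the 2%/2 mm gamma criterion used above, the following sketch (1-D only, with made-up dose profiles standing in for the gPMC and TOPAS distributions) computes the gamma index for each reference point as the minimum combined dose-difference/distance-to-agreement metric and reports the passing rate.

```python
import numpy as np

# Illustrative 1-D gamma-index sketch (2% / 2 mm criteria; the dose profiles
# below are made up).  For each reference point, gamma is the minimum over
# evaluated points of sqrt((dose diff / DD)^2 + (distance / DTA)^2); a point
# passes when gamma <= 1.

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=2.0, dta_mm=2.0, prescription=None):
    presc = prescription if prescription is not None else d_ref.max()
    dd = dd_pct / 100.0 * presc
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g2 = ((d_eval - dr) / dd) ** 2 + ((x_eval - xr) / dta_mm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

if __name__ == "__main__":
    x = np.linspace(0.0, 100.0, 201)                     # mm
    ref = np.exp(-((x - 50.0) / 20.0) ** 2)              # toy "reference" profile
    ev  = np.exp(-((x - 50.5) / 20.0) ** 2) * 1.01       # toy "evaluated" profile
    g = gamma_1d(x, ref, x, ev)
    print("passing rate: %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```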
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
TOPAZ2D heat transfer code users manual and thermal property data base
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.B.; Edwards, A.L.
1990-05-01
TOPAZ2D is a two-dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with the TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady-state or transient temperature field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time- and temperature-dependent boundary conditions can be specified, including temperature, flux, convection, and radiation. Time- or temperature-dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D, which reads the temperature state data calculated by TOPAZ2D. A three-dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.
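As a toy illustration of the class of problem such a code addresses (not TOPAZ2D's finite-element formulation or feature set), the sketch below solves a 2-D steady-state conduction problem with fixed-temperature boundaries by Jacobi iteration on a uniform finite-difference grid.

```python
import numpy as np

# Toy 2-D steady-state heat conduction sketch (assumptions: uniform grid,
# constant conductivity, Dirichlet boundaries, Jacobi iteration).  TOPAZ2D
# itself is an implicit finite-element code with far richer physics.

def solve_steady_conduction(nx=40, ny=40, t_left=100.0, t_other=0.0, iters=5000):
    T = np.full((ny, nx), t_other)
    T[:, 0] = t_left                       # fixed-temperature left boundary
    for _ in range(iters):
        # Jacobi sweep over interior nodes (boundaries stay fixed)
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    return T

if __name__ == "__main__":
    T = solve_steady_conduction()
    print("center temperature:", round(T[20, 20], 2))
```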
Nine-year-old children use norm-based coding to visually represent facial expression.
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
2013-10-01
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.
The design of dual-mode complex signal processors based on quadratic modular number codes
NASA Astrophysics Data System (ADS)
Jenkins, W. K.; Krogmeier, J. V.
1987-04-01
It has been known for a long time that quadratic modular number codes admit an unusual representation of complex numbers which leads to complete decoupling of the real and imaginary channels, thereby simplifying complex multiplication and providing error isolation between the real and imaginary channels. This paper first presents a tutorial review of the theory behind the different types of complex modular rings (fields) that result from particular parameter selections, and then presents a theory for a 'dual-mode' complex signal processor based on the choice of augmented power-of-2 moduli. It is shown how a diminished-1 binary code, used by previous designers for the realization of Fermat number transforms, also leads to efficient realizations for dual-mode complex arithmetic for certain augmented power-of-2 moduli. Then a design is presented for a recursive complex filter based on a ROM/ACCUMULATOR architecture and realized in an augmented power-of-2 quadratic code, and a computer-generated example of a complex recursive filter is shown to illustrate the principles of the theory.
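The decoupling property can be demonstrated with a single small modulus. The sketch below (toy modulus p = 13 with j = 5 satisfying j^2 = -1 mod p; the paper's augmented power-of-2 moduli and diminished-1 coding are not reproduced) maps a + bi to the pair (a + jb, a - jb) so that complex multiplication becomes two independent modular multiplications, then maps the result back.

```python
# Toy quadratic-residue number system (QRNS) sketch for one modulus (assumed
# p = 13, j = 5 with j*j = -1 mod p).  The two channels of the mapped pair are
# multiplied independently, which is the decoupling the paper exploits.

P = 13
J = 5                       # 5*5 = 25 = -1 (mod 13)
J_INV2 = pow(2 * J, -1, P)  # inverse of 2j, used in the reverse mapping

def to_qrns(a, b):
    return ((a + J * b) % P, (a - J * b) % P)

def from_qrns(z, zs):
    a = (z + zs) * pow(2, -1, P) % P
    b = (z - zs) * J_INV2 % P
    return a, b

def qrns_mul(x, y):
    return (x[0] * y[0] % P, x[1] * y[1] % P)   # fully decoupled channels

if __name__ == "__main__":
    x, y = (3, 4), (2, 5)                        # complex numbers 3+4i, 2+5i
    prod = qrns_mul(to_qrns(*x), to_qrns(*y))
    # (3+4i)(2+5i) = -14 + 23i, i.e. (12, 10) mod 13
    print(from_qrns(*prod))
```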
NASA Astrophysics Data System (ADS)
Jung, Sungmo; Kim, Jong Hyun; Cagalaban, Giovanni; Lim, Ji-Hoon; Kim, Seoksoo
In recent years, botnet-based cyber attacks, such as spam mail and DDoS attacks, have sharply increased, posing a serious threat to Internet services. At present, antivirus businesses make it their top priority to detect malicious code in the shortest time possible (Lv.2 on the graph relating the spread of malicious code to time), which means detection occurs only after the malicious code has appeared. Even with early detection, however, the occurrence of malicious code itself cannot be prevented. We have therefore developed an algorithm that detects precursor symptoms at Lv.1 by analyzing system behaviors and monitoring memory, in order to prevent cyber attacks that use the evasion method of an 'execution-environment-aware attack'.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. NASA's HZETRN/QMSFRG satisfies these three criteria to a very high degree.
Tracing the Spanish Language/Determinando el Origen del Idioma Espanol.
ERIC Educational Resources Information Center
Lozano, Anthony G.
1980-01-01
Discusses the history of the Spanish language in America and notes the influence of Caribbean languages, Nahuatl, and English on Spanish. Describes the archaisms in lexicon, phonology, and grammar of the Spanish of New Mexico and Colorado. Discusses Spanish language maintenance in Mexico, Puerto Rico, Cuba, and the United States. (SB)
Los Protectores del Planeta: activities for children and educational resources on recycling
Information about the Los Protectores del Planeta (Planet Protectors) Club, which teaches how to reduce waste at the source, how to recycle, and how to conserve natural resources. The information includes educational resources for the classroom.
Isotopic composition and neutronics of the Okelobondo natural reactor
NASA Astrophysics Data System (ADS)
Palenik, Christopher Samuel
The Oklo-Okelobondo and Bangombe uranium deposits, in Gabon, Africa host Earth's only known natural nuclear fission reactors. These 2 billion year old reactors represent a unique opportunity to study used nuclear fuel over geologic periods of time. The reactors in these deposits have been studied as a means by which to constrain the source term of fission product concentrations produced during reactor operation. The source term depends on the neutronic parameters, which include reactor operation duration, neutron flux and the neutron energy spectrum. Reactor operation has been modeled using a point-source computer simulation (Oak Ridge Isotope Generation and Depletion, ORIGEN, code) for a light water reactor. Model results have been constrained using secondary ionization mass spectroscopy (SIMS) isotopic measurements of the fission products Nd and Te, as well as U in uraninite from samples collected in the Okelobondo reactor zone. Based upon the constraints on the operating conditions, the pre-reactor concentrations of Nd (150 ppm +/- 75 ppm) and Te (<1 ppm) in uraninite were estimated. Related to the burnup measured in Okelobondo samples (0.7 to 13.8 GWd/MTU), the final fission product inventories of Nd (90 to 1200 ppm) and Te (10 to 110 ppm) were calculated. By the same means, the ranges of all other fission products and actinides produced during reactor operation were calculated as a function of burnup. These results provide a source term against which the present elemental and decay abundances at the fission reactor can be compared. Furthermore, they provide new insights into the extent to which a "fossil" nuclear reactor can be characterized on the basis of its isotopic signatures. In addition, results from the study of two other natural systems related to the radionuclide and fission product transport are included. A detailed mineralogical characterization of the uranyl mineralogy at the Bangombe uranium deposit in Gabon, Africa was completed to improve geochemical models of the solubility-limiting phase. A study of the competing effects of radiation damage and annealing in a U-bearing crystal of zircon shows that low temperature annealing in actinide-bearing phases is significant in the annealing of radiation damage.
27 CFR 73.12 - What security controls must I use for identification codes and passwords?
Code of Federal Regulations, 2010 CFR
2010-04-01
If you use electronic signatures based upon use of identification codes in combination with passwords, you must employ controls to ensure their...
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at the bit error rate of 1×10-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
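The synchronization idea rests on the complementary autocorrelation property of the Golay pair. The sketch below (toy sequence length, a made-up noise-only channel, and a simple correlation metric; not the paper's MB-OFDM training-sequence format) generates a Golay pair recursively, correlates a received stream against both halves, and combines the two correlations to locate the start of the training sequence.

```python
import numpy as np

# Toy Golay-complementary-pair frame detection (assumed construction and
# channel; not the paper's training sequence).  The aperiodic autocorrelations
# of the pair sum to a delta, so combining the two correlations gives a sharp
# synchronization peak.

def golay_pair(n_log2):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_log2):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def sync_metric(rx, a, b):
    """Correlate rx against the training sequence [a | b] and combine."""
    n = len(a)
    ca = np.correlate(rx, a, mode="valid")
    cb = np.correlate(rx, b, mode="valid")
    # b follows a by n samples in the transmitted training sequence
    return np.abs(ca[:-n] + cb[n:])

if __name__ == "__main__":
    a, b = golay_pair(5)                         # length-32 pair
    ts = np.concatenate([a, b])
    rx = np.concatenate([np.random.randn(40) * 0.3,
                         ts + np.random.randn(len(ts)) * 0.3,
                         np.random.randn(40) * 0.3])
    print("estimated start:", int(np.argmax(sync_metric(rx, a, b))))  # expect ~40
```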
Noussa-Yao, Joseph; Heudes, Didier; Escudie, Jean-Baptiste; Degoulet, Patrice
2016-01-01
Short-stay MSO (Medicine, Surgery, Obstetrics) hospitalization activities in public and private hospitals providing public services are funded through charges for the services provided (T2A in French). Coding must be well matched to the severity of the patient's condition, to ensure that appropriate funding is provided to the hospital. We propose the use of an autocompletion process and a multidimensional matrix to help physicians improve the expression of information and optimize clinical coding. With this approach, physicians without knowledge of the encoding rules begin from a rough concept, which is gradually refined through semantic proximity and information on associated codes drawn from optimized knowledge bases of diagnosis codes.
Wind characteristics in Be stars derived from the Hα profile
NASA Astrophysics Data System (ADS)
Rohrmann, R.; Cidale, L.
The theoretical study of Hα profiles and their variability in Be stars has usually been carried out with models of inhomogeneous circumstellar envelopes, in which the geometry of the material is responsible for the shape of the profile as a function of the viewing direction. We give an alternative interpretation and propose that most of the properties of this line originate at the base of a stellar wind and in a chromospheric structure attached to the photosphere. We find that typical Hα profiles in Be stars, such as the so-called pole-on and wine-bottle profiles, can be reproduced qualitatively without invoking an asymmetric envelope. We analyze how the Hα line makes it possible to identify the likely structure of the wind in the region where it is launched.
Martins, Renata Cristófani; Buchalla, Cassia Maria
2015-01-01
To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases, tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made in the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were directly coded, without adjustments. Among the death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when comparing the coding by Iris to the manual one. The software performed well, but it needs adjustments and improvement of its dictionary. In upcoming versions of the software, its developers are trying to solve the problem with external causes of death.
MPACT Standard Input User s Manual, Version 2.2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew
The MPACT (Michigan PArallel Characteristics-based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.
Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.
Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao
2017-07-01
In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code, serving as the DNA sequence code, were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of these data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open-source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the above conversion process, it can be seen that the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio exceeds 99.36%. The capacity of the formed QR code is around 0.5k, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a reliable carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on a two-dimensional code.
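A rough sketch of the 2-bits-per-base packing plus Zlib stage is given below (the actual GATC2Bytes and digital merger algorithms are not reproduced, and the sequence is a placeholder rather than a real ITS2 record): four bases are packed into each byte before zlib compression, consistent with the 75% compression rate reported for the sequence data.

```python
import zlib

# Toy "GATC2Bytes"-style packing sketch (assumed mapping and placeholder
# sequence; not the algorithm from the paper): 2 bits per base, 4 bases per
# byte, followed by zlib compression as a stand-in for the Zlib stage.

BASE2BITS = {"G": 0, "A": 1, "T": 2, "C": 3}

def pack_dna(seq):
    seq = seq.upper()
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE2BITS[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    its2 = "GATCGGATCCTTAGCGATCGTACGATCG" * 10     # placeholder, not a real ITS2 record
    packed = pack_dna(its2)
    compressed = zlib.compress(packed, level=9)
    print(len(its2), len(packed), len(compressed))
```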
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L∞-oriented compression, or, at most, provide a very limited number of potential L∞ bit-stream truncation points. We propose a new multidimensional wavelet-based L∞-constrained scalable coding framework that generates a fully embedded L∞-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L∞ coding sense.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance in order to achieve low error floors, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 for rate 1/2 guarantees that the linear-minimum-distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
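The protograph construction can be sketched as a lifting operation. The example below (a toy regular (3,4) base matrix and random circulant shifts; not the precoded protographs proposed in the paper) replaces each nonzero protograph entry with a Z-by-Z circulant permutation matrix and each zero with a zero block to obtain a quasi-cyclic parity-check matrix.

```python
import numpy as np

# Toy protograph lifting sketch (assumed base matrix and random shifts; not
# the code families in the paper): each nonzero base entry becomes a ZxZ
# circulant permutation matrix, each zero a ZxZ zero block.

def lift_protograph(base, Z, seed=0):
    rng = np.random.default_rng(seed)
    blocks = []
    for row in base:
        brow = []
        for entry in row:
            if entry:
                shift = int(rng.integers(Z))
                brow.append(np.roll(np.eye(Z, dtype=np.uint8), shift, axis=1))
            else:
                brow.append(np.zeros((Z, Z), dtype=np.uint8))
        blocks.append(brow)
    return np.block(blocks)

if __name__ == "__main__":
    base = [[1, 1, 1, 1],      # toy regular (3,4) protograph, all variable
            [1, 1, 1, 1],      # nodes of degree 3
            [1, 1, 1, 1]]
    H = lift_protograph(base, Z=8)
    print(H.shape, int(H.sum()))   # (24, 32) parity-check matrix, 96 ones
```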
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Topiwala, Pankaj N.; Luthra, Ajay
2004-11-01
H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) the geometric centroids of ZIP code polygons as origins, 2) population centroids as origins, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
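The centroid-based strategies amount to measuring distance from a single representative point per ZIP code. The sketch below (hypothetical coordinates, an assumed average speed, and straight-line haversine distance standing in for street-network routing) estimates travel time from a geometric and a population-weighted centroid to the nearest of several facilities.

```python
import math

# Toy centroid-based travel-time estimate (assumed coordinates and speed;
# real travel times in the study came from street-address routing).

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def travel_minutes(centroid, centers, speed_kmh=60.0):
    dist = min(haversine_km(*centroid, *c) for c in centers)
    return 60.0 * dist / speed_kmh

if __name__ == "__main__":
    geometric_centroid  = (43.20, -71.54)   # hypothetical ZIP polygon centroid
    population_centroid = (43.22, -71.50)   # hypothetical population-weighted centroid
    cancer_centers = [(43.65, -72.32), (42.36, -71.07)]
    print(travel_minutes(geometric_centroid, cancer_centers),
          travel_minutes(population_centroid, cancer_centers))
```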
User's manual for the ALS base heating prediction code, volume 2
NASA Technical Reports Server (NTRS)
Reardon, John E.; Fulton, Michael S.
1992-01-01
The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field adjustment. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
A feasibility study of reactor-based deep-burn concepts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T. K.; Taiwo, T. A.; Hill, R. N.
2005-09-16
A systematic assessment of the General Atomics (GA) proposed Deep-Burn concept based on the Modular Helium-Cooled Reactor design (DB-MHR) has been performed. Preliminary benchmarking of deterministic physics codes was done by comparing code results to those from MONTEBURNS (MCNP-ORIGEN) calculations. Detailed fuel cycle analyses were performed in order to provide an independent evaluation of the physics and transmutation performance of the one-pass and two-pass concepts. Key performance parameters such as transuranic consumption, reactor performance, and spent fuel characteristics were analyzed. This effort has been undertaken in close collaboration with the General Atomics design team and the Brookhaven National Laboratory evaluation team. The study was performed primarily for a 600 MWt reference DB-MHR design having a power density of 4.7 MW/m^3. Based on a parametric and sensitivity study, it was determined that the maximum burnup (TRU consumption) can be obtained using optimum values of 200 µm and 20% for the fuel kernel diameter and fuel packing fraction, respectively. These values were retained for most of the one-pass and two-pass design calculations; variation of the packing fraction was necessary for the second stage of the two-pass concept. Using a four-batch fuel management scheme for the one-pass DB-MHR core, it was possible to obtain a TRU consumption of 58% and a cycle length of 286 EFPD. By increasing the core power to 800 MWt and the power density to 6.2 MW/m^3, it was possible to increase the TRU consumption to 60%, although the cycle length decreased by approximately 64 days. The higher TRU consumption (burnup) is due to the reduction of the in-core decay of fissile Pu-241 to Am-241 relative to fission, arising from the higher power density (specific power), which kept the fuel more reactive over time. It was also found that the TRU consumption can be improved by utilizing axial fuel shuffling or by operating with lower material temperatures (colder core). Results also showed that the transmutation performance of the one-pass deep-burn concept is sensitive to the initial TRU vector, primarily because longer cooling time reduces the fissile content (Pu-241 specifically). With a cooling time of 5 years, the TRU consumption increases to 67%, while conversely, with 20-year cooling the TRU consumption is about 58%. For the two-pass DB-MHR (TRU recycling option), a fuel packing fraction of about 30% is required in the second pass (the recycled TRU). It was found that, using a heterogeneous core (homogeneous fuel element) concept, the TRU consumption depends on the cooling interval before the second pass, again due to Pu-241 decay during the time lag between the first-pass fuel discharge and the second-pass fuel charge. With a cooling interval of 7 years (5 and 2 years before and after reprocessing), a TRU consumption of 55% is obtained. With an assumed "no cooling" interval, the TRU consumption is 63%. By using a cylindrical core to reduce neutron leakage, the TRU consumption of the case with a 7-year cooling interval increases to 58%. For a two-pass concept using a heterogeneous fuel element (and homogeneous core) with a first- to second-pass volume ratio of 2:1, the TRU consumption is 62.4%. Finally, the repository loading benefits arising from the deep-burn and Inert Matrix Fuel (IMF) concepts were estimated and compared, for the same initial TRU vector. The DB-MHR concept resulted in slightly higher TRU consumption and repository loading benefit compared to the IMF concept (58.1% versus 55.1% for TRU consumption and 2.0 versus 1.6 for estimated repository loading benefit).
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
Edge Simulation Laboratory Progress and Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, R
The Edge Simulation Laboratory (ESL) is a project to develop a gyrokinetic code for MFE edge plasmas based on continuum (Eulerian) techniques. ESL is a base-program activity of OFES, with an allied algorithm research activity funded by the OASCR base math program. ESL OFES funds directly support about 0.8 FTE of career staff at LLNL, a postdoc and a small fraction of an FTE at GA, and a graduate student at UCSD. In addition, the allied OASCR program funds about 1/2 FTE each in the computation directorates at LBNL and LLNL. OFES ESL funding for LLNL and UCSD began in fall 2005, while funding for GA and the math team began about a year ago. ESL's continuum approach is a complement to the PIC-based methods of the CPES Project, and was selected (1) because of concerns about noise issues associated with PIC in the high-density-contrast environment of the edge pedestal, (2) to be able to exploit advanced numerical methods developed for fluid codes, and (3) to build upon the successes of core continuum gyrokinetic codes such as GYRO, GS2 and GENE. The ESL project presently has three components: TEMPEST, a full-f, full-geometry (single-null divertor, or arbitrary-shape closed flux surfaces) code in E, µ (energy, magnetic-moment) coordinates; EGK, a simple-geometry rapid-prototyping code; and the math component, which is developing and implementing algorithms for a next-generation code. Progress would be accelerated if we could find funding for a fourth, computer science, component, which would develop software infrastructure, provide user support, and address needs for data handling and analysis. We summarize the status and plans for the three funded activities.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression consists of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
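The benefit of the DPCM transformation before entropy coding can be sketched in a few lines. The example below is an illustrative sketch only, not code from the article: a synthetic gradient image stands in for a radiograph, and zlib's DEFLATE (an LZ77/Huffman scheme) stands in for the LZW and Huffman coders discussed above; the DPCM step stores each pixel's difference from its left neighbour modulo 256 so the transform remains lossless at one byte per pixel.

    import zlib
    import numpy as np

    def dpcm(image: np.ndarray) -> np.ndarray:
        """Lossless row-wise DPCM: each pixel becomes its difference from the
        left neighbour, kept modulo 256 so the result is still one byte/pixel."""
        diff = image.copy()
        diff[:, 1:] = (image[:, 1:].astype(np.int16)
                       - image[:, :-1].astype(np.int16)) % 256
        return diff.astype(np.uint8)

    # Hypothetical smooth 8-bit test image (a diagonal ramp) standing in for a radiograph.
    ramp = np.arange(256, dtype=np.uint16)
    img = ((ramp[None, :] + ramp[:, None]) // 2).astype(np.uint8)

    raw = zlib.compress(img.tobytes(), 9)          # DEFLATE on the raw pixels
    pred = zlib.compress(dpcm(img).tobytes(), 9)   # DEFLATE on the DPCM residuals
    print(f"raw: {len(raw)} bytes, DPCM + DEFLATE: {len(pred)} bytes")

On smooth imagery the residual stream is far more compressible than the raw pixels, which is the effect the article reports for its example radiographs.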
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
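The combinatorial (enumerative) coding idea underlying C4's entropy coder can be illustrated independently of the paper: a fixed-length binary block is represented by its lexicographic rank among all blocks of the same length and weight, so the rank together with the weight recovers the block exactly. The sketch below is a generic textbook illustration, not the authors' C4 implementation.

    from math import comb

    def enumerative_encode(bits):
        """Rank a binary block among all blocks of the same length and weight."""
        n, k, rank = len(bits), sum(bits), 0
        for i, b in enumerate(bits):
            remaining = n - i - 1
            if b == 1:
                rank += comb(remaining, k)  # count the blocks that have a 0 here instead
                k -= 1
        return rank

    def enumerative_decode(rank, n, k):
        """Invert the ranking, given the block length n and weight k."""
        bits = []
        for i in range(n):
            remaining = n - i - 1
            zeros_first = comb(remaining, k)   # blocks with a 0 at this position
            if rank >= zeros_first:
                bits.append(1)
                rank -= zeros_first
                k -= 1
            else:
                bits.append(0)
        return bits

    block = [0, 1, 1, 0, 1, 0, 0, 1]
    r = enumerative_encode(block)
    assert enumerative_decode(r, len(block), sum(block)) == block

Because the rank is an exact count, the scheme spends essentially log2 C(n, k) bits per block, which is why it can match arithmetic coding in efficiency while using only integer arithmetic.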
Variable Coded Modulation software simulation
NASA Astrophysics Data System (ADS)
Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise
This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be chosen dynamically, block to block, to optimize throughput.
¿Origen dinámico del polvo sobre la superficie de Iapetus? (Dynamical Origin of the Dust on the Surface of Iapetus?)
NASA Astrophysics Data System (ADS)
Leiva, A. M.; Briozzo, C. B.
We implement the three-dimensional Circular Restricted Three-Body Problem for low-energy particles entering the Saturn-Iapetus system from the recently discovered dust ring. The distribution of impacts on the surface of Iapetus so obtained shows features resembling those of the dark regions observed on this satellite. FULL TEXT IN SPANISH
NASA Technical Reports Server (NTRS)
Stoenescu, Tudor M.; Woo, Simon S.
2009-01-01
In this work, we consider information dissemination and sharing in a distributed, highly dynamic peer-to-peer (P2P) communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.
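How network coding and rank-based selection fit together can be sketched as follows. This is an illustrative sketch only (the paper publishes no code): random linear network coding over GF(2), in which a receiver keeps only "innovative" coded packets, those that raise the rank of its coefficient matrix, and decodes by Gaussian elimination once the rank equals the number of source packets.

    import numpy as np

    rng = np.random.default_rng(0)

    def gf2_rank(matrix: np.ndarray) -> int:
        """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
        m = matrix.copy() % 2
        rank, rows, cols = 0, m.shape[0], m.shape[1]
        for col in range(cols):
            pivot = next((r for r in range(rank, rows) if m[r, col]), None)
            if pivot is None:
                continue
            m[[rank, pivot]] = m[[pivot, rank]]
            for r in range(rows):
                if r != rank and m[r, col]:
                    m[r] ^= m[rank]
            rank += 1
        return rank

    # Hypothetical scenario: 4 source packets of 16 bits each.
    k, packet_bits = 4, 16
    source = rng.integers(0, 2, size=(k, packet_bits), dtype=np.uint8)

    received_coeffs, received_payloads, rank = [], [], 0
    while rank < k:
        coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)      # random GF(2) combination
        if not coeffs.any():
            continue
        payload = np.bitwise_xor.reduce(source[coeffs == 1], axis=0)
        new_rank = gf2_rank(np.array(received_coeffs + [coeffs]))
        if new_rank > rank:               # keep only innovative packets (rank-based selection)
            received_coeffs.append(coeffs)
            received_payloads.append(payload)
            rank = new_rank

    # Decode by GF(2) Gauss-Jordan elimination on [coefficients | payloads].
    A = np.array(received_coeffs, dtype=np.uint8)
    B = np.array(received_payloads, dtype=np.uint8)
    for col in range(k):
        pivot = next(r for r in range(col, k) if A[r, col])
        A[[col, pivot]], B[[col, pivot]] = A[[pivot, col]], B[[pivot, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                B[r] ^= B[col]
    assert np.array_equal(B, source)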
Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul
2018-07-01
Mixed frequency and phase coding (FPC) can significantly increase the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, inconsistent phases of the SSVEP over channels within a trial and the existence of non-contributing channels due to noise can decrease accurate detection of the stimulus frequency. We propose a novel command detection method based on a complex sparse spatial filter (CSSF) by solving ℓ1- and ℓ2,1-regularization problems for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (aka group sparsification) can lead to the rejection of electrodes that are not contributing to the SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and CSSF with ℓ1- and ℓ2,1-regularization were demonstrated for 16-target stimuli with eleven subjects. The results of a statistical test suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the significantly highest ITR. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can be used to implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.
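For reference, the CCA detector that methods such as CSSF are typically compared against can be sketched as below. This is a standard CCA-based SSVEP frequency detector, not the authors' ℓ1/ℓ2,1-regularized spatial filter, and the sampling rate, channel count, and candidate frequencies are illustrative assumptions.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def reference_signals(freq, n_samples, fs, n_harmonics=2):
        """Sin/cos reference matrix for one stimulation frequency."""
        t = np.arange(n_samples) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * freq * t))
            refs.append(np.cos(2 * np.pi * h * freq * t))
        return np.column_stack(refs)

    def detect_frequency(eeg, fs, candidate_freqs):
        """Pick the stimulus frequency whose references correlate best with the EEG.
        eeg: (n_samples, n_channels) array for one trial."""
        scores = []
        for f in candidate_freqs:
            Y = reference_signals(f, eeg.shape[0], fs)
            cca = CCA(n_components=1)
            x_c, y_c = cca.fit_transform(eeg, Y)
            scores.append(abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]))
        return candidate_freqs[int(np.argmax(scores))]

    # Illustrative synthetic trial: 8 channels, 2 s at 256 Hz, a 12 Hz SSVEP plus noise.
    fs, n_samples, n_channels = 256, 512, 8
    rng = np.random.default_rng(1)
    t = np.arange(n_samples) / fs
    eeg = 0.5 * np.sin(2 * np.pi * 12 * t)[:, None] + rng.normal(size=(n_samples, n_channels))
    print(detect_frequency(eeg, fs, [10.0, 11.0, 12.0, 13.0]))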
FPCAS2D user's guide, version 1.0
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.
1994-01-01
The FPCAS2D computer code has been developed for aeroelastic stability analysis of bladed disks such as those in fans, compressors, turbines, propellers, or propfans. The aerodynamic analysis used in this code is based on the unsteady two-dimensional full potential equation which is solved for a cascade of blades. The structural analysis is based on a two degree-of-freedom rigid typical section model for each blade. Detailed explanations of the aerodynamic analysis, the numerical algorithms, and the aeroelastic analysis are not given in this report. This guide can be used to assist in the preparation of the input data required by the FPCAS2D code. A complete description of the input data is provided in this report. In addition, four test cases, including inputs and outputs, are provided.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Trellis research for linear block codes, by contrast, lay dormant for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
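As a concrete illustration of trellis-based maximum likelihood decoding, the following is a minimal hard-decision Viterbi decoder for the familiar 4-state, rate-1/2 convolutional code with generators (7, 5) in octal; it is a textbook sketch, not code from the monograph.

    G = [0b111, 0b101]                      # generators (7, 5) octal, constraint length 3
    N_STATES = 4                            # two memory bits

    def encode(bits):
        """Encode with the rate-1/2 (7,5) code, zero-tail terminated."""
        state, out = 0, []
        for b in list(bits) + [0, 0]:       # two tail bits flush the memory
            reg = (b << 2) | state
            out += [bin(reg & g).count("1") % 2 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        """Hard-decision Viterbi decoding over the 4-state trellis."""
        INF = float("inf")
        metric = [0.0] + [INF] * (N_STATES - 1)   # start in the all-zero state
        paths = [[] for _ in range(N_STATES)]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_metric = [INF] * N_STATES
            new_paths = [None] * N_STATES
            for state in range(N_STATES):
                if metric[state] == INF:
                    continue
                for b in (0, 1):                  # one branch per possible input bit
                    reg = (b << 2) | state
                    expected = [bin(reg & g).count("1") % 2 for g in G]
                    nxt = reg >> 1
                    m = metric[state] + sum(x != y for x, y in zip(r, expected))
                    if m < new_metric[nxt]:       # keep the survivor into each state
                        new_metric[nxt] = m
                        new_paths[nxt] = paths[state] + [b]
            metric, paths = new_metric, new_paths
        return paths[0][:-2]                      # survivor ending in state 0, tail bits dropped

    msg = [1, 0, 1, 1, 0, 0, 1]
    coded = encode(msg)
    coded[3] ^= 1                                 # inject one channel error
    assert viterbi(coded) == msg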
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
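As a rough point of reference for capacity-style arguments, the classical binary-symmetric-channel capacity shows how quickly the achievable payload shrinks as the symbol error rate grows; the snippet below computes only that textbook quantity as a crude proxy, not the paper's average channel capacity metric.

    import numpy as np

    def bsc_capacity(p):
        """Capacity (bits per binary symbol) of a binary symmetric channel."""
        if p in (0.0, 1.0):
            return 1.0
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy of the crossover rate
        return 1.0 - h

    for p in (0.01, 0.05, 0.10, 0.20):
        print(f"crossover {p:.2f}: capacity {bsc_capacity(p):.3f} bit/symbol")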
DNA barcode goes two-dimensions: DNA QR code web server.
Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interests, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
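To make the idea concrete, a barcode sequence can be rendered as a QR symbol in a couple of lines with the third-party qrcode Python package (an assumption here; the web server above uses its own pipeline, and the sequence shown is a made-up fragment rather than a real marker sequence).

    import qrcode

    # Hypothetical barcode fragment; real ITS2/rbcL sequences are a few hundred bases long.
    sequence = "ATGTCACCACAAACAGAGACTAAAGCAAGTGTTGGATTCAAAGCTGGTGTTAAAGATTAC"

    img = qrcode.make(sequence)        # encodes the text with Reed-Solomon error correction
    img.save("dna_barcode_qr.png")     # 2D symbol any QR reader can decode back to the text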
Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin
2011-01-01
The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to suit application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred as 2G-RFID-sys, is an evolution of the traditional RFID system to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, the realization of conveying intelligence brings a critical issue: how can we make sure the backend system will interpret and execute mobile codes in the right way without misuse so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named Two-Level Security Enhancement Mechanism or 2L-SEM, in order to ensure the usability and validity of the mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filtrate the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and its typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with the simulation results to evaluate how the proposed mechanism can guarantee secure execution of mobile codes for the system. PMID:22163983
NASA Technical Reports Server (NTRS)
Myhill, Elizabeth A.; Boss, Alan P.
1993-01-01
In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order-accuracy finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
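The phrase "explicit Eulerian methods which are both spatially and temporally second-order accurate" can be illustrated with a one-dimensional Lax-Wendroff step for linear advection; this is a generic textbook sketch, not the Boss & Myhill hydrodynamics code.

    import numpy as np

    def lax_wendroff_step(u, c, dx, dt):
        """One explicit Lax-Wendroff update for u_t + c u_x = 0 on a periodic grid.
        The scheme is second-order accurate in both space and time."""
        nu = c * dt / dx                      # Courant number, must satisfy |nu| <= 1
        up = np.roll(u, -1)                   # u_{j+1}
        um = np.roll(u, 1)                    # u_{j-1}
        return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

    # Advect a Gaussian pulse once around a periodic domain.
    nx, c = 200, 1.0
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = 0.5 * dx / c
    u = np.exp(-200 * (x - 0.5) ** 2)
    for _ in range(int(round(1.0 / (c * dt)))):   # one full period
        u = lax_wendroff_step(u, c, dx, dt)
    print("max error after one period:", np.max(np.abs(u - np.exp(-200 * (x - 0.5) ** 2))))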
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
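The inversion of the usual rate-distortion loop can be sketched abstractly: instead of adding coding passes until a bit budget is exhausted, passes are added, best distortion reduction per bit first, only until the estimated perceptual distortion falls to the target. The snippet below is a schematic illustration of that control logic with made-up per-pass statistics, not the JPEG 2000 tier-1/tier-2 machinery (which must also keep the passes of each coding block in order).

    # Hypothetical per-coding-pass statistics: (bits spent, perceptual distortion removed).
    passes = [(1200, 40.0), (900, 22.0), (700, 9.0), (650, 4.0), (600, 1.5), (580, 0.6)]

    def truncate_for_quality(pass_stats, total_distortion, target_distortion):
        """Keep adding coding passes, best distortion-per-bit first, until the
        remaining perceptual distortion is at or below the target."""
        ordered = sorted(pass_stats, key=lambda p: p[1] / p[0], reverse=True)
        bits, remaining = 0, total_distortion
        for cost, gain in ordered:
            if remaining <= target_distortion:
                break
            bits += cost
            remaining -= gain
        return bits, remaining

    bits, remaining = truncate_for_quality(passes, total_distortion=80.0, target_distortion=10.0)
    print(f"spend {bits} bits, residual perceptual distortion {remaining:.1f}")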
Hand, Rosa K; Kenne, Deric; Wolfram, Taylor M; Abram, Jenica K; Fleming, Michael
2016-01-01
Background: Given the high penetration of social media use, social media has been proposed as a method for the dissemination of information to health professionals and patients. This study explored the potential for social media dissemination of the Academy of Nutrition and Dietetics Evidence-Based Nutrition Practice Guideline (EBNPG) for Heart Failure (HF). Objectives: The objectives were to (1) describe the existing social media content on HF, including message content, source, and target audience, and (2) describe the attitude of physicians and registered dietitian nutritionists (RDNs) who care for outpatient HF patients toward the use of social media as a method to obtain information for themselves and to share this information with patients. Methods: The methods were divided into 2 parts. Part 1 involved conducting a content analysis of tweets related to HF, which were downloaded from Twitonomy and assigned codes for message content (19 codes), source (9 codes), and target audience (9 codes); code frequency was described. A comparison in the popularity of tweets (those marked as favorites or retweeted) based on applied codes was made using t tests. Part 2 involved conducting phone interviews with RDNs and physicians to describe health professionals' attitude toward the use of social media to communicate general health information and information specifically related to the HF EBNPG. Interviews were transcribed and coded; exemplar quotes representing frequent themes are presented. Results: The sample included 294 original tweets with the hashtag "#heartfailure." The most frequent message content codes were "HF awareness" (166/294, 56.5%) and "patient support" (97/294, 33.0%). The most frequent source codes were "professional, government, patient advocacy organization, or charity" (112/277, 40.4%) and "patient or family" (105/277, 37.9%). The most frequent target audience codes were "unable to identify" (111/277, 40.1%) and "other" (55/277, 19.9%). Significant differences were found in the popularity of tweets with (mean 1, SD 1.3 favorites) or without (mean 0.7, SD 1.3 favorites) the content code "HF research" (P=.049). Tweets with the source code "professional, government, patient advocacy organizations, or charities" were significantly more likely to be marked as a favorite and retweeted than those without this source code (mean 1.2, SD 1.4 vs mean 0.8, SD 1.2, P=.03) and (mean 1.5, SD 1.8 vs mean 0.9, SD 2.0, P=.03). Interview participants believed that social media was a useful way to gather professional information. They did not believe that social media was useful for communicating with patients, due to privacy concerns, the fact that the information had to be kept general rather than tailored for a specific patient, and the belief that their patients did not use social media or technology. Conclusions: Existing Twitter content related to HF comes from a combination of patients and evidence-based organizations; however, there is little nutrition content. That gap may present an opportunity for EBNPG dissemination. Health professionals use social media to gather information for themselves but are skeptical of its value when communicating with patients, particularly due to privacy concerns and misconceptions about the characteristics of social media users. PMID:27847349
Phonologically-Based Priming in the Same-Different Task With L1 Readers.
Lupker, Stephen J; Nakayama, Mariko; Yoshihara, Masahiro
2018-02-01
The present experiment provides an investigation of a promising new tool, the masked priming same-different task, for investigating the orthographic coding process. Orthographic coding is the process of establishing a mental representation of the letters and letter order in the word being read which is then used by readers to access higher-level (e.g., semantic) information about that word. Prior research (e.g., Norris & Kinoshita, 2008) had suggested that performance in this task may be based entirely on orthographic codes. As reported by Lupker, Nakayama, and Perea (2015a), however, in at least some circumstances, phonological codes also play a role. Specifically, even though their 2 languages are completely different orthographically, Lupker et al.'s Japanese-English bilinguals showed priming in this task when masked L1 primes were phonologically similar to L2 targets. An obvious follow-up question is whether Lupker et al.'s effect might have resulted from a strategy that was adopted by their bilinguals to aid in processing of, and memory for, the somewhat unfamiliar L2 targets. In the present experiment, Japanese readers responded to (Japanese) Kanji targets with phonologically identical primes (on "related" trials) being presented in a completely different but highly familiar Japanese script, Hiragana. Once again, significant priming effects were observed, indicating that, although performance in the masked priming same-different task may be mainly based on orthographic codes, phonological codes can play a role even when the stimuli being matched are familiar words from a reader's L1. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
DIANA-LncBase v2: indexing microRNA targets on non-coding transcripts
Paraskevopoulou, Maria D.; Vlachos, Ioannis S.; Karagkouni, Dimitra; Georgakilas, Georgios; Kanellos, Ilias; Vergoulis, Thanasis; Zagganas, Konstantinos; Tsanakas, Panayiotis; Floros, Evangelos; Dalamagas, Theodore; Hatzigeorgiou, Artemis G.
2016-01-01
microRNAs (miRNAs) are short non-coding RNAs (ncRNAs) that act as post-transcriptional regulators of coding gene expression. Long non-coding RNAs (lncRNAs) have recently been reported to interact with miRNAs. The sponge-like function of lncRNAs introduces an extra layer of complexity in the miRNA interactome. DIANA-LncBase v1 provided a database of experimentally supported and in silico predicted miRNA Recognition Elements (MREs) on lncRNAs. The second version of LncBase (www.microrna.gr/LncBase) presents an extensive collection of miRNA:lncRNA interactions. The significantly enhanced database includes more than 70 000 low- and high-throughput, (in)direct miRNA:lncRNA experimentally supported interactions, derived from manually curated publications and the analysis of 153 AGO CLIP-Seq libraries. The new experimental module presents a 14-fold increase compared to the previous release. LncBase v2 hosts in silico predicted miRNA targets on lncRNAs, identified with the DIANA-microT algorithm. The relevant module provides millions of predicted miRNA binding sites, accompanied by detailed metadata and MRE conservation metrics. LncBase v2 provides information regarding cell-type-specific miRNA:lncRNA regulation and enables users to easily identify interactions in 66 different cell types, spanning 36 tissues for human and mouse. Database entries are also supported by accurate lncRNA expression information, derived from the analysis of more than 6 billion RNA-Seq reads. PMID:26612864
A bandwidth efficient coding scheme for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J., Jr.
1991-01-01
As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses a rate 1/3, ν = 6 (with 2^ν = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency K = 1/3 bit/sym. This is a very bandwidth-inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and has a coding gain of 4.8 dB compared to uncoded 8PSK (2). This trellis code also has the advantage that it is 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
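The bandwidth-efficiency arithmetic in the passage can be checked directly; the numbers below simply restate the figures given above (a 3 Msym/s modulator, K = 1/3 bit/sym for the current HST scheme, and roughly 2.5 bit/sym for the proposed 4D-8PSK trellis code, with the Reed-Solomon outer-code overhead ignored for simplicity).

    symbol_rate = 3.0e6                 # symbols per second (modulator limit)

    k_current = 1.0 / 3.0               # bit/sym: rate-1/3 convolutional code on BPSK
    k_proposed = 2.5                    # bit/sym: rate-5/6 trellis code on 4D-8PSK

    print(f"current scheme:  {symbol_rate * k_current / 1e6:.1f} Mbit/s")
    print(f"proposed scheme: {symbol_rate * k_proposed / 1e6:.1f} Mbit/s")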
Savundranayagam, Marie Y; Moore-Nielsen, Kelsey
2015-10-01
There are many recommended language-based strategies for effective communication with persons with dementia. What is unknown is whether effective language-based strategies are also person centered. Accordingly, the objective of this study was to examine whether language-based strategies for effective communication with persons with dementia overlapped with the following indicators of person-centered communication: recognition, negotiation, facilitation, and validation. Conversations (N = 46) between staff-resident dyads were audio-recorded during routine care tasks over 12 weeks. Staff utterances were coded twice, using language-based and person-centered categories. There were 21 language-based categories and 4 person-centered categories. There were 5,800 utterances transcribed: 2,409 without indicators, 1,699 coded as language or person centered, and 1,692 overlapping utterances. For recognition, 26% of utterances were greetings, 21% were affirmations, 13% were questions (yes/no and open-ended), and 15% involved rephrasing. Questions (yes/no, choice, and open-ended) comprised 74% of utterances coded as negotiation. A similar pattern was observed for facilitation, where 51% of utterances coded as facilitation were yes/no questions, open-ended questions, and choice questions. However, 21% of facilitative utterances were affirmations and 13% involved rephrasing. Finally, 89% of utterances coded as validation were affirmations. The findings identify specific language-based strategies that support person-centered communication. However, only between 1 and 4 of the possible 21 language-based strategies overlapped with at least 10% of utterances coded as each person-centered indicator. This finding suggests that staff need training to use more diverse language strategies that support the personhood of residents with dementia.
An interactive toolbox for atlas-based segmentation and coding of volumetric images
NASA Astrophysics Data System (ADS)
Menegaz, G.; Luti, S.; Duay, V.; Thiran, J.-Ph.
2007-03-01
Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and legal reasons and yet provide high compression rates for reduced storage and transmission time. The images usually consist of a region of interest representing the part of the body under investigation surrounded by a "background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D coding system integrating both the segmentation and the compression tools. The ROI is extracted by an atlas-based 3D segmentation method combining active contours with information-theoretic principles, and the resulting segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing the medical doctors to supervise the segmentation process and, if necessary, reshape the detected contours at any point. The process is initiated by the user through the selection of either one pre-defined reference image or one image of the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the next, where it is used as the initial border estimation. In this way, the entire volume is segmented based on a unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image regions. Two coding systems were considered: the JPEG3D standard and 3D-SPIHT. The evaluation of the performance with respect to both segmentation and coding proved the high potential of the proposed system in providing an integrated, low-cost and computationally effective solution for CAD and PACS systems.
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-01
Objective: To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design: A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting: Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure: Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding 'poor' quality practices with at least 10% of incident patients inaccurately coded between 2004 and 2014. Results: Incidence rates and accuracy of coding varied widely between practices, and the trends differed according to the selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), then flattened off until 2009, after which it decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled 'poor' quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions: In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performance is limited by the inhomogeneous background in the sensor field of view, which strongly impacts the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening
NASA Technical Reports Server (NTRS)
Dolinar, Sam
2005-01-01
We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.
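Protograph-based construction can be illustrated by "lifting" a small base graph with circulant permutations; the base matrix and lift size below are toy values chosen for illustration, not the family described in the abstract.

    import numpy as np

    def lift_protograph(base, z):
        """Expand a protograph base matrix into a binary parity-check matrix.
        Each entry >= 0 is a circulant shift replaced by a shifted z x z
        identity block; -1 marks an all-zero block."""
        rows, cols = base.shape
        H = np.zeros((rows * z, cols * z), dtype=np.uint8)
        I = np.eye(z, dtype=np.uint8)
        for r in range(rows):
            for c in range(cols):
                shift = base[r, c]
                if shift >= 0:
                    H[r * z:(r + 1) * z, c * z:(c + 1) * z] = np.roll(I, shift, axis=1)
        return H

    # Toy base matrix (2 check nodes x 4 variable nodes) and a lift factor of 4.
    base = np.array([[0, 1, -1, 2],
                     [2, -1, 0, 1]])
    H = lift_protograph(base, z=4)
    print(H.shape)   # (8, 16): a rate-1/2 code before any expurgation or lengthening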
Empirical evidence for site coefficients in building code provisions
Borcherdt, R.D.
2002-01-01
Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.
BRYNTRN: A baryon transport computer code, computation procedures and data base
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank
1988-01-01
The development of an interaction database and a numerical solution for the transport of baryons through arbitrary shield material, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous-energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
A Subband Coding Method for HDTV
NASA Technical Reports Server (NTRS)
Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.
1995-01-01
This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined-cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
NASA Astrophysics Data System (ADS)
Tayama, Ryuichi; Wakasugi, Kenichi; Kawanaka, Ikunori; Kadota, Yoshinobu; Murakami, Yasuhiro
We measured the skyshine dose from turbine buildings at Shimane Nuclear Power Station Unit 1 (NS-1) and Unit 2 (NS-2), and then compared it with the dose calculated with the Monte Carlo transport code MCNP5. The skyshine dose values calculated with the MCNP5 code agreed with the experimental data within a factor of 2.8, when the roof of the turbine building was precisely modeled. We concluded that our MCNP5 calculation was valid for BWR turbine skyshine dose evaluation.
Ideas linguisticas de Bernardo de Aldrete (The Linguistic Ideas of Bernardo de Aldrete).
ERIC Educational Resources Information Center
Molina Redondo, Jose Andres de
1968-01-01
In 1602, Bernardo Jose de Aldrete wrote a book on Spanish linguistics entitled "Del origen de la lengua castellana o romance que hoy se usa en España" (The Origin of Castilian Presently Spoken in Spain), in which he explains and defends Castilian against the purist preference for Latin. This article reviews and evaluates Aldrete's linguistic ideas…
NASA Technical Reports Server (NTRS)
Steinke, Ronald J.
1989-01-01
The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; data postprocessing and graphics were then performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
Moreno, M A; Domínguez, L; Teshager, T; Herrero, I A; Porrero, M C
2000-05-01
Antimicrobial resistance is a problem in modern public health, and antimicrobial use, and especially misuse, is the most important selective force for bacterial antibiotic resistance. As this resistance must be monitored, we have designed the Spanish network 'Red de Vigilancia de Resistencias Antibióticas en Bacterias de Origen Veterinario'. This network covers the three critical points of veterinary responsibility: bacteria from sick animals, bacteria from healthy animals, and bacteria from food animals. Key bacteria, antimicrobials and animal species have been defined for each of these groups, along with laboratory methods for testing antimicrobial susceptibility and for data analysis and reporting. Surveillance of sick animals was first implemented using Escherichia coli as the sentinel bacterium. Surveillance of E. coli and Enterococcus faecium from healthy pigs was implemented in 1998. In July 1999, data collection on Salmonella spp. was initiated in poultry slaughterhouses. Additionally, the prevalence of vancomycin-resistant E. faecium was also monitored. This network has specific topics of interest related to methods of determining resistance, analysis and reporting of data, methods of use for veterinary practitioners, and collaboration with public health authorities.
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have uncorrelated fission emissions. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
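The Feynman histograms mentioned as a benchmark observable can be computed from a list of detection times by counting neutrons in consecutive gates of fixed width; the excess variance-to-mean ratio (Feynman-Y) rises above zero when fission chains correlate the counts. The sketch below uses made-up detection times, not benchmark data, and is independent of the transport codes named above.

    import numpy as np

    def feynman_histogram(times, gate_width):
        """Histogram of counts per non-overlapping gate, plus the Feynman-Y statistic."""
        n_gates = int(np.floor(times.max() / gate_width))
        gates = np.floor(times[times < n_gates * gate_width] / gate_width).astype(int)
        counts = np.bincount(gates, minlength=n_gates)   # neutrons seen in each gate
        hist = np.bincount(counts)                       # how many gates saw 0, 1, 2, ... counts
        y = counts.var() / counts.mean() - 1.0           # Feynman-Y (0 for a pure Poisson source)
        return hist, y

    rng = np.random.default_rng(2)
    # Made-up detection times (seconds): a Poisson background plus a few correlated bursts.
    poisson = rng.uniform(0.0, 10.0, size=5000)
    bursts = (rng.uniform(0.0, 10.0, size=200)[:, None]
              + rng.exponential(5e-5, size=(200, 4))).ravel()
    times = np.sort(np.concatenate([poisson, bursts]))

    hist, y = feynman_histogram(times, gate_width=1e-3)
    print("Feynman-Y:", round(y, 3))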
Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability
NASA Astrophysics Data System (ADS)
Guruvareddiar, Palanivel; Joseph, Biju K.
2014-03-01
Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptations and improved error resilience, but lacks in compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream could be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables a scenario where the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view rate) with an average extraction time for a view component of only 0.38 ms.
Tan, N C; Ang, A; Heng, D; Chen, J; Wong, H B
2007-01-01
The survey aimed to describe the epidemiology of playground-related injuries in Singapore based on the ICD-9, AIS/ISS and PTS scoring systems, and the mechanisms and causes of such injuries according to E codes and ICECI codes. A cross-sectional questionnaire survey examined children (< 16 years old) who sought treatment for or died of unintentional injuries in the ED of three hospitals, two primary care centers and the sole Forensic Medicine Department of Singapore. A data dictionary was compiled using guidelines from CDC/WHO. The ISS, AIS and PTS, ICD-9, ICECI v1 and E codes were used to describe the details of the injuries. 19,094 childhood injuries were recorded in the database, of which 1617 were playground injuries (8.5%). The injured children (mean age = 6.8 years, SD 2.9 years) were predominantly male (M:F ratio = 1.71:1). Falls were the most frequent injuries (70.7%) using ICECI. 25.0% of injuries involved radial and ulnar fractures (ICD-9 code). 99.4% of these injuries were minor, with PTS scores of 9-12. Children aged 6-10 years were prone to upper limb injuries (71.1%) based on AIS. The use of international coding systems in injury surveillance facilitated standardisation of the description and comparison of playground injuries.
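For context on the scoring systems mentioned above, here is a minimal sketch of the standard Injury Severity Score calculation from AIS severities; the body-region labels and example injuries are illustrative assumptions, not data from the survey.

# Hedged sketch (not from the survey itself): the standard Injury Severity
# Score (ISS) computation from Abbreviated Injury Scale (AIS) severities.
# Region names and the example injuries are illustrative assumptions.
def injury_severity_score(injuries):
    """injuries: iterable of (body_region, ais_severity) pairs, AIS in 1..6.

    ISS = sum of squares of the highest AIS in each of the three most
    severely injured body regions; any AIS of 6 sets ISS to the maximum, 75.
    """
    worst_by_region = {}
    for region, ais in injuries:
        if ais == 6:
            return 75
        worst_by_region[region] = max(worst_by_region.get(region, 0), ais)
    top_three = sorted(worst_by_region.values(), reverse=True)[:3]
    return sum(a * a for a in top_three)

# Example: a playground fall with a forearm fracture and a minor head injury.
print(injury_severity_score([("upper_extremity", 2), ("head", 1)]))  # prints 5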
An Architecture for Coexistence with Multiple Users in Frequency Hopping Cognitive Radio Networks
2013-03-01
the base WARP system, a custom IP core written in VHDL, and the Virtex IV's embedded PowerPC core with C code to implement the radio and hopset...shown in Appendix C as Figure C.2. All VHDL code necessary to implement this IP core is included in Appendix G. (Figure 3.19: FPGA bus structure)...subsystem functionality. A total of 1,430 lines of VHDL code were implemented for this research.
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m_0 binary memory cells and k_0 (> m_0) inputs, a state diagram of 2^(k_0) states was used for the transfer function bound. A reduced state diagram of (2^(m_0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.
Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan
2016-09-01
Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated, pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with layout and contents of code cart. From first to last simulation with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and vascular access obtainment decreased from 15 to 3 min. Some of these simulated process improvements were adopted into the institutional response while others continue to be trended over time for evidence that observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital, pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
DNA Barcode Goes Two-Dimensions: DNA QR Code Web Server
Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, “DNA barcode” actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interests, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications. PMID:22574113
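As a rough illustration of the round trip the web server automates, the following sketch encodes a DNA barcode fragment into a QR symbol using the third-party Python qrcode package; the sequence, file name, and package choice are assumptions, and this is not the server's own code.

# Hedged sketch (not the web server's implementation): encoding a DNA barcode
# sequence into a QR code image with the third-party "qrcode" package
# (pip install qrcode[pil]). The sequence below is an illustrative fragment.
import qrcode

its2_fragment = "ATGCGATACTTGGTGTGAATTGCAGAATCCCGTGAACCATCGAGTCTTTGAACGCAAGTTGCGCCC"
img = qrcode.make(its2_fragment)   # build a QR symbol holding the sequence
img.save("dna_barcode_qr.png")     # readable back with any QR scanner

# Decoding simply recovers the original character string, so round-tripping
# a barcode sequence through a QR symbol is lossless.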
13 CFR 125.15 - What requirements must an SDVO SBC meet to submit an offer on a contract?
Code of Federal Regulations, 2010 CFR
2010-01-01
... corresponding to the NAICS code assigned to the contract, provided: (A) For a procurement having a revenue-based... to the contract; or (B) For a procurement having an employee-based size standard, the procurement... specific contract: (1) It is an SDVO SBC; (2) It is small under the NAICS code assigned to the procurement...
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable or superior to those of existing coders for some images with low correlation.
Michikawa, Takehiro; Morokuma, Seiichi; Nitta, Hiroshi; Kato, Kiyoko; Yamazaki, Shin
2017-06-13
Numerous earlier studies examining the association of air pollution with maternal and foetal health estimated maternal exposure to air pollutants based on the women's residential addresses. However, residential addresses, which are personally identifiable information, are not always obtainable. Since a majority of pregnant women reside near their delivery hospitals, the concentrations of air pollutants at the respective delivery hospitals may be surrogate markers of pollutant exposure at home. We compared air pollutant concentrations measured at the nearest monitoring station to Kyushu University Hospital with those measured at the closest monitoring stations to the respective residential postal code regions of pregnant women in Fukuoka. Aggregated postal code data for the home addresses of pregnant women who delivered at Kyushu University Hospital in 2014 was obtained from Kyushu University Hospital. For each of the study's 695 women who resided in Fukuoka Prefecture, we assigned pollutant concentrations measured at the nearest monitoring station to Kyushu University Hospital and pollutant concentrations measured at the nearest monitoring station to their respective residential postal code regions. Among the 695 women, 584 (84.0%) resided in the proximity of the nearest monitoring station to hospital or one of the four other stations (as the nearest stations to their respective residential postal code region) in Fukuoka city. Pearson's correlation for daily mean concentrations among the monitoring stations in Fukuoka city was strong for fine particulate matter (PM 2.5 ), suspended particulate matter (SPM), and photochemical oxidants (Ox) (coefficients ≥0.9), but moderate for coarse particulate matter (the result of subtracting the PM 2.5 from the SPM concentrations), nitrogen dioxide, and sulphur dioxide. Hospital-based and residence-based concentrations of PM 2.5 , SPM, and Ox were comparable. For PM 2.5 , SPM, and Ox, exposure estimation based on the delivery hospital is likely to approximate that based on the home of pregnant women.
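A minimal sketch of the kind of station-to-station comparison described above, computing the Pearson correlation of daily mean PM2.5 between a hospital-adjacent station and a residence-adjacent station; the synthetic daily series are illustrative assumptions, not the study's monitoring data.

# Hedged sketch (not the study's analysis code): Pearson correlation between
# daily mean PM2.5 at the hospital's nearest monitoring station and at a
# station near a residential postal code region. Arrays are illustrative.
import numpy as np

rng = np.random.default_rng(1)
days = 365
hospital_pm25 = rng.gamma(shape=4.0, scale=4.0, size=days)        # daily means
residence_pm25 = hospital_pm25 + rng.normal(0.0, 2.0, size=days)  # nearby station

r = np.corrcoef(hospital_pm25, residence_pm25)[0, 1]
print(f"Pearson r between stations: {r:.2f}")  # strong (>0.9) for PM2.5-like data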
Houyel, Lucile; Khoshnood, Babak; Anderson, Robert H; Lelong, Nathalie; Thieulin, Anne-Claire; Goffinet, François; Bonnet, Damien
2011-10-03
Classification of the overall spectrum of congenital heart defects (CHD) has always been challenging, in part because of the diversity of the cardiac phenotypes, but also because of the oft-complex associations. The purpose of our study was to establish a comprehensive and easy-to-use classification of CHD for clinical and epidemiological studies based on the long list of the International Paediatric and Congenital Cardiac Code (IPCCC). We coded each individual malformation using six-digit codes from the long list of IPCCC. We then regrouped all lesions into 10 categories and 23 subcategories according to a multi-dimensional approach encompassing anatomic, diagnostic and therapeutic criteria. This anatomic and clinical classification of congenital heart disease (ACC-CHD) was then applied to data acquired from a population-based cohort of patients with CHD in France, made up of 2867 cases (82% live births, 1.8% stillbirths and 16.2% pregnancy terminations). The majority of cases (79.5%) could be identified with a single IPCCC code. The category "Heterotaxy, including isomerism and mirror-imagery" was the only one that typically required more than one code for identification of cases. The two largest categories were "ventricular septal defects" (52%) and "anomalies of the outflow tracts and arterial valves" (20% of cases). Our proposed classification is not new, but rather a regrouping of the known spectrum of CHD into a manageable number of categories based on anatomic and clinical criteria. The classification is designed to use the code numbers of the long list of IPCCC but can accommodate ICD-10 codes. Its exhaustiveness, simplicity, and anatomic basis make it useful for clinical and epidemiologic studies, including those aimed at assessment of risk factors and outcomes.
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
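To illustrate why maximum-ratio combining outperforms equal-gain combining, here is a hedged Monte Carlo sketch comparing the two for BPSK symbols received over two diversity branches; the branch gains, noise level, and modulation are assumptions unrelated to the paper's OAM system.

# Hedged sketch (not from the paper): comparing maximum-ratio combining (MRC)
# with equal-gain combining (EGC) for two noisy copies of the same symbol.
# Channel gains, noise level, and symbol alphabet are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
symbols = rng.choice([-1.0, 1.0], size=n)          # BPSK symbols
h = np.array([1.0, 0.4])                            # two branch gains
noise = 0.5 * rng.normal(size=(2, n))
received = h[:, None] * symbols + noise             # two diversity branches

mrc = (h[:, None] * received).sum(axis=0)           # weight each branch by its gain
egc = received.sum(axis=0)                          # equal weights

ber_mrc = np.mean(np.sign(mrc) != symbols)
ber_egc = np.mean(np.sign(egc) != symbols)
print(f"BER  MRC: {ber_mrc:.4f}   EGC: {ber_egc:.4f}")  # MRC is never worse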
Code-based Diagnostic Algorithms for Idiopathic Pulmonary Fibrosis. Case Validation and Improvement.
Ley, Brett; Urbania, Thomas; Husson, Gail; Vittinghoff, Eric; Brush, David R; Eisner, Mark D; Iribarren, Carlos; Collard, Harold R
2017-06-01
Population-based studies of idiopathic pulmonary fibrosis (IPF) in the United States have been limited by reliance on diagnostic code-based algorithms that lack clinical validation. To validate a well-accepted International Classification of Diseases, Ninth Revision, code-based algorithm for IPF using patient-level information and to develop a modified algorithm for IPF with enhanced predictive value. The traditional IPF algorithm was used to identify potential cases of IPF in the Kaiser Permanente Northern California adult population from 2000 to 2014. Incidence and prevalence were determined overall and by age, sex, and race/ethnicity. A validation subset of cases (n = 150) underwent expert medical record and chest computed tomography review. A modified IPF algorithm was then derived and validated to optimize positive predictive value. From 2000 to 2014, the traditional IPF algorithm identified 2,608 cases among 5,389,627 at-risk adults in the Kaiser Permanente Northern California population. Annual incidence was 6.8/100,000 person-years (95% confidence interval [CI], 6.1-7.7) and was higher in patients with older age, male sex, and white race. The positive predictive value of the IPF algorithm was only 42.2% (95% CI, 30.6 to 54.6%); sensitivity was 55.6% (95% CI, 21.2 to 86.3%). The corrected incidence was estimated at 5.6/100,000 person-years (95% CI, 2.6-10.3). A modified IPF algorithm had improved positive predictive value but reduced sensitivity compared with the traditional algorithm. A well-accepted International Classification of Diseases, Ninth Revision, code-based IPF algorithm performs poorly, falsely classifying many non-IPF cases as IPF and missing a substantial proportion of IPF cases. A modification of the IPF algorithm may be useful for future population-based studies of IPF.
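A minimal sketch of the validation arithmetic reported above, computing positive predictive value and sensitivity from review counts; the counts are illustrative stand-ins chosen to land near the published percentages, not the study's raw data.

# Hedged sketch (numbers below are illustrative, not the study's raw counts):
# positive predictive value and sensitivity of a code-based case-finding
# algorithm evaluated against expert record review.
def ppv_and_sensitivity(true_pos, false_pos, false_neg):
    ppv = true_pos / (true_pos + false_pos)
    sensitivity = true_pos / (true_pos + false_neg)
    return ppv, sensitivity

# Example: of 150 algorithm-positive records reviewed, 63 are confirmed cases,
# and review suggests 50 additional cases were missed by the algorithm.
ppv, sens = ppv_and_sensitivity(true_pos=63, false_pos=87, false_neg=50)
print(f"PPV = {ppv:.1%}, sensitivity = {sens:.1%}")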
García-Betances, Rebeca I.; Huerta, Mónica K.
2012-01-01
A comparative review is presented of available technologies suitable for automatic reading of patient identification bracelet tags. Existing technologies' backgrounds, characteristics, advantages and disadvantages, are described in relation to their possible use by public health care centers with budgetary limitations. A comparative assessment is presented of suitable automatic identification systems based on graphic codes, both one- (1D) and two-dimensional (2D), printed on labels, as well as those based on radio frequency identification (RFID) tags. The analysis looks at the tradeoffs of these technologies to provide guidance to hospital administrators looking to deploy patient identification technology. The results suggest that affordable automatic patient identification systems can be easily and inexpensively implemented using 2D code printed on low cost bracelet labels, which can then be read and automatically decoded by ordinary mobile smart phones. Because of mobile smart phones' present versatility and ubiquity, the implementation and operation of 2D code, and especially Quick Response® (QR) Code, technology emerges as a very attractive alternative to automate the patients' identification processes in low-budget situations. PMID:23569629
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold-standard for dose calculation, however they have not replaced analytical methods up to now due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) using an experimentally validated multi-purpose MC code (TOPAS) and compare their performance for clinical patient cases. Clinical cases from five treatment sites were selected covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except for prostate cases. Although clinically insignificant, gPMC resulted in systematic underestimation of target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly the gamma index analysis with 1%/1 mm criterion failed for most beams for this site, while for 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
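As a simplified illustration of the comparison metric used above, the following sketch implements a 1-D gamma-index test with a 1%/1 mm criterion and a 10% low-dose cut; the dose profiles, grid, and shifts are synthetic assumptions, and clinical gamma analysis is performed in 3-D.

# Hedged sketch (not gPMC/TOPAS code): a simplified 1-D gamma-index test with
# a 1%/1 mm criterion, comparing an evaluated dose profile against a reference.
# Grids, doses, and thresholds below are illustrative assumptions.
import numpy as np

def gamma_pass_rate(x_mm, ref, evl, dose_crit=0.01, dist_crit_mm=1.0, low_cut=0.10):
    """Fraction of reference points with gamma <= 1.

    The dose criterion is taken relative to the maximum reference dose; points
    below low_cut * max dose are excluded, mirroring the paper's 10% cut.
    """
    d_ref_max = ref.max()
    passed, evaluated = 0, 0
    for xi, di in zip(x_mm, ref):
        if di < low_cut * d_ref_max:
            continue
        evaluated += 1
        gamma_sq = ((evl - di) / (dose_crit * d_ref_max)) ** 2 \
                 + ((x_mm - xi) / dist_crit_mm) ** 2
        if gamma_sq.min() <= 1.0:
            passed += 1
    return passed / evaluated

x = np.linspace(0.0, 100.0, 201)                       # 0.5 mm grid
reference = np.exp(-((x - 50.0) / 20.0) ** 2)           # smooth "target" dose
measured = 1.005 * np.exp(-((x - 50.3) / 20.0) ** 2)    # small dose/position shift
print(f"Gamma (1%/1 mm) pass rate: {gamma_pass_rate(x, reference, measured):.1%}")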
Shen, Qin; Wang, Xuan; Yu, Bo; Shi, Shanshan; Liu, Biao; Wang, Yanfen; Xia, Qiuyuan; Rao, Qiu; Zhou, Xiaojun
2015-12-01
Screening for anaplastic lymphoma kinase (ALK)-rearranged non-small cell lung cancer (NSCLC) is essential for treatment with ALK inhibitors such as crizotinib. Different assays have been developed to detect ALK rearrangements, such as fluorescence in situ hybridization (FISH), reverse transcriptase-PCR (RT-PCR), and immunohistochemistry (IHC). However, ALK detection has not been applied widely in all hospitals. Moreover, IHC has been proposed as a pre-screening tool because of its wide application in clinics. Because ALK protein is expressed at low levels, the sensitivity and specificity of the ALK antibody are key to the success of IHC screening. Therefore, we compared different antibodies to find the best one for IHC detection. We evaluated ALK expression by four different ALK antibodies: clone D5F3 (Ventana), clone D5F3 (CST), clone 1A4/1H7 (OriGene Tech.), and clone 5A4 (Abcam) based on manual IHC in a cohort of 60 NSCLCs. The results were compared with those from automated IHC (clone D5F3, Ventana). All cases were evaluated independently by ALK FISH. 32 ALK-positive and 28 ALK-negative NSCLCs were identified by automated IHC (D5F3, Ventana) and FISH analysis. Based on conventional manual IHC, the sensitivity of the four antibodies, D5F3 (Ventana), D5F3 (CST), 1A4/1H7 (OriGene Tech.), and 5A4 (Abcam), was 93.8%, 84.4%, 93.8%, and 56.3%, respectively. Their specificities and positive predictive values were 100%. The percentage of strong-moderate staining was 65.6%, 62.5%, 68.8%, and 21.9%, respectively. Compared with automated IHC (D5F3, Ventana), each staining concordance was 96.7%, 91.7%, 96.7%, and 76.7%, respectively, and each presented staining heterogeneity (weak-moderate-strong intensity). These data indicated that manual IHC with a more reliable ALK antibody might provide an effective strategy for screening ALK gene rearrangements in all NSCLC patients, followed by confirmatory FISH analysis in IHC-positive cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The fourfold way of the genetic code.
Jiménez-Montaño, Miguel Angel
2009-11-01
We describe a compact representation of the genetic code that factorizes the table in quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column-vector of bases and the corresponding row-vector V^T = (C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the KxK group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision-tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
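A small sketch of the quartet structure described above: building the 4 x 4 doublet table from the base order (C, G, U, A) and translating it into weak/strong and purine/pyrimidine classes. This is only an illustration of the table layout, not the paper's group-theoretic derivation.

# Hedged sketch (a minimal illustration, not the paper's formalism): the 4 x 4
# doublet table obtained from the base vector ordered as (C, G, U, A), and its
# translation into W/S (weak/strong) and R/Y (purine/pyrimidine) classes.
bases = ["C", "G", "U", "A"]

doublets = [[row + col for col in bases] for row in bases]   # 16 codon doublets

strength = {"C": "S", "G": "S", "U": "W", "A": "W"}           # weak/strong
ring = {"C": "Y", "U": "Y", "G": "R", "A": "R"}               # pyrimidine/purine

for row in doublets:
    ws = [strength[d[0]] + strength[d[1]] for d in row]
    ry = [ring[d[0]] + ring[d[1]] for d in row]
    print(" ".join(row), "|", " ".join(ws), "|", " ".join(ry))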
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
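A hedged sketch of the mechanism the JPEG extension exposes, quantizing a single 8 x 8 DCT block with a base quantization matrix scaled by a per-block multiplier; the matrix, multipliers, and random block are assumptions, and the perceptual optimization itself is not shown.

# Hedged sketch (not the proposed optimizer): quantizing one 8 x 8 DCT block
# with a base quantization matrix scaled by a per-block multiplier, which is
# the hook the adaptive scheme uses. Values below are illustrative assumptions.
import numpy as np

def quantize_block(dct_block, base_q, multiplier):
    q = base_q * multiplier                     # per-block effective matrix
    return np.round(dct_block / q).astype(int)  # quantized coefficients

rng = np.random.default_rng(3)
dct_block = rng.normal(0.0, 50.0, size=(8, 8))
base_q = np.full((8, 8), 16.0)                  # flat base matrix for simplicity

coarse = quantize_block(dct_block, base_q, multiplier=2.0)   # busy block: coarser
fine = quantize_block(dct_block, base_q, multiplier=0.5)     # smooth block: finer
print("nonzero coefficients, coarse vs fine:",
      np.count_nonzero(coarse), np.count_nonzero(fine))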
NASA Astrophysics Data System (ADS)
Satou, Yukihiko; Sueki, Keisuke; Sasa, Kimikazu; Matsunaka, Tetsuya; Shibayama, Nao; Takahashi, Tsutomu; Kinoshita, Norikazu
2014-05-01
The Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident caused radioactive contamination of the surface soil in Fukushima and its adjacent prefectures. Substantial contamination has been found in the area northwest of the FDNPP, according to airborne monitoring and ground-based surveys by the Japanese government. Activity ratios carry characteristic information on emission sources, because each relevant reactor contained different amounts of radionuclides with different activity ratios. These ratios can be used to clarify the sources and processes of the contamination in more detail. We investigated them in Namie Town, in the northwestern region from the FDNPP. This study focused on the gamma-ray emitting radionuclides 134Cs, 137Cs, and 110mAg. The activities were decay-corrected to 11th March, 2011, when all nuclear reactors scrammed. The activity ratios from our results and the Japanese official report classify the investigated northwestern region into 3 groups. Ratios of 0.02 for 110mAg/137Cs and 0.90 for 134Cs/137Cs were observed in the northern region within 15 km of the FDNPP. On the other hand, two kinds of 110mAg/137Cs ratios, 0.005 and 0.002, were distributed broadly in the region 60 km away from the plant, where the 134Cs/137Cs ratio was 0.98. The activity ratios of 110mAg/137Cs and 134Cs/137Cs in the northern region from the FDNPP correspond to those of the nuclear fuel in Unit 1, according to an estimation using the ORIGEN code. The 134Cs/137Cs in the northwestern area from the FDNPP agrees with that of Units 2 and 3, while the 110mAg/137Cs ratios of 0.005 and 0.002 are 1/5 to 1/10 of those of Units 2 and 3. The official report has announced that discharges of the radionuclides from Units 2 and 3 occurred on 14th March, 2011, and it is known that contamination in the northwestern region from the FDNPP took place on 15th March, 2011. The plausible chemical species of silver in the reactor core, the metal and the halides, have higher boiling points than the corresponding cesium species. The core would have cooled below the boiling point of silver by the time the contamination occurred; thus, silver, with its higher boiling point, was released to a much smaller extent than cesium, which has a lower boiling point. The 110mAg/137Cs ratio has therefore served to identify the specific sources of contamination in the northwestern area from the FDNPP.
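A minimal sketch of the decay-correction step mentioned above, propagating measured activities back to the scram date before forming 134Cs/137Cs ratios; the half-lives are approximate literature values, and the measured activities and measurement date are illustrative assumptions.

# Hedged sketch (not the study's analysis code): decay-correcting measured
# activities back to the scram date (11 March 2011) before forming activity
# ratios. Half-lives are approximate literature values; measured activities
# and the measurement date are illustrative assumptions.
import math
from datetime import date

HALF_LIFE_DAYS = {"Cs-134": 2.065 * 365.25, "Cs-137": 30.08 * 365.25,
                  "Ag-110m": 249.8}

def decay_correct(activity_bq, nuclide, measured_on, reference=date(2011, 3, 11)):
    dt_days = (measured_on - reference).days
    lam = math.log(2.0) / HALF_LIFE_DAYS[nuclide]
    return activity_bq * math.exp(lam * dt_days)   # activity at the reference date

measured_on = date(2013, 6, 1)
cs134 = decay_correct(420.0, "Cs-134", measured_on)
cs137 = decay_correct(980.0, "Cs-137", measured_on)
print(f"134Cs/137Cs at scram: {cs134 / cs137:.2f}")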
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
NASA Astrophysics Data System (ADS)
Liebovitch, Larry
1998-03-01
The longest term correlations in living systems are the information stored in DNA which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated the complementary base is inserted in the new strand. Sometimes the wrong base is inserted that sticks out disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, that slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
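The abstract describes a Gaussian elimination procedure modified for modulus 4; as a simpler stand-in that poses the same question, the following sketch brute-forces whether one base position is a position-wise linear combination (mod 4) of a few others. The base-to-integer encoding and the toy sequences are assumptions, and this is not the authors' algorithm.

# Hedged sketch: a brute-force check of whether one coded base sequence is a
# linear combination (mod 4) of other sequences, a simpler stand-in for the
# modulus-4 Gaussian elimination described in the abstract. The encoding and
# the toy sequences below are illustrative assumptions.
from itertools import product

BASE_TO_INT = {"A": 0, "T": 1, "G": 2, "C": 3}

def mod4_combination(target, predictors):
    """Return coefficients c with target[i] == sum_j c_j*predictors[j][i] (mod 4),
    or None if no such combination exists."""
    for coeffs in product(range(4), repeat=len(predictors)):
        if all(sum(c * p[i] for c, p in zip(coeffs, predictors)) % 4 == t
               for i, t in enumerate(target)):
            return coeffs
    return None

seqs = ["ATGCATGC", "GGTACCAA", "TTCAGGCT"]
cols = [[BASE_TO_INT[b] for b in s] for s in seqs]
# Is the third sequence position-wise a mod-4 combination of the first two?
print(mod4_combination(cols[2], cols[:2]))   # None here: no such relation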
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
2004-01-01
The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows complex flow domains to be divided for optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based, using the OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have proven to be a valuable resource for seal development of future air-breathing and space propulsion engines.
DIANA-LncBase v2: indexing microRNA targets on non-coding transcripts.
Paraskevopoulou, Maria D; Vlachos, Ioannis S; Karagkouni, Dimitra; Georgakilas, Georgios; Kanellos, Ilias; Vergoulis, Thanasis; Zagganas, Konstantinos; Tsanakas, Panayiotis; Floros, Evangelos; Dalamagas, Theodore; Hatzigeorgiou, Artemis G
2016-01-04
microRNAs (miRNAs) are short non-coding RNAs (ncRNAs) that act as post-transcriptional regulators of coding gene expression. Long non-coding RNAs (lncRNAs) have been recently reported to interact with miRNAs. The sponge-like function of lncRNAs introduces an extra layer of complexity in the miRNA interactome. DIANA-LncBase v1 provided a database of experimentally supported and in silico predicted miRNA Recognition Elements (MREs) on lncRNAs. The second version of LncBase (www.microrna.gr/LncBase) presents an extensive collection of miRNA:lncRNA interactions. The significantly enhanced database includes more than 70 000 low and high-throughput, (in)direct miRNA:lncRNA experimentally supported interactions, derived from manually curated publications and the analysis of 153 AGO CLIP-Seq libraries. The new experimental module presents a 14-fold increase compared to the previous release. LncBase v2 hosts in silico predicted miRNA targets on lncRNAs, identified with the DIANA-microT algorithm. The relevant module provides millions of predicted miRNA binding sites, accompanied with detailed metadata and MRE conservation metrics. LncBase v2 caters information regarding cell type specific miRNA:lncRNA regulation and enables users to easily identify interactions in 66 different cell types, spanning 36 tissues for human and mouse. Database entries are also supported by accurate lncRNA expression information, derived from the analysis of more than 6 billion RNA-Seq reads. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
A prototype Knowledge-Based System to Aid Space System Restoration Management.
1986-12-01
[Front-matter excerpt] ...Systems; Appendix B: Computation of Weights With AHP; Appendix C: ART Code; Appendix D: Test Outputs. Figures: 5.1 Earth Coverage With Geosynchronous Satellites; 5.2 Space System Configurations; 5.3 AHP Hierarchy; 5.4 AHP Hierarchy With Weights; 6.1 TALK Schema Structure; 6.2 ART Code for TALK Satellite C
Variable weight spectral amplitude coding for multiservice OCDMA networks
NASA Astrophysics Data System (ADS)
Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.
2017-09-01
The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming have demanded networks with capability of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.
Vargas-Caro, Carolina; Bustamante, Carlos; Lamilla, Julio; Bennett, Michael B; Ovenden, Jennifer R
2016-07-01
The complete mitochondrial genome of the roughskin skate Dipturus trachyderma is described from 1 455 724 sequences obtained using Illumina NGS technology. Total length of the mitogenome was 16 909 base pairs, comprising 2 rRNAs, 13 protein-coding genes, 22 tRNAs and 2 non-coding regions. Phylogenetic analysis based on mtDNA revealed low genetic divergence among longnose skates, in particular those dwelling on the continental shelf and slope off the coasts of Chile and Argentina.
Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J
1997-01-01
To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mappings from the published schemes were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.
1984-06-29
effort that requires hard copy documentation. As a result, there are generally numerous delays in providing current quality information. In the FoF...process have had fixed controls or were based on "hard-coded" information. A template, for example, is hard-coded information defining the shape of a...represents soft-coded control information. (Although manual handling of punch tapes still possesses some of the limitations of "hard-coded" controls
Photoionization and High Density Gas
NASA Technical Reports Server (NTRS)
Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.
Methodology of decreasing software complexity using ontology
NASA Astrophysics Data System (ADS)
Dąbrowska-Kubik, Katarzyna
2015-09-01
In this paper, a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of that source code by using various maintenance techniques, such as creating documentation and eliminating dead code, cloned code, or previously known bugs [1][2]. With this approach, savings in the software maintenance costs of web applications will be possible.
Timofeeva, Maria N.; Kinnersley, Ben; Farrington, Susan M.; Whiffin, Nicola; Palles, Claire; Svinti, Victoria; Lloyd, Amy; Gorman, Maggie; Ooi, Li-Yin; Hosking, Fay; Barclay, Ella; Zgaga, Lina; Dobbins, Sara; Martin, Lynn; Theodoratou, Evropi; Broderick, Peter; Tenesa, Albert; Smillie, Claire; Grimes, Graeme; Hayward, Caroline; Campbell, Archie; Porteous, David; Deary, Ian J.; Harris, Sarah E.; Northwood, Emma L.; Barrett, Jennifer H.; Smith, Gillian; Wolf, Roland; Forman, David; Morreau, Hans; Ruano, Dina; Tops, Carli; Wijnen, Juul; Schrumpf, Melanie; Boot, Arnoud; Vasen, Hans F A; Hes, Frederik J.; van Wezel, Tom; Franke, Andre; Lieb, Wolgang; Schafmayer, Clemens; Hampe, Jochen; Buch, Stephan; Propping, Peter; Hemminki, Kari; Försti, Asta; Westers, Helga; Hofstra, Robert; Pinheiro, Manuela; Pinto, Carla; Teixeira, Manuel; Ruiz-Ponte, Clara; Fernández-Rozadilla, Ceres; Carracedo, Angel; Castells, Antoni; Castellví-Bel, Sergi; Campbell, Harry; Bishop, D. Timothy; Tomlinson, Ian P M; Dunlop, Malcolm G.; Houlston, Richard S.
2015-01-01
Whilst common genetic variation in many non-coding genomic regulatory regions is known to impart risk of colorectal cancer (CRC), much of the heritability of CRC remains unexplained. To examine the role of recurrent coding sequence variation in CRC aetiology, we genotyped 12,638 CRC cases and 29,045 controls from six European populations. Single-variant analysis identified a coding variant (rs3184504) in SH2B3 (12q24) associated with CRC risk (OR = 1.08, P = 3.9 × 10^-7), and novel damaging coding variants in 3 genes previously tagged by GWAS efforts; rs16888728 (8q24) in UTP23 (OR = 1.15, P = 1.4 × 10^-7); rs6580742 and rs12303082 (12q13) in FAM186A (OR = 1.11, P = 1.2 × 10^-7 and OR = 1.09, P = 7.4 × 10^-8); rs1129406 (12q13) in ATF1 (OR = 1.11, P = 8.3 × 10^-9), all reaching exome-wide significance levels. Gene-based tests identified associations between CRC and PCDHGA genes (P < 2.90 × 10^-6). We found an excess of rare, damaging variants in base-excision (P = 2.4 × 10^-4) and DNA mismatch repair genes (P = 6.1 × 10^-4), consistent with a recessive mode of inheritance. This study comprehensively explores the contribution of coding sequence variation to CRC risk, identifying associations with coding variation in 4 genes and the PCDHG gene cluster and several candidate recessive alleles. However, these findings suggest that recurrent, low-frequency coding variants account for a minority of the unexplained heritability of CRC. PMID:26553438
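A minimal sketch of the single-variant association arithmetic, estimating an odds ratio and P-value from a 2 x 2 carrier table with Fisher's exact test via SciPy; the carrier counts are illustrative and only the cohort totals match the abstract.

# Hedged sketch (counts are illustrative, not the study's genotype data):
# odds ratio and P-value for a single coding variant from a 2 x 2 table of
# carrier counts in cases versus controls, via Fisher's exact test.
from scipy.stats import fisher_exact

#                      carriers  non-carriers
table = [[4150, 8488],           # cases     (12,638 total)
         [8940, 20105]]          # controls  (29,045 total)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.1e}")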
Efficient Modeling of Laser-Plasma Accelerators with INF&RNO
NASA Astrophysics Data System (ADS)
Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.
2010-11-01
The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations and here a set of validation tests together with a discussion of the performances are presented.
Non-coding RNAs and Berberine: A new mechanism of its anti-diabetic activities.
Chang, Wenguang
2017-01-15
Type 2 Diabetes (T2D) is a metabolic disease with high mortality and morbidity. Non-coding RNAs, including small and long non-coding RNAs, are a novel class of functional RNA molecules that regulate multiple biological functions through diverse mechanisms. Studies in the last decade have demonstrated that non-coding RNAs may represent compelling therapeutic targets and play important roles in regulating the course of insulin resistance and T2D. Berberine, a plant-based alkaloid, has shown promise as an anti-hyperglycaemic, anti-hyperlipidaemic agent against T2D. Previous studies have primarily focused on a diverse array of efficacy end points of berberine in the pathogenesis of metabolic syndromes and inflammation or oxidative stress. Currently, an increasing number of studies have revealed the importance of non-coding RNAs as regulators of the anti-diabetic effects of berberine. The regulation of non-coding RNAs has been associated with several therapeutic actions of berberine in T2D progression. Thus, this review summarizes the anti-diabetic mechanisms of berberine by focusing on its role in regulating non-coding RNA, thus demonstrating that berberine exerts global anti-diabetic effects by targeting non-coding RNAs and that these effects involve several miRNAs, lncRNAs and multiple signal pathways, which may enhance the current understanding of the anti-diabetic mechanism actions of berberine and provide new pathological targets for the development of berberine-related drugs. Copyright © 2016 Elsevier B.V. All rights reserved.
2006-01-01
collected, code both. Code / Type of Analysis: A, Physical properties; I, Common ions/trace elements; B, Common ions; J, Sanitary analysis and... (1) A ground-water site is coded as if it is a single point, not a geographic area or property. (2) Latitude and longitude should be determined at a...terrace from an adjacent upland on one side, and a lowland coast or valley on the other. Due to the effects of erosion, the terrace surface may not be as
ERIC Educational Resources Information Center
Employment Standards Administration (DOL), Washington, DC. Women's Bureau.
The report presents data on selected social, economic, and demographic characteristics of women of Spanish origin in the United States. Derived from the population reports of the U.S. Census Bureau and the March 1973 Manpower Report of the President, the statistical data pertain to age, residence, marital status, heads of families and households,…
Analysis of Coolant Options for Advanced Metal Cooled Nuclear Reactors
2006-12-01
[Front-matter excerpt] Table 3.3: Hazards of Sodium Reaction Products, Hydride and Oxide; Table 3.4: Chemical Reactivity of Selected... Abbreviations: Liquid Metal Fast Breeder Reactor; ORIGEN, Oak Ridge Isotope Generator; ORIGENARP, Oak Ridge Isotope Generator Automated Rapid Processing; PWR... nuclear reactors, both because of the possibility of increased reactivity due to boiling and the potential loss of effectiveness of coolant heat transfer
A comparison of cosmological hydrodynamic codes
NASA Technical Reports Server (NTRS)
Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.
1994-01-01
We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega_b = 1, and sigma_8 = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L^3, where L = 64 h^-1 Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N^3 = 32^3, 64^3, 128^3, and 256^3 cells, the SPH codes at N^3 = 32^3 and 64^3 particles. Results were then rebinned to a 16^3 grid, with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as <T> and <rho^2>^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho^2) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
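A hedged sketch of the rebinning and rho^2-weighted averaging used to compare the codes on a common 16^3 grid; the density and temperature fields below are synthetic toy data, not simulation outputs.

# Hedged sketch (not the comparison project's code): rebinning a 64^3 field to
# a common 16^3 grid and forming the rho^2-weighted mean temperature used to
# compare codes. The density and temperature fields here are synthetic.
import numpy as np

def rebin(field, factor):
    """Average non-overlapping factor^3 blocks of a cubic grid."""
    n = field.shape[0] // factor
    return field.reshape(n, factor, n, factor, n, factor).mean(axis=(1, 3, 5))

rng = np.random.default_rng(4)
rho = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))   # density field
temp = 1.0e4 * rho ** (2.0 / 3.0)                              # toy T(rho) relation

rho16, temp16 = rebin(rho, 4), rebin(temp, 4)
t_weighted = (rho16 ** 2 * temp16).sum() / (rho16 ** 2).sum()
print(f"rho^2-weighted mean temperature on the 16^3 grid: {t_weighted:.3e} K")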
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless noisy channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
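The optimal bit assignment is found in the paper by a steepest-descent method; as a simpler illustration of allocating bits among transform coefficients, here is a greedy sketch under a high-rate quantizer distortion model. The variances, bit budget, and distortion model are assumptions, not the paper's procedure.

# Hedged sketch (not the paper's steepest-descent procedure): a greedy bit
# allocation among transform coefficients under a high-rate quantizer model
# D_i(b) ~ sigma_i^2 * 2^(-2 b), assigning one bit at a time where it buys the
# largest distortion reduction. Variances and the bit budget are assumptions.
import numpy as np

def greedy_bit_allocation(variances, total_bits):
    bits = np.zeros(len(variances), dtype=int)
    for _ in range(total_bits):
        # distortion drop obtained by adding one more bit to coefficient i
        gain = variances * (2.0 ** (-2.0 * bits) - 2.0 ** (-2.0 * (bits + 1)))
        bits[int(np.argmax(gain))] += 1
    return bits

variances = np.array([900.0, 400.0, 100.0, 25.0, 9.0, 4.0, 1.0, 0.25])
print(greedy_bit_allocation(variances, total_bits=16))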
Advancing the Fork detector for quantitative spent nuclear fuel verification
Vaccaro, S.; Gauld, I. C.; Hu, J.; ...
2018-01-31
The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.
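A minimal sketch of a go/no-go style check of the kind described above, comparing a measured Fork neutron count rate against an ORIGEN-based prediction given a combined relative uncertainty; the rates, uncertainty, and 3-sigma threshold are illustrative assumptions, not the module's actual criteria.

# Hedged sketch (not the ORIGEN-based analysis module itself): a simple
# go/no-go comparison of a measured Fork neutron count rate against the rate
# predicted from the operator declaration, given a combined relative
# uncertainty. Rates, uncertainty, and the 3-sigma threshold are assumptions.
def verify_assembly(measured_rate, predicted_rate, rel_uncertainty, n_sigma=3.0):
    """Flag the assembly if measurement and prediction disagree beyond n_sigma."""
    ratio = measured_rate / predicted_rate
    deviation_sigma = abs(ratio - 1.0) / rel_uncertainty
    return ("GO" if deviation_sigma <= n_sigma else "NO-GO", ratio, deviation_sigma)

# Example: a measured rate 4% above prediction with a 5% combined uncertainty.
status, ratio, dev = verify_assembly(1040.0, 1000.0, rel_uncertainty=0.05)
print(status, f"measured/predicted = {ratio:.2f}", f"({dev:.1f} sigma)")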
Advancing the Fork detector for quantitative spent nuclear fuel verification
NASA Astrophysics Data System (ADS)
Vaccaro, S.; Gauld, I. C.; Hu, J.; De Baere, P.; Peterson, J.; Schwalbach, P.; Smejkal, A.; Tomanin, A.; Sjöland, A.; Tobin, S.; Wiarda, D.
2018-04-01
The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This paper describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. The results are summarized, and sources and magnitudes of uncertainties are identified, and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.
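The go/no-go logic described in the abstract above can be pictured with a minimal sketch. Everything below is illustrative only: the function name, the 10% relative uncertainty, and the 3-sigma acceptance band are assumptions for the example and do not reproduce the actual ORIGEN-based analysis module or its thresholds.

    # Illustrative sketch only: flag a Fork measurement whose ratio to the
    # prediction derived from the operator declaration falls outside an
    # assumed acceptance band.
    def verify_assembly(measured_neutron_rate, predicted_neutron_rate,
                        relative_uncertainty=0.10, n_sigma=3.0):
        """Return (ratio, accept) for a single assembly.

        relative_uncertainty and n_sigma are placeholder values; in practice
        they would come from the uncertainty evaluation described in the paper.
        """
        ratio = measured_neutron_rate / predicted_neutron_rate
        return ratio, abs(ratio - 1.0) <= n_sigma * relative_uncertainty

    # Example: a measurement 8% above prediction passes a 3-sigma, 10% band.
    ratio, accept = verify_assembly(1.08e4, 1.00e4)
    print(f"measured/predicted = {ratio:.2f}, accept = {accept}")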
Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim
2017-01-25
To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding 'poor' quality practices with at least 10% of incident patients inaccurately coded between 2004 and 2014. Incidence rates and accuracy of coding varied widely between practices, and the trends differed according to the selected category of code. If diagnosis codes were used, the incidence of type 2 diabetes increased sharply until 2004 (when the UK Quality and Outcomes Framework was introduced), then flattened off until 2009, after which it decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled 'poor' quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 diabetes after 2009 became less marked and incidence rates were higher. In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun
2013-12-01
A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and making national policy for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) successfully developed such a DGS. We developed a modified PECARN DGS based on the pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on pediatric emergent conditions in this population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which was coded by the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes to the PECARN DGS using the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched the ICD-9 codes to PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in development of the modified DGS. After 1078 (57.4%) of 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups by 2 rounds of electronic Delphi surveys; the 29 codes (4%) that remained unresolved were assigned to the modified DGS at a second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of diagnoses in the 2008 and 2009 National Emergency Department Information System data sets, respectively. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and season, and it reflected the 2009 outbreak of H1N1 influenza in Korea. We developed and validated a clinically feasible and sensible DGS for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This modified DGS, based on the PECARN DGS framework, may be effectively implemented for research, reporting, and resource planning in the pediatric emergency system of South Korea.
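The two-step mapping in the abstract above (ICD-10 to ICD-9 via the GEMs, then ICD-9 to a PECARN DGS subgroup) can be sketched as two dictionary lookups. The code values and the group label below are hypothetical placeholders, not entries from the actual GEM tables or the PECARN matrix.

    # Sketch of the GEM-then-PECARN lookup chain; all entries are placeholders.
    gem_icd10_to_icd9 = {"J11.1": "487.1"}                      # hypothetical GEM row
    pecarn_icd9_to_dgs = {"487.1": "Respiratory - influenza"}   # hypothetical matrix row

    def map_to_dgs(icd10_code):
        icd9 = gem_icd10_to_icd9.get(icd10_code)
        if icd9 is None:
            return None          # unmatched codes went to the Delphi expert panels
        return pecarn_icd9_to_dgs.get(icd9)

    print(map_to_dgs("J11.1"))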
Phase II Evaluation of Clinical Coding Schemes
Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith
1997-01-01
Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p <.00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p <.005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p <.00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p <.004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343
NASA Technical Reports Server (NTRS)
Campbell, Joel F.; Prasad, Narasimha S.; Flood, Michael A.
2011-01-01
NASA Langley Research Center is working on a continuous wave (CW) laser based remote sensing scheme for the detection of CO2 and O2 from space-based platforms suitable for the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission. ASCENDS is a future space-based mission to determine the global distribution of sources and sinks of atmospheric carbon dioxide (CO2). A unique, multi-frequency, intensity-modulated CW (IMCW) laser absorption spectrometer (LAS) operating at 1.57 micron for CO2 sensing has been developed. Effective aerosol and cloud discrimination techniques are being investigated in order to determine concentration values with accuracies less than 0.3%. In this paper, we discuss the demonstration of a pseudo-noise (PN) code based technique for cloud and aerosol discrimination applications. The possibility of using maximum-length (ML) sequences for range and absorption measurements is investigated. A simple model for accomplishing this objective is formulated. Proof-of-concept experiments carried out using a sonar-based lidar simulator built with simple audio hardware provided promising results for extension into optical wavelengths.
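To make the ML-sequence idea concrete, the sketch below generates a maximal-length PN code with a linear-feedback shift register and recovers a delay by circular cross-correlation. It is a generic illustration of PN-code ranging under assumed parameters (7-stage register, 37-chip delay, Gaussian noise), not the instrument's actual signal processing.

    import numpy as np

    def m_sequence(taps, nbits):
        """Maximal-length (ML) sequence from a Fibonacci LFSR; taps are
        1-indexed stages, e.g. taps=(7, 6) gives a 127-chip m-sequence."""
        state = [1] * nbits
        seq = []
        for _ in range(2 ** nbits - 1):
            seq.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return np.array(seq)

    code = 2.0 * m_sequence((7, 6), 7) - 1.0          # map {0,1} -> {-1,+1}
    delay = 37                                        # chips; unknown in practice
    rx = np.roll(code, delay) + 0.5 * np.random.randn(code.size)
    # The circular cross-correlation peaks at the round-trip delay.
    corr = [np.dot(rx, np.roll(code, k)) for k in range(code.size)]
    print("estimated delay:", int(np.argmax(corr)))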
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on the variation of quantization parameter (QP) values for different video contents, to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameters and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.
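The PSNR figure quoted above is the standard peak signal-to-noise ratio between a reference frame and a degraded frame. A minimal sketch follows; the synthetic frames and noise level are arbitrary and only illustrate how such a gain would be measured.

    import numpy as np

    def psnr(reference, degraded, peak=255.0):
        """Peak signal-to-noise ratio in dB between two 8-bit frames."""
        mse = np.mean((np.asarray(reference, float) - np.asarray(degraded, float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Synthetic example; a 2-3 dB PSNR difference is the order of improvement
    # the abstract attributes to the MDC scheme.
    ref = np.random.randint(0, 256, (64, 64))
    deg = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
    print(f"PSNR = {psnr(ref, deg):.1f} dB")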
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis is mainly based on professional equipment and software, which is expensive and inconvenient. This study aims to integrate QR (Quick Response) codes carrying quality data with mobile intelligent technology to develop a convenient query terminal for tracing quality across the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their chemical two-dimensional (2D) barcode fingerprints were constructed. A smartphone application (APP) based on the Android system was developed to read the initial data of the 2D chemical barcodes and to compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between original and scanned TCM chemical fingerprints. Meanwhile, different TCM chemical fingerprint QR codes could be rendered in the same coordinate system, showing the differences very intuitively. To distinguish variations between chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to obtain a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information in QR codes and convert the data into TCM chemical fingerprints.
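The LIACSA comparison can be read as "resample both fingerprints onto a common grid by linear interpolation, then take the cosine of the angle between them." The sketch below implements that reading; it is an assumption about the algorithm's structure, not the authors' code, and the argument names are invented.

    import numpy as np

    def fingerprint_similarity(t1, y1, t2, y2, n_points=500):
        """Cosine similarity of two chromatographic fingerprints after linear
        interpolation onto a common retention-time grid (t1, t2 ascending)."""
        grid = np.linspace(max(t1[0], t2[0]), min(t1[-1], t2[-1]), n_points)
        a = np.interp(grid, t1, y1)
        b = np.interp(grid, t2, y2)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))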
Cai, Yong; Li, Xiwen; Li, Mei; Chen, Xiaojia; Hu, Hao; Ni, Jingyun; Wang, Yitao
2015-01-01
Chemical fingerprinting is currently a widely used tool that enables rapid and accurate quality evaluation of Traditional Chinese Medicine (TCM). However, chemical fingerprints are not amenable to information storage, recognition, and retrieval, which limits their use in Chinese medicine traceability. In this study, samples of three kinds of Chinese medicines were randomly selected and chemical fingerprints were then constructed by using high performance liquid chromatography. Based on the chemical data, the process of converting a TCM chemical fingerprint into a two-dimensional code is presented; preprocessing and filtering algorithms are also proposed, aiming to standardize the large amount of original raw data. In order to determine which type of two-dimensional (2D) code is suitable for storing chemical fingerprint data, currently popular types of 2D codes are analyzed and compared. Results show that the QR Code is suitable for recording the TCM chemical fingerprint. The fingerprint information of TCM can thus be converted into a data format that can be stored as a 2D code for traceability and quality control.
Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude
2016-11-01
Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients were reviewed and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides a balanced validity with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study has validated several coding algorithms; based on the results a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers. © 2016 ARS-AAOA, LLC.
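Applied to a claims table, the validated case definition above (two or more claims carrying an ICD-9 code of 471.x or 473.x within two years) might look like the sketch below. The column names and the pandas-based layout are assumptions for illustration, not the study's analysis code.

    import pandas as pd

    def crs_cases(claims, window_days=730):
        """Patients meeting the >=2 CRS claims (ICD-9 471.x/473.x) within
        2 years definition; `claims` needs patient_id, date, icd9 columns."""
        crs = claims[claims["icd9"].str.match(r"^(?:471|473)")].copy()
        crs["date"] = pd.to_datetime(crs["date"])
        crs = crs.sort_values(["patient_id", "date"])
        gap = crs.groupby("patient_id")["date"].diff().dt.days
        return crs.loc[gap <= window_days, "patient_id"].unique()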
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of the simulation task to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
Performance and limitations of administrative data in the identification of AKI.
Grams, Morgan E; Waikar, Sushrut S; MacMahon, Blaithin; Whelton, Seamus; Ballew, Shoshana H; Coresh, Josef
2014-04-01
Billing codes are frequently used to identify AKI events in epidemiologic research. The goals of this study were to validate billing code-identified AKI against the current AKI consensus definition and to ascertain whether sensitivity and specificity vary by patient characteristic or over time. The study population included 10,056 Atherosclerosis Risk in Communities study participants hospitalized between 1996 and 2008. Billing code-identified AKI was compared with the 2012 Kidney Disease Improving Global Outcomes (KDIGO) creatinine-based criteria (AKIcr) and an approximation of the 2012 KDIGO creatinine- and urine output-based criteria (AKIcr_uop) in a subset with available outpatient data. Sensitivity and specificity of billing code-identified AKI were evaluated over time and according to patient age, race, sex, diabetes status, and CKD status in 546 charts selected for review, with estimates adjusted for sampling technique. A total of 34,179 hospitalizations were identified; 1353 had a billing code for AKI. The sensitivity of billing code-identified AKI was 17.2% (95% confidence interval [95% CI], 13.2% to 21.2%) compared with AKIcr (n=1970 hospitalizations) and 11.7% (95% CI, 8.8% to 14.5%) compared with AKIcr_uop (n=1839 hospitalizations). Specificity was >98% in both cases. Sensitivity was significantly higher in the more recent time period (2002-2008) and among participants aged 65 years and older. Billing code-identified AKI captured a more severe spectrum of disease than did AKIcr and AKIcr_uop, with a larger proportion of patients with stage 3 AKI (34.9%, 19.7%, and 11.5%, respectively) and higher in-hospital mortality (41.2%, 18.7%, and 12.8%, respectively). The use of billing codes to identify AKI has low sensitivity compared with the current KDIGO consensus definition, especially when the urine output criterion is included, and results in the identification of a more severe phenotype. Epidemiologic studies using billing codes may benefit from a high specificity, but the variation in sensitivity may result in bias, particularly when trends over time are the outcome of interest.
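The sensitivity and specificity quoted above follow the usual 2x2 definitions. A generic sketch, with hospitalization identifiers held in plain Python sets, is given below; it is not tied to the study's data structures.

    def sensitivity_specificity(all_ids, code_positive, reference_positive):
        """Sensitivity/specificity of billing-code-identified AKI against a
        reference standard; all arguments are sets of hospitalization IDs."""
        tp = len(code_positive & reference_positive)
        fn = len(reference_positive - code_positive)
        fp = len(code_positive - reference_positive)
        tn = len(all_ids - code_positive - reference_positive)
        return tp / (tp + fn), tn / (tn + fp)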
Niu, Bolin; Forde, Kimberly A; Goldberg, David S
2015-01-01
Despite the use of administrative data to perform epidemiological and cost-effectiveness research on patients with hepatitis B or C virus (HBV, HCV), there are no data outside of the Veterans Health Administration validating whether International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes can accurately identify cirrhotic patients with HBV or HCV. The validation of such algorithms is necessary for future epidemiological studies. We evaluated the positive predictive value (PPV) of ICD-9-CM codes for identifying chronic HBV or HCV among cirrhotic patients within the University of Pennsylvania Health System, a large network that includes a tertiary care referral center, a community-based hospital, and multiple outpatient practices across southeastern Pennsylvania and southern New Jersey. We reviewed a random sample of 200 cirrhotic patients with ICD-9-CM codes for HCV and 150 cirrhotic patients with ICD-9-CM codes for HBV. The PPV of 1 inpatient or 2 outpatient HCV codes was 88.0% (168/191, 95% CI: 82.5-92.2%), while the PPV of 1 inpatient or 2 outpatient HBV codes was 81.3% (113/139, 95% CI: 73.8-87.4%). Several variations of the primary coding algorithm were evaluated to determine if different combinations of inpatient and/or outpatient ICD-9-CM codes could increase the PPV of the coding algorithm. ICD-9-CM codes can identify chronic HBV or HCV in cirrhotic patients with a high PPV and can be used in future epidemiologic studies to examine disease burden and the proper allocation of resources. Copyright © 2014 John Wiley & Sons, Ltd.
Job coding (PCS 2003): feedback from a study conducted in an Occupational Health Service
Henrotin, Jean-Bernard; Vaissière, Monique; Etaix, Maryline; Malard, Stéphane; Dziurla, Mathieu; Lafon, Dominique
2016-10-19
Aim: To examine the quality of manual job coding carried out by occupational health teams with access to a software application that provides assistance in job and business sector coding (CAPS). Methods: Data from a study conducted in an Occupational Health Service were used to examine the first-level coding of 1,495 jobs by occupational health teams according to the French job classification entitled “PCS - Professions and Socio-professional Categories” (INSEE, 2003 version). A second level of coding was also performed by an experienced coder, and the first- and second-level codes were compared. Agreement between the two coding systems was studied using the kappa coefficient (κ) and frequencies were compared by Chi2 tests. Results: Missing data or incorrect codes were observed for 14.5% of social groups (1 digit) and 25.7% of job codes (4 digits). While agreement between the first two levels of PCS 2003 appeared to be satisfactory (κ=0.73 and κ=0.75), imbalances in reassignment flows were effectively noted. The divergent job code rate was 48.2%. Variation in the frequency of socio-occupational variables was as high as 8.6% after correcting for missing data and divergent codes. Conclusions: Compared with other studies, the use of the CAPS tool appeared to provide effective coding assistance. However, our results indicate that job coding based on PCS 2003 should be conducted using ancillary data by personnel trained in the use of this tool.
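The agreement statistic used above is Cohen's kappa. A short self-contained version is sketched below; the toy code lists in the example are made up and only show how the coefficient is computed from two coders' assignments.

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Unweighted Cohen's kappa between two coders (equal-length lists)."""
        n = len(codes_a)
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        fa, fb = Counter(codes_a), Counter(codes_b)
        expected = sum(fa[c] * fb[c] for c in fa) / (n * n)
        return (observed - expected) / (1.0 - expected)

    # Toy example with 1-digit social-group codes (values invented).
    print(round(cohens_kappa(list("3345526"), list("3345126")), 2))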
Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Potapczuk, Mark G.
1993-01-01
A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
Continuous-variable quantum network coding for coherent states
NASA Astrophysics Data System (ADS)
Shang, Tao; Li, Ke; Liu, Jian-wei
2017-04-01
As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be looked on as the discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper, we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states across with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first scheme and the second scheme can transmit 4 log2 N and 2 log2 N bits of information by a single network use, respectively.
Efficient Modeling of Laser-Plasma Accelerators with INF and RNO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Esarey, E.
2010-11-04
The numerical modeling code INF and RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF and RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance is presented.
Newtonian CAFE: a new ideal MHD code to study the solar atmosphere
NASA Astrophysics Data System (ADS)
González-Avilés, J. J.; Cruz-Osorio, A.; Lora-Clavijo, F. D.; Guzmán, F. S.
2015-12-01
We present a new code designed to solve the equations of classical ideal magnetohydrodynamics (MHD) in three dimensions, submitted to a constant gravitational field. The purpose of the code centres on the analysis of solar phenomena within the photosphere-corona region. We present 1D and 2D standard tests to demonstrate the quality of the numerical results obtained with our code. As solar tests we present the transverse oscillations of Alfvénic pulses in coronal loops using a 2.5D model, and as 3D tests we present the propagation of impulsively generated MHD-gravity waves and vortices in the solar atmosphere. The code is based on high-resolution shock-capturing methods, uses the Harten-Lax-van Leer-Einfeldt (HLLE) flux formula combined with Minmod, MC, and WENO5 reconstructors. The divergence free magnetic field constraint is controlled using the Flux Constrained Transport method.
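The minmod reconstruction mentioned above is simple enough to state in a few lines. The sketch below shows the 1D, cell-averaged version as a generic illustration of a slope limiter used with HLLE-type fluxes; it is not code from Newtonian CAFE.

    def minmod(a, b):
        """Minmod limiter: the smaller-magnitude slope when signs agree,
        zero at extrema (keeps the reconstruction non-oscillatory)."""
        if a * b <= 0.0:
            return 0.0
        return a if abs(a) < abs(b) else b

    def limited_slopes(u):
        """Limited cell-centred slopes for a 1D list of cell averages."""
        inner = [minmod(u[i] - u[i - 1], u[i + 1] - u[i]) for i in range(1, len(u) - 1)]
        return [0.0] + inner + [0.0]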
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
NASA Astrophysics Data System (ADS)
Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi
2017-04-01
Here we propose a system for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. Although we can obtain continuous dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes both for structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, on a 1 km mesh. Fujita et al. (2016, SC16) improved the code for crustal deformation and achieved 2.05 T DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.
As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because traditional mechanisms for code improvement, including envelope, mechanical and lighting, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufman 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft² of roof area or 0.5 W/ft² of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balance of energy measures when setting the code requirement levels. As the stringency of the code continues to increase, however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs. One alternate compliance path includes purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent with REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods. The ways in which the existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence. The prices of RECs have shown high variation by market and over time, which makes it critical for code language to be updated frequently for a renewable energy generation requirement, or the requirement will not remain price-equivalent over time.
Section 4 of the report provides a maximum-case estimate of the impact on the PV market and the REC market based on the Kaufmann et al. proposed requirement levels. If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement level of 4 W/ft² of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
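The Section 4 figure can be reproduced approximately from the 4 W/ft² requirement with one line of arithmetic. In the sketch below, the new-roof-area figure is back-calculated from the quoted 4,700 MW and is therefore a placeholder, not a construction statistic from the report.

    # Installed PV capacity implied by a power-density-per-roof-area requirement.
    REQUIREMENT_W_PER_FT2_ROOF = 4.0

    def required_pv_mw(new_roof_area_ft2):
        return REQUIREMENT_W_PER_FT2_ROOF * new_roof_area_ft2 / 1.0e6

    # ~1.2 billion ft2 of new commercial roof area gives roughly 4,700 MW,
    # the order of magnitude quoted in Section 4 (roof area is a placeholder).
    print(f"{required_pv_mw(1.175e9):,.0f} MW")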
Zhou, Carol L Ecale
2015-01-01
In order to better define regions of similarity among related protein structures, it is useful to identify the residue-residue correspondences among proteins. Few codes exist for constructing a one-to-many multiple sequence alignment derived from a set of structure or sequence alignments, and a need was evident for creating such a tool for combining pairwise structure alignments that would allow for insertion of gaps in the reference structure. This report describes a new Python code, CombAlign, which takes as input a set of pairwise sequence alignments (which may be structure based) and generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA). The use and utility of CombAlign was demonstrated by generating gapped MSSAs using sets of pairwise structure-based sequence alignments between structure models of the matrix protein (VP40) and pre-small/secreted glycoprotein (sGP) of Reston Ebolavirus and the corresponding proteins of several other filoviruses. The gapped MSSAs revealed structure-based residue-residue correspondences, which enabled identification of structurally similar versus differing regions in the Reston proteins compared to each of the other corresponding proteins. CombAlign is a new Python code that generates a one-to-many, gapped, multiple structure- or sequence-based sequence alignment (MSSA) given a set of pairwise sequence alignments (which may be structure based). CombAlign has utility in assisting the user in distinguishing structurally conserved versus divergent regions on a reference protein structure relative to other closely related proteins. CombAlign was developed in Python 2.6, and the source code is available for download from the GitHub code repository.
Gupta, Sumit; Nathan, Paul C; Baxter, Nancy N; Lau, Cindy; Daly, Corinne; Pole, Jason D
2018-06-01
Despite the importance of estimating population level cancer outcomes, most registries do not collect critical events such as relapse. Attempts to use health administrative data to identify these events have focused on older adults and have been mostly unsuccessful. We developed and tested administrative data-based algorithms in a population-based cohort of adolescents and young adults with cancer. We identified all Ontario adolescents and young adults 15-21 years old diagnosed with leukemia, lymphoma, sarcoma, or testicular cancer between 1992-2012. Chart abstraction determined the end of initial treatment (EOIT) date and subsequent cancer-related events (progression, relapse, second cancer). Linkage to population-based administrative databases identified fee and procedure codes indicating cancer treatment or palliative care. Algorithms determining EOIT based on a time interval free of treatment-associated codes, and new cancer-related events based on billing codes, were compared with chart-abstracted data. The cohort comprised 1404 patients. Time periods free of treatment-associated codes did not validly identify EOIT dates; using subsequent codes to identify new cancer events was thus associated with low sensitivity (56.2%). However, using administrative data codes that occurred after the EOIT date based on chart abstraction, the first cancer-related event was identified with excellent validity (sensitivity, 87.0%; specificity, 93.3%; positive predictive value, 81.5%; negative predictive value, 95.5%). Although administrative data alone did not validly identify cancer-related events, administrative data in combination with chart collected EOIT dates was associated with excellent validity. The collection of EOIT dates by cancer registries would significantly expand the potential of administrative data linkage to assess cancer outcomes.
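The event-identification step described above (look for billing or procedure codes that occur after the chart-abstracted EOIT date and take the earliest one) can be sketched with a small claims table. The column names and the contents of event_codes are assumptions for the example; the study's actual code lists are not reproduced.

    import pandas as pd

    def first_event_after_eoit(claims, eoit_dates, event_codes):
        """Earliest post-EOIT claim per patient whose code signals a
        cancer-related event; both tables share a patient_id column."""
        claims = claims.copy()
        claims["date"] = pd.to_datetime(claims["date"])
        merged = claims.merge(eoit_dates, on="patient_id")   # adds an 'eoit' column
        merged["eoit"] = pd.to_datetime(merged["eoit"])
        events = merged[(merged["date"] > merged["eoit"]) &
                        (merged["code"].isin(event_codes))]
        return events.sort_values("date").groupby("patient_id").first()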
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
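The three-step LCC test reduces to comparing discounted savings with discounted costs. The sketch below is a simplified stand-in under assumed parameters (3% real discount rate, 30-year period); it does not reproduce DOE's published LCC methodology or its economic parameters.

    def lcc_cost_effective(annual_energy_cost_savings, incremental_first_cost,
                           replacement_costs=(), discount_rate=0.03, years=30):
        """True if the present value of energy cost savings exceeds the
        incremental first cost plus discounted replacement costs.
        replacement_costs is an iterable of (cost, year) pairs."""
        pv_savings = sum(annual_energy_cost_savings / (1 + discount_rate) ** y
                         for y in range(1, years + 1))
        pv_replace = sum(cost / (1 + discount_rate) ** year
                         for cost, year in replacement_costs)
        return pv_savings >= incremental_first_cost + pv_replace

    print(lcc_cost_effective(120.0, 1500.0, replacement_costs=[(300.0, 15)]))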
NASA Astrophysics Data System (ADS)
Ahmed, Ali; Hasan, Rafiq; Pekau, Oscar A.
2016-12-01
Two recent developments have come into the forefront with reference to updating the seismic design provisions for codes: (1) publication of new seismic hazard maps for Canada by the Geological Survey of Canada, and (2) emergence of the concept of a new spectral format outdating the conventional standardized spectral format. The fourth-generation seismic hazard maps are based on enriched seismic data, enhanced knowledge of regional seismicity and improved seismic hazard modeling techniques. Therefore, the new maps are more accurate and need to be incorporated into the Canadian Highway Bridge Design Code (CHBDC) for its next edition, similar to its building counterpart, the National Building Code of Canada (NBCC). In fact, the code writers expressed similar intentions with comments in the commentary of CHBDC 2006. During the process of updating their codes, the NBCC and the AASHTO Guide Specifications for LRFD Seismic Bridge Design (American Association of State Highway and Transportation Officials, Washington, 2009) lowered the probability level from 10% to 2% and from 10% to 5%, respectively. This study investigates five sets of hazard maps developed by the GSC, corresponding to 2%, 5% and 10% probabilities of exceedance in 50 years. To have a sound statistical inference, 389 Canadian cities are selected. This study shows the implications of the change to the new hazard maps on the design process (i.e., the extent of magnification or reduction of the design forces).
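For reference, the probability levels discussed above translate into mean return periods under the usual Poisson assumption, T = -t / ln(1 - p): roughly 475, 975, and 2475 years for 10%, 5%, and 2% in 50 years. A short check:

    import math

    def return_period(p_exceedance, window_years=50.0):
        """Mean return period implied by an exceedance probability over a
        window, assuming Poisson occurrence: T = -t / ln(1 - p)."""
        return -window_years / math.log1p(-p_exceedance)

    for p in (0.10, 0.05, 0.02):
        print(f"{p:.0%} in 50 years -> ~{return_period(p):,.0f}-year return period")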
Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation
NASA Astrophysics Data System (ADS)
Frybort, Jan
2017-09-01
Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS. These calculations are conducted in 2D at the fuel assembly level. The macroscopic data can also be calculated by the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare the results of full-core calculations based on two sets of diffusion data obtained from Serpent calculations with ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the corresponding decay data and fission yield libraries. The comparison is based directly on the assembly-level macroscopic data and the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from the nuclear data selection can help to understand the level of inherent uncertainty in such full-core calculations.
Puckett, Larry; Hitt, Kerie; Alexander, Richard
1998-01-01
names that correspond to the FIPS codes. 2. Tabular component - Nine tab-delimited ASCII lookup tables of animal counts and nutrient estimates organized by 5-digit state/county FIPS (Federal Information Processing Standards) code. Another table lists the county names that correspond to the FIPS codes. The use of trade names is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.
Qu, Zhen; Djordjevic, Ivan B
2017-08-15
We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.
Lasaygues, Philippe; Arciniegas, Andres; Espinosa, Luis; Prieto, Flavio; Brancheriau, Loïc
2018-05-26
Ultrasound computed tomography (USCT) using the transmission mode is a way to detect and assess the extent of decay in wood structures. The resolution of the ultrasonic image is closely related to the different anatomical features of wood. The complexity of the wave propagation process generates complex signals consisting of several wave packets with different signatures. Wave paths, depth dependencies, wave velocities or attenuations are often difficult to interpret. For this kind of assessment, the focus is generally on signal pre-processing. Several approaches have been used so far including filtering, spectrum analysis and a method involving deconvolution using a characteristic transfer function of the experimental device. However, all these approaches may be too sophisticated and/or unstable. The alternative methods proposed in this work are based on coded excitation, which makes it possible to process both local and general information available such as frequency and time parameters. Coded excitation is based on the filtering of the transmitted signal using a suitable electric input signal. The aim of the present study was to compare two coded-excitation methods, a chirp- and a wavelet-coded excitation method, to determine the time of flight of the ultrasonic wave, and to investigate the feasibility, the robustness and the precision of the measurement of geometrical and acoustical properties in laboratory conditions. To obtain control experimental data, the two methods were compared with the conventional ultrasonic pulse method. Experiments were conducted on a polyurethane resin sample and two samples of different wood species using two 500 kHz-transducers. The relative errors in the measurement of thickness compared with the results of caliper measurements ranged from 0.13% minimum for the wavelet-coded excitation method to 2.3% maximum for the chirp-coded excitation method. For the relative errors in the measurement of ultrasonic wave velocity, the coded excitation methods showed differences ranging from 0.24% minimum for the wavelet-coded excitation method to 2.62% maximum for the chirp-coded excitation method. Methods based on coded excitation algorithms thus enable accurate measurements of thickness and ultrasonic wave velocity in samples of wood species. Copyright © 2018 Elsevier B.V. All rights reserved.
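Chirp-coded excitation recovers the time of flight by matched filtering (pulse compression) of the received signal with the transmitted chirp. The sketch below illustrates the principle on synthetic data; the sweep limits, pulse duration, sampling rate, and delay are invented values around the 500 kHz transducers, not the study's settings.

    import numpy as np

    def chirp_tof(received, fs, f0=0.2e6, f1=0.8e6, duration=20e-6):
        """Time of flight from the peak of the matched-filter output."""
        t = np.arange(0, duration, 1.0 / fs)
        chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2))
        corr = np.correlate(received, chirp, mode="full")
        return (int(np.argmax(corr)) - (chirp.size - 1)) / fs

    # Synthetic check: a chirp delayed by 40 microseconds in light noise.
    fs = 10e6
    t = np.arange(0, 20e-6, 1.0 / fs)
    tx = np.sin(2 * np.pi * (0.2e6 * t + 0.5 * 0.6e6 / 20e-6 * t ** 2))
    rx = np.concatenate([np.zeros(400), tx]) + 0.05 * np.random.randn(400 + t.size)
    print(f"{chirp_tof(rx, fs) * 1e6:.1f} us")   # ~40 us expected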
Multidimensional Multiphysics Simulation of TRISO Particle Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Hales; R. L. Williamson; S. R. Novascone
2013-11-01
Multidimensional multiphysics analysis of TRISO-coated particle fuel using the BISON finite-element based nuclear fuels code is described. The governing equations and material models applicable to particle fuel and implemented in BISON are outlined. Code verification based on a recent IAEA benchmarking exercise is described, and excellent comparisons are reported. Multiple TRISO-coated particles of increasing geometric complexity are considered. It is shown that the code's ability to perform large-scale parallel computations permits application to complex 3D phenomena, while very efficient solutions for either 1D spherically symmetric or 2D axisymmetric geometries are straightforward. Additionally, the flexibility to easily include new physical and material models and the uncomplicated ability to couple to lower-length-scale simulations make BISON a powerful tool for simulation of coated-particle fuel. Future code development activities and potential applications are identified.
2014-01-01
Background: Coronary heart disease and stroke are major contributors to preventable mortality. Evidence links work conditions to these diseases; however, occupational data are perceived to be difficult to collect for large population-based cohorts. We report methodological details and the feasibility of conducting an occupational ancillary study for a large U.S. prospective cohort being followed longitudinally for cardiovascular disease and stroke. Methods: Current and historical occupational information were collected from active participants of the REasons for Geographic And Racial Differences in Stroke (REGARDS) Study. A survey was designed to gather quality occupational data among this national cohort of black and white men and women aged 45 years and older (enrolled 2003–2007). Trained staff conducted Computer-Assisted Telephone Interviews (CATI). After a brief pilot period, interviewers received additional training in the collection of narrative industry and occupation data before administering the survey to remaining cohort members. Trained coders used a computer-assisted coding system to assign U.S. Census codes for industry and occupation. All data were double coded; discrepant codes were independently resolved. Results: Over a 2-year period, 17,648 participants provided consent and completed the occupational survey (87% response rate). A total of 20,427 jobs were assigned Census codes. Inter-rater reliability was 80% for industry and 74% for occupation. Less than 0.5% of the industry and occupation data were uncodable, compared with 12% during the pilot period. Concordance between the current and longest-held jobs was moderately high. The median time to collect employment status plus narrative and descriptive job information by CATI was 1.6 to 2.3 minutes per job. Median time to assign Census codes was 1.3 minutes per rater. Conclusions: The feasibility of conducting high-quality occupational data collection and coding for a large heterogeneous population-based sample was demonstrated. We found that training for interview staff was important in ensuring that narrative responses for industry and occupation were adequately specified for coding. Estimates of survey administration time and coding from digital records provide an objective basis for planning future studies. The social and environmental conditions of work are important understudied risk factors that can be feasibly integrated into large population-based health studies. PMID:24512119
MacDonald, Leslie A; Pulley, LeaVonne; Hein, Misty J; Howard, Virginia J
2014-02-10
Coronary heart disease and stroke are major contributors to preventable mortality. Evidence links work conditions to these diseases; however, occupational data are perceived to be difficult to collect for large population-based cohorts. We report methodological details and the feasibility of conducting an occupational ancillary study for a large U.S. prospective cohort being followed longitudinally for cardiovascular disease and stroke. Current and historical occupational information were collected from active participants of the REasons for Geographic And Racial Differences in Stroke (REGARDS) Study. A survey was designed to gather quality occupational data among this national cohort of black and white men and women aged 45 years and older (enrolled 2003-2007). Trained staff conducted Computer-Assisted Telephone Interviews (CATI). After a brief pilot period, interviewers received additional training in the collection of narrative industry and occupation data before administering the survey to remaining cohort members. Trained coders used a computer-assisted coding system to assign U.S. Census codes for industry and occupation. All data were double coded; discrepant codes were independently resolved. Over a 2-year period, 17,648 participants provided consent and completed the occupational survey (87% response rate). A total of 20,427 jobs were assigned Census codes. Inter-rater reliability was 80% for industry and 74% for occupation. Less than 0.5% of the industry and occupation data were uncodable, compared with 12% during the pilot period. Concordance between the current and longest-held jobs was moderately high. The median time to collect employment status plus narrative and descriptive job information by CATI was 1.6 to 2.3 minutes per job. Median time to assign Census codes was 1.3 minutes per rater. The feasibility of conducting high-quality occupational data collection and coding for a large heterogeneous population-based sample was demonstrated. We found that training for interview staff was important in ensuring that narrative responses for industry and occupation were adequately specified for coding. Estimates of survey administration time and coding from digital records provide an objective basis for planning future studies. The social and environmental conditions of work are important understudied risk factors that can be feasibly integrated into large population-based health studies.
Application of Aeroelastic Solvers Based on Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Srivastava, Rakesh
1998-01-01
A pre-release version of the Navier-Stokes solver (TURBO) was obtained from MSU. Along with Dr. Milind Bakhle of the University of Toledo, subroutines for aeroelastic analysis were developed and added to the TURBO code to produce versions 1 and 2 of the TURBO-AE code. For a specified mode shape, frequency, and inter-blade phase angle, the code calculates the work done by the fluid on the rotor for a prescribed sinusoidal motion. Positive work on the rotor indicates instability of the rotor. Version 1 of the code calculates the work for in-phase blade motions only. In version 2, the capability for analyzing all possible inter-blade phase angles was added. Version 2 of the TURBO-AE code was validated and delivered to NASA and the industry partners of the AST project. The capabilities and features of the code are summarized in Refs. [1] and [2]. To release version 2 of TURBO-AE, a workshop was organized at NASA Lewis by Dr. Srivastava and Dr. M. A. Bakhle, both of the University of Toledo, in October of 1996 for the industry partners of NASA Lewis. The workshop provided potential users of TURBO-AE with all the relevant information required for preparing the input data, executing the code, interpreting the results, and benchmarking the code on their computer systems. After the code was delivered to the industry partners, user support was also provided. A new version of the Navier-Stokes solver (TURBO) was later released by MSU. This version had significant changes and upgrades over the previous version and was merged with the TURBO-AE code. In addition, new boundary conditions for 3-D unsteady non-reflecting boundaries were developed by researchers from UTRC, Ref. [3]. Time was spent on understanding, familiarizing with, executing, and implementing the new boundary conditions in the TURBO-AE code. Work was started on the phase-lagged (time-shifted) boundary condition version (version 4) of the code, which will allow users to calculate non-zero inter-blade phase angles using only one blade passage for analysis.
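The stability criterion described above reduces to the sign of the aerodynamic work done by the fluid on the blade over one vibration cycle. The following minimal sketch illustrates that bookkeeping only; the array shapes, variable names, and trapezoidal time integration are illustrative assumptions, not the TURBO-AE implementation.

```python
import numpy as np

def work_per_cycle(pressure, velocity_normal, area, dt):
    """Work done by the fluid on the blade over one vibration cycle:
    integrate surface pressure times normal surface velocity over time.
    pressure, velocity_normal: arrays of shape (n_time, n_points); area: per-point area."""
    power = np.sum(pressure * velocity_normal * area, axis=1)   # instantaneous power on the blade
    return np.trapz(power, dx=dt)

# Synthetic single-point example: an unsteady pressure component in phase with the blade
# velocity feeds energy into the motion, so the work is positive and flutter is indicated.
t = np.linspace(0.0, 2 * np.pi, 200)
p = np.cos(t).reshape(-1, 1)     # unsteady pressure (placeholder units)
v = np.cos(t).reshape(-1, 1)     # normal surface velocity of the prescribed sinusoidal motion
W = work_per_cycle(p, v, area=np.array([1.0]), dt=t[1] - t[0])
print("unstable (flutter)" if W > 0 else "stable (aerodynamically damped)")
```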
Identifying Pediatric Severe Sepsis and Septic Shock: Accuracy of Diagnosis Codes.
Balamuth, Fran; Weiss, Scott L; Hall, Matt; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Centkowski, Sierra; Baumer-Mouradian, Shannon; Weiser, Jason; Hayes, Katie; Shah, Samir S; Alpern, Elizabeth R
2015-12-01
To evaluate accuracy of 2 established administrative methods of identifying children with sepsis using a medical record review reference standard. Multicenter retrospective study at 6 US children's hospitals. Subjects were children >60 days to <19 years of age and were identified in 4 groups based on International Classification of Diseases, Ninth Revision, Clinical Modification codes: (1) severe sepsis/septic shock (sepsis codes); (2) infection plus organ dysfunction (combination codes); (3) subjects without codes for infection, organ dysfunction, or severe sepsis; and (4) infection but not severe sepsis or organ dysfunction. Combination codes were allowed, but not required, within the sepsis codes group. We determined the presence of reference standard severe sepsis according to consensus criteria. Logistic regression was performed to determine whether addition of codes for sepsis therapies improved case identification. A total of 130 out of 432 subjects met the reference standard for severe sepsis. Sepsis codes had sensitivity 73% (95% CI 70-86), specificity 92% (95% CI 87-95), and positive predictive value 79% (95% CI 70-86). Combination codes had sensitivity 15% (95% CI 9-22), specificity 71% (95% CI 65-76), and positive predictive value 18% (95% CI 11-27). Slight improvements in model characteristics were observed when codes for vasoactive medications and endotracheal intubation were added to sepsis codes (c-statistic 0.83 vs 0.87, P = .008). Sepsis-specific International Classification of Diseases, Ninth Revision, Clinical Modification codes identify pediatric patients with severe sepsis in administrative data more accurately than a combination of codes for infection plus organ dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.
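The test characteristics reported above follow directly from a 2x2 table of code-positive/negative results against the chart-review reference standard. The sketch below shows the arithmetic; the counts are illustrative values chosen only to be consistent with the reported 130/432 reference-standard cases and the quoted sensitivity, specificity, and PPV, not data taken from the study.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Illustrative counts: the codes flag 120 of 432 charts, 95 of which truly meet
# the severe sepsis reference standard (so 130 reference-standard cases in total).
sens, spec, ppv = diagnostic_metrics(tp=95, fp=25, fn=35, tn=277)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, PPV={ppv:.0%}")
```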
Four year-olds use norm-based coding for face identity.
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-05-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
DROP: Detecting Return-Oriented Programming Malicious Code
NASA Astrophysics Data System (ADS)
Chen, Ping; Xiao, Hai; Shen, Xiaobin; Yin, Xinchun; Mao, Bing; Xie, Li
Return-Oriented Programming (ROP) is a new technique that helps the attacker construct malicious code mounted on x86/SPARC executables without any function call at all. Such a technique makes the ROP malicious code contain no instructions of its own, which is different from existing attacks. Moreover, it hides the malicious code in benign code. Thus, it circumvents the approaches that prevent control flow diversion outside legitimate regions (such as W ⊕ X) and most malicious code scanning techniques (such as anti-virus scanners). However, ROP has its own intrinsic features which differ from normal program design: (1) it uses short instruction sequences ending in "ret", called gadgets, and (2) it executes the gadgets contiguously in a specific memory space, such as the standard GNU libc. Based on these features of ROP malicious code, in this paper we present a tool, DROP, which is focused on dynamically detecting ROP malicious code. Preliminary experimental results show that DROP can efficiently detect ROP malicious code and has no false positives or negatives.
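The two gadget features listed above lend themselves to a simple runtime heuristic: flag execution when many consecutive ret-terminated instruction sequences are unusually short. The sketch below illustrates that idea only; the trace representation, thresholds, and function name are assumptions made for illustration, not DROP's actual instrumentation or detection policy.

```python
def looks_like_rop(trace, max_gadget_len=5, min_chain=3):
    """Flag a dynamic trace as ROP-like if it contains a run of `min_chain` or more
    consecutive ret-terminated sequences, each no longer than `max_gadget_len`
    instructions (the gadget signature described above). `trace` is a list of
    (instruction_count, ends_with_ret) tuples, one per executed sequence between
    control transfers -- a simplified stand-in for binary-instrumentation output."""
    chain = 0
    for length, ends_with_ret in trace:
        if ends_with_ret and length <= max_gadget_len:
            chain += 1
            if chain >= min_chain:
                return True
        else:
            chain = 0
    return False

# Hypothetical usage: three consecutive 2-3 instruction gadgets trigger detection.
print(looks_like_rop([(40, False), (2, True), (3, True), (2, True), (60, False)]))  # True
```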
Continuous Codes and Standards Improvement (CCSI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, Carl H; Burgess, Robert M; Buttner, William J
2015-10-21
As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing codes solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because as technology and the state of safety knowledge develops there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.
Primer development to obtain complete coding sequence of HA and NA genes of influenza A/H3N2 virus.
Agustiningsih, Agustiningsih; Trimarsanto, Hidayat; Setiawaty, Vivi; Artika, I Made; Muljono, David Handojo
2016-08-30
Influenza is an acute respiratory illness and has become a serious public health problem worldwide. The need to study the HA and NA genes in influenza A virus is essential since these genes frequently undergo mutations. This study describes the development of primer sets for RT-PCR to obtain complete coding sequence of Hemagglutinin (HA) and Neuraminidase (NA) genes of influenza A/H3N2 virus from Indonesia. The primers were developed based on influenza A/H3N2 sequence worldwide from Global Initiative on Sharing All Influenza Data (GISAID) and further tested using Indonesian influenza A/H3N2 archived samples of influenza-like illness (ILI) surveillance from 2008 to 2009. An optimum RT-PCR condition was acquired for all HA and NA fragments designed to cover complete coding sequence of HA and NA genes. A total of 71 samples were successfully sequenced for complete coding sequence both of HA and NA genes out of 145 samples of influenza A/H3N2 tested. The developed primer sets were suitable for obtaining complete coding sequences of HA and NA genes of Indonesian samples from 2008 to 2009.
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, an Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
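The reordering described above is the classic move from per-particle records to long contiguous arrays so that the inner loops vectorize. The following minimal sketch illustrates the idea, using NumPy array operations as a stand-in for Cray auto-vectorization of Fortran loops; the field names and values are hypothetical.

```python
import numpy as np

# Record-per-particle layout (one small object each) defeats vectorization:
# every update touches scattered memory and runs as a short scalar loop.
particles = [{"x": 0.1 * i, "u": 1.0} for i in range(100000)]

def push_scalar(particles, dt):
    for p in particles:            # short, dependent iterations -- poor for vector units
        p["x"] += p["u"] * dt

# Restructured "long single array" layout: one contiguous array per field,
# so the whole particle population is updated in a single vector sweep.
x = np.array([p["x"] for p in particles])
u = np.array([p["u"] for p in particles])

def push_vector(x, u, dt):
    x += u * dt                    # one long vectorizable operation over all particles
    return x

x = push_vector(x, u, dt=1e-3)
```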
GRADSPMHD: A parallel MHD code based on the SPH formalism
NASA Astrophysics Data System (ADS)
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a Sedov test including 15625 particles on a single CPU. Classification: 12. Nature of problem: Evolution of a plasma in the ideal MHD approximation. Solution method: The equations of magnetohydrodynamics are solved using the SPH method. Running time: The test provided takes approximately 20 min using 4 processors.
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
ERIC Educational Resources Information Center
Bantum, Erin O'Carroll; Owen, Jason E.
2009-01-01
Psychological interventions provide linguistic data that are particularly useful for testing mechanisms of action and improving intervention methodologies. For this study, emotional expression in an Internet-based intervention for women with breast cancer (n = 63) was analyzed via rater coding and 2 computerized coding methods (Linguistic Inquiry…
Bare Forms and Lexical Insertions in Code-Switching: A Processing-Based Account
ERIC Educational Resources Information Center
Owens, Jonathan
2005-01-01
Bare forms (or [slashed O] forms), uninflected lexical L2 insertions in contexts where the matrix language expects morphological marking, have been recognized as an anomaly in different approaches to code-switching. Myers-Scotton (1997, 2002) has explained their existence in terms of structural incongruity between the matrix and embedded…
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. Cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
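As a concrete illustration of the hard-decision bit-flipping decoding mentioned above, the sketch below decodes a toy parity-check code by repeatedly flipping the bits involved in the most unsatisfied checks. The small Hamming-style matrix is for demonstration only and is not one of the finite-geometry LDPC constructions of the paper.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Gallager-style hard-decision bit flipping: repeatedly flip the bits that
    participate in the largest number of unsatisfied parity checks.
    H is the parity-check matrix, y the received hard-decision word."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2           # which parity checks are violated
        if not syndrome.any():
            return x                      # valid codeword found
        counts = H.T.dot(syndrome)        # unsatisfied checks touching each bit
        x[counts == counts.max()] ^= 1    # flip the worst offenders
    return x

# Toy (7,4) Hamming-style parity-check matrix -- illustrative only.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.array([1, 0, 1, 0, 0, 1, 0])   # bit 4 flipped from the codeword [1,0,1,1,0,1,0]
print(bit_flip_decode(H, received))          # recovers [1 0 1 1 0 1 0]
```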
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
Aerothermo-Structural Analysis of Low Cost Composite Nozzle/Inlet Components
NASA Technical Reports Server (NTRS)
Shivakumar, Kuwigai; Challa, Preeli; Sree, Dave; Reddy, D.
1999-01-01
This research is a cooperative effort among the Turbomachinery and Propulsion Division of NASA Glenn, CCMR of NC A&T State University, and the Tuskegee University. The NC A&T is the lead center and Tuskegee University is the participating institution. Objectives of the research were to develop an integrated aerodynamic, thermal and structural analysis code for design of aircraft engine components, such as nozzles and inlets, made of textile composites; conduct design studies on typical inlets for hypersonic transportation vehicles; set up standard test examples; and finally manufacture a scaled-down composite inlet. These objectives are accomplished through the following seven tasks: (1) identify the relevant public domain codes for all three types of analysis; (2) evaluate the codes for the accuracy of results and computational efficiency; (3) develop aero-thermal and thermal-structural mapping algorithms; (4) integrate all the codes into one single code; (5) write a graphical user interface to improve the user friendliness of the code; (6) conduct test studies for a rocket based combined-cycle engine inlet; and finally (7) fabricate a demonstration inlet model using textile preform composites. Tasks one, two and six are being pursued. Selected and evaluated NPARC for flow field analysis, CSTEM for in-depth thermal analysis of inlets and nozzles, and FRAC3D for stress analysis. These codes have been independently verified for accuracy and performance. In addition, a graphical user interface based on micromechanics analysis for laminated as well as textile composites was developed. Demonstration of this code will be made at the conference. A rocket based combined cycle engine was selected for test studies. Flow field analyses of various inlet geometries were performed. Integration of codes is being continued. The codes developed are being applied to a candidate example of the trailblazer engine proposed for space transportation. A successful development of the code will provide a simpler, faster and user-friendly tool for conducting design studies of aircraft and spacecraft engines, applicable in high speed civil transport and space missions.
Sukanya, Chongthawonsatid
2017-10-01
This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the database used for reimbursement purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly in the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.
NASA Astrophysics Data System (ADS)
Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.; Dominic, David F.; Freedman, Vicky L.; Scheibe, Timothy D.; Lunt, Ian A.
2010-04-01
A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the kilometer scale to the centimeter scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in part 1 of this paper. In part 2 (Guin et al., 2010), models generated by the code are presented and evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Ramya; Guin, Arijit; Ritzi, Robert W.
A geometric-based simulation methodology was developed and incorporated into a computer code to model the hierarchical stratal architecture, and the corresponding spatial distribution of permeability, in braided channel belt deposits. The code creates digital models of these deposits as a three-dimensional cubic lattice, which can be used directly in numerical aquifer or reservoir models for fluid flow. The digital models have stratal units defined from the km scale to the cm scale. These synthetic deposits are intended to be used as high-resolution base cases in various areas of computational research on multiscale flow and transport processes, including the testing of upscaling theories. The input parameters are primarily univariate statistics. These include the mean and variance for characteristic lengths of sedimentary unit types at each hierarchical level, and the mean and variance of log-permeability for unit types defined at only the lowest level (smallest scale) of the hierarchy. The code has been written for both serial and parallel execution. The methodology is described in Part 1 of this series. In Part 2, models generated by the code are presented and evaluated.
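The input style described in these two abstracts (univariate statistics for unit lengths plus log-permeability statistics at the lowest hierarchical level) can be illustrated with a much-reduced, one-dimensional sketch: partition a column of cells into units whose thicknesses are drawn from type-specific statistics, then draw cell-scale log-permeability from the lowest-level type. The unit types and numbers below are hypothetical, and the sketch is not the published hierarchical algorithm.

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical unit-type statistics: thickness (in cells) and log-permeability per type.
unit_types = {
    "A": {"mean_len": 40, "sd_len": 10, "mean_lnk": -27.0, "sd_lnk": 0.8},
    "B": {"mean_len": 15, "sd_len": 5,  "mean_lnk": -30.0, "sd_lnk": 1.2},
}

def build_column(n_cells=1000):
    """Fill a 1-D column of cells with stacked units and cell-scale log-permeability."""
    lnk = np.empty(n_cells)
    labels = np.empty(n_cells, dtype="U1")
    i = 0
    while i < n_cells:
        t = rng.choice(list(unit_types))                          # pick the next unit type
        p = unit_types[t]
        n = max(1, int(rng.normal(p["mean_len"], p["sd_len"])))   # unit thickness in cells
        n = min(n, n_cells - i)
        lnk[i:i + n] = rng.normal(p["mean_lnk"], p["sd_lnk"], n)  # cell-scale log-permeability
        labels[i:i + n] = t
        i += n
    return labels, lnk

labels, lnk = build_column()
print(labels[:20], np.exp(lnk[:3]))   # unit labels and permeabilities for the first cells
```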
Decoding DNA labels by melting curve analysis using real-time PCR.
Balog, József A; Fehér, Liliána Z; Puskás, László G
2017-12-01
Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates, yielding 15^8 (>2.5 billion) combinations of different individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
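Decoding in this scheme amounts to reading, for each template group, which melting peaks appear in the qPCR melt curve and mapping that subset back to a code symbol. The sketch below illustrates the mapping; the group layout, Tm values, and tolerance are hypothetical stand-ins, not the paper's actual template set.

```python
# Illustrative decoder: each of 8 template groups holds 4 short templates distinguishable
# by melting temperature; the subset present in a group forms one symbol of the ID code.
GROUP_TM = {1: [78.0, 81.0, 84.0, 87.0],   # degrees C, assumed well separated (hypothetical)
            2: [77.5, 80.5, 83.5, 86.5]}   # ... groups 3-8 would be defined analogously

def decode_group(group, observed_peaks, tol=0.5):
    """Return the set of template indices whose reference Tm matches an observed melt peak."""
    present = set()
    for idx, tm_ref in enumerate(GROUP_TM[group]):
        if any(abs(tm_ref - tm_obs) <= tol for tm_obs in observed_peaks):
            present.add(idx)
    return present

# Hypothetical readout for group 1: peaks at 78.1 and 86.9 C -> templates {0, 3} present,
# i.e. one of the 15 non-empty subsets per group that give 15^8 possible codes over 8 groups.
print(sorted(decode_group(1, [78.1, 86.9])))
```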
The Development of a Programming Support System for Rapid Prototyping. Tasks 2 and 3.
1985-01-01
of the problems that played a role in the larger system. The principal work in Task 2 was the design of a new method for code generation...developing a prototype of the interpreter. Due to the limitation of funds, only a design, not a prototype, of the bi-directional scanner was...The problem of designing a code generator for an interpreter-based language has provided a chance to reexamine the
Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model
2014-09-19
the numerical code OpenFOAM®. 1 Introduction Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in...performed using the OpenFOAM-2.2.2 computational code utilizing a cell-center-based finite volume method on an unstructured numerical grid. The...the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM, to
Niu, Bolin; Forde, Kimberly A; Goldberg, David S.
2014-01-01
Background & Aims Despite the use of administrative data to perform epidemiological and cost-effectiveness research on patients with hepatitis B or C virus (HBV, HCV), there are no data outside of the Veterans Health Administration validating whether International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes can accurately identify cirrhotic patients with HBV or HCV. The validation of such algorithms is necessary for future epidemiological studies. Methods We evaluated the positive predictive value (PPV) of ICD-9-CM codes for identifying chronic HBV or HCV among cirrhotic patients within the University of Pennsylvania Health System, a large network that includes a tertiary care referral center, a community-based hospital, and multiple outpatient practices across southeastern Pennsylvania and southern New Jersey. We reviewed a random sample of 200 cirrhotic patients with ICD-9-CM codes for HCV and 150 cirrhotic patients with ICD-9-CM codes for HBV. Results The PPV of 1 inpatient or 2 outpatient HCV codes was 88.0% (168/191, 95% CI: 82.5–92.2%), while the PPV of 1 inpatient or 2 outpatient HBV codes was 81.3% (113/139, 95% CI: 73.8–87.4%). Several variations of the primary coding algorithm were evaluated to determine if different combinations of inpatient and/or outpatient ICD-9-CM codes could increase the PPV of the coding algorithm. Conclusions ICD-9-CM codes can identify chronic HBV or HCV in cirrhotic patients with a high PPV, and can be used in future epidemiologic studies to examine disease burden and the proper allocation of resources. PMID:25335773
110 °C range athermalization of wavefront coding infrared imaging systems
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong
2017-09-01
Athermalization over a 110 °C temperature range is significant but difficult in the design of infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with fewer manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments prove that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth up to 15.2 times; (3) it achieves decoded images that closely approximate the corresponding in-focus infrared images, with a mean structural similarity index (MSSIM) value greater than 0.85.
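The MSSIM figure quoted above is a standard full-reference image-quality measure. Purely as an illustration of how such a comparison is computed, the sketch below scores a decoded frame against an in-focus reference with scikit-image; the random arrays are placeholders for real frames, and scikit-image is assumed to be available rather than being the tool the authors used.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Placeholder frames: a stand-in reference (in-focus) image and a lightly perturbed
# stand-in for the decoded wavefront-coded image.
reference = np.random.rand(256, 256)
decoded = reference + 0.05 * np.random.rand(256, 256)

# Mean SSIM between the decoded frame and the in-focus reference.
mssim = structural_similarity(reference, decoded, data_range=decoded.max() - decoded.min())
print(f"mean SSIM = {mssim:.3f}")   # the paper reports values above 0.85 for its system
```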
Neural networks for data compression and invariant image recognition
NASA Technical Reports Server (NTRS)
Gardner, Sheldon
1989-01-01
An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF coded images. We describe a 1-D shape function method for coding of scale and rotationally invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
ME(SSY)**2: Monte Carlo Code for Star Cluster Simulations
NASA Astrophysics Data System (ADS)
Freitag, Marc Dewi
2013-02-01
ME(SSY)**2 stands for “Monte-carlo Experiments with Spherically SYmmetric Stellar SYstems." This code simulates the long term evolution of spherical clusters of stars; it was devised specifically to treat dense galactic nuclei. It is based on the pioneering Monte Carlo scheme proposed by Hénon in the 70's and includes all relevant physical ingredients (2-body relaxation, stellar mass spectrum, collisions, tidal disruption, ...). It is basically a Monte Carlo resolution of the Fokker-Planck equation. It can cope with any stellar mass spectrum or velocity distribution. Being a particle-based method, it also allows one to take stellar collisions into account in a very realistic way. This unique code, featuring most important physical processes, allows million-particle simulations, spanning a Hubble time, in a few CPU days on standard personal computers and provides a wealth of data rivaled only by N-body simulations. The current version of the software requires the use of routines from the "Numerical Recipes in Fortran 77" (http://www.nrbook.com/a/bookfpdf.php).
Experimental program for real gas flow code validation at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Deiwert, George S.; Strawa, Anthony W.; Sharma, Surendra P.; Park, Chul
1989-01-01
The experimental program for validating real gas hypersonic flow codes at NASA Ames Research Center is described. Ground-based test facilities used include ballistic ranges, shock tubes and shock tunnels, arc jet facilities and heated-air hypersonic wind tunnels. Also included are large-scale computer systems for kinetic theory simulations and benchmark code solutions. Flight tests consist of the Aeroassist Flight Experiment, the Space Shuttle, Project Fire 2, and planetary probes such as Galileo, Pioneer Venus, and PAET.
Psaty, Bruce M; Delaney, Joseph A; Arnold, Alice M; Curtis, Lesley H; Fitzpatrick, Annette L; Heckbert, Susan R; McKnight, Barbara; Ives, Diane; Gottdiener, John S; Kuller, Lewis H; Longstreth, W T
2015-01-01
Background Increasingly, the diagnostic codes from administrative claims data are being used as clinical outcomes. Methods and Results Data from the Cardiovascular Health Study (CHS) were used to compare event rates and risk-factor associations between adjudicated hospitalized cardiovascular events and claims-based methods of defining events. The outcomes of myocardial infarction (MI), stroke, and heart failure (HF) were defined in three ways: 1) the CHS adjudicated event (CHS[adj]); 2) selected ICD9 diagnostic codes only in the primary position for Medicare claims data from the Center for Medicare and Medicaid Services (CMS[1st]); and 3) the same selected diagnostic codes in any position (CMS[any]). Conventional claims-based methods of defining events had high positive predictive values (PPVs) but low sensitivities. For instance, the PPV of an ICD9 code of 410.x1 for a new acute MI in the first position was 90.6%, but this code identified only 53.8% of incident MIs. The observed event rates were low. For MI, the incidence was 14.9 events per 1000 person years for CHS[adj] MI, 8.6 for CMS[1st] and 12.2 for CMS[any]. In general, CVD risk factor associations were similar across the three methods of defining events. Indeed, traditional CVD risk factors were also associated with all first hospitalizations not due to an MI. Conclusions The use of diagnostic codes from claims data as clinical events, especially when restricted to primary diagnoses, leads to an underestimation of event rates. Additionally, claims-based events data represent a composite endpoint that includes the outcome of interest and selected (misclassified) non-event hospitalizations. PMID:26538580
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis is mainly based on professional equipment and software, which is expensive and inconvenient. This study aims to integrate QR (Quick Response) codes with quality data and mobile intelligent technology to develop a convenient query terminal for tracing quality in the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their chemical two-dimensional (2D) barcode fingerprints were constructed. A smartphone application (APP) based on the Android system was developed to read the initial data of the 2D chemical barcodes and to compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between original and scanned TCM chemical fingerprints. Meanwhile, different TCM chemical fingerprint QR codes could be rendered in the same coordinate system, showing the differences very intuitively. To distinguish the variations between chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to obtain a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information in QR codes and convert the data into TCM chemical fingerprints. PMID:27780256
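The comparison step named above (linear interpolation followed by an angle-cosine similarity) can be illustrated with a short sketch: resample two fingerprints onto a shared retention-time grid and take the cosine of the angle between the intensity vectors. The function name, grid size, and synthetic chromatograms below are assumptions for illustration, not the authors' LIACSA implementation.

```python
import numpy as np

def fingerprint_similarity(t1, y1, t2, y2, n_points=500):
    """Linear-interpolation cosine similarity between two chromatographic fingerprints:
    resample both onto a common retention-time grid, then take the cosine of the angle
    between the resampled intensity vectors (1.0 means identical profiles)."""
    grid = np.linspace(max(t1.min(), t2.min()), min(t1.max(), t2.max()), n_points)
    a = np.interp(grid, t1, y1)
    b = np.interp(grid, t2, y2)
    return float(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical fingerprints decoded from two QR codes (retention time in minutes).
t_ref = np.linspace(0, 60, 300)
y_ref = np.exp(-((t_ref - 20) ** 2) / 4) + 0.5 * np.exp(-((t_ref - 45) ** 2) / 9)
t_new = np.linspace(0, 60, 280)
y_new = np.exp(-((t_new - 20.3) ** 2) / 4) + 0.45 * np.exp(-((t_new - 45) ** 2) / 9)
print(f"similarity ratio = {fingerprint_similarity(t_ref, y_ref, t_new, y_new):.3f}")
```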
Simultaneous Semi-Distributed Model Calibration Guided by ...
Modelling approaches to transfer hydrologically-relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit area, and these units average 60 km2 in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?" We should be able to apply the same parameterizations to assessment units of common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability or inability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la
Development of 1D Particle-in-Cell Code and Simulation of Plasma-Wall Interactions
NASA Astrophysics Data System (ADS)
Rose, Laura P.
This thesis discusses the development of a 1D particle-in-cell (PIC) code and the analysis of plasma-wall interactions. The 1D code (Plasma and Wall Simulation -- PAWS) is a kinetic simulation of plasma in which both electrons and ions are treated as particles. The goal of this thesis is to study near-wall plasma interactions to better understand the mechanisms that occur in this region. The main focus of this investigation is the effect that secondary electrons have on the sheath profile. The 1D code is modeled using the PIC method. Treating both the electrons and ions as macroparticles, the field is solved on each node and weighted back to each macroparticle. A pre-ionized plasma was loaded into the domain and the particle velocities were sampled from a Maxwellian distribution. An important part of this code is the boundary condition at the wall: if a particle hits the wall, a secondary electron may be produced based on the incident energy. To study the sheath profile, the simulations were run for various cases. Varying background neutral gas densities were run with the 2D code and compared to experimental values. Different wall materials were simulated to show their effects on secondary electron emission (SEE). In addition, different SEE yields were run, including one study with very high SEE yields to show the presence of a space-charge-limited sheath. Wall roughness was also studied with the 1D code using random angles of incidence. In addition to the 1D code, an external 2D code was also used to investigate wall roughness without secondary electrons. The roughness profiles were created from investigations of wall roughness inside Hall thrusters, based on studies of lifetime erosion of the inner and outer walls of these devices. The 2D code, Starfish[33], is a general 2D axisymmetric/Cartesian code for modeling a wide range of plasma and rarefied gas problems. These results show that a higher SEE yield produces a smaller sheath profile and that wall roughness produces a lower SEE yield. Modeling near-wall interactions is not a simple or perfected task. Due to the lack of a second dimension and a sputtering model, it is not possible with this study to show the positive effects wall roughness could have on Hall thruster performance, since roughness arises from the negative effect of sputtering.
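The PIC cycle summarized above (weight particles to the grid, solve for the field on the nodes, gather the field back to the particles, and push them) can be sketched compactly. The example below is a minimal periodic 1-D electrostatic PIC loop in normalized units, included only to illustrate the cycle; it omits the wall boundaries, secondary electron emission, and diagnostics that the thesis code implements, and all numerical parameters are arbitrary.

```python
import numpy as np

# Minimal 1-D electrostatic PIC sketch: electrons over a fixed neutralizing ion
# background, periodic boundaries, normalized units.
np.random.seed(1)
L, Ng, Np, dt, steps = 2 * np.pi, 64, 20000, 0.1, 200
dx = L / Ng
x = np.random.uniform(0, L, Np)            # particle positions
v = np.random.normal(0, 1, Np)             # Maxwellian-sampled velocities
q_p = -L / Np                              # macroparticle charge (unit electron density)

k = 2 * np.pi * np.fft.rfftfreq(Ng, d=dx)  # wavenumbers for the Poisson solve
for _ in range(steps):
    # 1) weight particle charge to the grid (cloud-in-cell)
    g = x / dx
    i0 = np.floor(g).astype(int) % Ng
    w = g - np.floor(g)
    rho = np.zeros(Ng)
    np.add.at(rho, i0, q_p * (1 - w) / dx)
    np.add.at(rho, (i0 + 1) % Ng, q_p * w / dx)
    rho += 1.0                             # fixed ion background

    # 2) solve Poisson's equation phi'' = -rho in Fourier space, then E = -dphi/dx
    rho_k = np.fft.rfft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.fft.irfft(-1j * k * phi_k, n=Ng)

    # 3) gather the field back to the particles and push them (leapfrog)
    E_p = E[i0] * (1 - w) + E[(i0 + 1) % Ng] * w
    v += -1.0 * E_p * dt                   # electron charge-to-mass ratio = -1
    x = (x + v * dt) % L
```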
Four Year-Olds Use Norm-Based Coding for Face Identity
ERIC Educational Resources Information Center
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-01-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Cai, Yong; Li, Xiwen; Li, Mei; Chen, Xiaojia; Ni, Jingyun; Wang, Yitao
2015-01-01
Chemical fingerprinting is currently a widely used tool that enables rapid and accurate quality evaluation of Traditional Chinese Medicine (TCM). However, chemical fingerprints are not amenable to information storage, recognition, and retrieval, which limits their use in Chinese medicine traceability. In this study, samples of three kinds of Chinese medicines were randomly selected and chemical fingerprints were then constructed by using high performance liquid chromatography. Based on the chemical data, the process of converting a TCM chemical fingerprint into a two-dimensional code is presented; preprocessing and filtering algorithms are also proposed to standardize the large amount of original raw data. In order to determine which type of two-dimensional (2D) code is suitable for storing chemical fingerprint data, currently popular types of 2D codes are analyzed and compared. Results show that QR Code is suitable for recording the TCM chemical fingerprint. The fingerprint information of TCM can be converted into a data format that can be stored as a 2D code for traceability and quality control. PMID:26089936
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
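Of the sampling techniques listed above, Latin hypercube sampling is the simplest to show in a few lines: each uncertain parameter's range is split into equal-probability strata and one sample is drawn per stratum, so even small ensembles cover the parameter space evenly. The sketch below is a basic implementation for illustration; the parameter names, bounds, and ensemble size are hypothetical, and it is not the Improved Distributed Sampling routine used in MADS.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Basic Latin hypercube sample: one stratum per sample in each dimension.
    bounds: list of (low, high) pairs, one per uncertain parameter."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # Independently permuted strata per dimension, jittered within each stratum.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

# Hypothetical use: 10 parameter sets for log10 hydraulic conductivity [m/s] and porosity.
samples = latin_hypercube(10, [(-6.0, -3.0), (0.05, 0.35)], rng=42)
print(samples)
```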
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and tools for theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
Fully-kinetic Ion Simulation of Global Electrostatic Turbulent Transport in C-2U
NASA Astrophysics Data System (ADS)
Fulton, Daniel; Lau, Calvin; Bao, Jian; Lin, Zhihong; Tajima, Toshiki; TAE Team
2017-10-01
Understanding the nature of particle and energy transport in field-reversed configuration (FRC) plasmas is a crucial step towards an FRC-based fusion reactor. The C-2U device at Tri Alpha Energy (TAE) achieved macroscopically stable plasmas and electron energy confinement time which scaled favorably with electron temperature. This success led to experimental and theoretical investigation of turbulence in C-2U, including gyrokinetic ion simulations with the Gyrokinetic Toroidal Code (GTC). A primary objective of TAE's new C-2W device is to explore transport scaling in an extended parameter regime. In concert with the C-2W experimental campaign, numerical efforts have also been extended in A New Code (ANC) to use fully-kinetic (FK) ions and a Vlasov-Poisson field solver. Global FK ion simulations are presented. Future code development is also discussed.
NASA Astrophysics Data System (ADS)
Chiavassa, S.; Aubineau-Lanièce, I.; Bitar, A.; Lisbona, A.; Barbet, J.; Franck, D.; Jourdain, J. R.; Bardiès, M.
2006-02-01
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis by referring to bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces the receiver complexity, and provides better performance as compared to the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up-down link transmission.
A Computer Program for Flow-Log Analysis of Single Holes (FLASH)
Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.
2011-01-01
A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Di; Mo, Kun; Ye, Bei
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL). Two major accomplishments in FY 15 are summarized in this report: (1) implementation of the FASTGRASS module in the BISON code; and (2) a Xe implantation experiment for large-grained UO2. Both the BISON and MARMOT codes have been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. To contribute to the development of the Moose-Bison-Marmot (MBM) code suite, we have implemented the FASTGRASS fission gas model as a module in the BISON code. Based on rate theory formulations, the coupled FASTGRASS module in BISON is capable of modeling LWR oxide fuel fission gas behavior and fission gas release. In addition, we conducted a Xe implantation experiment at the Argonne Tandem Linac Accelerator System (ATLAS) in order to produce the needed UO2 samples with the desired bubble morphology. With these samples, further experiments to study the fission gas diffusivity are planned to provide validation data for the Fission Gas Release Model in the MARMOT code.
Incorporating Code-Based Software in an Introductory Statistics Course
ERIC Educational Resources Information Center
Doehler, Kirsten; Taylor, Laura
2015-01-01
This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…
Coding update of the SMFM definition of low risk for cesarean delivery from ICD-9-CM to ICD-10-CM.
Armstrong, Joanne; McDermott, Patricia; Saade, George R; Srinivas, Sindhu K
2017-07-01
In 2015, the Society for Maternal-Fetal Medicine developed a low risk for cesarean delivery definition based on administrative claims-based diagnosis codes described by the International Classification of Diseases, Ninth Revision, Clinical Modification. The Society for Maternal-Fetal Medicine definition is a clinical enrichment of 2 available measures from the Joint Commission and the Agency for Healthcare Research and Quality measures. The Society for Maternal-Fetal Medicine measure excludes diagnosis codes that represent clinically relevant risk factors that are absolute or relative contraindications to vaginal birth while retaining diagnosis codes such as labor disorders that are discretionary risk factors for cesarean delivery. The introduction of the International Statistical Classification of Diseases, 10th Revision, Clinical Modification in October 2015 expanded the number of available diagnosis codes and enabled a greater depth and breadth of clinical description. These coding improvements further enhance the clinical validity of the Society for Maternal-Fetal Medicine definition and its potential utility in tracking progress toward the goal of safely lowering the US cesarean delivery rate. This report updates the Society for Maternal-Fetal Medicine definition of low risk for cesarean delivery using International Statistical Classification of Diseases, 10th Revision, Clinical Modification coding. Copyright © 2017. Published by Elsevier Inc.
Medical Surveillance Monthly Report (MSMR). Volume 22, Number 9, September 2015
2015-09-01
Contents include an assessment of ICD-9-based case definitions for influenza-like illness surveillance (Angelia A. Eick et al.), which notes that certain case definitions are appropriate when there is a need to maximize specificity. In the assessment, ICD-9-coded encounters were matched to the laboratory specimen; if such a match was not possible, ... Table 1 lists the ICD-9 codes for the original influenza-like illness case definition.
2004-01-01
Evidence Based Research, Inc., 1595 Spring Hill Road, Suite 250, Vienna, VA 22182-2216; Wheatleyg@je.jfcom.mil; Mr. J. Wilder, UNITED STATES, U.S. Army Training... (Sarin, 2000). Collaboration C2 Metrics: the following collaboration metrics have evolved out of work done by Evidence Based Research, Inc. for the... Acronym fragments: OOTW, Operations Other Than War; PESTLE, Political, Economic, Social, Technological...; Enemy, Troops, Terrain, Time, and Civil considerations.
Multidimensional Fuel Performance Code: BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
BISON is a finite element based nuclear fuel performance code applicable to a variety of fuel forms including light water reactor fuel rods, TRISO fuel particles, and metallic rod and plate fuel (Refs. [a, b, c]). It solves the fully-coupled equations of thermomechanics and species diffusion and includes important fuel physics such as fission gas release and material property degradation with burnup. BISON is based on the MOOSE framework (Ref. [d]) and can therefore efficiently solve problems on 1-, 2- or 3-D meshes using standard workstations or large high performance computers. BISON is also coupled to a MOOSE-based mesoscale phase field material property simulation capability (Refs. [e, f]). As described here, BISON includes the code library named FOX, which was developed concurrent with BISON. FOX contains material and behavioral models that are specific to oxide fuels.
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in wireless networks. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon/convolutional code. Simulation results show a significant improvement in performance compared to an existing scheme based on compound codes.
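For readers unfamiliar with the baseline the scheme above replaces, XOR-based network coding at a common relay can be illustrated in a few lines; the packets and node names below are invented, and this is not the authors' Reed-Solomon/convolutional construction.

```python
# Minimal illustration of XOR-based network coding at a common relay
# (the baseline the proposed Reed-Solomon/convolutional scheme improves on).
def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

user_a = [1, 0, 1, 1, 0, 0, 1, 0]   # packet from information node A
user_b = [0, 1, 1, 0, 1, 0, 0, 1]   # packet from information node B

relay_packet = xor_bits(user_a, user_b)   # network-coded packet sent by the relay

# If the base station receives user A's packet plus the relay packet,
# user B's packet is recovered even if its direct transmission was lost.
recovered_b = xor_bits(user_a, relay_packet)
assert recovered_b == user_b
```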
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational fluid dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.
2014-01-01
Study Design: Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective: The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data: Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods: Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to grouping patients by surgical indication were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results: Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for the disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion: Standardized approaches for using claims data to accurately group patients by surgical indication are of widespread interest. We found that a hierarchical coding approach correctly classified over 90% of spine patients into their respective SPORT cohorts. Therefore, claims data appear to be a reasonably valid means of classifying patients by surgical indication. PMID:24525995
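A hierarchical claims-grouping rule of the general kind evaluated above can be sketched as follows; the diagnosis codes, cohort labels, and priority order are illustrative placeholders and not the SPORT algorithm itself.

```python
# Illustrative hierarchy: check the indication with the strongest support for
# fusion surgery first, then fall through to less specific indications.
HIERARCHY = [
    ("degenerative_spondylolisthesis", {"M43.16", "M43.17"}),   # hypothetical ICD-10 codes
    ("spinal_stenosis",                {"M48.06", "M48.061"}),
    ("disc_herniation",                {"M51.26", "M51.27"}),
]

def classify_claim(diagnosis_codes):
    """Return the first cohort in the hierarchy matched by any diagnosis code."""
    codes = set(diagnosis_codes)
    for cohort, cohort_codes in HIERARCHY:
        if codes & cohort_codes:
            return cohort
    return "unclassified"

print(classify_claim(["M48.06", "M51.26"]))   # -> 'spinal_stenosis' (stenosis outranks herniation here)
```

The point of the hierarchy is that a claim carrying codes for several indications is assigned to the most specific (highest-priority) one rather than to all of them.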
Efficient Modeling of Laser-Plasma Accelerators with INF&RNO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, C.; Schroeder, C. B.; Esarey, E.
2010-06-01
The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests is presented together with a discussion of its performance.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... Water Infrastructure Project at Marine Corps Base Camp Pendleton, California AGENCY: Department of the... Environmental Policy Act (NEPA) of 1969, 42 United States Code (U.S.C.) Section 4332(2)(c), the regulations of the Council on Environmental Quality for Implementing the Procedural Provisions of NEPA (40 Code of...
Geographic Information Systems: A Primer
1990-10-01
Approved for public release; distribution unlimited. ...systems utilizing sophisticated integrated databases (usually vector-based) avoid the indirect value coding scheme by recognizing names or direct magnitudes... the intricate involvement required by the operator in order to establish a functional coding scheme. A simple raster system, in which cell values indicate...
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
Cadieux, Geneviève; Tamblyn, Robyn; Buckeridge, David L; Dendukuri, Nandini
2017-08-01
Valid measurement of outcomes such as disease prevalence using health care utilization data is fundamental to the implementation of a "learning health system." Definitions of such outcomes can be complex, based on multiple diagnostic codes. The literature on validating such data demonstrates a lack of awareness of the need for a stratified sampling design and corresponding statistical methods. We propose a method for validating the measurement of diagnostic groups that have: (1) different prevalences of diagnostic codes within the group; and (2) low prevalence. We describe an estimation method whereby: (1) low-prevalence diagnostic codes are oversampled, and the positive predictive value (PPV) of the diagnostic group is estimated as a weighted average of the PPV of each diagnostic code; and (2) claims that fall within a low-prevalence diagnostic group are oversampled relative to claims that are not, and bias-adjusted estimators of sensitivity and specificity are generated. We illustrate our proposed method using an example from population health surveillance in which diagnostic groups are applied to physician claims to identify cases of acute respiratory illness. Failure to account for the prevalence of each diagnostic code within a diagnostic group leads to the underestimation of the PPV, because low-prevalence diagnostic codes are more likely to be false positives. Failure to adjust for oversampling of claims that fall within the low-prevalence diagnostic group relative to those that do not leads to the overestimation of sensitivity and underestimation of specificity.
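The two estimators described above can be written compactly. The sketch below uses made-up prevalences, validation counts, and sampling fractions purely to illustrate the prevalence-weighted PPV and the inverse-sampling-fraction adjustment; it is not the authors' software.

```python
def group_ppv(code_prevalence, code_ppv):
    """Prevalence-weighted PPV of a diagnostic group.
    code_prevalence: dict code -> share of the group's claims carrying that code.
    code_ppv: dict code -> PPV estimated from the (over)sampled claims for that code.
    """
    total = sum(code_prevalence.values())
    return sum(code_prevalence[c] / total * code_ppv[c] for c in code_prevalence)

def adjusted_sens_spec(tp, fp, fn, tn, f_in_group, f_out_group):
    """Sensitivity/specificity corrected for oversampling claims coded in the group.
    Sampled counts are re-weighted by the inverse of their sampling fractions."""
    tp_w, fp_w = tp / f_in_group, fp / f_in_group
    fn_w, tn_w = fn / f_out_group, tn / f_out_group
    return tp_w / (tp_w + fn_w), tn_w / (tn_w + fp_w)

# Hypothetical numbers: a rare code makes up 10% of the group's claims but has low PPV.
print(group_ppv({"J10": 0.9, "J11": 0.1}, {"J10": 0.95, "J11": 0.40}))
print(adjusted_sens_spec(tp=90, fp=10, fn=5, tn=195, f_in_group=0.5, f_out_group=0.05))
```

Ignoring the within-group prevalence weights or the unequal sampling fractions reproduces exactly the biases the abstract warns about: an inflated PPV, an inflated sensitivity, and a deflated specificity.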
National Underground Mines Inventory
1983-10-01
...system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a... Record-layout fragments: four characters designate the field office; positions 17-18, State Code (PIC 99), FIPS code for the state in which the mine is located; positions 19-21, County Code (PIC 999), FIPS code for...; a general product class based on SIC code; positions 28-29, Mine Type (PIC 99), Metal/Nonmetal mine type code, based on the subunit operations code and canvass code.
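Fixed-width layouts of this kind are straightforward to parse. The snippet below is a generic illustration using the partially reconstructed (and therefore hypothetical) field positions quoted above, not the actual inventory file format.

```python
# Illustrative fixed-width parser for a few of the fields quoted above.
# Positions are 1-based in the layout description, hence the -1 offsets.
FIELDS = {
    "state_fips":  (17, 18),   # State Code, PIC 99
    "county_fips": (19, 21),   # County Code, PIC 999
    "mine_type":   (28, 29),   # Metal/Nonmetal mine type code, PIC 99
}

def parse_record(line):
    return {name: line[start - 1:end] for name, (start, end) in FIELDS.items()}

sample = "X" * 16 + "42" + "003" + "ABCDEF" + "01" + "..."   # synthetic record
print(parse_record(sample))
```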
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.
1999-01-01
The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user-defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
Biases in GNSS-Data Processing
NASA Astrophysics Data System (ADS)
Schaer, S. C.; Dach, R.; Lutz, S.; Meindl, M.; Beutler, G.
2010-12-01
Within the Global Positioning System (GPS), different types of pseudo-range measurements (P-code, C/A-code) are traditionally available on the first frequency and are tracked by receivers with different technologies. For that reason, P1-C1 and P1-P2 Differential Code Biases (DCBs) need to be considered in GPS data processing with a mix of different receiver types. Since the Block IIR-M series of GPS satellites also provides C/A-code on the second frequency, P2-C2 DCBs need to be added to the list of biases to be maintained. Potential quarter-cycle biases between different phase observables (specifically L2P and L2C) are another issue. When combining GNSS (currently GPS and GLONASS), careful consideration of inter-system biases (ISBs) is indispensable, in particular when an adequate combination of individual GLONASS clock correction results from different sources (using, e.g., different software packages) is intended. With the GPS and GLONASS modernization programs and the upcoming GNSS, like the European Galileo and the Chinese Compass, an increasing number of types of biases is expected. The Center for Orbit Determination in Europe (CODE) has been monitoring these GPS- and GLONASS-related biases for a long time, based on RINEX files of the tracking network of the International GNSS Service (IGS) and in the frame of the data processing as one of the global analysis centers of the IGS. In the presentation we give an overview of the stability of the biases based on this monitoring. Biases derived from different sources are compared. Finally, we give an outlook on the potential handling of such biases given the large variety of signals and systems expected in the future.
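To make the role of these code biases concrete, the sketch below forms the standard geometry-free and ionosphere-free code combinations and aligns a C1 observation to P1 with a P1-C1 DCB. The GPS L1/L2 frequencies are the usual values; the numeric pseudoranges and the bias value are invented for illustration, and the sign convention shown (adding the satellite P1-C1 DCB to C1) is one common choice rather than a statement about any particular software.

```python
# GPS L1/L2 carrier frequencies (Hz) and the speed of light (m/s)
F1, F2 = 1575.42e6, 1227.60e6
C = 299_792_458.0

def ionosphere_free(p1, p2):
    """Ionosphere-free code combination (used in clock/orbit analysis)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

def geometry_free(p1, p2):
    """Geometry-free combination (ionospheric delay plus inter-frequency biases)."""
    return p1 - p2

def c1_to_p1(c1, dcb_p1c1_ns):
    """Align a C1 observation to P1 using a satellite P1-C1 DCB given in nanoseconds."""
    return c1 + dcb_p1c1_ns * 1e-9 * C

# Invented sample values (metres and nanoseconds):
p1, p2, c1 = 22_000_000.0, 22_000_003.5, 21_999_999.4
print(ionosphere_free(p1, p2), geometry_free(p1, p2), c1_to_p1(c1, dcb_p1c1_ns=2.0))
```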
Recent developments in multidimensional transport methods for the APOLLO 2 lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmijarevic, I.; Sanchez, R.
1995-12-31
A usual method of preparing homogenized cross sections for reactor coarse-mesh calculations is based on a two-dimensional multigroup transport treatment of an assembly together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO2 interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension of the general-geometry collision probability module TDT to three-dimensional axially symmetric geometries and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of recently developed multidimensional methods that are currently being tested.
Synchrotron characterization of nanograined UO 2 grain growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Kun; Miao, Yinbin; Yun, Di
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.
Supplying materials needed for grain growth characterizations of nano-grained UO 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Kun; Miao, Yinbin; Yun, Di
2015-09-30
This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL) and aims at providing experimental data for the validation of the mesoscale simulation code MARMOT. MARMOT is a mesoscale multiphysics code that predicts the coevolution of microstructure and properties within reactor fuel during its lifetime in the reactor. It is an important component of the Moose-Bison-Marmot (MBM) code suite that has been developed by Idaho National Laboratory (INL) to enable next generation fuel performance modeling capability as part of the NEAMS Program FPL. In order to ensure the accuracy of the microstructure-based materials models being developed within the MARMOT code, extensive validation efforts must be carried out. In this report, we summarize our preliminary synchrotron radiation experiments at APS to determine the grain size of nanograin UO2. The methodology and experimental setup developed in this experiment can directly apply to the proposed in-situ grain growth measurements. The investigation of the grain growth kinetics was conducted based on isothermal annealing and grain growth characterization as functions of duration and temperature. The kinetic parameters such as activation energy for grain growth for UO2 with different stoichiometry are obtained and compared with molecular dynamics (MD) simulations.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
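The construction described can be reproduced directly. This short sketch builds the 64 codon labels from the stated Gray-code assignment, forms the Hamming-distance matrix, and checks the symmetry and constant row sums that make a suitably scaled version of it doubly stochastic; the particular scaling shown is only one possibility.

```python
from itertools import product

GRAY = {"C": "00", "U": "10", "G": "11", "A": "01"}   # assignment used in the paper

codons = ["".join(c) for c in product("CUGA", repeat=3)]
labels = {c: "".join(GRAY[b] for b in c) for c in codons}   # 6-bit label per codon

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

H = [[hamming(labels[a], labels[b]) for b in codons] for a in codons]

# Symmetric, and every row sums to the same value, so H divided by that
# row sum is a doubly stochastic matrix.
assert all(H[i][j] == H[j][i] for i in range(64) for j in range(64))
row_sums = {sum(row) for row in H}
print(row_sums)   # a single value (192 for 6-bit labels)
```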
MacRae, Jayden; Love, Tom; Baker, Michael G; Dowell, Anthony; Carnachan, Matthew; Stubbe, Maria; McBain, Lynn
2015-10-06
We designed and validated a rule-based expert system to identify influenza like illness (ILI) from routinely recorded general practice clinical narrative to aid a larger retrospective research study into the impact of the 2009 influenza pandemic in New Zealand. Rules were assessed using pattern matching heuristics on routine clinical narrative. The system was trained using data from 623 clinical encounters and validated using a clinical expert as a gold standard against a mutually exclusive set of 901 records. We calculated a 98.2 % specificity and 90.2 % sensitivity across an ILI incidence of 12.4 % measured against clinical expert classification. Peak problem list identification of ILI by clinical coding in any month was 9.2 % of all detected ILI presentations. Our system addressed an unusual problem domain for clinical narrative classification; using notational, unstructured, clinician entered information in a community care setting. It performed well compared with other approaches and domains. It has potential applications in real-time surveillance of disease, and in assisted problem list coding for clinicians. Our system identified ILI presentation with sufficient accuracy for use at a population level in the wider research study. The peak coding of 9.2 % illustrated the need for automated coding of unstructured narrative in our study.
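A rule-based classifier of this general flavour can be sketched with simple pattern matching. The keyword patterns and the negation handling below are purely illustrative, far cruder than the validated system described, and are not taken from it.

```python
import re

# Illustrative inclusion and negation patterns for influenza-like illness (ILI).
ILI_PATTERNS = [r"\bflu\b", r"influenza", r"fever .{0,30}(cough|sore throat)",
                r"(cough|sore throat) .{0,30}fever"]
NEGATIONS = [r"no (fever|cough)", r"denies (fever|cough|flu)", r"flu (shot|vaccine)"]

def looks_like_ili(note):
    text = note.lower()
    if any(re.search(p, text) for p in NEGATIONS):
        return False
    return any(re.search(p, text) for p in ILI_PATTERNS)

print(looks_like_ili("3 days of fever and dry cough, myalgia"))   # True
print(looks_like_ili("Here for flu vaccine, otherwise well"))     # False
```

Real systems of this kind layer many more rules (negation scope, temporality, spelling variants) on top of this basic match-then-veto structure.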
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible influencing factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
[QR-Code based patient tracking: a cost-effective option to improve patient safety].
Fischer, M; Rybitskiy, D; Strauß, G; Dietz, A; Dressler, C R
2013-03-01
Hospitals are implementing risk management systems to avoid patient or surgery mix-ups. The trend is to use preoperative checklists. This work deals specifically with a type of patient identification that is realized by storing patient data on a patient-fixed medium. In 127 ENT surgeries, the data relevant for patient identification were encoded in a 2D QR code. The code, carried as a separate document with the patient chart or as a patient wristband, was decoded in the OR and the patient data were displayed visibly for all persons present. The decoding time, the agreement of the patient data, and the duration of patient identification were compared with traditional patient identification by inspection of the patient chart. A total of 125 QR codes were read. The time for decoding the QR code was 5.6 s, the time for the on-screen check for patient identification was 7.9 s, and in a comparison group of 75 operations traditional patient identification took 27.3 s. Overall, there were 6 relevant information errors in the two parts of the experiment, corresponding to an error rate of 0.6% across the 8 relevant data classes per encoded QR code. This work demonstrates a cost-effective way to technically support patient identification based on electronic patient data and shows that use in clinical routine is possible. The disadvantage is potential misinformation from incorrect or missing information in the HIS, or from changes to the data after the code was created. QR-code-based patient tracking is seen as a useful complement to the already widely used identification wristband. © Georg Thieme Verlag KG Stuttgart · New York.
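Generating such a patient-data code is simple with off-the-shelf tooling. The sketch below uses hypothetical patient fields and the common Python qrcode package; the choice of package and data layout is an assumption for illustration, not what the authors used.

```python
import json
import qrcode   # third-party package: pip install qrcode[pil]

# Hypothetical minimal data set for patient identification (no real data).
patient = {"id": "12345678", "name": "Doe, Jane", "dob": "1970-01-01",
           "procedure": "Tympanoplasty left", "surgeon": "Dr. X"}

img = qrcode.make(json.dumps(patient, ensure_ascii=False))
img.save("patient_wristband_qr.png")   # print on a chart insert or wristband
```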
Location Based Service in Indoor Environment Using Quick Response Code Technology
NASA Astrophysics Data System (ADS)
Hakimpour, F.; Zare Zardiny, A.
2014-10-01
Today, with the extensive use of smart mobile phones, larger screens, and the enrichment of mobile phones with Global Positioning System (GPS) technology, location-based services are considered by public users more than ever. Based on the position of users, they can receive the desired information from different LBS providers. Any LBS system generally includes five main parts: mobile devices, communication network, positioning system, service provider and data provider. By now many advances have been gained in relation to any of these parts; however, user positioning, especially in indoor environments, remains an essential and critical issue in LBS. It is well known that GPS performs too poorly inside buildings to provide usable indoor positioning. On the other hand, current indoor positioning technologies such as using RFID or WiFi networks need different hardware and software infrastructures. In this paper, we propose a new method to overcome these challenges. This method uses the Quick Response (QR) Code technology. A QR code is a 2D barcode with a matrix structure which consists of black modules arranged in a square grid. Scanning and retrieving data from a QR code is possible with different camera-enabled mobile phones simply by installing barcode reader software. This paper reviews the capabilities of QR code technology and then discusses the advantages of using QR codes in an Indoor LBS (ILBS) system in comparison to other technologies. Finally, some prospects of using QR codes are illustrated through implementation of a scenario. The most important advantages of using this new technology in ILBS are easy implementation, low cost, quick data retrieval, the possibility of printing the QR code on different products, and no need for complicated hardware and software infrastructures.
NASA Astrophysics Data System (ADS)
Chatterjee, S.; Bakshi, A. K.; Tripathy, S. P.
2010-09-01
A response matrix for a CaSO4:Dy based neutron dosimeter was generated using the Monte Carlo code FLUKA over the energy range from thermal to 20 MeV for a set of eight Bonner spheres of 3-12″ diameter, including the bare configuration. The response of the neutron dosimeter was measured for the above set of spheres with a 241Am-Be neutron source covered with 2 mm of lead. An analytical expression for the response function was devised as a function of sphere mass. Using the Frascati Unfolding Iteration Tool (FRUIT) unfolding code, the neutron spectrum of 241Am-Be was unfolded and compared with the standard IAEA spectrum for the same source.
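Generic iterative unfolding of the kind performed by dedicated codes can be illustrated with a simple multiplicative (MLEM-style) update. The response matrix and readings below are random placeholders, and the actual FRUIT algorithm uses its own parametrized approach; this sketch only shows the structure of a response-matrix unfold.

```python
import numpy as np

def mlem_unfold(R, m, n_iter=200):
    """Multiplicative iterative unfolding: R (n_spheres x n_energy_bins), m readings."""
    phi = np.ones(R.shape[1])                 # flat starting spectrum
    sens = R.sum(axis=0)                      # per-bin sensitivity
    for _ in range(n_iter):
        folded = R @ phi
        phi *= (R.T @ (m / folded)) / sens    # MLEM-style update
    return phi

rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(8, 20))      # placeholder 8-sphere response matrix
true_phi = rng.uniform(0.0, 1.0, size=20)
m = R @ true_phi                              # noiseless synthetic readings
print(mlem_unfold(R, m)[:5])
```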
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
Greiver, Michelle; Wintemute, Kimberly; Aliarzadeh, Babak; Martin, Ken; Khan, Shahriar; Jackson, Dave; Leggett, Jannet; Lambert-Lanning, Anita; Siu, Maggie
2016-10-12
Consistent and standardized coding for chronic conditions is associated with better care; however, coding may currently be limited in electronic medical records (EMRs) used in Canadian primary care. Objectives: To implement data management activities in a community-based primary care organisation and to evaluate the effects on coding for chronic conditions. Fifty-nine family physicians in Toronto, Ontario, belonging to a single primary care organisation, participated in the study. The organisation implemented a central analytical data repository containing their EMR data extracted, cleaned, standardized and returned by the Canadian Primary Care Sentinel Surveillance Network (CPCSSN), a large validated primary care EMR-based database. They used reporting software provided by CPCSSN to identify selected chronic conditions and standardized codes were then added back to the EMR. We studied four chronic conditions (diabetes, hypertension, chronic obstructive pulmonary disease and dementia). We compared changes in coding over six months for physicians in the organisation with changes for 315 primary care physicians participating in CPCSSN across Canada. Chronic disease coding within the organisation increased significantly more than in other primary care sites. The adjusted difference in the increase of coding was 7.7% (95% confidence interval 7.1%-8.2%, p < 0.01). The use of standard codes, consisting of the most common diagnostic codes for each condition in the CPCSSN database, increased by 8.9% more (95% CI 8.3%-9.5%, p < 0.01). Data management activities were associated with an increase in standardized coding for chronic conditions. Exploring requirements to scale and spread this approach in Canadian primary care organisations may be worthwhile.
Aeroelastic Tailoring Study of N+2 Low-Boom Supersonic Commercial Transport Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2015-01-01
The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft is optimized in this study through the use of a multidisciplinary design optimization tool developed at the NASA Armstrong Flight Research Center. A total of 111 design variables are used in the first optimization run. Total structural weight is the objective function in this optimization run. Design requirements for strength, buckling, and flutter are selected as constraint functions during the first optimization run. The MSC Nastran code is used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses are based on the ZAERO code, and landing and ground control loads are computed using an in-house code.
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Although continuous, dense surface deformation data can now be obtained on land and partly on the sea floor, these data are not fully utilized for monitoring and forecasting crustal activity, such as the spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structures. An unstructured finite element (FE) non-linear seismic wave simulation code has been developed, which achieved physics-based urban earthquake simulation with 1.08 T degrees of freedom (DOF) and 6.6 K time steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry on a 1 km mesh. This code has been further improved for crustal deformation and achieved 2.05 T DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are developing methods for forecasting the slip velocity variation on the plate interface. Although the prototype is for an elastic half-space model, we are applying it to 3D heterogeneous structures with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being developed to include other functions such as examination of model errors.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r(sub 1), r(sub 2),...,r(sub n)). This algorithm basically uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images, taking object motion into consideration. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Classification of breast tissue in mammograms using efficient coding.
Costa, Daniel D; Campos, Lúcio F; Barros, Allan K
2011-06-24
Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve the diagnostic accuracy achieved by radiologists. Some methods of lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5,090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
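The classification stage described (a learned linear basis followed by linear discriminant analysis) can be sketched with standard tooling. The snippet below is a generic PCA+LDA pipeline run on synthetic data, not the authors' ICA/Gabor implementation or their mammogram dataset; the array sizes and component count are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32 * 32))   # placeholder ROI patches, flattened
y = rng.integers(0, 2, size=500)      # placeholder mass / non-mass labels

# Project onto a learned linear basis, then classify with LDA.
clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```

Replacing the PCA step with an ICA or Gabor-filter feature extractor keeps the same pipeline structure, which is the sense in which the paper compares the three bases.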
Verification and Validation of the BISON Fuel Performance Code for PCMI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Novascone, Stephen Rhead; Gardner, Russell James
2016-06-01
BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. A brief overview of BISON’s computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described. Validation for application to light water reactor (LWR) PCMI problems is assessed by comparing predicted and measured rod diameter following base irradiation and power ramps. Results indicate a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. Initial rod diameter comparisons have led to consideration of additional separate effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
GRAYSKY-A new gamma-ray skyshine code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witts, D.J.; Twardowski, T.; Watmough, M.H.
1993-01-01
This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
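For a single point source behind a slab shield, the point-kernel-with-buildup approach that codes of this kind rely on reduces to a one-line dose estimate. The sketch below uses made-up attenuation, buildup, and conversion numbers purely to show the structure of the calculation; it is not taken from GRAYSKY and ignores the skyshine response function itself.

```python
import math

def point_kernel_dose_rate(S, mu, t, r, buildup, flux_to_dose):
    """Point-kernel estimate for a point source behind a slab shield.
    S            : source strength (photons/s)
    mu, t        : shield attenuation coefficient (1/cm) and thickness (cm)
    r            : source-to-detector distance (cm)
    buildup      : dose buildup factor B(mu*t) for the shield material
    flux_to_dose : flux-to-dose-rate conversion factor
    """
    flux = S * buildup * math.exp(-mu * t) / (4.0 * math.pi * r**2)
    return flux * flux_to_dose

# Made-up numbers, for structure only (not validated data):
print(point_kernel_dose_rate(S=1e10, mu=0.45, t=10.0, r=300.0,
                             buildup=3.2, flux_to_dose=1.8e-6))
```

The buildup factor is what folds the scattered contribution back into an otherwise uncollided-flux formula, which is the approximation the abstract describes.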
NASA Astrophysics Data System (ADS)
Villiger, Arturo; Schaer, Stefan; Dach, Rolf; Prange, Lars; Jäggi, Adrian
2017-04-01
It is common to handle code biases in the Global Navigation Satellite System (GNSS) data analysis as conventional differential code biases (DCBs): P1-C1, P1-P2, and P2-C2. Due to the increasing number of signals and systems in conjunction with various tracking modes for the different signals (as defined in RINEX3 format), the number of DCBs would increase drastically and the bookkeeping becomes almost unbearable. The Center for Orbit Determination in Europe (CODE) has thus changed its processing scheme to observable-specific signal biases (OSB). This means that for each observation involved all related satellite and receiver biases are considered. The OSB contributions from various ionosphere analyses (geometry-free linear combination) using different observables and frequencies and from clock analyses (ionosphere-free linear combination) are then combined on normal equation level. By this, one consistent set of OSB values per satellite and receiver can be obtained that contains all information needed for GNSS-related processing. This advanced procedure of code bias handling is now also applied to the IGS (International GNSS Service) MGEX (Multi-GNSS Experiment) procedure at CODE. Results for the biases from the legacy IGS solution as well as the CODE MGEX processing (considering GPS, GLONASS, Galileo, BeiDou, and QZSS) are presented. The consistency with the traditional method is confirmed and the new results are discussed regarding the long-term stability. When processing code data, it is essential to know the true observable types in order to correct for the associated biases. CODE has been verifying the receiver tracking technologies for GPS based on estimated DCB multipliers (for the RINEX 2 case). With the change to OSB, the original verification approach was extended to search for the best fitting observable types based on known OSB values. In essence, a multiplier parameter is estimated for each involved GNSS observable type. This implies that we could recover, for receivers tracking a combination of signals, even the factors of these combinations. The verification of the observable types is crucial to identify the correct observable types of RINEX 2 data (which does not contain the signal modulation in comparison to RINEX 3). The correct information of the used observable types is essential for precise point positioning (PPP) applications and GNSS ambiguity resolution. Multi-GNSS OSBs and verified receiver tracking modes are essential to get best possible multi-GNSS solutions for geodynamic purposes and other applications.
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adamek, Julian; Daverio, David; Durrer, Ruth
We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation, which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body codes Gadget-2 and RAMSES. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and to go beyond the usually adopted quasi-static approximation. Our code is publicly available.
Newtonian CAFE: a new ideal MHD code to study the solar atmosphere
NASA Astrophysics Data System (ADS)
González, J. J.; Guzmán, F.
2015-12-01
In this work we present a new independent code designed to solve the equations of classical ideal magnetohydrodynamics (MHD) in three dimensions, subject to a constant gravitational field. The purpose of the code centers on the analysis of solar phenomena within the photosphere-corona region. In particular, the code is capable of simulating the propagation of impulsively generated linear and non-linear MHD waves in the non-isothermal solar atmosphere. We present 1D and 2D standard tests to demonstrate the quality of the numerical results obtained with our code. As 3D tests we present the propagation of MHD-gravity waves and vortices in the solar atmosphere. The code is based on high-resolution shock-capturing methods and uses the HLLE flux formula combined with Minmod, MC and WENO5 reconstructors. The divergence-free magnetic field constraint is controlled using the Flux Constrained Transport method.
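One of the ingredients named above, the minmod slope limiter used in the reconstruction step, is compact enough to show directly. This is a generic textbook version, not code extracted from the CAFE source, and the cell data in the example are arbitrary.

```python
def minmod(a, b):
    """Minmod slope limiter: returns the smaller-magnitude slope when signs agree,
    and zero at an extremum, which keeps the reconstruction non-oscillatory."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def reconstruct(u, i):
    """Limited left/right states at the interface between cells i and i+1."""
    slope_i  = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    slope_ip = minmod(u[i + 1] - u[i], u[i + 2] - u[i + 1])
    u_left  = u[i] + 0.5 * slope_i
    u_right = u[i + 1] - 0.5 * slope_ip
    return u_left, u_right

u = [0.0, 0.0, 0.5, 1.0, 1.0]
print(reconstruct(u, 2))   # interface states near the jump stay between the cell values
```

The reconstructed interface states are what a flux formula such as HLLE then combines into a numerical flux.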
Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes
NASA Technical Reports Server (NTRS)
Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.
1997-01-01
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters, namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
Good trellises for IC implementation of viterbi decoders for linear block codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.
1996-01-01
This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
1988-04-01
...limitations -- taxes; V. How to fill out required forms -- AF Form 1401, Petty Cash/Refund Voucher -- AF Form 2539, NAP Disbursement Request; VI. General... Activity codes and titles (fragments): C2, Lanes Food and Beverage; C3, Lanes Pro Shop; J3, Outside School Program; J4, Comb Fed...; Food and Beverage; 05, Course Maintenance; L, Education; L1, Education Services Program; E, Outdoor Recreation; E1, Marina (On-Base) Facility; M, Chaplain.
Research on coding and decoding method for digital levels.
Tu, Li-fen; Zhong, Si-dong
2011-01-20
A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, the trade-off by which the field of view and image resolution restrict each other in digital level measurement is overcome, and geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.
Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface.
Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang
2017-02-08
Recently, a concept of digital metamaterials has been proposed to manipulate field distribution through proper spatial mixtures of digital metamaterial bits. Here, we present a design of 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to four basic digital elements "00", "01", "10", and "11", respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface has been demonstrated to possess capability of achieving beam deflection, multi-beam and beam diffusion, and the dynamical switching of these different scattering patterns is completed by a programmable electric source.
Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface
Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang
2017-01-01
Recently, a concept of digital metamaterials has been proposed to manipulate field distribution through proper spatial mixtures of digital metamaterial bits. Here, we present a design of 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to four basic digital elements “00”, “01”, “10”, and “11”, respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface has been demonstrated to possess capability of achieving beam deflection, multi-beam and beam diffusion, and the dynamical switching of these different scattering patterns is completed by a programmable electric source. PMID:28176870
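The link between a coding sequence and the reflected beam can be illustrated with a simple array-factor calculation. The element spacing, frequency, and example sequence below are arbitrary choices rather than the parameters of the fabricated metasurface, and a 1-D column pattern is assumed for brevity.

```python
import numpy as np

# 2-bit digital elements and their nominal phase responses.
PHASES = {"00": 0.0, "01": np.pi / 2, "10": np.pi, "11": 3 * np.pi / 2}

def array_factor(coding_sequence, freq_hz=10e9, spacing_m=0.015, angles_deg=None):
    """1-D array factor of a 2-bit coding metasurface column pattern."""
    if angles_deg is None:
        angles_deg = np.linspace(-90, 90, 721)
    k = 2 * np.pi * freq_hz / 3e8
    theta = np.radians(angles_deg)
    n = np.arange(len(coding_sequence))
    phi = np.array([PHASES[c] for c in coding_sequence])
    af = np.abs(np.exp(1j * (k * spacing_m * np.outer(np.sin(theta), n) + phi)).sum(axis=1))
    return angles_deg, af

# A gradient sequence "00 01 10 11 00 01 10 11" steers the reflected beam off broadside.
angles, af = array_factor(["00", "01", "10", "11"] * 2)
print(angles[int(np.argmax(af))])   # peak direction in degrees
```

Changing the coding sequence (and hence the phase gradient) changes the peak direction or splits the beam, which is the reprogrammable behaviour the abstracts describe.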
NASA Astrophysics Data System (ADS)
Van, Vinh; Bruckhuisen, Jonas; Stahl, Wolfgang; Ilyushin, Vadim; Nguyen, Ha Vinh Lam
2018-01-01
The microwave spectrum of 2,5-dimethylfuran was recorded using two pulsed molecular jet Fourier transform microwave spectrometers which cover the frequency range from 2 to 40 GHz. The internal rotations of two equivalent methyl tops with a barrier height of approximately 439.15 cm-1 introduce torsional splittings of all rotational transitions in the spectrum. For the spectral analysis, two different computer programs were applied and compared, the PAM-C2v-2tops code based on the principal axis method which treats several torsional states simultaneously, and the XIAM code based on the combined axis method, yielding accurate molecular parameters. The experimental work was supplemented by quantum chemical calculations. Two-dimensional potential energy surfaces depending on the torsional angles of both methyl groups were calculated and parametrized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsdell, J.V. Jr.; Simonen, C.A.; Burk, K.W.
1994-02-01
The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses that individuals may have received from operations at the Hanford Site since 1944. This report deals specifically with the atmospheric transport model, Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET). RATCHET is a major rework of the MESOILT2 model used in the first phase of the HEDR Project; only the bookkeeping framework escaped major changes. Changes to the code include (1) significant changes in the representation of atmospheric processes and (2) incorporation of Monte Carlo methods for representing uncertainty in input data, model parameters, and coefficients. To a large extent, the revisions to the model are based on recommendations of a peer working group that met in March 1991. Technical bases for other portions of the atmospheric transport model are addressed in two other documents. This report has three major sections: a description of the model, a user's guide, and a programmer's guide. These sections discuss RATCHET from three different perspectives. The first provides a technical description of the code with emphasis on details such as the representation of the model domain, the data required by the model, and the equations used to make the model calculations. The technical description is followed by a user's guide to the model with emphasis on running the code. The user's guide contains information about the model input and output. The third section is a programmer's guide to the code. It discusses the hardware and software required to run the code. The programmer's guide also discusses program structure and each of the program elements.
Babor, Thomas F; Xuan, Ziming; Damon, Donna
2013-10-01
This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations in the U.S. Beer Institute Code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N = 286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age <21); experts represented various health-related professions, including public health, human development, alcohol research, and mental health. Alcohol ads were rated on 2 occasions separated by 1 month. After completing Time 1 ratings, participants were randomized to receive feedback from 1 group or the other. Findings indicate that (i) ratings at Time 2 had generally reduced variance, suggesting greater consensus after feedback, (ii) feedback from the expert group was more influential than that of the community group in developing group consensus, (iii) the expert group found significantly fewer violations than the community group, (iv) experts representing different professional backgrounds did not differ among themselves in the number of violations identified, and (v) a rating panel composed of at least 15 raters is sufficient to obtain reliable estimates of code violations. The Delphi technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. Copyright © 2013 by the Research Society on Alcoholism.
Street, J T; Thorogood, N P; Cheung, A; Noonan, V K; Chen, J; Fisher, C G; Dvorak, M F
2013-06-01
Observational cohort comparison. To compare the previously validated Spine Adverse Events Severity system (SAVES) with International Classification of Diseases, Tenth Revision (ICD-10) codes for identifying adverse events (AEs) in patients with traumatic spinal cord injury (TSCI). Quaternary Care Spine Program. Patients discharged between 2006 and 2010 were identified from our prospective registry. Two consecutive cohorts were created based on the system used to record acute care AEs; one used ICD-10 coding by hospital coders and the other used SAVES data prospectively collected by a multidisciplinary clinical team. The ICD-10 codes were appropriately mapped to the SAVES. There were 212 patients in the ICD-10 cohort and 173 patients in the SAVES cohort. Analyses were adjusted to account for the different sample sizes, and the two cohorts were comparable based on age, gender and motor score. The SAVES system identified twice as many AEs per person as ICD-10 coding. Fifteen unique AEs were more reliably identified using SAVES, including neuropathic pain (32× more; P<0.001), urinary tract infections (1.4×; P<0.05), pressure sores (2.9×; P<0.001) and intra-operative AEs (2.3×; P<0.05). Eight of these 15 AEs more frequently identified by SAVES significantly impacted length of stay (P<0.05). Risk factors such as patient age and severity of paralysis were more reliably correlated to AEs collected through SAVES than ICD-10. Implementation of the SAVES system for patients with TSCI captured more individuals experiencing AEs and more AEs per person compared with ICD-10 codes. This study demonstrates the utility of prospectively collecting AE data using validated tools.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
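A toy sketch of the random-sampling idea behind XSUSA-style propagation: multigroup cross sections are drawn from a normal distribution consistent with a covariance matrix and pushed through a response function standing in for the lattice calculation. The two-group data and the response function are hypothetical, not values from the SCALE 6.1 covariance file or HELIOS2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-group cross sections and relative covariance matrix;
# real applications use multigroup covariance libraries such as SCALE 6.1.
xs_nominal = np.array([1.2e-2, 9.5e-2])            # example cross sections (barns)
rel_cov = np.array([[4.0e-4, 1.0e-4],
                    [1.0e-4, 9.0e-4]])              # relative covariances

def response(xs):
    # Stand-in for a lattice calculation (e.g. a k-inf or reaction-rate solver);
    # here just a smooth toy functional of the sampled cross sections.
    return 1.0 / (1.0 + 10.0 * xs.sum())

abs_cov = rel_cov * np.outer(xs_nominal, xs_nominal)
samples = rng.multivariate_normal(xs_nominal, abs_cov, size=500)
k = np.array([response(s) for s in samples])
print(f"mean = {k.mean():.5f}, relative std. dev. = {k.std() / k.mean():.2%}")
```

The output uncertainty in the response is then read directly from the spread of the sampled results, which is the same statistic reported for multiplication factor, reaction rates, and isotopics in the benchmark calculations.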
High-Content Optical Codes for Protecting Rapid Diagnostic Tests from Counterfeiting.
Gökçe, Onur; Mercandetti, Cristina; Delamarche, Emmanuel
2018-06-19
Warnings and reports on counterfeit diagnostic devices are released several times a year by regulators and public health agencies. Unfortunately, mishandling, altering, and counterfeiting point-of-care diagnostics (POCDs) and rapid diagnostic tests (RDTs) are lucrative, relatively simple, and can lead to devastating consequences. Here, we demonstrate how to implement optical security codes in silicon- and nitrocellulose-based flow paths for device authentication using a smartphone. The codes are created by inkjet spotting inks directly on nitrocellulose or on micropillars. Codes containing up to 32 elements per mm² and 8 colors can encode as many as 10⁴⁵ combinations. Codes on silicon micropillars can be erased by setting a continuous flow path across the entire array of code elements or, for nitrocellulose, by simply wicking a liquid across the code. Static or labile code elements can further be formed on nitrocellulose to create a hidden code using poly(ethylene glycol) (PEG) or glycerol additives to the inks. More advanced codes having a specific deletion sequence can also be created in silicon microfluidic devices using an array of passive routing nodes, which activate in a particular, programmable sequence. Such codes are simple to fabricate, easy to view, and efficient in coding information; they can ideally be used in combination with information on a package to protect diagnostic devices from counterfeiting.
NASA Astrophysics Data System (ADS)
Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.
2018-05-01
A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with a degrading probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design as well as ease of modifying the number of users are equally exceptional qualities of the code, in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
75 FR 78229 - Record of Decision for the U.S. Marine Corps East Coast Basing of the F-35B Aircraft
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... States Code (U.S.C.) Section 4332(2)(c), the regulations of the Council on Environmental Quality (CEQ) for Implementing the Procedural Provisions of NEPA (40 Code of Federal Regulations [CFR] parts 1500... action, the Marine Corps will: (1) Construct and/or renovate airfield facilities and infrastructure...
75 FR 78229 - Record of Decision for the U.S. Marine Corps West Coast Basing of the F-35B Aircraft
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... States Code (U.S.C.) Section 4332(2)(c), the regulations of the Council on Environmental Quality (CEQ) for Implementing the Procedural Provisions of NEPA (40 Code of Federal Regulations [CFR] parts 1500...) Construct and/or renovate airfield facilities and infrastructure necessary to accommodate and maintain the F...
NASA Astrophysics Data System (ADS)
Zhao, Hui; Wei, Jingxuan
2014-09-01
The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront-coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
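A hedged sketch of the rate-limiting Zeldovich step described above: the NO formation rate at a node is integrated over the local residence time and capped at the equilibrium level used as the constant-rate limit. The Arrhenius constants are representative textbook values, not those of the report, and the node quantities are hypothetical inputs from a converged CFD solution.

```python
import numpy as np

def thermal_no_rate(T, O_conc, N2_conc):
    """Initial thermal-NO formation rate from the rate-limiting Zeldovich step
    O + N2 -> NO + N, assuming the second step is fast so d[NO]/dt ~ 2 k1 [O][N2].
    Concentrations in mol/cm^3, T in K; k1 is a representative textbook value."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)   # cm^3 mol^-1 s^-1 (illustrative)
    return 2.0 * k1 * O_conc * N2_conc

def nodal_no(T, O_conc, N2_conc, residence_time, NO_eq):
    """NO formed at one node: integrate the rate over the nodal residence time,
    capped at the equilibrium level that serves as the constant-rate limit."""
    return min(thermal_no_rate(T, O_conc, N2_conc) * residence_time, NO_eq)

# Hypothetical node: flame-zone temperature and equilibrium O/N2 concentrations
print(nodal_no(T=2100.0, O_conc=1.0e-9, N2_conc=3.0e-6,
               residence_time=5.0e-3, NO_eq=2.0e-7))
```

Summing such nodal contributions over the computational domain mirrors the report's statement that the total NO(x) level is the sum of all nodal predictions.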
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
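For reference, a serial min-sum approximation of the sum-product decoder is sketched below; the GPU implementation in the paper parallelizes the check-node and variable-node updates, and this toy version is only meant to make the iteration structure concrete. The small parity-check matrix and LLR values are illustrative.

```python
import numpy as np

def minsum_decode(H, llr, iters=20):
    """Min-sum approximation of the sum-product algorithm for a binary LDPC
    code with parity-check matrix H (0/1 array) and channel LLRs.
    Serial reference version; positive LLR means bit 0 is more likely."""
    m, n = H.shape
    rows, cols = np.nonzero(H)              # edge list of the Tanner graph
    msg_vc = llr[cols].astype(float)        # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(iters):
        msg_cv = np.zeros_like(msg_vc)
        for c in range(m):                  # check-node update (min-sum)
            idx = np.where(rows == c)[0]
            v = msg_vc[idx]
            for j, e in enumerate(idx):
                others = np.delete(v, j)
                msg_cv[e] = np.prod(np.sign(others)) * np.min(np.abs(others))
        total = llr.astype(float).copy()
        np.add.at(total, cols, msg_cv)      # posterior LLR per bit
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):      # all parity checks satisfied
            break
        msg_vc = total[cols] - msg_cv       # variable-node update (extrinsic)
    return hard

# Toy (7,4) Hamming-style check matrix and a noisy all-zeros codeword
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.8, 1.2, 0.9, 1.5, 1.1])  # one unreliable bit
print(minsum_decode(H, llr))    # expected to recover the all-zeros codeword
```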
Provably secure identity-based identification and signature schemes from code assumptions.
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been challenged and, in some cases, invalidated. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.
The PLUTO code for astrophysical gasdynamics.
NASA Astrophysics Data System (ADS)
Mignone, A.
Present numerical codes appeal to a consolidated theory based on finite-difference and Godunov-type schemes. In this context we have developed a versatile numerical code, PLUTO, suitable for the solution of high-Mach-number flow in 1, 2 and 3 spatial dimensions and different systems of coordinates. Different hydrodynamic modules and algorithms may be independently selected to properly describe Newtonian, relativistic, MHD, or relativistic MHD fluids. The modular structure exploits a general framework for integrating a system of conservation laws, built on modern Godunov-type shock-capturing schemes. The code is freely distributed under the GNU public license and is available for download to the astrophysical community at the URL http://plutocode.to.astro.it.
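A minimal example of the finite-volume, Godunov-type update that such codes generalize, here for 1-D linear advection with periodic boundaries; it is a toy stand-in for the scheme family, not PLUTO's actual solver, and the grid and time step are illustrative.

```python
import numpy as np

def godunov_advection(u, a, dx, dt, steps):
    """First-order Godunov (upwind) finite-volume update for linear advection
    u_t + a u_x = 0 with periodic boundaries; assumes a > 0 and CFL <= 1."""
    assert a > 0 and a * dt / dx <= 1.0, "CFL condition violated"
    for _ in range(steps):
        flux = a * u                                   # upwind interface flux F_{i+1/2}
        u = u - (dt / dx) * (flux - np.roll(flux, 1))  # conservative cell update
    return u

# Example: advect a square pulse across a periodic domain
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
u1 = godunov_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), steps=500)
```

Higher-order reconstruction, Riemann solvers for the full Euler/MHD systems, and multidimensional geometry are the layers that a production code adds on top of this basic conservative update.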
Palindromic repetitive DNA elements with coding potential in Methanocaldococcus jannaschii.
Suyama, Mikita; Lathe, Warren C; Bork, Peer
2005-10-10
We have identified 141 novel palindromic repetitive elements in the genome of the euryarchaeon Methanocaldococcus jannaschii. The total length of these elements is 14.3 kb, which corresponds to 0.9% of the total genomic sequence and 6.3% of all extragenic regions. The elements can be divided into three groups (MJRE1-3) based on sequence similarity. The low sequence identity within each of the groups suggests a rather old origin of these elements in M. jannaschii. Three MJRE2 elements were located within protein coding regions without disrupting the coding potential of the host genes, indicating that insertion of repeats might be a widespread mechanism for enhancing sequence diversity in coding regions.
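A naive sketch of detecting perfect reverse-complement palindromes in a DNA string, to make the notion of a palindromic element concrete; the scan parameters and example sequence are invented and unrelated to the MJRE detection pipeline.

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_palindromes(genome, min_len=10, max_len=40):
    """Naive scan for perfect reverse-complement palindromes of even length
    between min_len and max_len (odd lengths cannot be perfect palindromes)."""
    hits = []
    for i in range(len(genome) - min_len + 1):
        for L in range(min_len, min(len(genome) - i, max_len) + 1, 2):
            s = genome[i:i + L]
            if s == revcomp(s):
                hits.append((i, s))
    return hits

print(find_palindromes("GGGAAATTTCCCATGCAT", min_len=6))
```

Real repeat-finding pipelines additionally tolerate mismatches and loop regions, which is why the elements reported above form families rather than exact copies.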
NASA Technical Reports Server (NTRS)
Wang, Yongli; Benson, Robert F.
2011-01-01
Two software applications have been produced specifically for the analysis of some million digital topside ionograms produced by a recent analog-to-digital conversion effort of selected analog telemetry tapes from the Alouette-2, ISIS-1 and ISIS-2 satellites. One, TOPIST (TOPside Ionogram Scalar with True-height algorithm) from the University of Massachusetts Lowell, is designed for the automatic identification of the topside-ionogram ionospheric-reflection traces and their inversion into vertical electron-density profiles Ne(h). TOPIST also has the capability of manual intervention. The other application, from the Goddard Space Flight Center based on the FORTRAN code of John E. Jackson from the 1960s, is designed as an IDL-based interactive program for the scaling of selected digital topside-sounder ionograms. The Jackson code has also been modified, with some effort, so as to run on modern computers. This modification was motivated by the need to scale selected ionograms from the millions of Alouette/ISIS topside-sounder ionograms that only exist on 35-mm film. During this modification, it became evident that it would be more efficient to design a new code, based on the capabilities of present-day computers, than to continue to modify the old code. Such a new code has been produced and here we will describe its capabilities and compare Ne(h) profiles produced from it with those produced by the Jackson code. The concept of the new code is to assume an initial Ne(h) and derive a final Ne(h) through an iteration process that makes the resulting apparent-height profile fit the scaled values within a certain error range. The new code can be used on the X-, O-, and Z-mode traces. It does not assume any predefined profile shape between two contiguous points, like the exponential rule used in Jackson's program. Instead, Monotone Piecewise Cubic Interpolation is applied to the global profile to keep the monotone nature of the profile, which also ensures better smoothness in the final profile than in Jackson's program. The new code uses the complete refractive index expression for a cold collisionless plasma and can accommodate the IGRF, T96, and other geomagnetic field models.
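A small sketch of the monotone interpolation step mentioned above, using SciPy's PCHIP interpolator on a hypothetical set of density-height points; the values are invented for illustration, and the full inversion loop (iterating until the apparent-height profile fits the scaled values) is not shown.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical scaled electron-density points versus true height; monotone
# piecewise cubic interpolation preserves the monotonic profile shape between
# scaled points instead of assuming an exponential segment shape.
h_km = np.array([400.0, 600.0, 800.0, 1000.0, 1400.0])    # true height (km)
ne_cm3 = np.array([1.0e6, 3.0e5, 1.2e5, 6.0e4, 2.0e4])    # electron density (cm^-3)

profile = PchipInterpolator(h_km, np.log10(ne_cm3))        # monotone in log-density
h_fine = np.linspace(h_km[0], h_km[-1], 101)
ne_fine = 10 ** profile(h_fine)
print(ne_fine[:5])
```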
Schmitz, Matthew; Forst, Linda
2016-02-15
Inclusion of information about a patient's work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers' compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for "industry" and "occupation" based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. The objective of the study was to evaluate the intercoder reliability of NIOSH's Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. A total of 359 industry and occupation responses were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria levels. Kappa was .84 for agreement between hand coders and between the hand-coder consensus code and NIOCCS high-confidence-level codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to autocode 31%-36% of entered variables at the "high confidence" level and 49%-58% at the "medium confidence" level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is "substantial" at the 2-digit level, but only "fair" to "good" at the 4-digit level. This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy, and will clarify its value as inclusion of these occupational variables in the EHR is promoted.
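For readers unfamiliar with the agreement statistic, a short sketch of Cohen's kappa for two coders' SOC assignments follows; the example codes are hypothetical, and the formula is the standard one rather than anything specific to NIOCCS.

```python
import numpy as np

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' categorical assignments (e.g. 2-digit SOC
    codes from manual coding versus NIOCCS autocoding)."""
    a, b = np.asarray(codes_a), np.asarray(codes_b)
    cats = np.union1d(a, b)
    p_o = np.mean(a == b)                                           # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)      # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical 2-digit SOC assignments from two coders
print(cohens_kappa(["11", "29", "29", "47"], ["11", "29", "31", "47"]))
```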
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.
Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland
2011-04-08
In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and of how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.
A Comprehensive High Performance Predictive Tool for Fusion Liquid Metal Hydromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Peter; Chhabra, Rupanshi; Munipalli, Ramakanth
In the Phase I SBIR project, HyPerComp and Texcel initiated the development of two induction-based MHD codes as a predictive tool for fusion hydro-magnetics. The newly developed codes overcome the deficiency of other MHD codes based on the quasi-static approximation by defining a more general mathematical model that utilizes the induced magnetic field rather than the electric potential as the main electromagnetic variable. The UCLA code is a finite-difference staggered-mesh code that serves as a supplementary tool to the massively parallel finite-volume code developed by HyPerComp. As there is no suitable experimental data under blanket-relevant conditions for code validation, code-to-code comparisons and comparisons against analytical solutions were successfully performed for three selected test cases: (1) lid-driven MHD flow, (2) flow in a rectangular duct in a transverse magnetic field, and (3) unsteady finite magnetic Reynolds number flow in a rectangular enclosure. The performed tests suggest that the developed codes are accurate and robust. Further work will focus on enhancing the code capabilities towards higher flow parameters and faster computations. At the conclusion of the current Phase II project we have completed the preliminary validation efforts in performing unsteady mixed-convection MHD flows (against the limited data that is currently available in the literature), and demonstrated flow behavior in large 3D channels including important geometrical features. Code enhancements such as periodic boundary conditions and unmatched mesh structures are also ready. As proposed, we have built upon these strengths and explored a much increased range of Grashof numbers and Hartmann numbers under various flow conditions, ranging from flows in a rectangular duct to prototypic blanket modules and liquid metal PFC. Parametric studies, numerical and physical model improvements to expand the scope of simulations, code demonstration, and continued validation activities have also been completed.
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating its statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
A users manual for the method of moments Aircraft Modeling Code (AMC), version 2
NASA Technical Reports Server (NTRS)
Peters, M. E.; Newman, E. H.
1994-01-01
This report serves as a user's manual for Version 2 of the 'Aircraft Modeling Code' or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. This report describes the input command language and also includes several examples which illustrate typical code inputs and outputs.
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John
2005-01-01
Requirements-to-Design-to-Code (R2D2C) is an approach to the engineering of computer-based systems that embodies the idea of requirements-based programming in system development. It goes further, however, in that the approach offers not only an underlying formalism, but full formal development from requirements capture through to the automatic generation of provably correct code. As such, the approach has direct application to the development of systems requiring autonomic properties. We describe a prototype tool to support the method, and illustrate its applicability to the development of LOGOS, a NASA autonomous ground control system that exhibits autonomic behavior. Finally, we briefly discuss other areas where the approach and prototype tool are being considered for application.
An algebraic hypothesis about the primeval genetic code architecture.
Sánchez, Robersy; Grau, Ricardo
2009-09-01
A plausible architecture of an ancient genetic code is derived from an extended base triplet vector space over the Galois field of the extended base alphabet {D,A,C,G,U}, where the symbol D represents one or more hypothetical bases with unspecific pairings. We hypothesized that the high degeneration of a primeval genetic code with five bases and the gradual origin and improvement of a primeval DNA repair system could make possible the transition from ancient to modern genetic codes. Our results suggest that the Watson-Crick base pairings G≡C and A=U and the non-specific base pairing of the hypothetical ancestral base D, used to define the sum and product operations, are features sufficient to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. In addition, the Fourier spectrum of the extended DNA genome sequences derived from the multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also have existed in ancient coding DNA sequences. The phylogenetic analyses achieved with metrics defined in the N-dimensional vector space (B(3))(N) of DNA sequences and with the new evolutionary model presented here also suggest that an ancient DNA coding sequence with five or more bases does not contradict the expected evolutionary history.
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Chroma sampling and modulation techniques in high dynamic range video coding
NASA Astrophysics Data System (ADS)
Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj
2015-09-01
High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
Hwang, Y Joseph; Shariff, Salimah Z; Gandhi, Sonja; Wald, Ron; Clark, Edward; Fleet, Jamie L; Garg, Amit X
2012-01-01
Objective: To evaluate the validity of the International Classification of Diseases, Tenth Revision (ICD-10) code N17x for acute kidney injury (AKI) in elderly patients in two settings: at presentation to the emergency department and at hospital admission. Design: A population-based retrospective validation study. Setting: Southwestern Ontario, Canada, from 2003 to 2010. Participants: Elderly patients with serum creatinine measurements at presentation to the emergency department (n=36 049) or hospital admission (n=38 566). The baseline serum creatinine measurement was a median of 102 and 39 days prior to presentation to the emergency department and hospital admission, respectively. Main outcome measures: Sensitivity, specificity and positive and negative predictive values of ICD-10 diagnostic coding algorithms for AKI using a reference standard based on changes in serum creatinine from the baseline value. Median changes in serum creatinine of patients who were code positive and code negative for AKI. Results: The sensitivity of the best-performing coding algorithm for AKI (defined as a ≥2-fold increase in serum creatinine concentration) was 37.4% (95% CI 32.1% to 43.1%) at presentation to the emergency department and 61.6% (95% CI 57.5% to 65.5%) at hospital admission. The specificity was greater than 95% in both settings. In patients who were code positive for AKI, the median (IQR) increase in serum creatinine from the baseline was 133 (62 to 288) µmol/l at presentation to the emergency department and 98 (43 to 200) µmol/l at hospital admission. In those who were code negative, the increase in serum creatinine was 2 (−8 to 14) and 6 (−4 to 20) µmol/l, respectively. Conclusions: The presence or absence of ICD-10 code N17x differentiates two groups of patients with distinct changes in serum creatinine at the time of a hospital encounter. However, the code underestimates the true incidence of AKI due to a limited sensitivity.
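The validity measures reported above follow directly from a 2x2 table of code-positive/negative status versus the reference-standard AKI definition; a small sketch with invented counts illustrates the calculation.

```python
def code_validity(tp, fp, fn, tn):
    """2x2 validation metrics for an administrative code (e.g. ICD-10 N17x)
    against a reference standard such as a serum-creatinine-based AKI definition."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (not the study's data)
print(code_validity(tp=250, fp=120, fn=420, tn=35000))
```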
MCNP Version 6.2 Release Notes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, Christopher John; Bull, Jeffrey S.; Solomon, C. J.
Monte Carlo N-Particle, or MCNP®, is a general-purpose Monte Carlo radiation-transport code designed to track many particle types over broad ranges of energies. This MCNP Version 6.2 follows the MCNP6.1.1 beta version and has been released in order to provide the radiation transport community with the latest feature developments and bug fixes for MCNP. Since the last release of MCNP, major work has been conducted to improve the code base, add features, and provide tools to facilitate ease of use of MCNP version 6.2 as well as the analysis of results. These release notes serve as a general guide for the new/improved physics, source, data, tallies, unstructured mesh, code enhancements and tools. For more detailed information on each of the topics, please refer to the appropriate references or the user manual, which can be found at http://mcnp.lanl.gov. This release of MCNP version 6.2 contains 39 new features in addition to 172 bug fixes and code enhancements. There are still some 33 known issues the user should familiarize themselves with (see Appendix).
Application of 2D graphic representation of protein sequence based on Huffman tree method.
Qi, Zhao-Hui; Feng, Jun; Qi, Xiao-Qin; Li, Ling
2012-05-01
Based on the Huffman tree method, we propose a new 2D graphic representation of protein sequences. This representation completely avoids loss of information in the transfer of data from a protein sequence to its graphic representation. The method consists of two parts. The first assigns 0-1 codes to the 20 amino acids using a Huffman tree built from amino acid frequencies, where the frequency of an amino acid is defined as its statistical count in the analyzed protein sequences. The second constructs the 2D graphic representation of the protein sequence based on these 0-1 codes. Applications of the method to ten ND5 genes and seven Escherichia coli strains are then presented in detail. The results show that the proposed model may provide some new insights for understanding the evolution patterns determined from protein sequences and complete genomes. Copyright © 2012 Elsevier Ltd. All rights reserved.
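A brief sketch of the first part of the method, building 0-1 Huffman codes from amino-acid frequencies; the frequency counts here are hypothetical, and the mapping of the resulting prefix codes onto a 2D walk is not shown.

```python
import heapq

def huffman_codes(freqs):
    """Build 0-1 Huffman codes from amino-acid frequencies (statistical counts
    in the analyzed protein sequences); lower-frequency residues get longer codes."""
    codes = {sym: "" for sym in freqs}
    heap = [(f, i, [sym]) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)                          # tie-breaker for merged nodes
    while len(heap) > 1:
        f1, _, group1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, group2 = heapq.heappop(heap)
        for s in group1:
            codes[s] = "0" + codes[s]         # descend left: prepend 0
        for s in group2:
            codes[s] = "1" + codes[s]         # descend right: prepend 1
        heapq.heappush(heap, (f1 + f2, tick, group1 + group2))
        tick += 1
    return codes

# Toy frequencies for a few residues (hypothetical counts, not the paper's data)
print(huffman_codes({"A": 120, "L": 150, "K": 80, "W": 15, "C": 25}))
```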
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Feasibility of coded vibration in a vibro-ultrasound system for tissue elasticity measurement.
Zhao, Jinxin; Wang, Yuanyuan; Yu, Jinhua; Li, Tianjie; Zheng, Yong-Ping
2016-07-01
The ability of various methods for elasticity measurement and imaging is hampered by the limited vibration amplitude that can be applied to biological tissues. Based on the inference that coded excitation will improve the performance of the cross-correlation function of the tissue displacement waves, the idea of exerting encoded external vibration on tested samples to measure their elasticity is proposed. It was implemented by integrating a programmable vibration generation function into a customized vibro-ultrasound system to generate Barker coded vibration for elasticity measurement. Experiments were conducted on silicone phantoms and porcine muscles. The results showed that coded excitation of the vibration enhanced the accuracy and robustness of the elasticity measurement, especially in low signal-to-noise ratio scenarios. In the phantom study, the shear modulus measured with coded vibration had an R(2) = 0.993 linear correlation with the referenced indentation, while for the single-cycle pulse the R(2) decreased to 0.987. In the porcine muscle study, the coded vibration also obtained shear modulus values more accurate than the single-cycle pulse by 0.16 kPa and 0.33 kPa at two different depths. These results demonstrate the feasibility and potential of coded vibration for enhancing the quality of elasticity measurement and imaging.
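A small sketch of why a Barker-coded excitation improves the cross-correlation-based delay estimate at low signal-to-noise ratio: the 13-chip Barker sequence has an autocorrelation peak of 13 with sidelobes no larger than 1. The sampling setup, delay, and noise level below are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

# Barker-13 sequence; its aperiodic autocorrelation peak is 13 with sidelobes <= 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

rng = np.random.default_rng(0)
delay, sigma = 40, 0.7                        # illustrative delay (samples) and noise level
rx = np.zeros(200)
rx[delay:delay + len(barker13)] += barker13   # toy model of the received displacement wave
rx += sigma * rng.standard_normal(rx.size)    # additive measurement noise

corr = np.correlate(rx, barker13, mode="valid")
print("estimated delay:", int(np.argmax(corr)))   # the 13-chip correlation gain makes 40 recoverable
```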
Validation of a Computational Fluid Dynamics (CFD) Code for Supersonic Axisymmetric Base Flow
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin
1993-01-01
The ability to accurately and efficiently calculate the flow structure in the base region of bodies of revolution in supersonic flight is a significant step in CFD code validation for applications ranging from base heating for rockets to drag for projectiles. The FDNS code is used to compute such a flow and the results are compared to benchmark quality experimental data. Flowfield calculations are presented for a cylindrical afterbody at M = 2.46 and angle of attack α = 0. Grid-independent solutions are compared to mean velocity profiles in the separated wake area and downstream of the reattachment point. Additionally, quantities such as turbulent kinetic energy and shear layer growth rates are compared to the data. Finally, the computed base pressures are compared to the measured values. An effort is made to elucidate the role of turbulence models in the flowfield predictions. The level of turbulent eddy viscosity, and its origin, are used to contrast the various turbulence models and compare the results to the experimental data.
Using Gemba Boards to Facilitate Evidence-Based Practice in Critical Care.
Bourgault, Annette M; Upvall, Michele J; Graham, Alison
2018-06-01
Tradition-based practices lack supporting research evidence and may be harmful or ineffective. Engagement of key stakeholders is a critical step toward facilitating evidence-based practice change. Gemba, derived from Japanese, refers to the real place where work is done. Gemba boards (visual management tools) appear to be an innovative method to engage stakeholders and facilitate evidence-based practice. To explore the use of gemba boards and gemba huddles to facilitate practice change. Twenty-two critical care nurses participated in interviews in this qualitative, descriptive study. Thematic analysis was used to code and categorize interview data. Two researchers reached consensus on coding and derived themes. Data were managed with qualitative analysis software. The code gemba occurred most frequently; a secondary analysis was performed to explore its impact on practice change. Four themes were derived from the gemba code: (1) facilitation of staff, leadership, and interdisciplinary communication, (2) transparency of outcome data, (3) solicitation of staff ideas and feedback, and (4) dissemination of practice changes. Gemba boards and gemba huddles became part of the organizational culture for promoting and disseminating evidence-based practices. Unit-based, publicly located gemba boards and huddles have become key components of evidence-based practice culture. Gemba is both a tool and a process to engage team members and the public to generate clinical questions and to plan, implement, and evaluate practice changes. Future research on the effectiveness of gemba boards to facilitate evidence-based practice is warranted. ©2018 American Association of Critical-Care Nurses.
Miyamoto, Yutaka; Yasuda, Kenichiro; Magara, Masaaki
2014-06-01
Airborne radioactive particles released by the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident in 2011 were collected with a cascade low-pressure impactor at the Japan Atomic Energy Agency (JAEA) in Tokai, Japan, 114 km south of the FDNPP. Size-fractionated samples were collected twice, in the periods of March 17-April 1, 2011, and May 9-13, 2011. These size-fractionated samplings were carried out in the earliest days at a short distance from the FDNPP. Radioactivity of short-lived nuclides (half-lives of several tens of days) was determined as well as 134Cs and 137Cs. The elemental composition of the size-fractionated samples was also measured. In the first collection, the activity median aerodynamic diameter (AMAD) of 129mTe, 140Ba, 134Cs, 136Cs and 137Cs was 1.5-1.6 μm, while the diameter of 131I was 0.45 μm. The diameters of 134Cs and 137Cs in the second collection were expressed as three peaks at <0.5 μm, 0.94 μm, and 7.8 μm. The 134Cs/137Cs ratio of the first collection was 1.02 in total, but the ratio in the fine fractions was 0.91. A distribution map of 134Cs/137Cs - 136Cs/137Cs ratios was helpful in understanding the change of radioactive Cs composition. The Cs composition of size fractions <0.43 μm and the composition in the 1.1-2.1 μm range (including the AMAD of 1.5-1.6 μm) were similar to the calculated compositions of fuels in reactors No. 1 and No. 3 at the FDNPP using the ORIGEN-II code. The Cs composition collected in May 2011 was similar to the calculated composition of reactor No. 2 fuel. The change of Cs composition implies that the radioactive Cs was released from the three reactors at the FDNPP via different processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Palmans, Hugo; Nafaa, Laila; de Patoul, Nathalie; Denis, Jean-Marc; Tomsej, Milan; Vynckier, Stefaan
2003-05-01
New codes of practice for reference dosimetry in clinical high-energy photon and electron beams have been published recently, to replace the air kerma based codes of practice that have determined the dosimetry of these beams for the past twenty years. In the present work, we compared dosimetry based on the two most widespread absorbed dose based recommendations (AAPM TG-51 and IAEA TRS-398) with two air kerma based recommendations (NCS report-5 and IAEA TRS-381). Measurements were performed in three clinical electron beam energies using two NE2571-type cylindrical chambers, two Markus-type plane-parallel chambers and two NACP-02-type plane-parallel chambers. Dosimetry based on direct calibrations of all chambers in 60Co was investigated, as well as dosimetry based on cross-calibrations of plane-parallel chambers against a cylindrical chamber in a high-energy electron beam. Furthermore, 60Co perturbation factors for plane-parallel chambers were derived. It is shown that the use of 60Co calibration factors could result in deviations of more than 2% for plane-parallel chambers between the old and new codes of practice, whereas the use of cross-calibration factors, which is the first recommendation in the new codes, reduces the differences to less than 0.8% for all situations investigated here. The results thus show that neither the chamber-to-chamber variations nor the obtained absolute dose values are significantly altered by changing from air kerma based dosimetry to absorbed dose based dosimetry when using calibration factors obtained from the Laboratory for Standard Dosimetry, Ghent, Belgium. The values of the 60Co perturbation factor for plane-parallel chambers (katt·km for the air kerma based and pwall for the absorbed dose based codes of practice) that are obtained from comparing the results based on 60Co calibrations and cross-calibrations are, within the experimental uncertainties, in agreement with the results from other investigators.
Radiation from advanced solid rocket motor plumes
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.
1994-01-01
The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors which affect radiation heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state-of-the-art of SRM plume flowfield and radiation prediction methodology and the pertinent data base, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Mata-Cases, Manel; Mauricio, Dídac; Real, Jordi; Bolíbar, Bonaventura; Franch-Nadal, Josep
2016-11-01
To assess the prevalence of miscoding, misclassification, misdiagnosis and under-registration of diabetes mellitus (DM) in primary health care in Catalonia (Spain), and to explore the use of automated algorithms to identify them. In this cross-sectional, retrospective study using an anonymized electronic general practice database, data were collected from patients or users with a diabetes-related code or from patients with no DM or prediabetes code but treated with antidiabetic drugs (unregistered DM). Decision algorithms were designed to classify the true diagnosis of type 1 DM (T1DM), type 2 DM (T2DM), and undetermined DM (UDM), and to classify unregistered DM patients treated with antidiabetic drugs. Data were collected from a total of 376,278 subjects with a DM ICD-10 code, and from 8707 patients with no DM or prediabetes code but treated with antidiabetic drugs. After application of the algorithms, 13.9% of patients with T1DM were identified as misclassified, and were probably T2DM; 80.9% of patients with UDM were reclassified as T2DM, and 19.1% of them were misdiagnosed as DM when they probably had prediabetes. The overall prevalence of miscoding (multiple codes or UDM) was 2.2%. Finally, 55.2% of subjects with unregistered DM were classified as prediabetes, 35.7% as T2DM, 8.5% as UDM treated with insulin, and 0.6% as T1DM. The prevalence of inappropriate codification or classification and under-registration of DM is relevant in primary care. Implementation of algorithms could automatically flag cases that need review and would substantially decrease the risk of inappropriate registration or coding. Copyright © 2016 SEEN. Published by Elsevier España, S.L.U. All rights reserved.
Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.
Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R
2006-02-28
The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
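The selection criterion described above is straightforward to compute. Below is a minimal Python sketch, illustrative only and not taken from the paper, that enumerates constant-weight codewords and scores a candidate subset by the ratio of maximum to minimum pairwise Hamming distance (smaller ratio preferred); the 64 × 64 crossbar sizing and the voltage-margin model itself are not reproduced here.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def distance_ratio(codewords):
    """Ratio of max to min pairwise Hamming distance; smaller is better
    per the code-theoretic selection criterion stated in the abstract."""
    dists = [hamming(a, b) for a, b in combinations(codewords, 2)]
    return max(dists) / min(dists)

def constant_weight_codewords(n, w):
    """All length-n binary words of Hamming weight w."""
    words = []
    for ones in combinations(range(n), w):
        words.append(tuple(1 if i in ones else 0 for i in range(n)))
    return words

# Toy example: compare two 4-codeword subsets of the (n=6, w=3) constant-weight code.
full = constant_weight_codewords(6, 3)
candidate_a = full[:4]
candidate_b = [full[0], full[-1], full[7], full[12]]
print(distance_ratio(candidate_a), distance_ratio(candidate_b))  # 1.0 vs 3.0
```

Under this criterion candidate_a (ratio 1.0) would be preferred over candidate_b (ratio 3.0).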
Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark
2011-01-01
Quick Response (QR) Codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which links the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, explains how QR codes differ from Web links, and describes how QR codes facilitate the distribution of educational content.
Current and anticipated uses of thermal hydraulic codes in Korea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyung-Doo; Chang, Won-Pyo
1997-07-01
In Korea, the current uses of thermal hydraulic codes are categorized into 3 areas. The first application is in designing both nuclear fuel and NSSS. The codes have usually been introduced based on the technology transfer programs agreed between KAERI and the foreign vendors. Another area is supporting plant operations and licensing by the utility. The third category is research purposes. In this area, assessments and applications to safety issue resolutions are the major activities, using best estimate thermal hydraulic codes such as RELAP5/MOD3 and CATHARE2. Recently KEPCO plans to couple thermal hydraulic codes with a neutronics code for the design of the evolutionary type reactor by 2004. KAERI also plans to develop its own best estimate thermal hydraulic code; however, its application range is different from that of the code KEPCO is developing. Considering these activities, it is anticipated that use of the best estimate hydraulic analysis code developed in Korea may be possible in the area of safety evaluation within 10 years.
De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul
2017-03-01
Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
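Cohen's kappa, used above to quantify agreement between OSCAR and the expert coder, can be computed directly from two paired lists of assigned codes. The sketch below uses hypothetical 1-digit SOC labels; it is a generic kappa calculation, not part of OSCAR itself.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for inter-rater agreement between two coders assigning
    categorical labels (e.g., SOC codes) to the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: OSCAR-assigned vs. manually assigned 1-digit SOC codes.
oscar  = ["2", "2", "5", "9", "2", "3", "5", "1"]
manual = ["2", "3", "5", "9", "2", "3", "4", "1"]
print(round(cohens_kappa(oscar, manual), 2))  # 0.69
```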
Numerical modeling of the fracture process in a three-unit all-ceramic fixed partial denture.
Kou, Wen; Kou, Shaoquan; Liu, Hongyuan; Sjögren, Göran
2007-08-01
The main objectives were to examine the fracture mechanism and process of a ceramic fixed partial denture (FPD) framework under simulated mechanical loading using a recently developed numerical modeling code, the R-T(2D) code, and also to evaluate the suitability of the R-T(2D) code as a tool for this purpose. Using the recently developed R-T(2D) code, the fracture mechanism and process of a three-unit (3U) yttria-tetragonal zirconia polycrystal ceramic (Y-TZP) FPD framework were simulated under static loading. In addition, the fracture pattern obtained using the numerical simulation was compared with the fracture pattern obtained in a previous laboratory test. The result revealed that the framework fracture pattern obtained using the numerical simulation agreed with that observed in a previous laboratory test. The quasi-photoelastic stress fringe pattern and acoustic emission showed that the fracture mechanism was tensile failure and that the crack started at the lower boundary of the framework. The fracture process could be followed both step-by-step and step-in-step. Based on the findings in the current study, the R-T(2D) code seems suitable for use as a complement to other tests and clinical observations in studying stress distribution, fracture mechanism and fracture processes in ceramic FPD frameworks.
Shiff, Natalie Jane; Oen, Kiem; Rabbani, Rasheda; Lix, Lisa M
2017-09-01
We validated case ascertainment algorithms for juvenile idiopathic arthritis (JIA) in the provincial health administrative databases of Manitoba, Canada. A population-based pediatric rheumatology clinical database from April 1st 1980 to March 31st 2012 was used to test case definitions in individuals diagnosed at ≤15 years of age. The case definitions varied the number of diagnosis codes (1, 2, or 3), time frame (1, 2 or 3 years), time between diagnoses (ever, >1 day, or ≥8 weeks), and physician specialty. Positive predictive value (PPV), sensitivity, and specificity with 95% confidence intervals (CIs) are reported. A case definition of 1 hospitalization or ≥2 diagnoses in 2 years by any provider ≥8 weeks apart using diagnosis codes for rheumatoid arthritis and ankylosing spondylitis produced a sensitivity of 89.2% (95% CI 86.8, 91.6), specificity of 86.3% (95% CI 83.0, 89.6), and PPV of 90.6% (95% CI 88.3, 92.9) when seronegative enthesopathy and arthropathy (SEA) was excluded as JIA; and sensitivity of 88.2% (95% CI 85.7, 90.7), specificity of 90.4% (95% CI 87.5, 93.3), and PPV of 93.9% (95% CI 92.0, 95.8) when SEA was included as JIA. This study validates case ascertainment algorithms for JIA in Canadian administrative health data using diagnosis codes for both rheumatoid arthritis (RA) and ankylosing spondylitis, to better reflect current JIA classification than codes for RA alone. Researchers will be able to use these results to define cohorts for population-based studies.
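The best-performing case definition lends itself to a simple claims-processing rule. The sketch below is a hypothetical illustration of the "1 hospitalization or ≥2 diagnoses within 2 years and ≥8 weeks apart" logic; the record layout, field names and the abbreviated ICD-9 stems (714 for rheumatoid arthritis, 720 for ankylosing spondylitis) are placeholders, not the study's actual extraction code.

```python
from datetime import date

def meets_jia_definition(records, window_days=730, min_gap_days=56):
    """Apply the '1 hospitalization OR >=2 diagnoses within 2 years,
    >=8 weeks apart' case definition to one child's records.
    Each record is (service_date, setting, diagnosis_code); layout is hypothetical."""
    jia_stems = ("714", "720")  # placeholder stems: RA-related and ankylosing spondylitis
    hits = [(d, setting) for d, setting, code in records
            if code.startswith(jia_stems)]
    # Rule 1: any single hospitalization with a qualifying code.
    if any(setting == "hospital" for _, setting in hits):
        return True
    # Rule 2: two physician diagnoses within the window and at least 8 weeks apart.
    dates = sorted(d for d, setting in hits if setting == "physician")
    for i, d1 in enumerate(dates):
        for d2 in dates[i + 1:]:
            if min_gap_days <= (d2 - d1).days <= window_days:
                return True
    return False

records = [(date(2010, 1, 5), "physician", "714.30"),
           (date(2010, 4, 2), "physician", "714.30")]
print(meets_jia_definition(records))  # True: 87 days apart, within 2 years
```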
Two-fluid 2.5D code for simulations of small scale magnetic fields in the lower solar atmosphere
NASA Astrophysics Data System (ADS)
Piantschitsch, Isabell; Amerstorfer, Ute; Thalmann, Julia Katharina; Hanslmeier, Arnold; Lemmerer, Birgit
2015-08-01
Our aim is to investigate magnetic reconnection as a result of the time evolution of magnetic flux tubes in the solar chromosphere. A new numerical two-fluid code was developed, which will perform a 2.5D simulation of the dynamics from the upper convection zone up to the transition region. The code is based on the Total Variation Diminishing Lax-Friedrichs method and includes the effects of ion-neutral collisions, ionisation/recombination, thermal/resistive diffusivity as well as collisional/resistive heating. What is innovative about our newly developed code is the inclusion of a two-fluid model in combination with the use of analytically constructed vertically open magnetic flux tubes, which are used as initial conditions for our simulation. First magnetohydrodynamic (MHD) tests have already shown good agreement with known results of numerical MHD test problems such as the Orszag-Tang vortex test, the Current Sheet test and the Spherical Blast Wave test. Furthermore, the single-fluid approach will also be applied to the initial conditions, in order to compare the rates of magnetic reconnection obtained with the two codes, the two-fluid code and the single-fluid one.
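For readers unfamiliar with the scheme named above, the following is a minimal one-dimensional, first-order Lax-Friedrichs (Rusanov-type) finite-volume step for a scalar conservation law; it only illustrates the flux form on which TVDLF schemes are built and is in no way the authors' 2.5D two-fluid implementation.

```python
import numpy as np

def lax_friedrichs_step(u, flux, max_speed, dx, dt):
    """One first-order Lax-Friedrichs step for du/dt + d f(u)/dx = 0 on a
    periodic grid; the Rusanov/LF numerical flux is the building block that
    TVDLF-type schemes combine with slope limiting (omitted here)."""
    up = np.roll(u, -1)                                               # u_{i+1}
    f_right = 0.5 * (flux(u) + flux(up)) - 0.5 * max_speed * (up - u)  # F_{i+1/2}
    f_left = np.roll(f_right, 1)                                      # F_{i-1/2}
    return u - dt / dx * (f_right - f_left)

# Linear advection, f(u) = a*u, as a smoke test.
a, nx = 1.0, 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200 * (x - 0.5) ** 2)
dx = x[1] - x[0]
dt = 0.4 * dx / a
for _ in range(100):
    u = lax_friedrichs_step(u, lambda v: a * v, a, dx, dt)
print(u.max())
```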
Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo
2017-03-16
We present quad-constellation (GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly established, and the equivalence of the TGD and DCB correction models is demonstrated by combining theory with practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, the influence of differential code biases on GNSS positioning estimates is investigated. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies are improved by 25.0%, 30.6% and 26.7% in the N, E, and U components, respectively, after the differential code bias correction, while GPS/GLONASS/BDS b1-based SPP is improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E, and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially for the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.
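As a rough illustration of how a broadcast TGD enters a single-frequency solution, the sketch below applies the usual IS-GPS-200 convention that the broadcast clock offset refers to the ionosphere-free combination, so an L1-only user subtracts TGD from it before applying the clock correction. Sign conventions and scale factors differ between constellations and frequencies, and none of the paper's specific correction models are reproduced here.

```python
C = 299792458.0  # speed of light, m/s

def l1_corrected_pseudorange(p1, dt_sv, tgd):
    """Apply satellite clock and broadcast TGD corrections to a GPS L1-only
    pseudorange (metres). Assumption: dt_sv is the broadcast clock offset
    (seconds, ionosphere-free reference) and tgd the broadcast group delay
    (seconds); an L1-only user uses dt_sv - tgd as the effective clock term."""
    dt_l1 = dt_sv - tgd
    return p1 + C * dt_l1

# Hypothetical numbers: 50 microsecond satellite clock offset, 5 ns group delay.
print(l1_corrected_pseudorange(22_345_678.9, 50e-6, 5e-9))
```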
Data summary report for fission product release test VI-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborne, M.F.; Collins, J.L.; Lorenz, R.A.
The first in a series of high-temperature fission product release tests in a new vertical test apparatus was conducted in flowing steam. The test specimen was a 15.2-cm-long section of a fuel rod from the Oconee 1 PWR; it had been irradiated to a burnup of approximately 42 MWd/kg. Using an induction furnace, it was heated under simulated LWR accident conditions -- 20 min at 2000 K and 20 min at 2300 K -- in a hot cell-mounted test apparatus. Posttest inspection showed severe oxidation but only minimal fragmentation of the fuel specimen; cladding melting was apparent only near the top end. Based on fission products measured in the fuel and/or calculated by ORIGEN, analyses of test components showed total releases from the fuel of 47% for (85)Kr, 33% for (125)Sb, 37% for (129)I, 84% for (110m)Ag, and 63% for (137)Cs. Large fractions (36% and 30%, respectively) of the released (110m)Ag and (125)Sb were retained in the furnace above the fuel. Pretest and posttest analysis of the fuel specimen indicated a (134)Cs release of 65%, which is in very good agreement with the (137)Cs value. 21 refs., 24 figs., 16 tabs.
Zheng, Jian; Tagami, Keiko; Uchida, Shigeo
2013-09-03
The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident has caused serious contamination in the environment. The release of Pu isotopes renewed considerable public concern because they present a large risk for internal radiation exposure. In this Critical Review, we summarize and analyze published studies related to the release of Pu from the FDNPP accident based on environmental sample analyses and the ORIGEN model simulations. Our analysis emphasizes the environmental distribution of released Pu isotopes, information on Pu isotopic composition for source identification of Pu releases in the FDNPP-damaged reactors or spent fuel pools, and estimation of the amounts of Pu isotopes released from the FDNPP accident. Our analysis indicates that a trace amount of Pu isotopes (∼2 × 10^-5% of the core inventory) was released into the environment from the damaged reactors but not from the spent fuel pools located in the reactor buildings. Regarding possible Pu contamination in the marine environment, limited studies suggest that no extra Pu input from the FDNPP accident could be detected in the western North Pacific 30 km off the Fukushima coast. Finally, we identified knowledge gaps that remain concerning the release of Pu into the environment and recommended issues for future studies.
Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks
NASA Astrophysics Data System (ADS)
Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu
Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
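The decodability constraint behind XOR-based loss recovery is that every receiver may be missing at most one of the packets combined into a single retransmission, so that each receiver can XOR out the packets it already holds and recover its own loss. The following greedy sketch enforces that constraint; it is a simple heuristic for illustration, not the NP-completeness argument or the algorithm proposed in the paper.

```python
def greedy_xor_batch(loss_map):
    """Greedily pick a set of lost packets to XOR into one retransmission.
    loss_map: dict packet_id -> set of receivers that lost that packet.
    Constraint: every receiver may be missing at most one packet in the batch."""
    batch, covered_receivers = [], set()
    # Consider the most widely lost packets first (simple heuristic ordering).
    for pkt, losers in sorted(loss_map.items(), key=lambda kv: -len(kv[1])):
        if losers.isdisjoint(covered_receivers):
            batch.append(pkt)
            covered_receivers |= losers
    return batch

losses = {"p1": {"A", "B"}, "p2": {"C"}, "p3": {"B", "D"}, "p4": {"E"}}
print(greedy_xor_batch(losses))  # ['p1', 'p2', 'p4']; p3 excluded because B already misses p1
```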
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network to design codebooks. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 of that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique", is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
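Basic VQ encoding and table-lookup reconstruction, as described at the start of the abstract, can be sketched in a few lines (this is generic VQ, not the Address VQ extension):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Encode image blocks by the index of the nearest codebook vector
    (squared Euclidean distance)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)          # one index per block

def vq_decode(indices, codebook):
    """Table-lookup reconstruction: each index addresses a representative vector."""
    return codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.random((16, 4))       # 16 codewords for flattened 2x2 blocks
blocks = rng.random((10, 4))         # 10 image blocks to encode
idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
print(idx, float(((blocks - recon) ** 2).mean()))
```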
Guidelines for VCCT-Based Interlaminar Fatigue and Progressive Failure Finite Element Analysis
NASA Technical Reports Server (NTRS)
Deobald, Lyle R.; Mabson, Gerald E.; Engelstad, Steve; Prabhakar, M.; Gurvich, Mark; Seneviratne, Waruna; Perera, Shenal; O'Brien, T. Kevin; Murri, Gretchen; Ratcliffe, James;
2017-01-01
This document is intended to detail the theoretical basis, equations, references and data that are necessary to enhance the functionality of commercially available Finite Element codes, with the objective of having functionality better suited for the aerospace industry in the area of composite structural analysis. The specific area of focus will be improvements to composite interlaminar fatigue and progressive interlaminar failure. Suggestions are biased towards codes that perform interlaminar Linear Elastic Fracture Mechanics (LEFM) using Virtual Crack Closure Technique (VCCT)-based algorithms [1,2]. All aspects of the science associated with composite interlaminar crack growth are not fully developed and the codes developed to predict this mode of failure must be programmed with sufficient flexibility to accommodate new functional relationships as the science matures.
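For orientation, the Mode I energy release rate extracted by VCCT from a regular 2D mesh has the textbook form G_I = F_z·Δw / (2·b·Δa) (e.g., as summarized in Krueger's VCCT review). The sketch below simply evaluates that expression with hypothetical values; it does not represent any particular finite element code's implementation.

```python
def vcct_mode_i(f_z, delta_w, b, delta_a):
    """Mode I strain energy release rate from the virtual crack closure technique:
    f_z     - nodal force transverse to the crack plane at the crack tip
    delta_w - relative opening displacement of the node pair just behind the tip
    b       - element width at the crack front
    delta_a - element length (virtual crack extension)
    Textbook VCCT form; consistent units are assumed."""
    return (f_z * delta_w) / (2.0 * b * delta_a)

# Hypothetical values in N and mm: result is in N/mm (equivalently kJ/m^2).
print(vcct_mode_i(f_z=12.0, delta_w=0.004, b=1.0, delta_a=0.5))  # 0.048
```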
Schuur, Jeremiah D; Baker, Olesya; Freshman, Jaclyn; Wilson, Michael; Cutler, David M
2017-04-01
We determine the number and location of freestanding emergency departments (EDs) across the United States and determine the population characteristics of areas where freestanding EDs are located. We conducted a systematic inventory of US freestanding EDs. For the 3 states with the highest number of freestanding EDs, we linked demographic, insurance, and health services data, using the 5-digit ZIP code corresponding to the freestanding ED's location. To create a comparison nonfreestanding ED group, we matched 187 freestanding EDs to 1,048 nonfreestanding ED ZIP codes on land and population within state. We compared differences in demographic, insurance, and health services factors between matched ZIP codes with and without freestanding EDs, using univariate regressions with weights. We identified 360 freestanding EDs located in 30 states; 54.2% of freestanding EDs were hospital satellites, 36.6% were independent, and 9.2% were not classifiable. The 3 states with the highest number of freestanding EDs accounted for 66% of all freestanding EDs: Texas (181), Ohio (34), and Colorado (24). Across all 3 states, freestanding EDs were located in ZIP codes that had higher incomes and a lower proportion of the population with Medicaid. In Texas and Ohio, freestanding EDs were located in ZIP codes with a higher proportion of the population with private insurance. In Texas, freestanding EDs were located in ZIP codes that had fewer Hispanics, had a greater number of hospital-based EDs and physician offices, and had more physician visits and medical spending per year than ZIP codes without a freestanding ED. In Ohio, freestanding EDs were located in ZIP codes with fewer hospital-based EDs. In Texas, Ohio, and Colorado, freestanding EDs were located in areas with a better payer mix. The location of freestanding EDs in relation to other health care facilities and use and spending on health care varied between states. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Efficient simulation of pitch angle collisions in a 2+2-D Eulerian Vlasov code
NASA Astrophysics Data System (ADS)
Banks, Jeff; Berger, R.; Brunner, S.; Tran, T.
2014-10-01
Here we discuss pitch angle scattering collisions in the context of the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The collision operator is discretized using 4th order accurate conservative finite-differencing. The treatment of the Vlasov operator in phase-space uses an approach based on a minimally diffuse, fourth-order-accurate discretization (Banks and Hittinger, IEEE T. Plasma Sci. 39, 2198). The overall scheme is therefore discretely conservative and controls unphysical oscillations. Some details of the numerical scheme will be presented, and the implementation on modern highly concurrent parallel computers will be discussed. We will present results of collisional effects on linear and non-linear Landau damping of electron plasma waves (EPWs). In addition we will present initial results showing the effect of collisions on the evolution of EPWs in two space dimensions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LDRD program at LLNL under project tracking code 12-ERD-061.
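As a point of reference for the fourth-order discretization mentioned above, the standard fourth-order centered first-derivative stencil on a periodic grid looks as follows; this is a generic illustration, not LOKI's actual conservative scheme.

```python
import numpy as np

def ddx_fourth_order(f, dx):
    """Fourth-order-accurate centered first derivative on a periodic grid,
    using the stencil (f[i-2] - 8 f[i-1] + 8 f[i+1] - f[i+2]) / (12 dx)."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
err = np.abs(ddx_fourth_order(np.sin(x), x[1] - x[0]) - np.cos(x)).max()
print(err)   # small (fourth-order convergence as the grid is refined)
```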
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
NASA Astrophysics Data System (ADS)
Sison, Virgilio; Remillion, Monica
2017-10-01
Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
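The constructions above build on the classical Gray isometry from ℤ4 (with the Lee weight) to F_2^2 (with the Hamming weight). The sketch below shows that base map and checks the weight-preserving property on a toy codeword; the extended maps for ℤ4 + uℤ4 and ℤ8 + uℤ8 defined in the paper are not reproduced here.

```python
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # classical Gray map Z4 -> F2^2

def gray_image(codeword):
    """Binary image of a Z4 codeword under the Gray map (length doubles)."""
    return tuple(bit for symbol in codeword for bit in GRAY[symbol % 4])

def lee_weight(codeword):
    """Lee weight over Z4: wt(0)=0, wt(1)=wt(3)=1, wt(2)=2."""
    return sum(min(symbol % 4, 4 - symbol % 4) for symbol in codeword)

cw = (2, 1, 3, 0)
img = gray_image(cw)
# The Lee weight of cw equals the Hamming weight of its Gray image (both 4 here).
print(img, lee_weight(cw), sum(img))
```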
Pediatric severe sepsis in U.S. children's hospitals.
Balamuth, Fran; Weiss, Scott L; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Hayes, Katie; Gaieski, David; Hall, Matt; Shah, Samir S; Alpern, Elizabeth R
2014-11-01
To compare the prevalence, resource utilization, and mortality for pediatric severe sepsis identified using two established identification strategies. Observational cohort study from 2004 to 2012. Forty-four pediatric hospitals contributing data to the Pediatric Health Information Systems database. Children 18 years old or younger. We identified patients with severe sepsis or septic shock by using two International Classification of Diseases, 9th edition, Clinical Modification-based coding strategies: 1) combinations of International Classification of Diseases, 9th edition, Clinical Modification codes for infection plus organ dysfunction (combination code cohort); 2) International Classification of Diseases, 9th edition, Clinical Modification codes for severe sepsis and septic shock (sepsis code cohort). Outcomes included prevalence of severe sepsis, as well as hospital and ICU length of stay, and mortality. Outcomes were compared between the two cohorts examining aggregate differences over the study period and trends over time. The combination code cohort identified 176,124 hospitalizations (3.1% of all hospitalizations), whereas the sepsis code cohort identified 25,236 hospitalizations (0.45%), a seven-fold difference. Between 2004 and 2012, the prevalence of sepsis increased from 3.7% to 4.4% using the combination code cohort and from 0.4% to 0.7% using the sepsis code cohort (p < 0.001 for trend in each cohort). Length of stay (hospital and ICU) and costs decreased in both cohorts over the study period (p < 0.001). Overall, hospital mortality was higher in the sepsis code cohort than the combination code cohort (21.2% [95% CI, 20.7-21.8] vs 8.2% [95% CI, 8.0-8.3]). Over the 9-year study period, there was an absolute reduction in mortality of 10.9% (p < 0.001) in the sepsis code cohort and 3.8% (p < 0.001) in the combination code cohort. Prevalence of pediatric severe sepsis increased in the studied U.S. children's hospitals over the past 9 years, whereas resource utilization and mortality decreased. Epidemiologic estimates of pediatric severe sepsis varied up to seven-fold depending on the strategy used for case ascertainment.
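The two case-ascertainment strategies reduce to set intersections over a hospitalization's discharge codes. The sketch below flags one admission under each strategy using abbreviated, placeholder ICD-9-CM code lists; the study's full code lists and grouping rules are not reproduced.

```python
def classify_admission(icd9_codes):
    """Flag one hospitalization under the two coding strategies described in the
    abstract. Code lists are abbreviated placeholders, not the study's definitions."""
    codes = set(icd9_codes)
    explicit_sepsis = {"995.92", "785.52"}              # severe sepsis, septic shock
    infection = {"038.9", "486", "590.10"}              # e.g. septicemia, pneumonia, pyelonephritis
    organ_dysfunction = {"518.81", "584.9", "348.31"}   # resp. failure, acute kidney injury, encephalopathy
    return {
        "sepsis_code_cohort": bool(codes & explicit_sepsis),
        "combination_code_cohort": bool(codes & infection) and bool(codes & organ_dysfunction),
    }

print(classify_admission(["486", "518.81"]))    # combination cohort only
print(classify_admission(["995.92", "038.9"]))  # sepsis code cohort
```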
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
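The core kernel that pvsolve parallelizes is a Cholesky-based direct solve of the symmetric positive-definite stiffness system K u = f. A dense, serial sketch of that operation (using SciPy, purely for illustration; pvsolve itself is a parallel-vector solver) is:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cholesky_solve(k, f):
    """Solve K u = f for a symmetric positive-definite stiffness matrix K
    via Cholesky factorization followed by triangular solves."""
    factor = cho_factor(k)
    return cho_solve(factor, f)

# Small SPD test system standing in for a finite-element stiffness matrix.
rng = np.random.default_rng(1)
a = rng.random((5, 5))
k = a @ a.T + 5 * np.eye(5)
f = rng.random(5)
u = cholesky_solve(k, f)
print(np.allclose(k @ u, f))   # True
```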
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects
NASA Technical Reports Server (NTRS)
West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)
2000-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.
Langley Stability and Transition Analysis Code (LASTRAC) Version 1.2 User Manual
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2004-01-01
LASTRAC is a general-purpose, physics-based transition prediction code released by NASA for Laminar Flow Control studies and transition research. The design and development of the LASTRAC code is aimed at providing an engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. It was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory or the linear parabolized stability equations (PSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin-friction rise. This document describes the governing equations, numerical methods, code development, a detailed description of input/output parameters, and case studies for the current release of LASTRAC.
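The low-fidelity N-factor correlation mentioned above amounts to integrating the local spatial growth rate along the surface from the neutral point. The sketch below does that bookkeeping with a synthetic growth-rate distribution; an actual stability code such as LASTRAC would supply the growth rates, and the envelope over frequencies is omitted.

```python
import numpy as np

def n_factor(x, growth_rate):
    """Integrated amplification (N-factor) used in e^N transition correlation:
    N(x) is the cumulative integral of the amplified part of the local spatial
    growth rate along the surface coordinate x (trapezoidal rule)."""
    sigma = np.clip(growth_rate, 0.0, None)          # integrate only where the mode amplifies
    dn = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(dn)))

x = np.linspace(0.0, 1.0, 101)
growth = 20.0 * np.sin(np.pi * x) - 5.0              # toy growth-rate distribution, 1/m
print(n_factor(x, growth).max())
```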
Development of Tokamak Transport Solvers for Stiff Confinement Systems
NASA Astrophysics Data System (ADS)
St. John, H. E.; Lao, L. L.; Murakami, M.; Park, J. M.
2006-10-01
Leading transport models such as GLF23 [1] and MM95 [2] describe turbulent plasma energy, momentum and particle flows. In order to accommodate existing transport codes and associated solution methods, effective diffusivities have to be derived from these turbulent flow models. This can cause significant problems in predicting unique solutions. We have developed a parallel transport code solver, GCNMP, that can accommodate both flow based and diffusivity based confinement models by solving the discretized nonlinear equations using modern Newton, trust region, steepest descent and homotopy methods. We present our latest development efforts, including multiple dynamic grids, application of two-level parallel schemes, and operator splitting techniques that allow us to combine flow based and diffusivity based models in tokamak simulations. [1] R.E. Waltz, et al., Phys. Plasmas 4, 7 (1997). [2] G. Bateman, et al., Phys. Plasmas 5, 1793 (1998).
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
NASA Technical Reports Server (NTRS)
Habbal, Shadia Rifai
2005-01-01
Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored: (1) the role of proton temperature anisotropy in the expansion of the solar wind; (2) the role of plasma parameters at the coronal base in the formation of high-speed solar wind streams at mid-latitudes; (3) a three-fluid model of the slow solar wind; (4) the heating of coronal loops; and (5) a newly developed hybrid code for the study of ion cyclotron resonance in the solar wind.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the decoding complexity, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the error code word is flipped multiple times, and the candidates are searched in order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Code of Federal Regulations, 2010 CFR
2010-07-01
... offenders who are committed to prison for treatment and rehabilitation based on felony convictions under the... eligible for parole by statute, including offenders who have been returned to prison upon the revocation of... authority to return such offenders to prison upon an order of revocation. (D.C. Code 24-406.) [65 FR 45888...
Predicted blood glucose from insulin administration based on values from miscoded glucose meters.
Raine, Charles H; Pardo, Scott; Parkes, Joan Lee
2008-07-01
The proper use of many types of self-monitored blood glucose (SMBG) meters requires calibration to match strip code. Studies have demonstrated the occurrence and impact on insulin dose of coding errors with SMBG meters. This paper reflects additional analyses performed with data from Raine et al. (JDST, 2:205-210, 2007). It attempts to relate potential insulin dose errors to possible adverse blood glucose outcomes when glucose meters are miscoded. Five sets of glucose meters were used. Two sets of meters were autocoded and therefore could not be miscoded, and three sets required manual coding. Two of each set of manually coded meters were deliberately miscoded, and one from each set was properly coded. Subjects (n = 116) had finger stick blood glucose obtained at fasting, as well as at 1 and 2 hours after a fixed meal (Boost®; Novartis Medical Nutrition U.S., Basel, Switzerland). Deviations of meter blood glucose results from the reference method (YSI) were used to predict insulin dose errors and resultant blood glucose outcomes based on these deviations. Using insulin sensitivity data, it was determined that, given an actual blood glucose of 150-400 mg/dl, an error greater than +40 mg/dl would be required to calculate an insulin dose sufficient to produce a blood glucose of less than 70 mg/dl. Conversely, an error less than or equal to -70 mg/dl would be required to derive an insulin dose insufficient to correct an elevated blood glucose to less than 180 mg/dl. For miscoded meters, the estimated probability to produce a blood glucose reduction to less than or equal to 70 mg/dl was 10.40%. The corresponding probabilities for autocoded and correctly coded manual meters were 2.52% (p < 0.0001) and 1.46% (p < 0.0001), respectively. Furthermore, the errors from miscoded meters were large enough to produce a calculated blood glucose outcome less than or equal to 50 mg/dl in 42 of 833 instances. Autocoded meters produced zero (0) outcomes less than or equal to 50 mg/dl out of 279 instances, and correctly coded manual meters produced 1 of 416. Improperly coded blood glucose meters present the potential for insulin dose errors and resultant clinically significant hypoglycemia or hyperglycemia. Patients should be instructed and periodically reinstructed in the proper use of blood glucose meters, particularly for meters that require coding.
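The link between a meter bias and the resulting glucose outcome can be seen with a toy correction-dose model: if the dose is (measured − target)/CF and each unit of insulin lowers glucose by CF mg/dl, the outcome undershoots the target by exactly the meter error. The numbers below (target 120 mg/dl, CF 40 mg/dl per unit) are hypothetical assumptions; the sketch is not clinical guidance and not the paper's dosing model.

```python
def predicted_bg_after_correction(actual_bg, measured_bg, target_bg=120.0,
                                  correction_factor=40.0):
    """Toy correction-dose model: dose = (measured - target) / correction_factor,
    each unit assumed to lower glucose by correction_factor mg/dl. With a positive
    dose, the outcome misses the target by exactly the meter error. Illustrative only."""
    dose = max(0.0, (measured_bg - target_bg) / correction_factor)
    return actual_bg - dose * correction_factor

# A +50 mg/dl meter bias on an actual glucose of 160 mg/dl drives the outcome to 70 mg/dl.
print(predicted_bg_after_correction(actual_bg=160, measured_bg=210))  # 70.0
```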
DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information
ERIC Educational Resources Information Center
McCallister, Gary
2005-01-01
The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
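The purine/pyrimidine reading described above is a one-line mapping; the sketch below is a simple illustration of that classification, not material from the article itself.

```python
PURINES = {"A", "G"}        # double-ring bases
PYRIMIDINES = {"C", "T"}    # single-ring bases

def ring_binary(seq):
    """Encode a DNA sequence as 1 for purines (double ring) and 0 for pyrimidines."""
    return "".join("1" if base in PURINES else "0" for base in seq.upper())

print(ring_binary("ATGGCA"))  # 101101
```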
Dentists' perspectives on caries-related treatment decisions.
Gomez, J; Ellwood, R P; Martignon, S; Pretty, I A
2014-06-01
To assess the impact of patient risk status on Colombian dentists' caries-related treatment decisions for early to intermediate caries lesions (ICDAS code 2 to 4). A web-based questionnaire assessed dentists' views on the management of early/intermediate lesions. The questionnaire included questions on demographic characteristics, five clinical scenarios with randomised levels of caries risk, and two questions on different clinical and radiographic sets of images with different thresholds of caries. Questionnaires were completed by 439 dentists. For the two scenarios describing occlusal lesions ICDAS code 2, dentists chose to provide a preventive option in 63% and 60% of the cases. For the approximal lesion ICDAS code 2, 81% of the dentists chose to restore. The main findings of the binary logistic regression analysis for the clinical scenarios suggest that, for the ICDAS code 2 occlusal lesions, the odds of a high caries risk patient having restorations are higher than for a low caries risk patient. For the questions describing different clinical thresholds of caries, most dentists would restore at ICDAS code 2 (55%), and for the question showing different radiographic threshold images, 65% of dentists would intervene operatively at the inner half of enamel. No significant differences with respect to risk were found for these questions with the logistic regression. The results of this study indicate that Colombian dentists have not yet fully adopted non-invasive treatment for early caries lesions.
Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali
2014-01-01
Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing radiation energy deposition followed by production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that have considered both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, material, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor, or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were all <1%. The results validate the accuracy of the new code, which is a useful tool in scintillation detector optimization.
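ScintSim1 itself is a MATLAB code; purely to illustrate the kind of photon-history loop it runs, here is a deliberately crude one-dimensional Python caricature (reflective top face, photodetector at the bottom face, exponential bulk attenuation). Geometry, surface reflectivity models and cross-talk are all omitted, and the path-length bookkeeping after reflection is simplified.

```python
import math, random

def transport_photon(emission_depth, thickness, att_length, rng=random.random):
    """Follow one optical photon along the depth axis of a scintillator element.
    depth=0 is a perfectly reflective top face, depth=thickness is the
    photodetector face; att_length is the bulk attenuation length. All
    quantities share the same length unit (assumptions of this sketch)."""
    depth, direction = emission_depth, (1.0 if rng() < 0.5 else -1.0)
    while True:
        free_path = -att_length * math.log(rng())     # exponential path to next absorption
        boundary = thickness - depth if direction > 0 else depth
        if free_path < boundary:
            return "absorbed"                          # absorbed in the bulk
        if direction > 0:
            return "detected"                          # reached the photodetector face
        depth, direction = 0.0, 1.0                    # mirror reflection at the top face
        # (remaining path after reflection is re-sampled; acceptable for a sketch)

random.seed(0)
hits = sum(transport_photon(2.0, 5.0, 30.0) == "detected" for _ in range(10_000))
print(hits / 10_000)   # crude light-collection efficiency estimate
```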
NASA-VOF3D: A three-dimensional computer program for incompressible flows with free surfaces
NASA Astrophysics Data System (ADS)
Torrey, M. D.; Mjolsness, R. C.; Stein, L. R.
1987-07-01
Presented is the NASA-VOF3D three-dimensional, transient, free-surface hydrodynamics program. This three-dimensional extension of NASA-VOF2D will, in principle, permit treatment in full three-dimensional generality of the wide variety of applications that could be treated by NASA-VOF2D only within the two-dimensional idealization. In particular, it, like NASA-VOF2D, is specifically designed to calculate confined flows in a low g environment. The code is presently restricted to cylindrical geometry. The code is based on the fractional volume-of-fluid method and allows multiple free surfaces with surface tension and wall adhesion. It also has a partial cell treatment that allows curved boundaries and internal obstacles. This report provides a brief discussion of the numerical method, a code listing, and some sample problems.
Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2015-11-01
FAR-TECH, Inc. is developing a suite of full wave RF codes in hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For Tokamak applications a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pigni, M.T., E-mail: pignimt@ornl.gov; Francis, M.W.; Gauld, I.C.
A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that do not agree with the cumulative fission yields in the library as well as with experimental measurements. To address these issues, a comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron-induced fission for 235,238U and 239,241Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to compare the calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. Another important outcome of this work is the development of fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method to guarantee consistency between independent and cumulative yields along with the physical constraints on the independent yields. This work was motivated to improve the performance of the ENDF/B-VII.1 library for stable and long-lived fission products. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.
Development of an Efficient Approach to Perform Neutronics Simulations for Plutonium-238 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chandler, David; Ellis, Ronald James
Conversion of 238Pu decay heat into usable electricity is imperative to power National Aeronautics and Space Administration (NASA) deep space exploration missions; however, the current stockpile of 238Pu is diminishing and the quality is less than ideal. In response, the US Department of Energy and NASA have undertaken a program to reestablish a domestic 238Pu production program and a technology demonstration sub-project has been initiated. Neutronics simulations for 238Pu production play a vital role in this project because the results guide reactor safety-basis, target design and optimization, and post-irradiation examination activities. A new, efficient neutronics simulation tool written in Python was developed to evaluate, with the highest fidelity possible with approved tools, the time-dependent nuclide evolution and heat deposition rates in 238Pu production targets irradiated in the High Flux Isotope Reactor (HFIR). The Python Activation and Heat Deposition Script (PAHDS) was developed specifically for experiment analysis in HFIR and couples the MCNP5 and SCALE 6.1.3 software quality assured tools to take advantage of an existing high-fidelity MCNP HFIR model, the most up-to-date ORIGEN code, and the most up-to-date nuclear data. Three cycle simulations were performed with PAHDS implementing the ENDF/B-VII.0, ENDF/B-VII.1, and Hybrid Library GPD-Rev0 cross-section libraries. The 238Pu production results were benchmarked against VESTA-obtained results and the impact of the various cross-section libraries on the calculated metrics was assessed.
Development and verification of NRC's single-rod fuel performance codes FRAPCON-3 and FRAPTRAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.
1998-03-01
The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessed against a data base independent of the data base used for code benchmarking and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent data bases.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high rate protograph based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high data rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
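To make the erasure-channel setting concrete, here is a minimal sketch of the standard iterative (peeling) erasure decoder that LDPC codes admit; the tiny parity-check matrix and codeword are illustrative stand-ins, not one of the protograph codes designed in the paper.

```python
import numpy as np

def peel_decode(H, received, max_iter=50):
    """Recover erased bits (None) by repeatedly using checks with a single erasure."""
    y = list(received)
    for _ in range(max_iter):
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] is None]
            if len(erased) == 1:
                # Parity must be even, so the erased bit equals the sum of the known bits.
                y[erased[0]] = sum(y[i] for i in idx if y[i] is not None) % 2
                progress = True
        if not progress:
            break
    return y

# Toy (7,4) parity-check matrix; the codeword [1,0,1,1,0,1,0] satisfies H c^T = 0 (mod 2).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = [1, None, 1, 1, None, 1, 0]   # two bits erased by the channel
print(peel_decode(H, received))          # -> [1, 0, 1, 1, 0, 1, 0]
```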
Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa
2009-01-01
Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
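The final reconstruction stage described above is ordered-subset EM applied to the assembled sinograms; the sketch below shows the underlying ML-EM update (OSEM with a single subset) on a toy system matrix, which stands in for the parallel-hole projection geometry of the paper and is only meant to illustrate the iteration.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM reconstruction for Poisson data y ~ Poisson(A @ x); OSEM splits rows into subsets."""
    x = np.ones(A.shape[1])                      # flat initial image
    sens = A.sum(axis=0)                         # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x
        proj = np.where(proj > 0, proj, 1e-12)   # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 16))                         # toy system matrix: 40 sinogram bins, 4x4 image
x_true = 10.0 * rng.random(16)
y = rng.poisson(A @ x_true)                      # noisy projection data
print(np.round(mlem(A, y).reshape(4, 4), 1))
```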
Error-correcting pairs for a public-key cryptosystem
NASA Astrophysics Data System (ADS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient Public-Key encryption schemes with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size. Some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS codes, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these Public-Key Cryptosystems is not only based on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve efficiently an error-correcting pair. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error correcting pair.
Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet
NASA Technical Reports Server (NTRS)
1997-01-01
Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a high amount of computational resources, while significantly increasing the coding efficiency. In spite of the fact that both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms based on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm, which is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in a bit-rate overhead of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
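As an illustration of the pattern-based integer-sample search family discussed above, the sketch below implements a simple greedy diamond-style search with a shrinking step, a stand-in for the combined predictive/pattern/early-termination scheme of the paper; the test image and block positions are made up for the demo.

```python
import numpy as np

def sad(block, ref, x, y):
    h, w = block.shape
    return np.abs(ref[y:y + h, x:x + w] - block).sum()

def diamond_search(block, ref, x0, y0, max_step=8):
    """Greedy pattern-based integer-sample search around a predictor (x0, y0)."""
    h, w = block.shape
    best = (x0, y0, sad(block, ref, x0, y0))
    step = max_step
    while step >= 1:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            x, y = best[0] + dx, best[1] + dy
            if 0 <= x <= ref.shape[1] - w and 0 <= y <= ref.shape[0] - h:
                cost = sad(block, ref, x, y)
                if cost < best[2]:
                    best, improved = (x, y, cost), True
        if not improved:
            step //= 2          # shrink the search pattern; terminates when step reaches 0
    return best

# Smooth test image so the SAD surface is unimodal; the block is an exact copy taken
# at (x, y) = (30, 20), and the predictor starts two samples away from the true offset.
yy, xx = np.mgrid[0:64, 0:64]
ref = 255.0 * np.exp(-((xx - 34) ** 2 + (yy - 24) ** 2) / 200.0)
block = ref[20:28, 30:38]
print(diamond_search(block, ref, 28, 22))   # expected to converge to (30, 20, 0.0)
```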
Zhang, Zhi-Hai; Gao, Ling-Xiao; Guo, Yuan-Jun; Wang, Wei; Mo, Xiang-Xia
2012-12-01
Template selection is essential in the application of a digital micromirror spectrometer. The theoretically best coding matrix, the H-matrix, is not widely used because it is acyclic, complex to encode, and difficult to realise. The noise improvement of the best practical S-matrix is slightly inferior to that of the H-matrix. We therefore designed a new type of complementary S-matrix. By studying its noise improvement theory, the algorithm is shown to combine the advantages of both the H-matrix and the S-matrix. Experiments show that the SNR can be increased by a factor of 2.05 relative to the S-matrix template.
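For context on why multiplexing masks raise the SNR, here is a small sketch that builds a conventional S-matrix from a Sylvester Hadamard matrix, simulates multiplexed measurements, and inverts them in closed form; it illustrates the classic S-matrix advantage only, not the complementary S-matrix construction proposed in the paper.

```python
import numpy as np

def s_matrix(order_plus_1):
    """S-matrix of order n = order_plus_1 - 1 from a Sylvester Hadamard matrix (power of two)."""
    H = np.array([[1]])
    while H.shape[0] < order_plus_1:
        H = np.kron(np.array([[1, 1], [1, -1]]), H)
    return (H[1:, 1:] == -1).astype(float)   # drop the all-ones row/column, map -1 -> 1, +1 -> 0

rng = np.random.default_rng(0)
n = 63                                       # number of spectral channels / mask elements
S = s_matrix(n + 1)
x = 100.0 * rng.random(n)                    # "true" spectrum
sigma = 1.0                                  # detector noise per measurement

y = S @ x + rng.normal(0.0, sigma, n)        # multiplexed measurements through the S mask
x_hat = (2.0 / (n + 1)) * (2.0 * S.T - 1.0) @ y   # closed-form S-matrix inverse
x_scan = x + rng.normal(0.0, sigma, n)       # one-channel-at-a-time reference scan

print("multiplexed RMSE:", round(float(np.sqrt(np.mean((x_hat - x) ** 2))), 3))
print("direct-scan RMSE:", round(float(np.sqrt(np.mean((x_scan - x) ** 2))), 3))
```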
NASA Technical Reports Server (NTRS)
Hague, D. S.; Rozendaal, H. L.
1977-01-01
A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
Improvements of the particle-in-cell code EUTERPE for petascaling machines
NASA Astrophysics Data System (ADS)
Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.
2011-09-01
In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.
NASA Technical Reports Server (NTRS)
Gong, J.; Ozdemir, T.; Volakis, J; Nurnberger, M.
1995-01-01
Year 1 progress can be characterized with four major achievements which are crucial toward the development of robust, easy to use antenna analysis code on doubly conformal platforms. (1) A new FEM code was developed using prismatic meshes. This code is based on a new edge based distorted prism and is particularly attractive for growing meshes associated with printed slot and patch antennas on doubly conformal platforms. It is anticipated that this technology will lead to interactive, simple to use codes for a large class of antenna geometries. Moreover, the codes can be expanded to include modeling of the circuit characteristics. An attached report describes the theory and validation of the new prismatic code using reference calculations and measured data collected at the NASA Langley facilities. The agreement between the measured and calculated data is impressive even for the coated patch configuration. (2) A scheme was developed for improved feed modeling in the context of FEM. A new approach based on the voltage continuity condition was devised and successfully tested in modeling coax cables and aperture fed antennas. An important aspect of this new feed modeling approach is the ability to completely separate the feed and antenna mesh regions. In this manner, different elements can be used in each of the regions leading to substantially improved accuracy and meshing simplicity. (3) A most important development this year has been the introduction of the perfectly matched interface (PMI) layer for truncating finite element meshes. So far the robust boundary integral method has been used for truncating the finite element meshes. However, this approach is not suitable for antennas on nonplanar platforms. The PMI layer is a lossy anisotropic absorber with zero reflection at its interface. (4) We were able to interface our antenna code FEMA_CYL (for antennas on cylindrical platforms) with a standard high frequency code. This interface was achieved by first generating equivalent magnetic currents across the antenna aperture using the FEM code. These currents were employed as the sources in the high frequency code.
Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Jaradat, Safwan Qasim Mohammad
Molten salt reactor (MSR) is one of six reactors selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is a MSR concept based on the thorium fuel cycle. LFTR uses liquid fluoride salts as a nuclear fuel. It uses 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salt of these nuclides is dissolved in a mixed carrier salt of lithium and beryllium (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. In order to achieve this objective, the applicability of the Monte Carlo N-Particle Transport Code (MCNP) to MSR modeling was verified. Then, a prescription for a conceptual small thermal LFTR and relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The MCNP code was used to study the reactor physics characteristics for the FUJI-U3 reactor. The results were then compared with the results obtained from the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2. The results were comparable with each other. Based on the results, MCNP was found to be a reliable code to model a small thermal LFTR and study all the related reactor physics characteristics. The results of this study were promising and successful in demonstrating a prefatory small commercial LFTR design. The outcome of using a small core reactor with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF - BeF2 - ThF4 - UF4 with a (233U/232Th) = 2.01 % was the candidate fuel for this reactor core.
Comparison of Engine Cycle Codes for Rocket-Based Combined Cycle Engines
NASA Technical Reports Server (NTRS)
Waltrup, Paul J.; Auslender, Aaron H.; Bradford, John E.; Carreiro, Louis R.; Gettinger, Christopher; Komar, D. R.; McDonald, J.; Snyder, Christopher A.
2002-01-01
This paper summarizes the results from a one-day workshop on Rocket-Based Combined Cycle (RBCC) Engine Cycle Codes held in Monterey, CA, in November of 2000 at the 2000 JANNAF JPM with the authors as primary participants. The objectives of the workshop were to discuss and compare the merits of existing Rocket-Based Combined Cycle (RBCC) engine cycle codes being used by government and industry to predict RBCC engine performance and interpret experimental results. These merits included physical and chemical modeling, accuracy, and user friendliness. The ultimate purpose of the workshop was to identify the best codes for analyzing RBCC engines and to document any potential shortcomings, not to demonstrate the merits or deficiencies of any particular engine design. Five cases representative of the operating regimes of typical RBCC engines were used as the basis of these comparisons. These included Mach 0 sea level static and Mach 1.0 and Mach 2.5 Air-Augmented-Rocket (AAR), Mach 4 subsonic combustion ramjet or dual-mode scramjet, and Mach 8 scramjet operating modes. Specification of a generic RBCC engine geometry and concomitant component operating efficiencies, bypass ratios, fuel/oxidizer/air equivalence ratios and flight dynamic pressures were provided. The engine included an air inlet, isolator duct, axial rocket motor/injector, axial wall fuel injectors, diverging combustor, and exit nozzle. Gaseous hydrogen was used as the fuel, with the rocket portion of the system using a gaseous H2/O2 propellant system to avoid cryogenic issues. The results of the workshop, even after post-workshop adjudication of differences, were surprising. They showed that the codes predicted essentially the same performance at the Mach 0 and 1 conditions, but progressively diverged from a common value (for example, for fuel specific impulse, Isp) as the flight Mach number increased, with the largest differences at Mach 8. The example cases and results are compared and discussed in this paper.
Comparative analysis of design codes for timber bridges in Canada, the United States, and Europe
James Wacker; James (Scott) Groenier
2010-01-01
The United States recently completed its transition from the allowable stress design code to the load and resistance factor design (LRFD) reliability-based code for the design of most highway bridges. For an international perspective on the LRFD-based bridge codes, a comparative analysis is presented: a study addressed national codes of the United States, Canada, and...
Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime
2016-01-01
It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem and theoretically the solutions are not sufficiently guaranteed, the parameters seem to be consistent with those of neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which would also create a basis for natural intelligence.
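A minimal leaky integrate-and-fire network of the kind used as the simulation substrate above might look like the sketch below; the sizes, weights, noise levels and refractory period are arbitrary illustration values, and the code only produces the spike raster from which a code spectrum would subsequently be computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 100, 1000, 1.0              # neurons, time steps, ms per step
tau, v_thresh, v_reset = 20.0, 1.0, 0.0    # membrane time constant (ms) and thresholds
refractory = 3                             # refractory period in steps

# Sparse random recurrent weights (the "network configuration" / weight distribution).
W = (rng.random((n, n)) < 0.1) * rng.normal(0.15, 0.05, (n, n))
np.fill_diagonal(W, 0.0)

v = np.zeros(n)
last_spike = np.full(n, -refractory)
spikes = np.zeros((steps, n), dtype=bool)

for t in range(steps):
    drive = rng.normal(0.06, 0.02, n)                        # noisy external drive (fluctuations)
    drive += (W @ spikes[t - 1]) if t > 0 else 0.0           # recurrent input from previous step
    active = (t - last_spike) >= refractory
    v[active] += (-v[active] / tau + drive[active]) * dt     # leaky integration
    fired = active & (v >= v_thresh)
    spikes[t] = fired
    v[fired] = v_reset
    last_spike[fired] = t

print("mean firing rate (spikes per neuron per step):", round(float(spikes.mean()), 4))
```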
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
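A minimal sketch of the third surrogate approach (bootstrap imputation of condition status from model-derived probabilities) is given below; the cohort and the probability distribution are synthetic, and the real study imputed status within a multivariable modelling framework rather than simply estimating prevalence.

```python
import numpy as np

def bootstrap_prevalence(p, n_boot=200, seed=0):
    """Impute condition status from model-derived probabilities across bootstrap replicates."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(p), len(p))        # resample patients with replacement
        status = rng.random(len(p)) < p[idx]         # impute disease status from probabilities
        estimates[b] = status.mean()
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Synthetic cohort: model-derived probabilities of severe renal failure (made-up distribution).
rng = np.random.default_rng(1)
p = rng.beta(0.5, 8.0, 50_000)                       # skewed toward low risk
mean_prev, ci = bootstrap_prevalence(p)
print(f"estimated prevalence {mean_prev:.3f}, 95% interval {np.round(ci, 3)}")
```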
Uncertainty Assessment of Hypersonic Aerothermodynamics Prediction Capability
NASA Technical Reports Server (NTRS)
Bose, Deepak; Brown, James L.; Prabhu, Dinesh K.; Gnoffo, Peter; Johnston, Christopher O.; Hollis, Brian
2011-01-01
The present paper provides the background of a focused effort to assess uncertainties in predictions of heat flux and pressure in hypersonic flight (airbreathing or atmospheric entry) using state-of-the-art aerothermodynamics codes. The assessment is performed for four mission relevant problems: (1) shock turbulent boundary layer interaction on a compression corner, (2) shock turbulent boundary layer interaction due to an impinging shock, (3) high-mass Mars entry and aerocapture, and (4) high speed return to Earth. A validation based uncertainty assessment approach with reliance on subject matter expertise is used. A code verification exercise with code-to-code comparisons and comparisons against well established correlations is also included in this effort. A thorough review of the literature in search of validation experiments is performed, which identified a scarcity of ground based validation experiments at hypersonic conditions. In particular, a shortage of usable experimental data at flight-like enthalpies and Reynolds numbers is found. The uncertainty was quantified using metrics that measured the discrepancy between model predictions and experimental data. The discrepancy data are statistically analyzed and investigated for physics based trends in order to define a meaningful quantified uncertainty. The detailed uncertainty assessment of each mission relevant problem is found in the four companion papers.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and are placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; and 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a framework of compressive sensing. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
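The encoder side described above reduces to a prefilter-then-decimate pipeline; the sketch below applies a local random binary convolution kernel followed by polyphase down-sampling on a toy image. The kernel size, normalization and down-sampling factor are illustrative choices, and the sparsity-based soft decoder is not shown.

```python
import numpy as np

def encode(image, factor=2, ksize=3, seed=0):
    """Random-binary prefilter followed by polyphase down-sampling (encoder side only)."""
    rng = np.random.default_rng(seed)
    kernel = rng.integers(0, 2, (ksize, ksize)).astype(float)
    kernel /= kernel.sum() if kernel.sum() else 1.0           # keep output in the pixel range
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    filtered = np.empty((h, w))
    for i in range(h):                                        # plain 2-D sliding-window filtering
        for j in range(w):
            filtered[i, j] = (padded[i:i + ksize, j:j + ksize] * kernel).sum()
    return filtered[::factor, ::factor]                       # keep one polyphase component

image = np.add.outer(np.arange(16.0), np.arange(16.0))        # toy 16x16 "image"
measurements = encode(image)
print(measurements.shape)   # (8, 8): local random measurements laid out as a conventional image
```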
Aerodynamic Analysis of the M33 Projectile Using the CFX Code
2011-12-01
The M33 projectile has been analyzed using the ANSYS CFX code, which is based on the numerical solution of the full Navier-Stokes equations. Simulation data were obtained using the CFX code. The ANSYS CFX code is a commercial CFD program used to simulate fluid flow in a variety of applications such as gas turbines.
Highly selective BSA imprinted polyacrylamide hydrogels facilitated by a metal-coding MIP approach.
El-Sharif, H F; Yapati, H; Kalluru, S; Reddy, S M
2015-12-01
We report the fabrication of metal-coded molecularly imprinted polymers (MIPs) using hydrogel-based protein imprinting techniques. A Co(II) complex was prepared using (E)-2-((2 hydrazide-(4-vinylbenzyl)hydrazono)methyl)phenol; along with iron(III) chloroprotoporphyrin (Hemin), vinylferrocene (VFc), zinc(II) protoporphyrin (ZnPP) and protoporphyrin (PP), these complexes were introduced into the MIPs as co-monomers for metal-coding of non-metalloprotein imprints. Results indicate a 66% enhancement for bovine serum albumin (BSA) protein binding capacities (Q, mg/g) via metal-ion/ligand exchange properties within the metal-coded MIPs. Specifically, Co(II)-complex-based MIPs exhibited 92 ± 1% specific binding with Q values of 5.7 ± 0.45 mg BSA/g polymer and imprinting factors (IF) of 14.8 ± 1.9 (MIP/non-imprinted (NIP) control). The selectivity of our Co(II)-coded BSA MIPs were also tested using bovine haemoglobin (BHb), lysozyme (Lyz), and trypsin (Tryp). By evaluating imprinting factors (K), each of the latter proteins was found to have lower affinities in comparison to cognate BSA template. The hydrogels were further characterised by thermal analysis and differential scanning calorimetry (DSC) to assess optimum polymer composition. The development of hydrogel-based molecularly imprinted polymer (HydroMIPs) technology for the memory imprinting of proteins and for protein biosensor development presents many possibilities, including uses in bio-sample clean-up or selective extraction, replacement of biological antibodies in immunoassays and biosensors for medicine and the environment. Biosensors for proteins and viruses are currently expensive to develop because they require the use of expensive antibodies. Because of their biomimicry capabilities (and their potential to act as synthetic antibodies), HydroMIPs potentially offer a route to the development of new low-cost biosensors. Herein, a metal ion-mediated imprinting approach was employed to metal-code our hydrogel-based MIPs for the selective recognition of bovine serum albumin (BSA). Specifically, Co(II)-complex based MIPs exhibited a 66% enhancement (in comparison to our normal MIPs) exhibiting 92 ± 1% specific binding with Q values of 5.7 ± 0.45 mg BSA/g polymer and imprinting factors (IF) of 14.8 ± 1.9 (MIP/ non-imprinted (NIP) control). The proposed metal-coded MIPs for protein recognition are intended to lead to unprecedented improvement in MIP selectivity and for future biosensor development that rely on an electrochemical redox processes. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Wang, Fangnian; Deeney, Jude T.; Denis, Gerald V.
2014-01-01
Disturbed body energy balance can lead to obesity and obesity-driven diseases such as Type 2 diabetes, which have reached an epidemic level. Evidence indicates that obesity induced inflammation is a major cause of insulin resistance and Type 2 diabetes. Environmental factors, such as nutrients, affect body energy balance through epigenetic or chromatin-based mechanisms. As a bromodomain and external domain family transcription regulator, Brd2 regulates expression of many genes through interpretation of chromatin codes, and participates in the regulation of body energy balance and immune function. In the severely obese state, Brd2 knockdown in mice prevented obesity-induced inflammatory responses, protected animals from Type 2 diabetes, and thus uncoupled obesity from diabetes. Brd2 provides an important model for investigation of the function of transcription regulators and the development of obesity and diabetes; it also provides a possible target to treat obesity and diabetes through modulation of the function of a chromatin code reader. PMID:23374712
Muller, Sara; Hider, Samantha L; Raza, Karim; Stack, Rebecca J; Hayward, Richard A; Mallen, Christian D
2015-01-01
Objective Rheumatoid arthritis (RA) is a multisystem, inflammatory disorder associated with increased levels of morbidity and mortality. While much research into the condition is conducted in the secondary care setting, routinely collected primary care databases provide an important source of research data. This study aimed to update an algorithm to define RA that was previously developed and validated in the General Practice Research Database (GPRD). Methods The original algorithm consisted of two criteria. Individuals meeting at least one were considered to have RA. Criterion 1: ≥1 RA Read code and a disease modifying antirheumatic drug (DMARD) without an alternative indication. Criterion 2: ≥2 RA Read codes, with at least one ‘strong’ code and no alternative diagnoses. Lists of codes for consultations and prescriptions were obtained from the authors of the original algorithm where these were available, or compiled based on the original description and clinical knowledge. 4161 people with a first Read code for RA between 1 January 2010 and 31 December 2012 were selected from the Clinical Practice Research Datalink (CPRD, successor to the GPRD), and the criteria applied. Results Code lists were updated for the introduction of new Read codes and biological DMARDs. 3577/4161 (86%) of people met the updated algorithm for RA, compared to 61% in the original development study. 62.8% of people fulfilled both Criterion 1 and Criterion 2. Conclusions Those wishing to define RA in the CPRD, should consider using this updated algorithm, rather than a single RA code, if they wish to identify only those who are most likely to have RA. PMID:26700281
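A schematic of how the two criteria combine might look like the sketch below; the Read codes, drug names and record structure are placeholders invented for illustration, not the published code lists.

```python
# Sketch of the two-criterion RA algorithm with hypothetical code and drug lists.
STRONG_RA_CODES = {"N040."}                      # hypothetical 'strong' RA Read codes
RA_CODES = {"N040.", "N0400"}                    # hypothetical RA Read codes
DMARDS = {"methotrexate", "sulfasalazine"}       # hypothetical DMARD prescriptions

def meets_algorithm(read_codes, drugs, dmard_alt_indication=False, alternative_diagnosis=False):
    ra_codes = [c for c in read_codes if c in RA_CODES]   # recorded RA codes (may repeat)
    criterion1 = bool(ra_codes) and bool(DMARDS & drugs) and not dmard_alt_indication
    criterion2 = (len(ra_codes) >= 2
                  and any(c in STRONG_RA_CODES for c in ra_codes)
                  and not alternative_diagnosis)
    return criterion1 or criterion2

print(meets_algorithm(["N040.", "N0400"], {"methotrexate"}))   # True (meets both criteria)
print(meets_algorithm(["N0400"], set()))                       # False
```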
NASA Astrophysics Data System (ADS)
Hori, T.; Ichimura, T.
2015-12-01
Here we propose a system for monitoring and forecasting of crustal activity, especially great interplate earthquake generation and its preparation processes in subduction zones. Basically, we model great earthquake generation as frictional instability on the subducting plate boundary, so spatio-temporal variation in slip velocity on the plate interface should be monitored and forecasted. Although we can obtain continuous dense surface deformation data on land and partly at the sea bottom, the data obtained are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) a calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop a highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Actually, Ichimura et al. (2014, SC14) has developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 10.7 BlnDOF x 30 K time-step. Ichimura et al. (2013, GJI) has developed a high fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry for 1 km mesh. Further, for inverse analyses, Errol et al. (2012, BSSA) has developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, this meeting) has improved the high fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half space model, we will apply it to the 3D heterogeneous structure with the high fidelity FE model.
Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1989-01-01
Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video, and rapid evolution of coding algorithm and VLSI technology. Video transmission will be part of the broadband-integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independency, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.
A Study to Determine the Need for a Standard Limiting the Horsepower of Recreational Boats.
1978-09-01
A description of the data base and an explanation of the computer model designed to aid in organizing and analyzing the data are presented with the results of the analyses.
NASA Technical Reports Server (NTRS)
Gurman, Joseph (Technical Monitor); Habbal, Shadia Rifai
2004-01-01
Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored (1) the role of proton temperature anisotropy in the expansion of the solar wind, (2) the role of plasma parameters at the coronal base in the formation of high speed solar wind streams at mid-latitudes, and (3) the heating of coronal loops.
Automatic Registration of Scanned Satellite Imagery with a Digital Map Data Base.
1980-11-01
define the corresponding map window (mW) (procedure TRANSFORMWINDOW MAP) ... to a LIST-item). LIN := (code 2621431; pointer LA to the line list; pointer PR1; pointer PR2), LIST := (pointer PL to a LIN-item; pointer ... items where PL-pointers are replaced by a code for the beginning (the number 262140 in our case) and end (the number 26241). Figure 3.2 illustrates a
Acceptance criteria for welds in ASTM A106 grade B steel pipe and plate
NASA Technical Reports Server (NTRS)
Hudson, C. M.; Wright, D. B., Jr.; Leis, B. N.
1986-01-01
Based on the RECERT Program findings, NASA-Langley funded a fatigue study of code-unacceptable welds. Usage curves were developed which were based on the structural integrity of the welds. The details of this study are presented in NASA CR-178114. The information presented is a condensation and reinterpretation of the information in NASA CR-178114. This condensation and reinterpretation generated usage curves for welds having: (1) indications 0.20-inch deep by 0.40-inch long, and (2) indications 0.195-inch deep by 8.4-inch long. These curves were developed using the procedures used in formulating the design curves in Section VIII, Division 2 of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code.
Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code
NASA Astrophysics Data System (ADS)
Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; Manchanda, R.; Banerjee, S.; Ramaiya, N.; Sharma, Deepti; Srinivasan, R.; Stotler, D. P.; Aditya Team
2017-08-01
Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration. This code has been successfully implemented for the Aditya tokamak with its limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from the dissociation processes. The Hα spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and Hα emissivity profiles is analysed for various edge temperature profiles, and it is found that there is not much change in the Hα emission at the plasma edge with the variation of edge temperature (7-40 eV).
Investigation of neutral particle dynamics in Aditya tokamak plasma with DEGAS2 code
Dey, Ritu; Ghosh, Joydeep; Chowdhuri, M. B.; ...
2017-06-09
Neutral particle behavior in the Aditya tokamak, which has a circular poloidal ring limiter at one particular toroidal location, has been investigated using the DEGAS2 code. The code is based on Monte Carlo algorithms and is mainly used in tokamaks with a divertor configuration. This code has been successfully implemented for the Aditya tokamak with its limiter configuration. The penetration of neutral hydrogen atoms is studied with various atomic and molecular contributions, and it is found that the maximum contribution comes from the dissociation processes. The Hα spectrum is also simulated and matched with the experimental one. The dominant contribution, around 64%, comes from molecular dissociation processes, and the neutral particles generated by those processes have an energy of ~2.0 eV. Furthermore, the variation of the neutral hydrogen density and Hα emissivity profiles is analysed for various edge temperature profiles, and it is found that there is not much change in the Hα emission at the plasma edge with the variation of edge temperature (7 to 40 eV).
NASA Astrophysics Data System (ADS)
Panuzzo, P.; Li, J.; Caux, E.
2012-09-01
The Herschel Interactive Processing Environment (HIPE) was developed by the European Space Agency (ESA), in collaboration with NASA and the Herschel Instrument Control Centres, to provide the astronomical community with a complete environment to process and analyze the data gathered by the Herschel Space Observatory. One of the most important components of HIPE is the plotting system (named PlotXY) that we present here. With PlotXY it is possible to easily produce high-quality, publication-ready 2D plots. It provides a long list of features, with fully configurable components, and interactive zooming. The entire code of HIPE is written in Java and is open source, released under the GNU Lesser General Public License version 3. A new version of PlotXY is being developed to be independent of the HIPE code base; it is available to the software development community for inclusion in other projects at the URL http://code.google.com/p/jplot2d/.
Zou, Ding; Djordjevic, Ivan B
2016-09-05
In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.
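Rate adaptation by shortening amounts to simple bookkeeping: shortened information bits are fixed to zero, encoded, and not transmitted, so the parity budget is spread over fewer information bits. The sketch below shows that arithmetic for a hypothetical rate-0.8 mother code; the block lengths are made-up values, not those of the FPGA implementation.

```python
def shortened_params(n, k, s):
    """Shorten s info bits: they are fixed to zero, encoded, and not transmitted."""
    n_s, k_s = n - s, k - s
    rate = k_s / n_s
    overhead = (n - k) / k_s        # parity relative to transmitted information bits
    return n_s, k_s, round(rate, 3), round(overhead, 3)

# Hypothetical rate-0.8 mother code; increasing the shortening raises the overhead.
n, k = 20_000, 16_000
for s in (0, 2_000, 4_000, 6_000):
    print(shortened_params(n, k, s))
```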
Patrick, Hannah; Sims, Andrew; Burn, Julie; Bousfield, Derek; Colechin, Elaine; Reay, Christopher; Alderson, Neil; Goode, Stephen; Cunningham, David; Campbell, Bruce
2013-03-01
New devices and procedures are often introduced into health services when the evidence base for their efficacy and safety is limited. The authors sought to assess the availability and accuracy of routinely collected Hospital Episodes Statistics (HES) data in the UK and their potential contribution to the monitoring of new procedures. Four years of HES data (April 2006-March 2010) were analysed to identify episodes of hospital care involving a sample of 12 new interventional procedures. HES data were cross checked against other relevant sources including national or local registers and manufacturers' information. HES records were available for all 12 procedures during the entire study period. Comparative data sources were available from national (5), local (2) and manufacturer (2) registers. Factors found to affect comparisons were miscoding, alternative coding and inconsistent use of subsidiary codes. The analysis of provider coverage showed that HES is sensitive at detecting centres which carry out procedures, but specificity is poor in some cases. Routinely collected HES data have the potential to support quality improvements and evidence-based commissioning of devices and procedures in health services but achievement of this potential depends upon the accurate coding of procedures.
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience in resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
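A toy version of the shortest-path formulation might look like the sketch below: views form a chain, every edge between two candidate reference positions pays the reference coding rate plus a distortion proxy for the views predicted in between, and dynamic programming returns the optimal reference set. The similarity metric, rate value and prediction model are simplifications invented for the illustration.

```python
import numpy as np

def select_reference_views(similarity, r_ref, lam=1.0):
    """Choose reference views by a shortest path over a chain of views.

    Edge (i, j) means views i and j are references and the views in between are
    predicted from view i; its cost is the reference coding rate plus the
    (lambda-weighted) predicted distortion, taken here as 1 - similarity.
    """
    n = similarity.shape[0]
    cost = np.full(n, np.inf)
    prev = np.full(n, -1)
    cost[0] = r_ref                              # the first view is always a reference
    for j in range(1, n):
        for i in range(j):
            d = sum(1.0 - similarity[i, v] for v in range(i + 1, j))
            c = cost[i] + r_ref + lam * d
            if c < cost[j]:
                cost[j], prev[j] = c, i
    refs, v = [], n - 1                          # backtrack the optimal reference positions
    while v != -1:
        refs.append(v)
        v = prev[v]
    return sorted(refs), cost[n - 1]

# Toy similarity between 8 views: neighbours are similar, distant views are not.
idx = np.arange(8)
similarity = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
print(select_reference_views(similarity, r_ref=0.8))
```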
Drug overdose surveillance using hospital discharge data.
Slavova, Svetla; Bunn, Terry L; Talbert, Jeffery
2014-01-01
We compared three methods for identifying drug overdose cases in inpatient hospital discharge data on their ability to classify drug overdoses by intent and drug type(s) involved. We compared three International Classification of Diseases, Ninth Revision, Clinical Modification code-based case definitions using Kentucky hospital discharge data for 2000-2011. The first definition (Definition 1) was based on the external-cause-of-injury (E-code) matrix. The other two definitions were based on the Injury Surveillance Workgroup on Poisoning (ISW7) consensus recommendations for national and state poisoning surveillance using the principal diagnosis or first E-code (Definition 2) or any diagnosis/E-code (Definition 3). Definition 3 identified almost 50% more drug overdose cases than did Definition 1. The increase was largely due to cases with a first-listed E-code describing a drug overdose but a principal diagnosis that was different from drug overdose (e.g., mental disorders, or respiratory or circulatory system failure). Regardless of the definition, more than 53% of the hospitalizations were self-inflicted drug overdoses; benzodiazepines were involved in about 30% of the hospitalizations. The 2011 age-adjusted drug overdose hospitalization rate in Kentucky was 146/100,000 population using Definition 3 and 107/100,000 population using Definition 1. The ISW7 drug overdose definition using any drug poisoning diagnosis/E-code (Definition 3) is potentially the highest sensitivity definition for counting drug overdose hospitalizations, including by intent and drug type(s) involved. As the states enact policies and plan for adequate treatment resources, standardized drug overdose definitions are critical for accurate reporting, trend analysis, policy evaluation, and state-to-state comparison.
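A schematic comparison of the three case definitions could be coded as below; the ICD-9-CM code ranges used here are rough illustrations only (the actual ISW7 and E-code-matrix definitions enumerate specific code lists), and the sample record is fabricated.

```python
# Rough sketch of the three drug-overdose case definitions on one discharge record.
def is_poisoning_dx(code):
    return code.startswith("96") or code.startswith("97")        # 960-979: poisoning by drugs

def is_drug_overdose_ecode(code):
    return code.startswith("E85") or code.startswith("E950")     # illustrative E-code ranges

def definition1(record):
    """E-code-matrix style: any drug-overdose external-cause code."""
    return any(is_drug_overdose_ecode(c) for c in record["ecodes"])

def definition2(record):
    """ISW7 style: principal diagnosis or first-listed E-code."""
    ecodes = record["ecodes"]
    return is_poisoning_dx(record["dx"][0]) or bool(ecodes and is_drug_overdose_ecode(ecodes[0]))

def definition3(record):
    """ISW7 style: any diagnosis or any E-code."""
    return any(is_poisoning_dx(c) for c in record["dx"]) or definition1(record)

# Mental-disorder principal diagnosis with a secondary benzodiazepine poisoning code:
record = {"dx": ["296.20", "969.4"], "ecodes": []}
print(definition1(record), definition2(record), definition3(record))   # False False True
```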
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
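To make the base/exponent-matrix machinery concrete, the sketch below expands a small exponent matrix into a full QC-LDPC parity-check matrix by replacing each entry with a cyclically shifted identity block (or a zero block for -1 entries); the matrix and lifting size are illustrative and unrelated to the jointly designed codes of the paper.

```python
import numpy as np

def expand_qc_ldpc(exponents, z):
    """Expand an exponent matrix into a QC-LDPC parity-check matrix.

    Each entry e >= 0 becomes the z x z identity cyclically shifted by e columns;
    an entry of -1 becomes the z x z all-zero block.
    """
    block_rows = []
    for row in exponents:
        blocks = [np.zeros((z, z), dtype=int) if e < 0
                  else np.roll(np.eye(z, dtype=int), e, axis=1)
                  for e in row]
        block_rows.append(np.hstack(blocks))
    return np.vstack(block_rows)

# Small illustrative exponent matrix (not one from the paper), lifting size z = 4.
E = [[0, 1, -1, 2],
     [2, -1, 3, 0]]
H = expand_qc_ldpc(E, 4)
print(H.shape)   # (8, 16)
print(H)
```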
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse, (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework for formal representation of ICD10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using formally described ICD10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD10. Then we expanded the structured ICD10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model used to describe the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD11 revision.
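The following toy sketch illustrates the general idea of matching a formally represented disease name against semantic-link descriptions of ICD categories. It is a simplification with made-up link types and a naive scoring rule, not the paper's 74-link information model or its 'Coding Principle'.

```python
# Illustrative categories with a few hypothetical semantic links each
CATEGORIES = {
    "J18.9": {"pathology": "pneumonia", "site": "lung"},
    "J20.9": {"pathology": "bronchitis", "site": "bronchus", "course": "acute"},
    "N10":   {"pathology": "nephritis", "site": "kidney", "course": "acute"},
}

def candidate_codes(disease, categories):
    """Rank ICD categories by the number of semantic links they share with the
    structured disease-name representation, rejecting outright contradictions."""
    ranked = []
    for code, links in categories.items():
        if any(k in disease and disease[k] != v for k, v in links.items()):
            continue                              # a contradicting link value
        score = sum(1 for k, v in links.items() if disease.get(k) == v)
        ranked.append((score, code))
    return [code for score, code in sorted(ranked, reverse=True) if score > 0]

# "acute pneumonia of the lung" expressed as semantic links
disease = {"pathology": "pneumonia", "site": "lung", "course": "acute"}
print(candidate_codes(disease, CATEGORIES))       # -> ['J18.9']
```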
An integrated, structure- and energy-based view of the genetic code.
Grosjean, Henri; Westhof, Eric
2016-09-30
The principles of mRNA decoding are conserved among all extant life forms. We present an integrative view of all the interaction networks between mRNA, tRNA and rRNA: the intrinsic stability of the codon-anticodon duplex, the conformation of the anticodon hairpin, the presence of modified nucleotides, the occurrence of non-Watson-Crick pairs in the codon-anticodon helix and the interactions with bases of rRNA at the A-site decoding site. We derive a more information-rich, alternative representation of the genetic code, which is circular with an unsymmetrical distribution of codons leading to a clear segregation between GC-rich 4-codon boxes and AU-rich 2:2-codon and 3:1-codon boxes. All tRNA sequence variations can be visualized, within an internal structural and energy framework, for each organism and each anticodon of the sense codons. The multiplicity and complexity of nucleotide modifications at positions 34 and 37 of the anticodon loop segregate meaningfully, and correlate well with the necessity to stabilize AU-rich codon-anticodon pairs and to avoid miscoding in split codon boxes. The evolution and expansion of the genetic code are viewed as being originally based on GC content, with progressive introduction of A/U together with tRNA modifications. The representation we present should help the engineering of the genetic code to include non-natural amino acids. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
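The segregation between GC-rich family boxes and AU-rich split boxes can be reproduced directly from the standard codon table. The sketch below groups the 16 codon boxes (fixed first two bases) by whether all four third-base variants encode the same amino acid and counts G/C among the first two bases; it is a simple recomputation from the universal genetic code, not the circular representation derived in the paper.

```python
BASES = "UCAG"
# Amino acids for the 64 codons in UUU, UUC, UUA, UUG, UCU, ... order ('*' = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}

for b1 in BASES:
    for b2 in BASES:
        aas = {CODE[b1 + b2 + b3] for b3 in BASES}
        kind = "family (4-codon)" if len(aas) == 1 else "split"
        gc = sum(b in "GC" for b in b1 + b2)
        print(f"{b1}{b2}N  {kind:<16}  GC in first two bases: {gc}")
```

Running this shows that the eight 4-codon family boxes carry one or two G/C in their first two positions, while most split boxes carry zero or one, which is the segregation the abstract describes.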
Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan
2016-02-01
To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine if ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine if the assigned ICD code appropriately reflected the clinical scenario. CDU findings categorized as abnormal (>50% stenosis) or normal (<50% stenosis) were recorded. Each individual ICD code and group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD codes were analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high frequency codes were associated with CDU result. Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P < 0.001, P < 0.001, P = 0.020, P = 0.002) and to all other groups combined (P < 0.001). Asymptomatic indications by ICD codes yielded higher rates of abnormal CDU than symptomatic indications. This finding is inconsistent with clinical experience and historical data, and we suggest that inaccurate coding may play a role. Limiting reimbursement for CDU in low-yield groups is reasonable. However, reimbursement policies based on ICD coding, for example, limiting payment for asymptomatic ICD codes, may impede use of CDU in high-yield patient groups. Copyright © 2016 Elsevier Inc. All rights reserved.
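The group comparisons reported above amount to contingency-table tests of abnormal versus normal CDU rates between indication groups. The sketch below shows such a comparison with a 2x2 chi-square test; the counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# rows: group 1 (asymptomatic), group 2 (nonhemispheric symptoms)
# cols: abnormal CDU (>50% stenosis), normal CDU (<50% stenosis)
table = [[60, 240],
         [20, 230]]

chi2, p_value, dof, expected = chi2_contingency(table)
rate1 = table[0][0] / sum(table[0])
rate2 = table[1][0] / sum(table[1])
print(f"abnormal rate: group 1 = {rate1:.1%}, group 2 = {rate2:.1%}, p = {p_value:.4f}")
```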
NASA Technical Reports Server (NTRS)
Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.
1992-01-01
This paper briefly outlines how a combination of computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based optimized multidisciplinary design (MdD) procedures. The implications of these MdD requirements for advanced CFD codes are somewhat different from those imposed by single-discipline design. A means of satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms which can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of the typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulations show good agreement with experimental results. Based on the received-response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weeratunga, S K
Ares and Kull are mature code frameworks that support ALE hydrodynamics for a variety of HEDP applications at LLNL, using two widely different meshing approaches. While Ares is based on a 2-D/3-D block-structured mesh data base, Kull is designed to support unstructured, arbitrary polygonal/polyhedral meshes. In addition, both frameworks are capable of running applications on large, distributed-memory parallel machines. Currently, both these frameworks separately support assorted collections of physics packages related to HEDP, including one for the energy deposition by laser/ion-beam ray tracing. This study analyzes the options available for developing a common laser/ion-beam ray tracing package that can be easily shared between these two code frameworks and concludes with a set of recommendations for its development.
2016-01-01
Background Inclusion of information about a patient’s work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers’ compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for “industry” and “occupation” based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. Objective The objective of the study was to evaluate the intercoder reliability of NIOSH’s Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Methods Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. A total of 359 industry and occupation responses were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria levels. Results Kappa was .84 for agreement between hand coders and between the hand coder consensus code versus NIOCCS high confidence level codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS achieved production rates (ie, was able to autocode) of 31%-36% of entered variables at the “high confidence” level and 49%-58% at the “medium confidence” level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is “substantial” at the 2-digit level, but only “fair” to “good” at the 4-digit level. Conclusions This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy, and will clarify its value as inclusion of these occupational variables in the EHR is promoted. PMID:26878932
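The agreement statistic used above is unweighted Cohen's kappa between the hand-coded consensus and the NIOCCS output. The short sketch below computes it in plain Python on made-up 2-digit SOC codes; the code lists are illustrative only.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical 2-digit SOC codes: consensus hand coding vs. automated coding
consensus = ["29", "29", "47", "53", "11", "35", "29", "53"]
nioccs    = ["29", "31", "47", "53", "11", "35", "29", "47"]
print(round(cohen_kappa(consensus, nioccs), 3))
```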
Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.
Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys
2018-04-01
Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
Herdewyn, Sarah; Zhao, Hui; Moisse, Matthieu; Race, Valérie; Matthijs, Gert; Reumers, Joke; Kusters, Benno; Schelhaas, Helenius J; van den Berg, Leonard H; Goris, An; Robberecht, Wim; Lambrechts, Diether; Van Damme, Philip
2012-06-01
Motor neuron degeneration in amyotrophic lateral sclerosis (ALS) has a familial cause in 10% of patients. Despite significant advances in the genetics of the disease, many families remain unexplained. We performed whole-genome sequencing in five family members from a pedigree with autosomal-dominant classical ALS. A family-based elimination approach was used to identify novel coding variants segregating with the disease. This list of variants was effectively shortened by genotyping these variants in 2 additional unaffected family members and 1500 unrelated population-specific controls. A novel rare coding variant in SPAG8 on chromosome 9p13.3 segregated with the disease and was not observed in controls. Mutations in SPAG8 were not encountered in 34 other unexplained ALS pedigrees, including 1 with linkage to chromosome 9p13.2-23.3. The shared haplotype containing the SPAG8 variant in this small pedigree was 22.7 Mb and overlapped with the core 9p21 linkage locus for ALS and frontotemporal dementia. Based on differences in coverage depth of known variable tandem repeat regions between affected and non-affected family members, the shared haplotype was found to contain an expanded hexanucleotide (GGGGCC)(n) repeat in C9orf72 in the affected members. Our results demonstrate that rare coding variants identified by whole-genome sequencing can tag a shared haplotype containing a non-coding pathogenic mutation and that changes in coverage depth can be used to reveal tandem repeat expansions. It also confirms (GGGGCC)n repeat expansions in C9orf72 as a cause of familial ALS.
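The family-based elimination step can be pictured as simple set arithmetic over per-person variant lists: keep the coding variants shared by all affected relatives and discard anything also seen in unaffected relatives or population controls. The sketch below illustrates this with made-up variant identifiers; it is not the study's actual pipeline, which worked from whole-genome variant calls.

```python
def segregating_variants(affected, unaffected, controls):
    """Each argument is a list of per-person (or per-control) sets of variant IDs."""
    shared_by_affected = set.intersection(*affected)
    others = unaffected + controls
    seen_elsewhere = set.union(*others) if others else set()
    return shared_by_affected - seen_elsewhere

# Hypothetical carrier sets for three affected relatives, one unaffected relative,
# and two population controls
affected   = [{"SPAG8:c.123A>G", "GENE2:c.45C>T", "GENE3:c.9G>A"},
              {"SPAG8:c.123A>G", "GENE2:c.45C>T"},
              {"SPAG8:c.123A>G", "GENE3:c.9G>A"}]
unaffected = [{"GENE3:c.9G>A"}]
controls   = [{"GENE2:c.45C>T"}, set()]

print(segregating_variants(affected, unaffected, controls))  # -> {'SPAG8:c.123A>G'}
```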
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
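To make the "least reliable positions" side of this hybrid concrete, the sketch below implements a small Chase-2-style decoder around a single-error-correcting (7,4) Hamming code: it flips test patterns on the least reliable received positions, algebraically decodes each trial, and keeps the candidate with the best correlation metric. It illustrates only the Chase component, not the most-reliable-basis reprocessing or the stopping criterion of the algorithm above, and the code choice and parameters are illustrative.

```python
from itertools import product

H = [[0, 0, 0, 1, 1, 1, 1],     # parity-check matrix; columns are 1..7 in binary
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome_decode(y):
    """Single-error correction: flip the bit whose H-column equals the syndrome."""
    s = [sum(h[i] * y[i] for i in range(7)) % 2 for h in H]
    if any(s):
        j = next(i for i in range(7) if [h[i] for h in H] == s)
        y = y[:j] + [1 - y[j]] + y[j + 1:]
    return y

def chase2_decode(r, t=2):
    """r: received soft values (BPSK mapping: bit 0 -> +1, bit 1 -> -1)."""
    hard = [1 if x < 0 else 0 for x in r]
    weakest = sorted(range(7), key=lambda i: abs(r[i]))[:t]   # least reliable positions
    best, best_metric = None, float("-inf")
    for flips in product([0, 1], repeat=t):
        trial = hard[:]
        for pos, f in zip(weakest, flips):
            trial[pos] ^= f                                   # apply the test pattern
        cand = syndrome_decode(trial)
        metric = sum((1 - 2 * c) * x for c, x in zip(cand, r))  # correlation with r
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# all-zero codeword sent; two weak positions, one of them flipped by the channel
received = [0.9, 0.8, -0.2, 0.1, 1.1, 0.7, 1.0]
print(chase2_decode(received))   # -> [0, 0, 0, 0, 0, 0, 0]
```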
New quantum codes derived from a family of antiprimitive BCH codes
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin
The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m)+1 (also called antiprimitive BCH codes in the literature), where q≥4 is a power of 2 and m≥2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
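Two of the ingredients mentioned here are easy to compute directly: the q^2-ary cyclotomic cosets modulo n, and a commonly used Hermitian dual-containing test on a cyclic code's defining set Z (Z and -qZ mod n must be disjoint). The sketch below shows both; the parameters q = 4, m = 2 and the chosen defining set are illustrative and do not reproduce the paper's construction.

```python
def cyclotomic_coset(s, q2, n):
    """Coset of s modulo n under multiplication by q^2."""
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * q2) % n
    return frozenset(coset)

def hermitian_dual_containing(defining_set, q, n):
    """Commonly used test: the code contains its Hermitian dual if Z and -qZ are disjoint."""
    neg_q_z = {(-q * z) % n for z in defining_set}
    return defining_set.isdisjoint(neg_q_z)

q, m = 4, 2
n = q ** (2 * m) + 1                      # antiprimitive length, here 257
cosets = {cyclotomic_coset(s, q * q, n) for s in range(n)}
print(f"{len(cosets)} cyclotomic cosets modulo {n}")

# Defining set built from the cosets of 1 and 2 (an arbitrary illustrative choice)
Z = set().union(*(cyclotomic_coset(s, q * q, n) for s in (1, 2)))
print("Hermitian dual-containing:", hermitian_dual_containing(Z, q, n))
```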
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1990-01-01
All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
Multiple copies of a bile acid-inducible gene in Eubacterium sp. strain VPI 12708.
Gopal-Srivastava, R; Mallonee, D H; White, W B; Hylemon, P B
1990-01-01
Eubacterium sp. strain VPI 12708 is an anaerobic intestinal bacterium which possesses inducible bile acid 7-dehydroxylation activity. Several new polypeptides are produced in this strain following induction with cholic acid. Genes coding for two copies of a bile acid-inducible 27,000-dalton polypeptide (baiA1 and baiA2) have been previously cloned and sequenced. We now report on a gene coding for a third copy of this 27,000-dalton polypeptide (baiA3). The baiA3 gene has been cloned in lambda DASH on an 11.2-kilobase DNA fragment from a partial Sau3A digest of the Eubacterium DNA. DNA sequence analysis of the baiA3 gene revealed 100% homology with the baiA1 gene within the coding region of the 27,000-dalton polypeptides. The baiA2 gene shares 81% sequence identity with the other two genes at the nucleotide level. The flanking nucleotide sequences associated with the baiA1 and baiA3 genes are identical for 930 bases in the 5' direction from the initiation codon and for at least 325 bases in the 3' direction from the stop codon, including the putative promoter regions for the genes. An additional open reading frame (occupying from 621 to 648 bases, depending on the correct start codon) was found in the identical 5' regions associated with the baiA1 and baiA3 clones. The 5' sequence more than 930 bases upstream from the baiA1 and baiA3 genes was totally divergent. The baiA2 gene, which is part of a large bile acid-inducible operon, showed no homology with the other two genes either in the 5' or 3' direction from the polypeptide coding region, except for a 15-base-pair presumed ribosome-binding site in the 5' region. These studies strongly suggest that a gene duplication (baiA1 and baiA3) has occurred and is stably maintained in this bacterium. PMID:2376563
Optical coherence tomography to evaluate variance in the extent of carious lesions in depth.
Park, Kyung-Jin; Schneider, Hartmut; Ziebolz, Dirk; Krause, Felix; Haak, Rainer
2018-05-03
Evaluation of variance in the extent of carious lesions in depth at smooth surfaces within the same ICDAS code group using optical coherence tomography (OCT) in vitro and in vivo. (1) Verification/validation of OCT to assess non-cavitated caries: 13 human molars with ICDAS code 2 at smooth surfaces were imaged using OCT and light microscopy. Regions of interest (ROI) were categorized according to the depth of carious lesions. Agreement between histology and OCT was determined by unweighted Cohen's Kappa and Wilcoxon test. (2) Assessment of 133 smooth surfaces using ICDAS and OCT in vitro, 49 surfaces in vivo. ROI were categorized according to the caries extent (ICDAS: codes 0-4, OCT: scoring based on lesion depth). A frequency distribution of the OCT scores for each ICDAS code was determined. (1) Histology and OCT agreed moderately (κ = 0.54, p ≤ 0.001) with no significant difference between the two methods (p = 0.25). Most lesions (10 of 13; 76.9%) were scored identically by both methods. (2) In vitro, OCT revealed caries in 42% of ROI clinically assessed as sound. OCT detected dentin-caries in 40% of ROIs visually assessed as enamel-caries. In vivo, large differences between ICDAS and OCT were observed. Carious lesions of ICDAS codes 1 and 2 vary largely in their extent in depth.
Numerical, analytical, experimental study of fluid dynamic forces in seals
NASA Technical Reports Server (NTRS)
Shapiro, William; Artiles, Antonio; Aggarwal, Bharat; Walowit, Jed; Athavale, Mahesh M.; Preskwas, Andrzej J.
1992-01-01
NASA/Lewis Research Center is sponsoring a program for providing computer codes for analyzing and designing turbomachinery seals for future aerospace and engine systems. The program is made up of three principal components: (1) the development of advanced three-dimensional (3-D) computational fluid dynamics codes, (2) the production of simpler two-dimensional (2-D) industrial codes, and (3) the development of a knowledge-based system (KBS) that contains an expert system to assist in seal selection and design. The first task has been to concentrate on cylindrical geometries with straight, tapered, and stepped bores. Improvements have been made by adoption of a colocated grid formulation, incorporation of higher-order, time-accurate schemes for transient analysis, and high-order discretization schemes for spatial derivatives. This report describes the mathematical formulations and presents a variety of 2-D results, including labyrinth and brush seal flows. Extensions to 3-D are presently in progress.
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes
2016-01-01
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores followed by a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.
Chien, Tsair-Wei; Lin, Weir-Sen
2016-03-02
The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores followed by a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
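The CAT loop being evaluated in these two records can be sketched compactly: estimate the respondent's ability, administer the unanswered item with maximum Fisher information at that estimate, and update. The sketch below uses a dichotomous Rasch model with a grid-based estimate rather than the partial credit model of the study, and the item difficulties and simulated respondent are made-up values.

```python
import math
import random

DIFFICULTIES = [-2.0, -1.2, -0.5, 0.0, 0.4, 0.9, 1.5, 2.2]   # hypothetical item bank

def prob(theta, b):
    """Rasch probability of a positive response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Grid-based estimate with a weak normal prior to avoid infinite estimates."""
    grid = [g / 10 for g in range(-40, 41)]
    def log_post(theta):
        ll = sum(math.log(prob(theta, b)) if x else math.log(1 - prob(theta, b))
                 for b, x in responses)
        return ll - 0.5 * (theta / 2.0) ** 2
    return max(grid, key=log_post)

def run_cat(true_theta, max_items=5, seed=1):
    random.seed(seed)
    answered, theta = [], 0.0
    remaining = list(range(len(DIFFICULTIES)))
    for _ in range(max_items):
        # Fisher information of a Rasch item is p(1 - p) at the current theta
        item = max(remaining,
                   key=lambda i: prob(theta, DIFFICULTIES[i]) * (1 - prob(theta, DIFFICULTIES[i])))
        remaining.remove(item)
        response = random.random() < prob(true_theta, DIFFICULTIES[item])
        answered.append((DIFFICULTIES[item], response))
        theta = estimate_theta(answered)
    return theta, answered

theta_hat, history = run_cat(true_theta=0.8)
print(f"estimated theta after {len(history)} items: {theta_hat:.1f}")
```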
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied to tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code (MNCC) system operating through the GSM network. The MNCC design is based on a simple concatenated channel code, which is a cascade of an inner code (GSM) and an extra outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data carried on the existing GSM network. The MNCC system is designed to provide the bit error rate (BER) required for medical tele-monitoring of physiological signals, which is 10^(-5) or less. The performance of the MNCC has been investigated and verified using computer simulations under different channel conditions such as additive white Gaussian noise (AWGN), Rayleigh noise and burst noise. Overall, the MNCC system provides better performance than GSM alone.
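The outer code in such a concatenated scheme can be as simple as a standard rate-1/2 convolutional encoder applied to the medical payload before it enters the GSM link (the inner code). The sketch below uses the common (7,5) octal generator pair with zero-tail termination; this specific generator choice is an assumption for illustration, not one stated in the paper.

```python
def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
    """Rate-1/2 feed-forward convolutional encoder with zero-tail termination."""
    state = 0
    out = []
    for bit in bits + [0] * memory:                 # flush the encoder with zero bits
        state = ((state << 1) | bit) & 0b111        # 3-bit shift-register window
        out.append(bin(state & g1).count("1") % 2)  # parity bit for generator g1
        out.append(bin(state & g2).count("1") % 2)  # parity bit for generator g2
    return out

ecg_bits = [1, 0, 1, 1, 0, 0, 1]                    # toy physiological-data payload
coded = conv_encode(ecg_bits)
print(f"{len(ecg_bits)} data bits -> {len(coded)} coded bits")
# 'coded' would then be handed to the GSM modem (the inner code) for transmission.
```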