Entropy-Based Bounds On Redundancies Of Huffman Codes
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1992-01-01
Report presents extension of theory of redundancy of binary prefix codes of Huffman type, which includes derivation of a variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman codes in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.
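For readers unfamiliar with the quantities involved, a brief sketch of the standard definitions behind such bounds (textbook material, not taken from the report itself); the probability-based bound shown is the widely cited one due to Gallager:

```latex
% Redundancy of a binary prefix code with codeword lengths \ell_i for a source
% with symbol probabilities p_1 \ge p_2 \ge \dots \ge p_N:
R \;=\; \bar{L} - H(P)
  \;=\; \sum_{i=1}^{N} p_i\,\ell_i \;+\; \sum_{i=1}^{N} p_i \log_2 p_i ,
\qquad 0 \le R < 1 \ \text{for a Huffman code.}
% A classic bound in terms of the largest symbol probability (Gallager, 1978):
R \;\le\; p_1 + 1 - \log_2 e + \log_2\log_2 e \;\approx\; p_1 + 0.086 .
```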
Schroedinger’s code: Source code availability and transparency in astrophysics
NASA Astrophysics Data System (ADS)
Ryan, PW; Allen, Alice; Teuben, Peter
2018-01-01
Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.
PFLOTRAN-RepoTREND Source Term Comparison Summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Jennifer M.
Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.
Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J
1997-01-01
To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.
Phase II Evaluation of Clinical Coding Schemes
Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith
1997-01-01
Abstract Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343
Source terms, shielding calculations and soil activation for a medical cyclotron.
Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E
2016-12-01
Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the ¹⁸O(p,n)¹⁸F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 code and the FLUKA code (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the ¹⁸O(p,n)¹⁸F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. The soil activation calculation was performed using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.
EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young
2003-02-27
Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of time dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
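As a schematic of the kind of source-term addition described here (a sketch, not the exact OVERFLOW implementation), the jet's mass flow and momentum are deposited in the grid cells of volume V_c that span the orifice:

```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot(\rho\mathbf{u})
  \;=\; \frac{\dot m_{\mathrm{jet}}}{V_c},
\qquad
\frac{\partial (\rho\mathbf{u})}{\partial t}
  + \nabla\!\cdot\!\left(\rho\mathbf{u}\mathbf{u} + p\mathbf{I} - \boldsymbol{\tau}\right)
  \;=\; \frac{\dot m_{\mathrm{jet}}\,\mathbf{u}_{\mathrm{jet}}}{V_c},
```

where the prescribed jet mass flow and exit velocity are the assumed model inputs.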
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
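A rough illustration of the formulation described (a schematic, not the exact LAPIN equations): quasi-one-dimensional, unsteady, inviscid conservation laws in a duct of area A(x), with engineering source terms S representing bleed, bypass, and distributed component models:

```latex
\frac{\partial (\rho A)}{\partial t} + \frac{\partial (\rho u A)}{\partial x} = S_{\mathrm{mass}},
\qquad
\frac{\partial (\rho u A)}{\partial t}
  + \frac{\partial \big[(\rho u^{2} + p)A\big]}{\partial x}
  = p\,\frac{\partial A}{\partial x} + S_{\mathrm{mom}},
\qquad
\frac{\partial (\rho e_{0} A)}{\partial t} + \frac{\partial (\rho u h_{0} A)}{\partial x} = S_{\mathrm{energy}}.
```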
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
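To make the module-structure idea concrete, here is a minimal, hypothetical C++ header/source pair in the spirit of a limited-extern layout: one extern datum and two functions are exposed through the header, while all other module state stays internal to the translation unit. The names and the toy water-balance logic are invented for illustration; the report's appended base-file templates, not this sketch, define the actual NLCMF style.

```cpp
// water_balance.h -- hypothetical module interface (illustration only)
#ifndef WATER_BALANCE_H
#define WATER_BALANCE_H

namespace nlcmf {                      // hypothetical framework namespace

// Limited extern: the only module datum other code may reference directly.
extern double g_total_runoff_mm;

void InitializeWaterBalance();
void UpdateWaterBalance(double precipitation_mm, double evapotranspiration_mm);

}  // namespace nlcmf

#endif  // WATER_BALANCE_H

// water_balance.cpp -- module implementation; everything not declared in the
// header remains private to this translation unit.
#include "water_balance.h"

namespace nlcmf {

double g_total_runoff_mm = 0.0;        // definition of the exported datum

namespace {                            // module-private state and constants
double soil_storage_mm = 0.0;
constexpr double kStorageCapacityMm = 150.0;
}  // namespace

void InitializeWaterBalance() {
    g_total_runoff_mm = 0.0;
    soil_storage_mm = 0.0;
}

void UpdateWaterBalance(double precipitation_mm, double evapotranspiration_mm) {
    soil_storage_mm += precipitation_mm - evapotranspiration_mm;
    if (soil_storage_mm < 0.0) soil_storage_mm = 0.0;
    if (soil_storage_mm > kStorageCapacityMm) {   // excess storage becomes runoff
        g_total_runoff_mm += soil_storage_mm - kStorageCapacityMm;
        soil_storage_mm = kStorageCapacityMm;
    }
}

}  // namespace nlcmf
```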
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
A Benchmark Study of Large Contract Supplier Monitoring Within DOD and Private Industry
1994-03-01
Contents and report-documentation fragments: 2. Long Term Supplier Relationships; 3. Global Sourcing; 4. Refocusing on Customer Quality ... monitoring and recognition, reduced number of suppliers, global sourcing, and long term contractor relationships. These initiatives were then compared to DCMC ... on customer quality. Subject terms: Benchmark Study of Large Contract Supplier Monitoring.
Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments
Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...
2016-12-01
A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10¹⁷ and 10²² W cm⁻². This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. In conclusion, a comparison of the calculated bremsstrahlung dose yields to radiation measurement data is also made.
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculation and criticality safety analysis were carried out for general material testing reactor (MTR) research reactors interim storage and relevant transportation cask. During these processes, three major terms were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for shielding calculation and criticality safety analysis and ORIGEN2.1 code for source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria that meet the standards specified. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan have a close dependence on water mass flow rate. The results also indicate that the heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code has demonstrated a code development strategy that aims to provide an unparalleled easiness for user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code, MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is being used in which available plant data and release timings are used to inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from a blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident to capture both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
Enhancements to the MCNP6 background source
McMath, Garrett E.; McKinney, Gregg W.
2015-10-19
The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.
The Contract Management Body of Knowledge: A Comparison of Contracting Competencies
2013-12-01
Acronyms: SME, subject matter expert; SOW, statement of work; TINA, Truth in Negotiations Act; UCC, uniform commercial code; WBS, work breakdown structure. Excerpts: ... documents whose terms and conditions are legally enforceable. Sources of law and guidance covered include the uniform commercial code (UCC), Federal ... contracting including the uniform commercial code (UCC), Federal Acquisition Regulation (FAR), as well as various other laws pertaining to both ...
Wartime Tracking of Class I Surface Shipments from Production or Procurement to Destination
1992-04-01
Force Identification from Structural Response
1999-12-01
Nonlinearly driven harmonics of Alfvén modes
NASA Astrophysics Data System (ADS)
Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.
2014-01-01
In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.
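Schematically (a sketch consistent with the abstract, not the AEGIS equations themselves), the generalized problem is an inhomogeneous linear MHD equation in which the quadratic corrections built from the linear eigenmode drive the second-order field:

```latex
\mathcal{L}(2\omega)\,\boldsymbol{\xi}^{(2)}
  \;=\; \mathbf{S}\!\left[\boldsymbol{\xi}^{(1)},\boldsymbol{\xi}^{(1)}\right],
\qquad
\boldsymbol{\xi}^{(1)} \propto e^{-i\omega t}
  \;\Rightarrow\; \mathbf{S} \propto e^{-2i\omega t},
```

so that solving the driven equation yields the leading-order harmonic response at twice the eigenmode frequency.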
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
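For orientation only, a schematic estimate of the vane side force that such a source term represents, assuming a thin-airfoil-like lift slope; the actual Wind-US model determines the force magnitude and direction from the local flow together with the user-supplied planform area S and incidence angle α:

```latex
F_{\mathrm{side}} \;\approx\; \tfrac{1}{2}\,\rho\,u^{2}\,S\,(2\pi\alpha),
```

distributed as a body force over the user-specified range of grid points.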
Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.
Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G
2006-08-01
The shielding performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. For determination of the neutron and gamma-ray source intensities and their energy spectra, we have applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 µA beam of 28.5 MeV protons. The MCNP/4C code has been applied with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code is then applied with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. It has been found that the deviation between calculated and measured dose values along the maze is less than 20%.
Monitor Network Traffic with Packet Capture (pcap) on an Android Device
2015-09-01
Report fragments: ... administrative privileges. Under the current design Android development requirement, an Android Graphical User Interface (GUI) application cannot directly ... build an Android application to monitor network traffic using open source packet capture (pcap) libraries. Subject terms: ELIDe, Android, pcap. Contents include: Building Application with Native Codes; Calling Native Codes Using JNI; Calling Native Codes from an Android Application; Retrieve Live ...
Rey-Martinez, Jorge; Pérez-Fernández, Nicolás
2016-12-01
The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider that the developed software (RombergLab) is validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. The aim of this work was to develop and validate posturography software and share its source code in open source terms. A prospective non-randomized validation study was performed: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open source software with a low cost force platform. The intra-class correlation index of the sway area obtained from the center of pressure variations in both devices for the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by a ¹³¹I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
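A schematic of the volumetric source-term construction described above (the set of transported quantities shown is illustrative): fluxes through the film-hole exit area A_h in the detailed solution are integrated and deposited uniformly over a near-wall source volume V_s extending roughly one hole diameter d from the wall, so that the injected mass, momentum, energy, and turbulence quantities are conserved on the coarse grid:

```latex
S_{\phi} \;=\; \frac{1}{V_{s}} \int_{A_{h}} \rho\,u_{n}\,\phi \, dA ,
\qquad \phi \in \{\,1,\;\mathbf{u},\;h_{0},\;\text{turbulence quantities}\,\}.
```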
QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.
Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M
2009-09-30
QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
15 CFR 734.2 - Important EAR terms and principles.
Code of Federal Regulations, 2010 CFR
2010-01-01
... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...
15 CFR 734.2 - Important EAR terms and principles.
Code of Federal Regulations, 2012 CFR
2012-01-01
... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...
15 CFR 734.2 - Important EAR terms and principles.
Code of Federal Regulations, 2014 CFR
2014-01-01
... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...
15 CFR 734.2 - Important EAR terms and principles.
Code of Federal Regulations, 2013 CFR
2013-01-01
... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...
15 CFR 734.2 - Important EAR terms and principles.
Code of Federal Regulations, 2011 CFR
2011-01-01
... technology and software not subject to the EAR are described in §§ 734.7 through 734.11 and supplement no. 1... of items subject to the EAR out of the United States, or release of technology or software subject to... source code and object code software subject to the EAR. (2) Export of technology or software. (See...
Madjidi, Faramarz; Behroozy, Ali
2014-01-01
Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values in the region of 380-1400 nm. This article describes the development and implementation of a computer code to predict those temperatures, corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and apparent diameter of the source were inputs for the computer code. At the first stage, an infinite series was created for calculation of spectral radiance by integration with Planck's law. At the second stage for calculation of effective spectral radiance, the initial terms of this infinite series were selected and integration was performed by multiplying these terms by a weighting factor R(λ) in the wavelength region 380-1400 nm. At the third stage, using a computer code, the source temperature that can emit the same effective spectral radiance was found. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether the exposure to visible and NIR in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR.
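A minimal numerical sketch of the three-stage procedure described above, assuming a crude placeholder weighting R(λ) and an illustrative radiance limit (the article uses the tabulated ACGIH burn-hazard function and exposure-limit values): integrate Planck's law weighted by R(λ) over 380-1400 nm, then bisect on temperature until the effective radiance matches the limit.

```cpp
// Hypothetical sketch: find the blackbody temperature whose R(lambda)-weighted
// radiance over 380-1400 nm matches a given effective-radiance limit.
#include <cmath>
#include <cstdio>

double planck_radiance(double lambda_m, double T_kelvin) {
    const double h = 6.62607015e-34, c = 2.99792458e8, k = 1.380649e-23;
    return 2.0 * h * c * c /
           (std::pow(lambda_m, 5.0) *
            (std::exp(h * c / (lambda_m * k * T_kelvin)) - 1.0));
}

double weight_R(double lambda_nm) {            // placeholder hazard weighting, NOT the ACGIH table
    return (lambda_nm < 700.0) ? 1.0 : 700.0 / lambda_nm;
}

double effective_radiance(double T_kelvin) {   // trapezoidal integration over 380-1400 nm
    const double lo = 380e-9, hi = 1400e-9;
    const int n = 2000;
    const double dl = (hi - lo) / n;
    double sum = 0.0;
    for (int i = 0; i <= n; ++i) {
        const double lam = lo + i * dl;
        const double f = planck_radiance(lam, T_kelvin) * weight_R(lam * 1e9);
        sum += (i == 0 || i == n) ? 0.5 * f : f;
    }
    return sum * dl;                           // W m^-2 sr^-1
}

int main() {
    const double limit = 4.0e6;  // illustrative effective-radiance limit, W m^-2 sr^-1
    double t_lo = 500.0, t_hi = 6000.0;        // temperature bracket, kelvin
    for (int it = 0; it < 60; ++it) {          // bisection (radiance grows with T)
        const double t_mid = 0.5 * (t_lo + t_hi);
        if (effective_radiance(t_mid) < limit) t_lo = t_mid; else t_hi = t_mid;
    }
    std::printf("Permissible source temperature ~ %.0f K\n", 0.5 * (t_lo + t_hi));
    return 0;
}
```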
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
Flow Instability Tests for a Particle Bed Reactor Nuclear Thermal Rocket Fuel Element
1993-05-01
Report fragments: "... 2.0 with GWBASIC or higher (DOS 5.0 was installed on the machine). Since the source code was written in BASIC, it was easy to make modifications ..." Distribution statement: Approved for Public Release IAW 190-1; Distribution Unlimited. Number of pages: 339.
European Science Notes. Volume 40, Number 4.
1986-04-01
Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Hixon, Ray
2015-01-01
The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.
The Application of Function Points to Predict Source Lines of Code for Software Development
1992-09-01
... there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the ... term and Lang variables simultaneously only added marginal improvements over models with these terms included singularly. Using all the available ...
Airport-Noise Levels and Annoyance Model (ALAMO) system's reference manual
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
The Airport-Noise Levels and Annoyance Model (ALAMO) is described in terms of its constituent modules, the ALAMO procedure files necessary for system execution, and the source code documentation associated with code development at Langley Research Center. The modules constituting ALAMO are presented both in flow graph form and through a description of the subroutines and functions that comprise them.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.
1991-01-01
A numerical code is developed for computing three-dimensional, turbulent, compressible flow within the coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equations is a finite-volume LU algorithm that allows convection, source, as well as diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced by using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.
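In a rotating frame the Coriolis and centrifugal accelerations enter the momentum balance as source terms, which is what allows them to be handled implicitly alongside convection and diffusion; schematically, for relative velocity w, rotation rate Ω, and position r:

```latex
\frac{\partial (\rho\mathbf{w})}{\partial t}
  + \nabla\!\cdot\!\left(\rho\mathbf{w}\mathbf{w} + p\mathbf{I} - \boldsymbol{\tau}\right)
  \;=\; -\,\rho\left[\,2\,\boldsymbol{\Omega}\times\mathbf{w}
  + \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r})\,\right].
```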
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.
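A generic statement of the joint optimization implied here (a schematic formulation, not the paper's exact problem): for the prevailing channel state (SNR and specular-to-diffuse ratio), choose the per-layer source rates, RCPC code rates, and modulation parameters to maximize delivered quality subject to the overall transmission-rate budget R_T:

```latex
\max_{\{R_{s,l},\,R_{c,l},\,\mathcal{M}\}}
  \;\mathbb{E}\!\left[\,Q_{\mathrm{end\text{-}to\text{-}end}}\,\right]
\quad \text{subject to} \quad
  \sum_{l} \frac{R_{s,l}}{R_{c,l}} \;\le\; R_{T}.
```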
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We will describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We will use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SACOCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that the number of users and the type of codes used for the optical system directly decide the performance of the system. MAI can be restricted by efficient design of optical codes and by implementing them with a unique architecture to accommodate more users. Hence, it is necessary to design a technique, such as the spectral direct detection (SDD) technique with a modified double weight code, which can provide better cardinality and good correlation properties.
Blosnich, John R; Cashy, John; Gordon, Adam J; Shipherd, Jillian C; Kauth, Michael R; Brown, George R; Fine, Michael J
2018-04-04
Transgender individuals are vulnerable to negative health risks and outcomes, but research remains limited because data sources, such as electronic medical records (EMRs), lack standardized collection of gender identity information. Most EMRs do not include the gold standard of self-identified gender identity, but the International Classification of Diseases (ICD) includes diagnostic codes indicating transgender-related clinical services. However, it is unclear if these codes can indicate transgender status. The objective of this study was to determine the extent to which patients' clinician notes in the EMR contained transgender-related terms that could corroborate ICD-coded transgender identity. Data are from the US Department of Veterans Affairs Corporate Data Warehouse. Transgender patients were defined by the presence of ICD-9 and ICD-10 codes associated with transgender-related clinical services, and a 3:1 comparison group of nontransgender patients was drawn. Patients' clinician text notes were extracted and searched for transgender-related words and phrases. Among 7560 patients defined as transgender based on ICD codes, the search algorithm identified 6753 (89.3%) with transgender-related terms. Among 22,072 patients defined as nontransgender without ICD codes, 246 (1.1%) had transgender-related terms; after review, 11 patients were identified as transgender, suggesting a 0.05% false negative rate. Using ICD-defined transgender status can facilitate health services research when self-identified gender identity data are not available in EMRs.
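A minimal sketch of the kind of keyword matching applied to clinician text notes (the function name and any term list supplied to it are hypothetical, not the study's actual lexicon or code):

```cpp
// Case-insensitive check of whether a clinician note mentions any search term.
#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

static std::string to_lower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

bool NoteMentionsAnyTerm(const std::string& note,
                         const std::vector<std::string>& terms) {
    const std::string text = to_lower(note);
    for (const auto& term : terms) {
        if (text.find(to_lower(term)) != std::string::npos) return true;
    }
    return false;
}
```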
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
VizieR Online Data Catalog: FARGO_THORIN 1.0 hydrodynamic code (Chrenko+, 2017)
NASA Astrophysics Data System (ADS)
Chrenko, O.; Broz, M.; Lambrechts, M.
2017-07-01
This archive contains the source files, documentation and example simulation setups of the FARGO_THORIN 1.0 hydrodynamic code. The program was introduced, described and used for simulations in the paper. It is built on top of the FARGO code (Masset, 2000A&AS..141..165M; Baruteau & Masset, 2008ApJ...672.1054B) and it is also interfaced with the REBOUND integrator package (Rein & Liu, 2012A&A...537A.128R). THORIN stands for Two-fluid HydrOdynamics, the Rebound integrator Interface and Non-isothermal gas physics. The program is designed for self-consistent investigations of protoplanetary systems consisting of a gas disk, a disk of small solid particles (pebbles) and embedded protoplanets. Code features: I) Non-isothermal gas disk with implicit numerical solution of the energy equation. The implemented energy source terms are: compressional heating, viscous heating, stellar irradiation, vertical escape of radiation, radiative diffusion in the midplane and radiative feedback to accretion heating of protoplanets. II) Planets evolved in 3D, with close encounters allowed. The orbits are integrated using the IAS15 integrator (Rein & Spiegel, 2015MNRAS.446.1424R). The code detects collisions among planets and resolves them as mergers. III) Refined treatment of the planet-disk gravitational interaction. The code uses a vertical averaging of the gravitational potential, as outlined in Muller & Kley (2012A&A...539A..18M). IV) Pebble disk represented by an Eulerian, pressureless and inviscid fluid. The pebble dynamics is affected by the Epstein gas drag and optionally by diffusive effects. We also implemented the drag back-reaction term in the Navier-Stokes equation for the gas. Archive summary:
/in_relax - setup of the first example simulation
/in_wplanet - setup of the second example simulation
/src_main - source files of FARGO_THORIN
/src_reb - source files of the REBOUND integrator package to be linked with THORIN
GNUGPL3 - GNU General Public License, version 3
LICENSE - license agreement
README - simple user's guide
UserGuide.pdf - extended user's guide
refman.pdf - programmer's guide
(1 data file).
Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications
Khodak, Andrei
2017-08-21
Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state-of-the-art general purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugated heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes an introduction of MHD capability into the general purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code, three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation the system of MHD equations becomes very rigid, with very large source terms and very strong variable gradients. To increase system robustness, special measures were introduced during the iterative convergence process, such as linearization using source coefficients for the momentum equations. The MHD implementation in the general purpose CFD code was tested against benchmarks specifically selected for liquid-metal blanket applications. Results of numerical simulations using the present implementation closely match analytical solutions for a Hartmann number of up to 1500 for 2-D laminar flow in a duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.
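For readers unfamiliar with the magnetic induction approach mentioned above, a textbook form of the governing relations is sketched below in LaTeX (constant resistivity assumed); this is given only as orientation and is not necessarily the exact set of user-defined transport equations implemented in CFX.

    \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \frac{1}{\mu_0 \sigma}\nabla^2 \mathbf{B}, \qquad
    \mathbf{J} = \frac{1}{\mu_0}\nabla \times \mathbf{B}, \qquad
    \mathbf{F}_L = \mathbf{J} \times \mathbf{B},

where the Lorentz force \mathbf{F}_L enters the momentum equation as a source term; in the electric-potential formulation, charge conservation with Ohm's law gives the Poisson equation \nabla^2 \phi = \nabla \cdot (\mathbf{u} \times \mathbf{B}) for constant conductivity.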
JSPAM: A restricted three-body code for simulating interacting galaxies
NASA Astrophysics Data System (ADS)
Wallin, J. F.; Holincheck, A. J.; Harvey, A.
2016-07-01
Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.
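As a minimal sketch of the restricted three-body idea (massless test particles responding to two moving mass centres), the Python fragment below computes test-particle accelerations from two softened point masses; the softening, units (G = 1), and names are illustrative and do not reproduce JSPAM's alternate potential or dynamical-friction treatment.

    import numpy as np

    def test_particle_accel(r_test, r1, m1, r2, m2, soft=0.1):
        """Accelerations on massless test particles (rows of r_test, shape (N, 3))
        due to two softened point masses located at r1 and r2 (G = 1 units)."""
        a = np.zeros_like(r_test)
        for centre, mass in ((r1, m1), (r2, m2)):
            d = centre - r_test                                    # vectors toward the mass centre
            r2_soft = np.sum(d * d, axis=1, keepdims=True) + soft**2
            a += mass * d / r2_soft**1.5                           # softened inverse-square attraction
        return a

    # A leapfrog step would then advance the test particles while the two mass
    # centres follow their own two-body orbit.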
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
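To make the idea concrete, here is a toy worked example in Python using the parity-check matrix of the (7,4) Hamming code: a sparse 7-bit source block is "compressed" to its 3-bit syndrome, and decompression returns the minimum-weight (coset-leader) pattern with that syndrome. The example is illustrative only; it is lossless only for blocks of weight at most one and is not the universal scheme of the paper.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j is the binary
    # representation of j (least-significant bit in the first row).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(x):                        # x: length-7 binary source block
        return H.dot(x) % 2                 # 3-bit syndrome = compressed data

    def decompress(s):                      # coset leader for this syndrome
        x = np.zeros(7, dtype=int)
        pos = s[0] + 2 * s[1] + 4 * s[2]    # position of the single 1, if any
        if pos:
            x[pos - 1] = 1
        return x

    block = np.array([0, 0, 0, 0, 1, 0, 0])
    assert np.array_equal(decompress(compress(block)), block)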
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.
2017-12-01
This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), which is a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve currently existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) using first-principles calculations provided by the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated against past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.
Flow diagram analysis of electrical fatalities in construction industry.
Chi, Chia-Fen; Lin, Yuan-Yuan; Ikhwan, Mohamad
2012-01-01
The current study reanalyzed 250 electrical fatalities in the construction industry from 1996 to 2002 into seven patterns based on the source of electricity (power line, energized equipment, improperly installed or damaged equipment) and on direct contact or indirect contact through some source of injury (boom vehicle, metal bar or pipe, and other conductive material). Each fatality was coded in terms of age, company size, experience, performing tasks, source of injury, accident cause and hazard pattern. The Chi-square Automatic Interaction Detector (CHAID) was applied to the coded data of the fatal electrocutions to find a subset of predictors that might yield meaningful classifications or accident scenarios. A series of flow diagrams was constructed based on the CHAID results to illustrate the flow of electricity travelling from the electrical source to the human body. Each of the flow diagrams can be directly linked with feasible prevention strategies by cutting the flow of electricity.
Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Carlson, Jan-Renee; Rumsey, Christopher L.
2015-01-01
The implementation of the k-kL turbulence model using multiple computational fluid dynamics (CFD) codes is reported herein. The k-kL model is a two-equation turbulence model based on Abdol-Hamid's closure and Menter's modification to Rotta's two-equation model. Rotta shows that a reliable transport equation can be formed from the turbulent length scale L and the turbulent kinetic energy k. Rotta's equation is well suited for term-by-term modeling and displays useful features compared to other two-equation models. An important difference is that this formulation leads to the inclusion of higher-order velocity derivatives in the source terms of the scale equations. This can enhance the ability of the Reynolds-averaged Navier-Stokes (RANS) solvers to simulate unsteady flows. The present report documents the formulation of the model as implemented in the CFD codes FUN3D and CFL3D. Methodology, verification and validation examples are shown. Attached and separated flow cases are documented and compared with experimental data. The results show generally very good comparisons with canonical and experimental data, as well as matching results code-to-code. The results from this formulation are similar to or better than results using the SST turbulence model.
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts.
Ravagli, Carlo; Pognan, Francois; Marc, Philippe
2017-01-01
The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com. © The Author 2016. Published by Oxford University Press.
OntoBrowser: a collaborative tool for curation of ontologies by subject matter experts
Ravagli, Carlo; Pognan, Francois
2017-01-01
Summary: The lack of controlled terminology and ontology usage leads to incomplete search results and poor interoperability between databases. One of the major underlying challenges of data integration is curating data to adhere to controlled terminologies and/or ontologies. Finding subject matter experts with the time and skills required to perform data curation is often problematic. In addition, existing tools are not designed for continuous data integration and collaborative curation. This results in time-consuming curation workflows that often become unsustainable. The primary objective of OntoBrowser is to provide an easy-to-use online collaborative solution for subject matter experts to map reported terms to preferred ontology (or code list) terms and facilitate ontology evolution. Additional features include web service access to data, visualization of ontologies in hierarchical/graph format and a peer review/approval workflow with alerting. Availability and implementation: The source code is freely available under the Apache v2.0 license. Source code and installation instructions are available at http://opensource.nibr.com. This software is designed to run on a Java EE application server and store data in a relational database. Contact: philippe.marc@novartis.com PMID:27605099
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies for urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary manages system database files and eases maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods; and a comprehensive data dictionary manages system operation and security. (3) An extension to the system data management function based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to generate a database operation interface automatically, with standard semantic content, via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
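A minimal Python sketch of the term-and-code dictionary idea described above is given here; the field names, codes, and terms are invented for illustration and do not come from the Fuzhou system or the cited national standards.

    # Hypothetical term-and-code dictionary used to constrain data entry.
    TERM_CODE_DICT = {
        "lithology": {"010": "silty clay", "020": "fine sand", "030": "gravel"},
    }

    def standardize(field, raw_value):
        """Map a free-text value to its standard (code, term) pair, or fail loudly."""
        for code, term in TERM_CODE_DICT.get(field, {}).items():
            if raw_value.strip().lower() == term:
                return code, term
        raise ValueError(f"'{raw_value}' is not a standard term for field '{field}'")

    # standardize("lithology", " Fine Sand ") -> ("020", "fine sand")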
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min
2018-03-01
Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
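For orientation, a compact sketch of the conventional carrier-to-code leveling step is given below in LaTeX; the notation is generic rather than the paper's, and the modification can be read as letting the receiver bias d_r vary from epoch to epoch, with its offsets relative to a reference epoch estimated as by-products.

    P_{\mathrm{GF}} = P_1 - P_2 \approx (\gamma - 1)\,\iota + c\,(d_r + d^s), \qquad
    L_{\mathrm{GF}} = L_1 - L_2 \approx -(\gamma - 1)\,\iota + B, \qquad \gamma = f_1^2/f_2^2,

where \iota is the slant ionospheric delay on the first frequency, d_r and d^s are the receiver and satellite differential code biases, and B is constant over a continuous arc. Averaging P_{\mathrm{GF}} + L_{\mathrm{GF}} over the arc cancels \iota, and the leveled ionospheric observable follows as

    \tilde{\iota} = \langle P_{\mathrm{GF}} + L_{\mathrm{GF}} \rangle_{\mathrm{arc}} - L_{\mathrm{GF}} = (\gamma - 1)\,\iota + c\,(d_r + d^s),

which still absorbs a receiver bias assumed constant over the arc; relaxing exactly that assumption is the point of the modified method.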
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown, and some difficulties in implementing the k-epsilon equations in the code are also discussed.
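For reference, the standard high-Reynolds-number k-epsilon model takes the form sketched below in LaTeX; this is the textbook formulation with its usual constants, given only as orientation and not necessarily the exact variant incorporated into RPLUS3D.

    \frac{\partial (\rho k)}{\partial t} + \frac{\partial (\rho u_j k)}{\partial x_j}
      = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon,

    \frac{\partial (\rho \varepsilon)}{\partial t} + \frac{\partial (\rho u_j \varepsilon)}{\partial x_j}
      = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
        + \frac{\varepsilon}{k}\left(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\rho\varepsilon\right), \qquad
    \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon},

with the usual constants C_\mu = 0.09, C_{\varepsilon 1} = 1.44, C_{\varepsilon 2} = 1.92, \sigma_k = 1.0, \sigma_\varepsilon = 1.3, and P_k the turbulence production term.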
Romero, Roberto; Tarca, Adi; Chaemsaithong, Piya; Miranda, Jezid; Chaiworapongsa, Tinnakorn; Jia, Hui; Hassan, Sonia S.; Kalita, Cynthia A.; Cai, Juan; Yeo, Lami; Lipovich, Leonard
2014-01-01
Objective The mechanisms responsible for normal and abnormal parturition are poorly understood. Myometrial activation leading to regular uterine contractions is a key component of labor. Dysfunctional labor (arrest of dilatation and/or descent) is a leading indication for cesarean delivery. Compelling evidence suggests that most of these disorders are functional in nature, and not the result of cephalopelvic disproportion. The methodology and the datasets afforded by the post-genomic era provide novel opportunities to understand and target gene functions in these disorders. In 2012, the ENCODE Consortium elucidated the extraordinary abundance and functional complexity of long non-coding RNA genes in the human genome. The purpose of the study was to identify differentially expressed long non-coding RNA genes in human myometrium in women in spontaneous labor at term. Materials and Methods Myometrium was obtained from women undergoing cesarean deliveries who were not in labor (n=19) and women in spontaneous labor at term (n=20). RNA was extracted and profiled using an Illumina® microarray platform. The analysis of the protein coding genes from this study has been previously reported. Here, we have used computational approaches to bound the extent of long non-coding RNA representation on this platform, and to identify co-differentially expressed and correlated pairs of long non-coding RNA genes and protein-coding genes sharing the same genomic loci. Results Upon considering more than 18,498 distinct lncRNA genes compiled nonredundantly from public experimental data sources, and interrogating 2,634 that matched Illumina microarray probes, we identified co-differential expression and correlation at two genomic loci that contain coding-lncRNA gene pairs: SOCS2-AK054607 and LMCD1-NR_024065 in women in spontaneous labor at term. This co-differential expression and correlation was validated by qRT-PCR, an independent experimental method. Intriguingly, one of the two lncRNA genes differentially expressed in term labor had a key genomic structure element, a splice site that lacked evolutionary conservation beyond primates. Conclusions We provide for the first time evidence for coordinated differential expression and correlation of cis-encoded antisense lncRNAs and protein-coding genes with known, as well as novel roles in pregnancy in the myometrium of women in spontaneous labor at term. PMID:24168098
MELCOR computer code manuals: Primer and user's guides, Version 1.8.3 September 1994. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
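A deliberately simplified Python sketch of the priority-to-protection matching is shown below: each packet's priority class and the windowed average SNR select one of a handful of candidate RCPC rates. The rate set, thresholds, and scoring are invented for illustration and are not the joint source-channel criterion of the paper.

    RCPC_RATES = (8/9, 4/5, 2/3, 1/2, 1/3)      # candidate code rates, strongest protection last

    def select_rate(priority, avg_snr_db):
        """priority: 0 (least important) .. 3 (most important); avg_snr_db from the
        long-term fading estimate averaged over a preset window."""
        snr_band = 0 if avg_snr_db > 15 else (1 if avg_snr_db > 8 else 2)
        idx = min(priority + snr_band, len(RCPC_RATES) - 1)
        return RCPC_RATES[idx]

    # select_rate(3, 6.0) -> 1/3  (important packet, poor channel: strongest code)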
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have been widely applied to the field of image forensics recently. One of these forensic applications is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper aims to show that, when the tampering location is known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of the channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and the recovered image. The watermarked image quality gain is achieved by spending less bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
Source-Free Exchange-Correlation Magnetic Fields in Density Functional Theory.
Sharma, S; Gross, E K U; Sanna, A; Dewhurst, J K
2018-03-13
Spin-dependent exchange-correlation energy functionals in use today depend on the charge density and the magnetization density: E_xc[ρ, m]. However, it is also correct to define the functional in terms of the curl of m for physical external fields: E_xc[ρ, ∇ × m]. The exchange-correlation magnetic field, B_xc, then becomes source-free. We study this variation of the theory by uniquely removing the source term from local and generalized gradient approximations to the functional. By doing so, the total Kohn-Sham moments are improved for a wide range of materials for both functionals. Significantly, the moments for the pnictides are now in good agreement with experiment. This source-free method is simple to implement in all existing density functional theory codes.
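One standard way to remove the source (divergence) part of a vector field, consistent with the idea described above, is a Helmholtz-type projection, sketched here in LaTeX; this is given as orientation rather than as the authors' exact numerical procedure.

    \nabla^2 \phi = \nabla \cdot \mathbf{B}_{xc}, \qquad
    \mathbf{B}_{xc}^{\,\mathrm{sf}} = \mathbf{B}_{xc} - \nabla \phi,

so that \nabla \cdot \mathbf{B}_{xc}^{\,\mathrm{sf}} = 0 by construction, i.e. the exchange-correlation magnetic field is made source-free.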
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
Tendal, Britta; Hróbjartsson, Asbjørn; Lundh, Andreas; Gøtzsche, Peter C
2014-01-01
Objective To assess the effects of coding and coding conventions on summaries and tabulations of adverse events data on suicidality within clinical study reports. Design Systematic electronic search for adverse events of suicidality in tables, narratives, and listings of adverse events in individual patients within clinical study reports. Where possible, for each event we extracted the original term reported by the investigator, the term as coded by the medical coding dictionary, medical coding dictionary used, and the patient’s trial identification number. Using the patient’s trial identification number, we attempted to reconcile data on the same event between the different formats for presenting data on adverse events within the clinical study report. Setting 9 randomised placebo controlled trials of duloxetine for major depressive disorder submitted to the European Medicines Agency for marketing approval. Data sources Clinical study reports obtained from the EMA in 2011. Results Six trials used the medical coding dictionary COSTART (Coding Symbols for a Thesaurus of Adverse Reaction Terms) and three used MedDRA (Medical Dictionary for Regulatory Activities). Suicides were clearly identifiable in all formats of adverse event data in clinical study reports. Suicide attempts presented in tables included both definitive and provisional diagnoses. Suicidal ideation and preparatory behaviour were obscured in some tables owing to the lack of specificity of the medical coding dictionary, especially COSTART. Furthermore, we found one event of suicidal ideation described in narrative text that was absent from tables and adverse event listings of individual patients. The reason for this is unclear, but may be due to the coding conventions used. Conclusion Data on adverse events in tables in clinical study reports may not accurately represent the underlying patient data because of the medical dictionaries and coding conventions used. In clinical study reports, the listings of adverse events for individual patients and narratives of adverse events can provide additional information, including original investigator reported adverse event terms, which can enable a more accurate estimate of harms. PMID:24899651
Hand Gesture Data Collection Procedure Using a Myo Armband for Machine Learning
2015-09-01
This report documents a procedure for collecting hand gesture data using a Myo armband for machine learning; the source code for the work is included as an appendix. The Myo supports development on multiple platforms (e.g., Windows, iOS, Android) and in many languages (e.g., Java, C++, C#, Lua). Subject terms: Myo, machine learning, classifier, data collection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A.A. Bingham; R.M. Ferrer; A.M. ougouag
2009-09-01
An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green's function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage due to the application of the transverse integration procedure to hexagonal geometry, and it cannot account for the effects of burnable poisons across nodal boundaries. The test code being developed for this document accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green's function solution to the resulting equation to derive a semi-analytical solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
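For orientation, the basic straight-line Gaussian plume relation underlying this class of codes can be sketched in Python as follows; the ground-reflection form and argument names are the textbook ones, the dispersion parameters sigma_y and sigma_z would in practice come from stability-class curves as functions of downwind distance, and this is not ANEMOS source code.

    import numpy as np

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Concentration from a continuous point source of strength Q at wind speed u
        and effective release height H, including ground reflection. sigma_y and
        sigma_z are the lateral/vertical dispersion parameters at the downwind
        distance of interest."""
        coef = Q / (2.0 * np.pi * u * sigma_y * sigma_z)
        lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
        return coef * lateral * vertical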
A Need for a Theory of Visual Literacy.
ERIC Educational Resources Information Center
Hortin, John A.
1982-01-01
Examines sources available for developing a theory of visual literacy and attempts to clarify the meaning of the term. Suggests that visual thinking, a concept supported by recent research on mental imagery, visualization, and dual coding, ought to be the emphasis for future theory development. (FL)
The Current Status of Behaviorism and Neurofeedback
ERIC Educational Resources Information Center
Fultz, Dwight E.
2009-01-01
There appears to be no dominant conceptual model for the process and outcomes of neurofeedback among practitioners or manufacturers. Behaviorists are well-positioned to develop a neuroscience-based source code in which neural activity is described in behavioral terms, providing a basis for behavioral conceptualization and education of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnack, Dalton D.
Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be addressed in parallel. These are: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on reinvigorating the computational modeling efforts for resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as does an assessment of the numerical stability properties of the procedures to be used.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.
VLF Source Localization with a Freely Drifting Sensor Array
1992-09-01
Report on VLF source localization using the Marine Physical Laboratory's set of nine freely drifting infrasonic sensors (Swallow floats), capable of recording ocean ambient noise in the 1- to 25-Hz range in the Pacific; the work builds on the simultaneous measurement of infrasonic acoustic particle velocity and acoustic pressure in the ocean by freely drifting Swallow floats (IEEE J. Ocean. Eng.). Subject terms: Swallow float, matched-field processing, infrasonic sensor, VLF source localization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
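As a minimal illustration of what a cosmic-ray muon source-term generator produces, the Python sketch below samples sea-level muon zenith angles from the familiar cos^2(theta) approximation and energies from a simple truncated power law; the spectral form, cut-offs, and seed are simplified placeholders, not the spectra implemented in MUFFSgenMC.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_muons(n, e_min=1.0, e_max=100.0, gamma=2.7):
        """Return (energy_GeV, zenith_rad, azimuth_rad) arrays for n muons."""
        # Zenith angle with density proportional to cos^2(theta), by rejection sampling.
        theta = np.empty(n)
        filled = 0
        while filled < n:
            cand = rng.uniform(0.0, np.pi / 2, size=n - filled)
            keep = cand[rng.uniform(size=cand.size) < np.cos(cand) ** 2]
            theta[filled:filled + keep.size] = keep
            filled += keep.size
        # Energy from a truncated power law E^-gamma via inverse-CDF sampling.
        u = rng.uniform(size=n)
        a = 1.0 - gamma
        energy = (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        return energy, theta, phi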
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
Modeling of Nonlinear Beat Signals of TAE's
NASA Astrophysics Data System (ADS)
Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin
2012-03-01
Experiments on Alcator C-Mod reveal Toroidal Alfven Eigenmodes (TAE) together with signals at various beat frequencies, including those at twice the mode frequency. The beat frequencies are sidebands driven by quadratic nonlinear terms in the MHD equations. These nonlinear sidebands have not yet been quantified by any existing codes. We extend the AEGIS code to capture nonlinear effects by treating the nonlinear terms as a driving source in the linear MHD solver. Our goal is to compute the spatial structure of the sidebands for realistic geometry and q-profile, which can be directly compared with experiment in order to interpret the phase contrast imaging diagnostic measurements and to enable the quantitative determination of the Alfven wave amplitude in the plasma core.
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL has on average added 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Class of near-perfect coded apertures
NASA Technical Reports Server (NTRS)
Cannon, T. M.; Fenimore, E. E.
1977-01-01
Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.
Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M
2009-10-01
Artificial neural network technology has been applied to unfold neutron spectra from the pulse height distribution measured with an NE213 liquid scintillator. Here, both single- and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum from an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated and the results have been analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance. In addition, omitting the bias connection of the neurons improves the performance of the network. The SCINFUL code is used for generating the response functions in the training phase of the process. Finally, the results of the neural network simulation have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources. The results of the neural network are in good agreement with the FORIST code.
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.
Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa
2015-01-01
The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein coding genes have HPO annotations. But, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach which was shown to be highly effective for Gene Ontology term prediction in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large scale literature mining data.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available to compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object-oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler-writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations have been used in the translation of the source program to machine code. An Abstract Syntax Tree has been used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, give an insight into the potential strong features of the resultant meta-language.
mdFoam+: Advanced molecular dynamics in OpenFOAM
NASA Astrophysics Data System (ADS)
Longshaw, S. M.; Borg, M. K.; Ramisetti, S. B.; Zhang, J.; Lockerby, D. A.; Emerson, D. R.; Reese, J. M.
2018-03-01
This paper introduces mdFoam+, which is an MPI parallelised molecular dynamics (MD) solver implemented entirely within the OpenFOAM software framework. It is open-source and released under the same GNU General Public License (GPL) as OpenFOAM. The source code is released as a publicly open software repository that includes detailed documentation and tutorial cases. Since mdFoam+ is designed entirely within the OpenFOAM C++ object-oriented framework, it inherits a number of key features. The code is designed for extensibility and flexibility, so it is aimed first and foremost as an MD research tool, in which new models and test cases can be developed and tested rapidly. Implementing mdFoam+ in OpenFOAM also enables easier development of hybrid methods that couple MD with continuum-based solvers. Setting up MD cases follows the standard OpenFOAM format, as mdFoam+ also relies upon the OpenFOAM dictionary-based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of an MD simulation is not typical of most OpenFOAM applications. Results show that mdFoam+ compares well to another well-known MD code (e.g. LAMMPS) in terms of benchmark problems, although it also has additional functionality that does not exist in other open-source MD codes.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
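The underlying rate-allocation problem can be stated compactly as follows (LaTeX sketch, generic notation rather than the paper's):

    \min_{\{R_s^{(l)},\,R_c^{(l)}\}} \; \sum_{l=1}^{L} D_l\!\left(R_s^{(l)}, R_c^{(l)};\,\epsilon\right)
    \quad \text{subject to} \quad \sum_{l=1}^{L}\left(R_s^{(l)} + R_c^{(l)}\right) \le R_{\mathrm{budget}},

where D_l is the expected distortion of scalable layer l obtained from the experimentally measured universal rate-distortion characteristics, R_s^{(l)} and R_c^{(l)} are the source and channel coding rates allocated to that layer, and \epsilon summarizes the channel condition.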
42 CFR 414.904 - Average sales price as the basis for payment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subsection (c), the term billing unit means the identifiable quantity associated with a billing and payment code, as established by CMS. (c) Single source drugs—(1) Average sales price. The average sales price... report as required by section 623(c) of the Medicare Prescription Drug, Improvement, and Modernization...
42 CFR 414.904 - Average sales price as the basis for payment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subsection (c), the term billing unit means the identifiable quantity associated with a billing and payment code, as established by CMS. (c) Single source drugs—(1) Average sales price. The average sales price... report as required by section 623(c) of the Medicare Prescription Drug, Improvement, and Modernization...
Human Rights Texts: Converting Human Rights Primary Source Documents into Data.
Fariss, Christopher J; Linder, Fridolin J; Jones, Zachary M; Crabtree, Charles D; Biek, Megan A; Ross, Ana-Sophia M; Kaur, Taranamol; Tsai, Michael
2015-01-01
We introduce and make publicly available a large corpus of digitized primary source human rights documents which are published annually by monitoring agencies that include Amnesty International, Human Rights Watch, the Lawyers Committee for Human Rights, and the United States Department of State. In addition to the digitized text, we also make available and describe document-term matrices, which are datasets that systematically organize the word counts from each unique document by each unique term within the corpus of human rights documents. To contextualize the importance of this corpus, we describe the development of coding procedures in the human rights community and several existing categorical indicators that have been created by human coding of the human rights documents contained in the corpus. We then discuss how the new human rights corpus and the existing human rights datasets can be used with a variety of statistical analyses and machine learning algorithms to help scholars understand how human rights practices and reporting have evolved over time. We close with a discussion of our plans for dataset maintenance, updating, and availability.
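A document-term matrix of the kind released with the corpus can be built with standard text-processing tools; the sketch below uses scikit-learn's CountVectorizer (assumed installed) on a few invented report snippets rather than the corpus itself.

```python
# Building a small document-term matrix, analogous to the ones released
# with the corpus. The example texts are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer

reports = [
    "arbitrary detention reported in several provinces",
    "freedom of the press restricted; journalists detained",
    "no reports of arbitrary detention this year",
]
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(reports)    # sparse matrix: rows = documents, columns = terms

print(vectorizer.get_feature_names_out())  # unique terms in the corpus
print(dtm.toarray())                       # word counts per document
```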
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
The Astrophysics Source Code Library: An Update
NASA Astrophysics Data System (ADS)
Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.
2012-01-01
The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010; it now holds over 300 codes and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) has added an average of 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits, describes the means of and requirements for including codes, and outlines future plans.
Authorship Attribution of Source Code
ERIC Educational Resources Information Center
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
1983-11-01
…successfully. …in terms of initial signal power. An active sensor must be excited externally. Such a sensor receives its power from an external source and merely modulates… electrons in the material to gain enough energy to be emitted. The voltage source causes a positive potential to be felt on the collector, thus causing the…
Abhyankar, Swapna; Demner-Fushman, Dina; Callaghan, Fiona M; McDonald, Clement J
2014-01-01
Objective: To develop a generalizable method for identifying patient cohorts from electronic health record (EHR) data—in this case, patients having dialysis—that uses simple information retrieval (IR) tools. Methods: We used the coded data and clinical notes from the 24,506 adult patients in the Multiparameter Intelligent Monitoring in Intensive Care database to identify patients who had dialysis. We used SQL queries to search the procedure, diagnosis, and coded nursing observations tables based on ICD-9 and local codes. We used a domain-specific search engine to find clinical notes containing terms related to dialysis. We manually validated the available records for a 10% random sample of patients who potentially had dialysis and a random sample of 200 patients who were not identified as having dialysis based on any of the sources. Results: We identified 1844 patients that potentially had dialysis: 1481 from the three coded sources and 1624 from the clinical notes. Precision for identifying dialysis patients based on available data was estimated to be 78.4% (95% CI 71.9% to 84.2%) and recall was 100% (95% CI 86% to 100%). Conclusions: Combining structured EHR data with information from clinical notes using simple queries increases the utility of both types of data for cohort identification. Patients identified by more than one source are more likely to meet the inclusion criteria; however, including patients found in any of the sources increases recall. This method is attractive because it is available to researchers with access to EHR data and off-the-shelf IR tools. PMID:24384230
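The combination of structured codes and free-text note search described above can be sketched with off-the-shelf tools. In the example below the table names, columns, demo rows, and ICD-9 codes are placeholders, not the actual schema or code lists used in the study, and a simple regular expression stands in for the domain-specific search engine.

```python
# Sketch: combine coded data with note search for cohort identification.
import re
import sqlite3

# In-memory demo database; schema and codes are hypothetical placeholders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE procedures (patient_id INTEGER, icd9_code TEXT);
CREATE TABLE clinical_notes (patient_id INTEGER, note_text TEXT);
INSERT INTO procedures VALUES (1, '39.95'), (2, '45.13');
INSERT INTO clinical_notes VALUES
  (1, 'Continue dialysis per renal service.'),
  (2, 'Patient tolerated hemodialysis well.'),
  (3, 'No evidence of renal failure.');
""")

DIALYSIS_ICD9 = ("39.95", "V45.11")   # example codes only
coded_ids = {r[0] for r in conn.execute(
    "SELECT DISTINCT patient_id FROM procedures WHERE icd9_code IN (?, ?)",
    DIALYSIS_ICD9)}

TERMS = re.compile(r"\b(dialysis|hemodialysis|CRRT)\b", re.IGNORECASE)
note_ids = {pid for pid, text in conn.execute(
    "SELECT patient_id, note_text FROM clinical_notes") if TERMS.search(text)}

print("union (maximizes recall):       ", sorted(coded_ids | note_ids))
print("intersection (boosts precision):", sorted(coded_ids & note_ids))
```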
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path, such as bleed, purge, and cooling flows, were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Jerden, James
2016-10-01
The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal to identify gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.
SOPHAEROS code development and its application to falcon tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lajtha, G.; Missirlian, M.; Kissane, M.
1996-12-31
One of the key issues in source-term evaluation for nuclear reactor severe accidents is determination of the transport behavior of fission products released from the degrading core. The SOPHAEROS computer code is being developed to predict fission product transport in light water reactor circuits in a mechanistic way. The applications of the SOPHAEROS code to the Falcon experiments, among others not presented here, indicate that the numerical scheme of the code is robust and that no convergence problems are encountered. The calculation is also very fast, taking only about three times real time on a Sun SPARC 5 workstation and typically running approximately 10 times faster than an identical calculation with the VICTORIA code. The study demonstrates that the SOPHAEROS 1.3 code is a suitable tool for prediction of the vapor chemistry and fission product transport with a reasonable level of accuracy. Furthermore, the flexibility of the code's material data bank allows improved understanding of fission product transport and deposition in the circuit. Performing sensitivity studies with different chemical species or with different properties (saturation pressure, chemical equilibrium constants) is very straightforward.
Physics design point for a 1 MW fusion neutron source
NASA Astrophysics Data System (ADS)
Woodruff, Simon; Melnik, Paul; Sieck, Paul; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald
2016-10-01
We are developing a design point for a spheromak experiment heated by adiabatic compression for use as a compact neutron source. We utilize the CORSICA and NIMROD MHD codes as well as analytic modeling to assess a concept with target parameters R_0 = 0.5 m, R_f = 0.17 m, T_0 = 1 keV, T_f = 8 keV, n_0 = 2x10^20 m^-3 and n_f = 5x10^21 m^-3, with a radial convergence of C = R_0/R_f = 3. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. We present results of simulations of magnetic compression using the NIMROD code to examine the role of rotation on the stability and confinement of the spheromak as it is compressed. Supported by DARPA Grant N66001-14-1-4044 and IAEA CRP on Compact Fusion Neutron Sources.
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
Extension of CE/SE method to non-equilibrium dissociating flows
NASA Astrophysics Data System (ADS)
Wen, C. Y.; Saldivar Massimi, H.; Shen, H.
2018-03-01
In this study, the hypersonic non-equilibrium flows over rounded nose geometries are numerically investigated by a robust conservation element and solution element (CE/SE) code, which is based on hybrid meshes consisting of triangular and quadrilateral elements. The dissociation and recombination chemical reactions as well as the vibrational energy relaxation are taken into account. The stiff source terms are solved by an implicit trapezoidal method of integration. Comparisons with laboratory and numerical cases are provided to demonstrate the accuracy and reliability of the present CE/SE code in simulating hypersonic non-equilibrium flows.
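The implicit trapezoidal treatment of a stiff source term mentioned above can be illustrated on a single relaxation equation; the sketch below uses Newton iteration to solve the implicit update and is only a scalar stand-in under invented parameters, not the CE/SE solver's chemistry.

```python
# Implicit trapezoidal update for a stiff scalar source term dy/dt = s(y),
# solved with Newton iteration. A one-equation stand-in for stiff chemical
# source terms, not the CE/SE code itself.
def s(y):             # stiff relaxation toward equilibrium y_eq = 1
    return -1.0e4 * (y - 1.0)

def ds_dy(y):
    return -1.0e4

def trapezoidal_step(y_n, dt, newton_iters=20, tol=1e-12):
    y = y_n                                           # initial guess
    for _ in range(newton_iters):
        g = y - y_n - 0.5 * dt * (s(y_n) + s(y))      # residual of the scheme
        dg = 1.0 - 0.5 * dt * ds_dy(y)
        y_new = y - g / dg
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

y, dt = 0.0, 1.0e-3   # dt far larger than the ~2e-4 explicit Euler stability limit
for _ in range(10):
    y = trapezoidal_step(y, dt)
print(y)              # relaxes toward 1 without a stability blow-up
```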
An Open Source Agenda for Research Linking Text and Image Content Features.
ERIC Educational Resources Information Center
Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi
2001-01-01
Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective: To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting: The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods: We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings: Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions: By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology applied to different schemes results in operational rate-distortion performance that closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Continuation of research into language concepts for the mission support environment: Source code
NASA Technical Reports Server (NTRS)
Barton, Timothy J.; Ratner, Jeremiah M.
1991-01-01
Research into language concepts for the Mission Control Center is presented, together with the accompanying source code. The file contains the routines that allow source code files to be created and compiled. The build process assumes that all elements and the COMP exist in the current directory, and it places as much code generation as possible on the preprocessor. A summary is given of the source files as used and/or manipulated by the build routine.
NSRD-10: Leak Path Factor Guidance Using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
Estimating the source term from a U.S. Department of Energy (DOE) nuclear facility requires that analysts know how to apply the simulation tools used, such as the MELCOR code, particularly for a complicated facility that may include an air ventilation system and other active systems that can influence the environmental pathway of the materials released. DOE has designated MELCOR 1.8.5, an unsupported version, as a DOE ToolBox code in its Central Registry, which includes a leak-path-factor guidance report written in 2004 that did not include experimental validation data. Continuing to use this MELCOR version would require additional verification and validation, which may not be feasible from a project-cost standpoint; instead, the recent MELCOR version should be used. Without developer support and experimental validation, it is difficult to convince regulators that the calculated source term from a DOE facility is accurate and defensible. This research replaces the obsolete version in the 2004 DOE leak-path-factor guidance report with MELCOR 2.1 (the latest version of MELCOR, with continuing model development and user support) and includes applicable experimental data from the reactor safety arena and from the experimental data used in DOE-HDBK-3010. The research provides best-practice values for using MELCOR 2.1 specifically for leak-path-factor determination. With these enhancements, the revised leak-path-guidance report should give confidence to the DOE safety analyst using MELCOR as a source-term determination tool for mitigated accident evaluations.
MEMOPS: data modelling and automatic code generation.
Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D
2010-03-25
In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
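The idea of generating data-access code with built-in validity checking from an abstract model description can be sketched in a few lines; the model, field names, and generated classes below are invented for illustration and are not the MEMOPS framework, its UML data model, or its generated APIs.

```python
# Toy illustration of model-driven API generation: classes with
# validity-checked attributes are produced from an abstract description
# of the data model. A sketch of the concept only, not MEMOPS itself.
MODEL = {                      # hypothetical metadata description
    "Shift": {"value": float, "error": float},
    "Atom":  {"name": str, "serial": int},
}

def generate_class(name, fields):
    def __init__(self, **kwargs):
        for field, ftype in fields.items():
            value = kwargs[field]                     # input parsing
            if not isinstance(value, ftype):          # validity checking
                raise TypeError(f"{name}.{field} must be {ftype.__name__}")
            setattr(self, field, value)
    return type(name, (object,), {"__init__": __init__, "_fields": fields})

api = {name: generate_class(name, fields) for name, fields in MODEL.items()}

shift = api["Shift"](value=8.25, error=0.02)
print(shift.value, shift.error)
try:
    api["Atom"](name="CA", serial="not-an-int")
except TypeError as exc:
    print(exc)                 # generated validity check fires automatically
```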
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Recent skyshine calculations at Jefferson Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degtyarenko, P.
1997-12-01
New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.
NASA Astrophysics Data System (ADS)
Sarmah, Ratan; Tiwari, Shubham
2018-03-01
An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform, steady ponding field at the soil surface under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic, with finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer, and the water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code developed for the purpose and against an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on subsurface flow: with the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original values, and the side and top discharges to the drains are also strongly influenced by the source/sink terms. The travel times and pathlines of water particles also depend on the height of water in the ditches and on the location of the source/sink activation area.
Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test
NASA Technical Reports Server (NTRS)
Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.
2007-01-01
Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC [2] code, were used in this study to compare the predicted and the measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations and test results are detailed in Section II. A description of the input parameters, including estimated noise modal content (in terms of acoustic potential), and acoustic treatment impedance values are provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Section IV and V for the FEM and the CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.
Tam, Vivian; Edge, Jennifer S; Hoffman, Steven J
2016-10-12
Shortages of health workers in low-income countries are exacerbated by the international migration of health workers to more affluent countries. This problem is compounded by the active recruitment of health workers by destination countries, particularly Australia, Canada, UK and USA. The World Health Organization (WHO) adopted a voluntary Code of Practice in May 2010 to mitigate tensions between health workers' right to migrate and the shortage of health workers in source countries. The first empirical impact evaluation of this Code was conducted 11-months after its adoption and demonstrated a lack of impact on health workforce recruitment policy and practice in the short-term. This second empirical impact evaluation was conducted 4-years post-adoption using the same methodology to determine whether there have been any changes in the perceived utility, applicability, and implementation of the Code in the medium-term. Forty-four respondents representing government, civil society and the private sector from Australia, Canada, UK and USA completed an email-based survey evaluating their awareness of the Code, perceived impact, changes to policy or recruitment practices resulting from the Code, and the effectiveness of non-binding Codes generally. The same survey instrument from the original study was used to facilitate direct comparability of responses. Key lessons were identified through thematic analysis. The main findings between the initial impact evaluation and the current one are unchanged. Both sets of key informants reported no significant policy or regulatory changes to health worker recruitment in their countries as a direct result of the Code due to its lack of incentives, institutional mechanisms and interest mobilizers. Participants emphasized the existence of previous bilateral and regional Codes, the WHO Code's non-binding nature, and the primacy of competing domestic healthcare priorities in explaining this perceived lack of impact. The Code has probably still not produced the tangible improvements in health worker flows it aspired to achieve. Several actions, including a focus on developing bilateral codes, linking the Code to topical global priorities, and reframing the Code's purpose to emphasize health system sustainability, are proposed to improve the Code's uptake and impact.
Revised Class IV Planning Factors
1997-01-01
Report documentation page only; no abstract recovered. Subject terms: supply management, mobilization, planning factors, construction materials (26 pages; references an October 1996 meeting).
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley
2018-05-01
We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
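The blackboard architecture that the generated framework implements can be sketched, minus the PVM-based concurrency, as a shared data store plus knowledge sources that fire when their preconditions are met; the knowledge sources below are trivial placeholders invented for illustration, standing in for the user-supplied functionality.

```python
# Minimal sequential sketch of the blackboard pattern: a shared data store
# plus knowledge sources that fire when their preconditions are met. The
# generated C++/PVM framework is concurrent; this sketch is not.
blackboard = {"raw": [3, 1, 2], "sorted": None, "report": None}

def sorter(bb):                      # knowledge source 1
    if bb["raw"] is not None and bb["sorted"] is None:
        bb["sorted"] = sorted(bb["raw"])
        return True
    return False

def reporter(bb):                    # knowledge source 2
    if bb["sorted"] is not None and bb["report"] is None:
        bb["report"] = f"min={bb['sorted'][0]}, max={bb['sorted'][-1]}"
        return True
    return False

knowledge_sources = [sorter, reporter]
while any(ks(blackboard) for ks in knowledge_sources):   # simple control loop
    pass
print(blackboard["report"])
```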
You've Written a Cool Astronomy Code! Now What Do You Do with It?
NASA Astrophysics Data System (ADS)
Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.
2014-01-01
Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.
The random energy model in a magnetic field and joint source channel coding
NASA Astrophysics Data System (ADS)
Merhav, Neri
2008-09-01
We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.
2014-06-01
User Manual and Source Code for a LAMMPS Implementation of Constant Energy Dissipative Particle Dynamics (DPD-E), by James P. Larentzos… ; …Laboratory, Aberdeen Proving Ground, MD 21005-5069; report ARL-SR-290, June 2014 (dates covered: September 2013–February 2014).
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
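The core idea, representing the dense and slowly varying operator in a transform domain, discarding small coefficients, and applying it as a sparse product between fast transforms, can be sketched as below. The DCT basis, threshold, and synthetic blur matrix are illustrative choices only and are not the authors' actual transforms or quantizers.

```python
# Simplified sketch of transform-domain sparsification of a dense,
# slowly varying operator A, applied via fast transforms. Illustrative
# choices only; not the matrix source coding algorithm of the paper.
import numpy as np
from scipy.fft import dctn, dct, idct
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n = 256
x = rng.standard_normal(n)

# A dense "space-varying blur": Gaussian rows whose width varies slowly.
i = np.arange(n)
width = 3.0 + 2.0 * i / n
A = np.exp(-((i[None, :] - i[:, None]) ** 2) / (2 * width[:, None] ** 2))
A /= A.sum(axis=1, keepdims=True)

B = dctn(A, norm="ortho")                      # operator in the DCT domain
B_sparse = csr_matrix(np.where(np.abs(B) > 1e-3 * np.abs(B).max(), B, 0.0))

y_exact = A @ x
y_fast = idct(B_sparse @ dct(x, norm="ortho"), norm="ortho")

print("fraction of coefficients kept:", B_sparse.nnz / n**2)
print("relative error:", np.linalg.norm(y_fast - y_exact) / np.linalg.norm(y_exact))
```

Only the sparse coefficients need to be stored, and the matrix-vector product becomes a fast transform, a sparse multiply, and an inverse transform, which is the kind of computation and memory saving the abstract describes.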
SeisCode: A seismological software repository for discovery and collaboration
NASA Astrophysics Data System (ADS)
Trabant, C.; Reyes, C. G.; Clark, A.; Karstens, R.
2012-12-01
SeisCode is a community repository for software used in seismological and related fields. The repository is intended to increase discoverability of such software and to provide a long-term home for software projects. Other places exist where seismological software may be found, but none meet the requirements necessary for an always current, easy to search, well documented, and citable resource for projects. Organizations such as IRIS, ORFEUS, and the USGS have websites with lists of available or contributed seismological software. Since the authors themselves often do not maintain these lists, the documentation often consists of a sentence or paragraph, and the available software may be outdated. Repositories such as GoogleCode and SourceForge, which are directly maintained by the authors, provide version control and issue tracking but do not provide a unified way of locating geophysical software scattered in and among countless unrelated projects. Additionally, projects are hosted at language-specific sites such as Mathworks and PyPI, in FTP directories, and in websites strewn across the Web. Search engines are only partially effective discovery tools, as the desired software is often hidden deep within the results. SeisCode provides software authors a place to present their software, codes, scripts, tutorials, and examples to the seismological community. Authors can choose their own level of involvement. At one end of the spectrum, the author might simply create a web page that points to an existing site. At the other extreme, an author may choose to leverage the many tools provided by SeisCode, such as a source code management tool with integrated issue tracking, forums, news feeds, downloads, wikis, and more. For software development projects with multiple authors, SeisCode can also be used as a central site for collaboration. SeisCode provides the community with an easy way to discover software, while providing authors a way to build a community around their software packages. IRIS invites the seismological community to browse and to submit projects to https://seiscode.iris.washington.edu/
LittleQuickWarp: an ultrafast image warping tool.
Qu, Lei; Peng, Hanchuan
2015-02-01
Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository. Copyright © 2014 Elsevier Inc. All rights reserved.
Astronomy education and the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; Nemiroff, Robert J.
2016-01-01
The Astrophysics Source Code Library (ASCL) is an online registry of source codes used in refereed astrophysics research. It currently lists nearly 1,200 codes and covers all aspects of computational astrophysics. How can this resource be of use to educators and to the graduate students they mentor? The ASCL serves as a discovery tool for codes that can be used for one's own research. Graduate students can also investigate existing codes to see how common astronomical problems are approached numerically in practice, and use these codes as benchmarks for their own solutions to these problems. Further, they can deepen their knowledge of software practices and techniques through examination of others' codes.
1988-05-01
Figure 2. Original limited-capacity channel model (From Broadbent, 1958). Figure 3. Experimental… …unlimited variety of human voices for digital recording sources. Synthesis by analysis: analysis-synthesis methods electronically model the human voice.
Chips: A Tool for Developing Software Interfaces Interactively.
1987-10-01
…of the application through the objects on the screen. Chips makes this easy by supplying simple and direct access to the source code and data… Keywords: object-oriented programming, user interface management systems, programming environments.
Data processing with microcode designed with source coding
McCoy, James A; Morrison, Steven E
2013-05-07
Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.
MELCOR/CONTAIN LMR Implementation Report-Progress FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.; Louie, David L.Y.
2016-01-01
This report describes the progress of implementing the CONTAIN-LMR sodium physics and chemistry models into MELCOR 2.1, as well as the progress of implementing these models into CONTAIN 2. In the past two years, the implementation included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on previous work by Idaho National Laboratory that modified MELCOR to include a liquid-lithium equation of state as a working fluid for nuclear fusion safety research. The second source uses properties generated for the SIMMER code. Testing and results from this implementation of sodium properties are given. In addition, the CONTAIN-LMR code was derived from an early version of the CONTAIN code, and many physical models developed since that early version are not captured by it. Therefore, CONTAIN 2 is being updated with the sodium models in CONTAIN-LMR in order to facilitate verification of these models against the MELCOR code. Although CONTAIN 2, which represents the latest development of CONTAIN, now contains many of the sodium-specific models, this work is not complete because of challenges posed by the lower-cell architecture in CONTAIN 2, which differs from CONTAIN-LMR. This implementation should be completed in the coming year, while sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use. For implementing the sodium models into MELCOR, a separate sodium-model branch was created. Because of extensive development in the mainstream MELCOR 2.1 code and the requirement to merge the latest code version into this branch, the integration was redirected to implement the sodium chemistry models first, which delayed the implementation. To aid the future implementation of sodium models, a new sodium chemistry package was created, and the implementation of the sodium chemistry is discussed in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, T; Lin, H; Xu, X
Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore the software micro-optimization methods. Methods: The patient-specific source of Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA’s database, partial geometry information of the jaw and MLC, and the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, and was focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double-precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.
Schleimer, Anna; Araujo, Gonzalo; Penketh, Luke; Heath, Anna; McCoy, Emer; Labaja, Jessica; Lucey, Anna; Ponzo, Alessandro
2015-01-01
While shark-based tourism is a rapidly growing global industry, there is ongoing controversy about the effects of provisioning on the target species. This study investigated the effect of feeding on whale sharks (Rhincodon typus) at a provisioning site in Oslob, Cebu, in terms of arrival time, avoidance and feeding behaviour using photo-identification and focal follows. Additionally, compliance to the code of conduct in place was monitored to assess tourism pressure on the whale sharks. Newly identified sharks gradually arrived earlier to the provisioning site after their initial sighting, indicating that the animals learn to associate the site with food rewards. Whale sharks with a long resighting history showed anticipatory behaviour and were recorded at the site on average 5 min after the arrival of feeder boats. Results from a generalised linear mixed model indicated that animals with a longer resighting history were less likely to show avoidance behaviour to touches or boat contact. Similarly, sequential data on feeding behaviour was modelled using a generalised estimating equations approach, which suggested that experienced whale sharks were more likely to display vertical feeding behaviour. It was proposed that the continuous source of food provides a strong incentive for the modification of behaviours, i.e., learning, through conditioning. Whale sharks are large opportunistic filter feeders in a mainly oligotrophic environment, where the ability to use novel food sources by modifying their behaviour could be of great advantage. Non-compliance to the code of conduct in terms of minimum distance to the shark (2 m) increased from 79% in 2012 to 97% in 2014, suggesting a high tourism pressure on the whale sharks in Oslob. The long-term effects of the observed behavioural modifications along with the high tourism pressure remain unknown. However, management plans are traditionally based on the precautionary principle, which aims to take preventive actions even if data on cause and effect are still inconclusive. Hence, an improved enforcement of the code of conduct coupled with a reduction in the conditioning of the whale sharks through provisioning were proposed to minimise the impacts on whale sharks in Oslob.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
The Particle Accelerator Simulation Code PyORBIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M
2015-01-01
The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower level code implemented in the C++ language. The parallel capabilities are based on MPI communications. The PyORBIT is an open source code accessible to the public through the Google Open Source Projects Hosting service.
MrLavaLoba: A new probabilistic model for the simulation of lava flows as a settling process
NASA Astrophysics Data System (ADS)
de'Michieli Vitturi, Mattia; Tarquini, Simone
2018-01-01
A new code to simulate lava flow spread, MrLavaLoba, is presented. In the code, erupted lava is itemized in parcels having an elliptical shape and prescribed volume. New parcels bud from existing ones according to a probabilistic law influenced by the local steepest slope direction and by tunable input settings. MrLavaLoba belongs among the probabilistic codes for the simulation of lava flows, because it is not intended to mimic the actual process of flowing or to directly provide the time evolution of the flow field, but rather to estimate the most probable inundated area and the final thickness of the lava deposit. The code's flexibility allows it to produce variable lava flow spread and emplacement according to different dynamics (e.g. pahoehoe or channelized-'a'ā). For a given scenario, it is shown that model outputs converge, in probabilistic terms, towards a single solution. The code is applied to real cases in Hawaii and Mt. Etna, and the obtained maps are shown. The model is written in Python and the source code is available at http://demichie.github.io/MrLavaLoba/.
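The following is a minimal Python sketch of the general "parcel budding" idea, not the MrLavaLoba code itself (which is available at the URL above). It only illustrates new parcels budding from randomly chosen existing parcels in a direction biased toward the local steepest descent; the synthetic DEM, the use of points instead of elliptical parcels, and every parameter are invented for the example.

```python
"""Sketch of probabilistic parcel budding on a synthetic DEM (not MrLavaLoba)."""
import numpy as np

rng = np.random.default_rng(1)

# Synthetic DEM: a plane dipping toward x = 0, with gentle undulations.
ny, nx = 100, 100
yy, xx = np.mgrid[0:ny, 0:nx]
dem = 0.05 * xx + 0.5 * np.sin(yy / 7.0)
grad_y, grad_x = np.gradient(dem)          # d(dem)/dy, d(dem)/dx

def steepest_descent(ix, iy):
    """Unit vector pointing downhill at DEM cell (ix, iy)."""
    d = np.array([-grad_x[iy, ix], -grad_y[iy, ix]])
    n = np.linalg.norm(d)
    return d / n if n > 0 else np.array([1.0, 0.0])

def simulate(n_parcels=2000, step=1.5, spread=0.6, vent=(90.0, 50.0)):
    thickness = np.zeros_like(dem)
    parcels = [np.array(vent)]
    for _ in range(n_parcels):
        parent = parcels[rng.integers(len(parcels))]
        ix, iy = int(parent[0]), int(parent[1])
        direction = steepest_descent(ix, iy)
        # Perturb the downhill direction by a random angle (tunable spread).
        a = rng.normal(0.0, spread)
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        child = np.clip(parent + step * rot @ direction, 0, [nx - 1, ny - 1])
        parcels.append(child)
        thickness[int(child[1]), int(child[0])] += 1.0
    return thickness

thickness = simulate()
print("cells touched by at least one parcel:", int((thickness > 0).sum()))
```

Repeating such runs and averaging the inundated footprints is the sense in which this class of model converges, in probabilistic terms, towards a single solution.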
2009-09-01
nuclear industry for conducting performance assessment calculations. The analytical FORTRAN code for the DNAPL source function, REMChlor, was...project. The first was to apply existing deterministic codes, such as T2VOC and UTCHEM, to the DNAPL source zone to simulate the remediation processes...but describe the spatial variability of source zones unlike one-dimensional flow and transport codes that assume homogeneity. The Lagrangian models
Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests
NASA Astrophysics Data System (ADS)
Toth, G.; Keppens, R.; Botchev, M. A.
1998-04-01
We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
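A toy calculation illustrates why implicit time integration relaxes the time-step restriction imposed by a stiff source term. This is not one of the Versatile Advection Code schemes; it is the linear decay equation du/dt = -k*u, for which explicit Euler is unstable whenever dt > 2/k while implicit Euler remains bounded for any dt. All numbers are invented.

```python
"""Explicit vs. implicit Euler on a stiff source term (toy example)."""
k, u0, dt, nsteps = 1000.0, 1.0, 0.01, 10    # dt * k = 10, far above the explicit limit 2

u_exp = u0
u_imp = u0
for _ in range(nsteps):
    u_exp = u_exp + dt * (-k * u_exp)        # explicit Euler: amplifies the error
    u_imp = u_imp / (1.0 + dt * k)           # implicit Euler: unconditionally stable here

print(f"explicit Euler after {nsteps} steps: {u_exp:.3e}")
print(f"implicit Euler after {nsteps} steps: {u_imp:.3e}")
```

The explicit iterate grows by a factor of |1 - dt*k| = 9 per step, while the implicit iterate decays monotonically toward the true solution, which is the cost-saving rationale given in the abstract.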
Genuine worker participation-an indispensable key to effective global OHS.
Brown, Garrett
2009-01-01
Working conditions, including workplace safety, in global supply chains of products sold by transnational corporations have only marginally improved over the last 15 years despite the development of hundreds of corporate "codes of conduct," code monitoring systems, and an elaborate new "corporate social responsibility" industry. The two underlying reasons for the lack of significant change are: 1) a schizophrenic business model which fatally undermines "socially responsible" sourcing programs with unyielding dictates for the lowest possible production costs; and 2) the lack of any meaningful participation by shop-floor workers in plant safety programs. Only when trained, empowered, and active workers are an integral part of workplace safety programs will conditions improve over the long term.
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.
Variability Search in GALFACTS
NASA Astrophysics Data System (ADS)
Kania, Joseph; Wenger, Trey; Ghosh, Tapasi; Salter, Christopher J.
2015-01-01
The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo-sky survey using the seven-beam Arecibo L-band Feed Array (ALFA). The Survey is centered at 1.375 GHz with 300-MHz bandwidth, and measures all four Stokes parameters. We are looking for compact sources that vary in intensity or polarization on timescales of about a month via intra-survey comparisons and long-term variations through comparisons with the NRAO VLA Sky Survey. Data processing includes locating and rejecting radio frequency interference, recognizing sources, two-dimensional Gaussian fitting to multiple cuts through the same source, and gain corrections. Our Python code is being used on the calibration sources observed in conjunction with the survey measurements to determine the calibration parameters that will then be applied to data for the main field.
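Since the abstract mentions Gaussian fitting to cuts through each source, here is a minimal sketch of that basic step for a single one-dimensional cut. It is not the GALFACTS pipeline (which fits two-dimensional Gaussians and handles RFI and gain corrections); it assumes SciPy is available and uses synthetic data with invented parameters.

```python
"""Fit a Gaussian profile to a synthetic 1-D cut through a source (sketch only)."""
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, baseline):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + baseline

rng = np.random.default_rng(3)
x = np.linspace(-10.0, 10.0, 81)
cut = gaussian(x, amp=2.5, x0=1.2, sigma=1.8, baseline=0.3) + 0.05 * rng.normal(size=x.size)

popt, _ = curve_fit(gaussian, x, cut, p0=[1.0, 0.0, 1.0, 0.0])
print("amp, centre, sigma, baseline =", np.round(popt, 3))
```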
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Lliev, Filip L.; Stanev, Valentin G.
This code is a toy (short) version of CODE-2016-83. From a general perspective, the code represents an unsupervised adaptive machine learning algorithm that allows efficient and high performance de-mixing and feature extraction of a multitude of non-negative signals mixed and recorded by a network of uncorrelated sensor arrays. The code identifies the number of the mixed original signals and their locations. Further, the code also allows deciphering of signals that have been delayed with respect to the mixing process in each sensor. This code is highly customizable and can be used efficiently for fast macro-analysis of data. The code is applicable to a plethora of distinct problems: chemical decomposition, pressure transient decomposition, unknown sources/signal allocation, EM signal decomposition. An additional procedure for allocation of the unknown sources is incorporated in the code.
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
NASA Astrophysics Data System (ADS)
Guillemot, Christine; Siohan, Pierre
2005-12-01
Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
Statistical Characterization of MP3 Encoders for Steganalysis: ’CHAMP3’
2004-04-27
compression exceeds those of typical steganographic tools (e.g., LSB image embedding), the availability of commented source codes for MP3 encoders...developed by testing the approach on known and unknown reference data. 15. SUBJECT TERMS EOARD, Steganography, Digital Watermarking...Pages kbps Kilobits per Second LGPL Lesser General Public License LSB Least Significant Bit MB Megabyte MDCT Modified Discrete Cosine Transformation MP3
Purser, Harry; Jarrold, Christopher
2010-04-01
A long-standing body of research supports the existence of separable short- and long-term memory systems, relying on phonological and semantic codes, respectively. The aim of the current study was to measure the contribution of long-term knowledge to short-term memory performance by looking for evidence of phonologically and semantically coded storage within a short-term recognition task, among developmental samples. Each experimental trial presented 4-item lists. In Experiment 1 typically developing children aged 5 to 6 years old showed evidence of phonologically coded storage across all 4 serial positions, but evidence of semantically coded storage at Serial Positions 1 and 2. In a further experiment, a group of individuals with Down syndrome was investigated as a test case that might be expected to use semantic coding to support short-term storage, but these participants showed no evidence of semantically coded storage and evidenced phonologically coded storage only at Serial Position 4, suggesting that individuals with Down syndrome have a verbal short-term memory capacity of 1 item. Our results suggest that previous evidence of semantic effects on "short-term memory performance" does not reflect semantic coding in short-term memory itself, and provide an experimental method for researchers wishing to take a relatively pure measure of verbal short-term memory capacity, in cases where rehearsal is unlikely.
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the community land model, CLM, part of the open-source community earth system model (CESM) for climate. In this presentation, the advantages and disadvantages of open source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it's transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
Multidimensional incremental parsing for universal source coding.
Bae, Soo Hyun; Juang, Biing-Hwang
2008-10-01
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but it experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, while the images produced with the local minimax distortion show good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
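For readers unfamiliar with incremental parsing, the sketch below shows only the classic one-dimensional LZ78-style dictionary-growth step that MDIP generalises to multiple dimensions; it is not the MDIP algorithm and uses a made-up input string.

```python
"""One-dimensional LZ78-style incremental parsing (baseline sketch, not MDIP)."""
def lz78_parse(seq):
    """Return a list of (dictionary_index, new_symbol) phrase pairs."""
    dictionary = {(): 0}          # phrase -> index; the empty phrase is index 0
    phrases, current = [], ()
    for sym in seq:
        candidate = current + (sym,)
        if candidate in dictionary:
            current = candidate   # keep extending the current match
        else:
            phrases.append((dictionary[current], sym))
            dictionary[candidate] = len(dictionary)   # one new phrase per step
            current = ()
    if current:                   # flush a trailing partial phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

print(lz78_parse("abababcabc"))
```

Each parsing step adds exactly one new phrase to the dictionary; MDIP's "augmentative patches" play the analogous role for m-dimensional data.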
The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.
NASA Technical Reports Server (NTRS)
Brentner, Kenneth Steven
1990-01-01
The importance of the quadrupole source term in the Ffowcs Williams and Hawkings (FWH) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results made it clear that any application of the acoustic analogy should begin with all of the source terms in the FWH theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It was shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the component of the computation most susceptible to numerical error. Therefore, the ability to measure the damping of acoustic waves is absolutely essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.
An Efficient Variable Length Coding Scheme for an IID Source
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating run-length Huffman (ARH) coding, was found to be more efficient than ordinary Huffman coding in certain circumstances.
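The sketch below illustrates only the general idea of combining run-length coding of a dominant symbol with Huffman coding; it is not the exact ARH scheme from the report, and the compact heap-based Huffman builder, the token format and the sample data are all choices made for the example.

```python
"""Run-length + Huffman coding for a source with a dominant symbol (sketch only)."""
import heapq
import itertools
from collections import Counter

def huffman_code(symbols):
    """Return {symbol: bitstring} for an iterable of symbols."""
    counts = Counter(symbols)
    tie = itertools.count()                     # tie-breaker so dicts are never compared
    heap = [(n, next(tie), {s: ""}) for s, n in counts.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                          # degenerate single-symbol source
        (_, _, table), = heap
        return {s: "0" for s in table}
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)
        n2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, next(tie), merged))
    return heap[0][2]

def run_length_tokens(data, dominant):
    """Runs of the dominant symbol become ('run', length) tokens; others pass through."""
    tokens = []
    for sym, group in itertools.groupby(data):
        n = len(list(group))
        if sym == dominant:
            tokens.append(("run", n))
        else:
            tokens.extend([sym] * n)
    return tokens

data = "aaaaabaaacaaaaab" * 10                  # 'a' is the dominant symbol
tokens = run_length_tokens(data, "a")
table = huffman_code(tokens)
encoded_bits = sum(len(table[t]) for t in tokens)
print(f"{len(data) * 8} raw bits -> {encoded_bits} coded bits (code table not counted)")
```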
Source Code Plagiarism--A Student Perspective
ERIC Educational Resources Information Center
Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.
2011-01-01
This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…
Tehan, Gerald; Fogarty, Gerard; Ryan, Katherine
2004-07-01
Rehearsal speed has traditionally been seen to be the prime determinant of individual differences in memory span. Recent studies, in the main using young children as the participant population, have suggested other contributors to span performance. In the present research, we used structural equation modeling to explore, at the construct level, individual differences in immediate serial recall with respect to rehearsal, search, phonological coding, and speed of access to lexical memory. We replicated standard short-term phenomena; we showed that the variables that influence children's span performance influence adult performance in the same way; and we showed that speed of access to lexical memory and facility with phonological codes appear to be more potent sources of individual differences in immediate memory than is either rehearsal speed or search factors.
Design Aspects of the Rayleigh Convection Code
NASA Astrophysics Data System (ADS)
Featherstone, N. A.
2017-12-01
Understanding the long-term generation of planetary or stellar magnetic field requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale-hpc architectures alike. In this poster, we will present an overview of this code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Hybrid concatenated codes and iterative decoding
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Pollara, Fabrizio (Inventor)
2000-01-01
Several improved turbo code apparatuses and methods. The invention encompasses several classes: (1) A data source is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each encoder outputs a code element which may be transmitted or stored. A parallel decoder provides the ability to decode the code elements to derive the original source information d without use of a received data signal corresponding to d. The output may be coupled to a multilevel trellis-coded modulator (TCM). (2) A data source d is applied to two or more encoders with an interleaver between the source and each of the second and subsequent encoders. Each of the encoders outputs a code element. In addition, the original data source d is output from the encoder. All of the output elements are coupled to a TCM. (3) At least two data sources are applied to two or more encoders with an interleaver between each source and each of the second and subsequent encoders. The output may be coupled to a TCM. (4) At least two data sources are applied to two or more encoders with at least two interleavers between each source and each of the second and subsequent encoders. (5) At least one data source is applied to one or more serially linked encoders through at least one interleaver. The output may be coupled to a TCM. The invention includes a novel way of terminating a turbo coder.
Patel, Sonal; Dowse, Ros
2015-10-01
Although much health information-seeking behaviour (HISB) research has been reported in patients with good literacy skills, little is known about HISB in patients with limited literacy skills served by under-resourced health-care systems. To investigate medicine information-seeking behaviour and information needs in patients with limited literacy. Using a question guide, four focus group discussions (FGDs) were conducted to explore themes related to information needs, information-seeking practices and awareness of and ability to utilize information sources. Twenty-two isiXhosa-speaking long-term patients with limited formal education were recruited from a primary health-care clinic in South Africa. Discussions were audio-recorded and transcribed verbatim. NVivo(®) was used for initial coding of transcripts. Codes were analysed, and potential themes and subthemes in the entire data set were identified and refined. The results of this study reflect a passive, disempowered patient. Poor awareness of information sources, lack of health-related knowledge and stigma contributed to a lack of information-seeking practice, thus potentially adversely influencing patient-provider interactions. Patients neither asked questions nor were encouraged to ask questions. All expressed an unmet need for information and a desire for receiving the illustrated written medicines-related information displayed in the FGDs. The main sources of information were health-care professionals, followed by family and friends. The significant level of patient disempowerment and passivity reported amongst patients underpinned their inability to actively seek information. Neither sources of information nor types of appropriate medicines information could be identified. Unmet information needs and a desire for information were reported. © 2013 John Wiley & Sons Ltd.
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
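The patent describes this mechanism at the compiler and ABI level; the Python sketch below only mimics the runtime idea of trying the aggressively optimised version first and transparently switching to the conservative version when the unsafe optimisation raises an exception. All function names are hypothetical.

```python
"""Runtime fallback between an 'aggressive' and a 'conservative' code version (sketch)."""
def with_fallback(aggressive, conservative):
    def wrapper(*args, **kwargs):
        try:
            return aggressive(*args, **kwargs)
        except Exception:
            # The unsafe optimisation introduced a failure at run time: switch versions.
            return conservative(*args, **kwargs)
    return wrapper

def mean_aggressive(values):
    # "Optimised" variant: assumes the input list is non-empty.
    return sum(values) / len(values)

def mean_conservative(values):
    # Safe variant: handles the empty-input case explicitly.
    return sum(values) / len(values) if values else 0.0

mean = with_fallback(mean_aggressive, mean_conservative)
print(mean([1.0, 2.0, 3.0]), mean([]))
```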
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
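As a small illustration of just the harmonic normal-mode step that precedes the VCI treatment, the sketch below mass-weights a one-dimensional diatomic Hessian, diagonalises it and converts the vibrational eigenvalue to a wavenumber. This is not PyVCI code; the force constant is an approximate CO-like literature value and the setup is deliberately minimal.

```python
"""Harmonic frequency from a mass-weighted Hessian (toy diatomic, not PyVCI)."""
import numpy as np

AMU = 1.66053907e-27           # kg per atomic mass unit
C_CM = 2.99792458e10           # speed of light in cm/s

k = 1857.0                     # N/m, rough CO stretching force constant (assumption)
masses = np.array([12.011, 15.999]) * AMU

# Cartesian Hessian of V = 0.5*k*(x2 - x1)^2 for two atoms on a line.
hessian = k * np.array([[1.0, -1.0],
                        [-1.0, 1.0]])
inv_sqrt_m = 1.0 / np.sqrt(masses)
mw_hessian = hessian * np.outer(inv_sqrt_m, inv_sqrt_m)

# Eigenvalues are omega^2; the near-zero one is the free translation mode.
eigvals = np.linalg.eigvalsh(mw_hessian)
omega = np.sqrt(eigvals.max())
print(f"harmonic wavenumber ~ {omega / (2.0 * np.pi * C_CM):.0f} cm^-1")
```

With these approximate inputs the script prints a value near 2140 cm^-1, close to the known CO fundamental, which is the level of bookkeeping the harmonic analysis in such packages automates for polyatomic force fields.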
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.; Cluggish, B.; Kim, J. S.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.
Study of flow control by localized volume heating in hypersonic boundary layers
NASA Astrophysics Data System (ADS)
Keller, M. A.; Kloker, M. J.; Kirilovskiy, S. V.; Polivanov, P. A.; Sidorenko, A. A.; Maslov, A. A.
2014-12-01
Boundary-layer flow control is a prerequisite for a safe and efficient operation of future hypersonic transport systems. Here, the influence of an electric discharge—modeled by a heat-source term in the energy equation—on laminar boundary-layer flows over a flat plate with zero pressure gradient at Mach 3, 5, and 7 is investigated numerically. The aim was to appraise the potential of electro-gasdynamic devices for an application as turbulence generators in the super- and hypersonic flow regime. The results with localized heat-source elements in boundary layers are compared to cases with roughness elements serving as classical passive trips. The numerical simulations are performed using the commercial code ANSYS FLUENT (by ITAM) and the high-order finite-difference DNS code NS3D (by IAG), the latter allowing for the detailed analysis of laminar flow instability. For the investigated setups with steady heating, transition to turbulence is not observed, due to the Reynolds-number lowering effect of heating.
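The study models the electric discharge as a heat-source term in the energy equation of compressible Navier-Stokes solvers (ANSYS FLUENT and NS3D). The sketch below reduces that idea to a one-dimensional diffusion equation with a localized Gaussian source solved by explicit finite differences; it is only an illustration of adding a volumetric heat source to a transport equation, and every number in it is made up.

```python
"""Toy 1-D diffusion equation with a localized volume heat source (illustration only)."""
import numpy as np

nx, L = 201, 1.0
dx = L / (nx - 1)
alpha = 1.0e-3                                   # diffusivity
dt = 0.4 * dx * dx / alpha                       # within the explicit stability limit
x = np.linspace(0.0, L, nx)

# Localized Gaussian heat source centred at x0 (stand-in for the "discharge").
x0, width, q0 = 0.3, 0.02, 50.0
q = q0 * np.exp(-((x - x0) / width) ** 2)

T = np.zeros(nx)
for _ in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap + q)               # dT/dt = alpha*T_xx + q(x)
    T[0] = T[-1] = 0.0                           # cold walls

print(f"peak temperature rise: {T.max():.2f} near x = {x[np.argmax(T)]:.2f}")
```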
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the users. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs the preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zehtabian, M; Zaker, N; Sina, S
2015-06-15
Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as the dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with the other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
Robinson, Judas; de Lusignan, Simon; Kostkova, Patty; Madge, Bruce; Marsh, A; Biniaris, C
2006-01-01
Rich Site Summary (RSS) feeds are a method for disseminating and syndicating the contents of a website using extensible mark-up language (XML). The Primary Care Electronic Library (PCEL) distributes recent additions to the site in the form of an RSS feed. When new resources are added to PCEL, they are manually assigned medical subject headings (MeSH terms), which are then automatically mapped to SNOMED-CT terms using the Unified Medical Language System (UMLS) Metathesaurus. The library is thus searchable using MeSH or SNOMED-CT. Our syndicate partner wished to have remote access to PCEL coronary heart disease (CHD) information resources based on SNOMED-CT search terms. To pilot the supply of relevant information resources in response to clinically coded requests, using RSS syndication for transmission between web servers. Our syndicate partner provided a list of CHD SNOMED-CT terms to its end-users, a list which was coded according to UMLS specifications. When the end-user requested relevant information resources, this request was relayed from our syndicate partner's web server to the PCEL web server. The relevant resources were retrieved from the PCEL MySQL database. This database is accessed using a server side scripting language (PHP), which enables the production of dynamic RSS feeds on the basis of Source Asserted Identifiers (CODEs) contained in UMLS. Retrieving resources using SNOMED-CT terms using syndication can be used to build a functioning application. The process from request to display of syndicated resources took less than one second. The results of the pilot illustrate that it is possible to exchange data between servers using RSS syndication. This method could be utilised dynamically to supply digital library resources to a clinical system with SNOMED-CT data used as the standard of reference.
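The pilot's server-to-server flow (coded request in, filtered RSS feed out) can be sketched with Python's standard library instead of the PHP/MySQL stack the paper describes. The resource records, field names and the SNOMED-style concept code below are invented for illustration.

```python
"""Minimal sketch of serving coded library resources as an RSS 2.0 feed."""
import xml.etree.ElementTree as ET

RESOURCES = [
    {"title": "CHD risk assessment guide", "url": "http://example.org/chd-risk",
     "codes": {"53741008"}},                       # hypothetical concept code
    {"title": "Asthma action plans", "url": "http://example.org/asthma",
     "codes": {"195967001"}},
]

def rss_for_code(code):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Resources for code {code}"
    ET.SubElement(channel, "link").text = "http://example.org/feed"
    ET.SubElement(channel, "description").text = "Coded resource feed (sketch)"
    for res in RESOURCES:
        if code in res["codes"]:                   # filter resources by requested code
            item = ET.SubElement(channel, "item")
            ET.SubElement(item, "title").text = res["title"]
            ET.SubElement(item, "link").text = res["url"]
    return ET.tostring(rss, encoding="unicode")

print(rss_for_code("53741008"))
```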
From 2D to 3D modelling in long term tectonics: Modelling challenges and HPC solutions (Invited)
NASA Astrophysics Data System (ADS)
Le Pourhiet, L.; May, D.
2013-12-01
Over the last decades, 3D thermo-mechanical codes have been made available to the long term tectonics community, either as open source (Underworld, Gale) or with more limited access (Fantom, Elvis3D, Douar, LaMem, etc.). However, to date, few published results using these methods have included the coupling between crustal and lithospheric dynamics at large strain. The fact that these computations are computationally expensive is not the primary reason for the relatively slow development of 3D modeling in the long term tectonics community, as compared to the rapid development observed within the mantle dynamics community or in the short-term tectonics field. Long term tectonics problems have specific issues not found in either of these two fields, including large strain (not an issue for short-term tectonics), the inclusion of a free surface, and the occurrence of large viscosity contrasts. The first issue is typically eliminated using a combined marker-ALE method instead of a fully Lagrangian method; however, the marker-ALE approach can pose some algorithmic challenges in a massively parallel environment. The last two issues are more problematic because they affect the convergence of the linear/non-linear solvers and the memory cost. Two options have been tested so far: using low-order elements and solving with a sparse direct solver, or using higher-order stable elements together with a multigrid solver. The first option is simpler to code and to use but reaches its limit at around 80^3 low-order elements. The second option requires more operations but allows the use of iterative solvers on extremely large computers. In this presentation, I will describe the design philosophy and highlight results obtained using a code from the second class of methods. The presentation will be oriented from an end-user point of view, using an application to 3D continental break-up to illustrate key concepts. The description will proceed point by point, from implementing the physics in the code to dealing with specific issues related to solving the discrete system of non-linear equations.
Resources for comparing the speed and performance of medical autocoders.
Berman, Jules J
2004-06-15
Concept indexing is a popular method for characterizing medical text, and is one of the most important early steps in many data mining efforts. Concept indexing differs from simple word or phrase indexing because concepts are typically represented by a nomenclature code that binds a medical concept to all equivalent representations. A concept search on the term renal cell carcinoma would be expected to find occurrences of hypernephroma, and renal carcinoma (concept equivalents). The purpose of this study is to provide freely available resources to compare speed and performance among different autocoders. These tools consist of: 1) a public domain autocoder written in Perl (a free and open source programming language that installs on any operating system); 2) a nomenclature database derived from the unencumbered subset of the publicly available Unified Medical Language System; 3) a large corpus of autocoded output derived from a publicly available medical text. A simple lexical autocoder was written that parses plain-text into a listing of all 1,2,3, and 4-word strings contained in text, assigning a nomenclature code for text strings that match terms in the nomenclature. The nomenclature used is the unencumbered subset of the 2003 Unified Medical Language System (UMLS). The unencumbered subset of UMLS was reduced to exclude homonymous one-word terms and proper names, resulting in a term/code data dictionary containing about a half million medical terms. The Online Mendelian Inheritance in Man (OMIM), a 92+ Megabyte publicly available medical opus, was used as sample medical text for the autocoder. The autocoding Perl script is remarkably short, consisting of just 38 command lines. The 92+ Megabyte OMIM file was completely autocoded in 869 seconds on a 2.4 GHz processor (less than 10 seconds per Megabyte of text). The autocoded output file (9,540,442 bytes) contains 367,963 coded terms from OMIM and is distributed with this manuscript. A public domain Perl script is provided that can parse through plain-text files of any length, matching concepts against an external nomenclature. The script and associated files can be used freely to compare the speed and performance of autocoding software.
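The published autocoder is a 38-line Perl script that parses text into all 1- to 4-word strings and looks them up in a UMLS-derived nomenclature. The Python sketch below mirrors only that parsing-and-lookup idea; the tiny term/code dictionary is invented for the example and is not the UMLS subset used in the study.

```python
"""Sketch of the lexical autocoding idea (Python stand-in, not the Perl script)."""
import re

NOMENCLATURE = {                      # invented entries for illustration only
    "renal cell carcinoma": "C0007134",
    "renal carcinoma": "C0007134",
    "hypernephroma": "C0007134",
    "diabetes mellitus": "C0011849",
}

def autocode(text, max_words=4):
    """Match every 1..max_words word string in the text against the nomenclature."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = []
    for i in range(len(words)):
        for n in range(1, max_words + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in NOMENCLATURE:
                hits.append((phrase, NOMENCLATURE[phrase]))
    return hits

sample = "Patient with hypernephroma, also coded as renal cell carcinoma."
print(autocode(sample))
```

Because concept equivalents share one code, both "hypernephroma" and "renal cell carcinoma" in the sample map to the same identifier, which is the point of concept indexing over plain phrase indexing.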
Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments
ERIC Educational Resources Information Center
Kermek, Dragutin; Novak, Matija
2016-01-01
In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.
Padula, William V; McQueen, Robert Brett; Pronovost, Peter J
2017-11-01
The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to the implementation of a transparent, open source code interface for economic models. Making economic model source code openly available could be positive and progressive for the field; however, several unintended consequences of such a system should first be considered before its complete implementation. First, there is the concern regarding the intellectual property rights that modelers have to their analyses. Second, open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system for open source code such that the model originators maintain control of the code's use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming about the teaching of cost-effectiveness analysis in medical and health services education, so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field is prepared to move forward into an era of model transparency with open source code.
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...
SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.
Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A
2015-01-01
There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi- automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA. © The Author(s) 2015. Published by Oxford University Press.
SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data
Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K.; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A.
2015-01-01
There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation process is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi- automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA’s applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA PMID:26385205
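SORTA combines Lucene with n-gram based matching (and learns from curator choices); the sketch below reproduces only the shortlist idea by ranking target terms for a free-text value with character-bigram Dice similarity. The tiny target vocabulary, its codes and labels are invented for the example.

```python
"""Minimal n-gram matcher in the spirit of SORTA's shortlist step (sketch only)."""
def bigrams(text):
    t = f" {text.lower().strip()} "
    return {t[i:i + 2] for i in range(len(t) - 1)}

def dice(a, b):
    """Dice similarity between the character-bigram sets of two strings."""
    ga, gb = bigrams(a), bigrams(b)
    return 2.0 * len(ga & gb) / (len(ga) + len(gb))

TARGET_TERMS = {                       # invented codes and labels
    "MET:8.0": "running, general",
    "MET:6.8": "bicycling, general",
    "MET:3.5": "walking for exercise",
}

def shortlist(value, k=2):
    """Return the k best-scoring (score, code, label) candidates for a data value."""
    scored = [(dice(value, label), code, label)
              for code, label in TARGET_TERMS.items()]
    return sorted(scored, reverse=True)[:k]

print(shortlist("goes running twice a week"))
```

A curator would then confirm or correct the top-ranked candidate, and a production system such as SORTA can feed those confirmations back to improve later rankings.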
The Astrophysics Source Code Library by the numbers
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein
2018-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.
Astrophysics Source Code Library: Incite to Cite!
NASA Astrophysics Data System (ADS)
DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J. F.
2014-05-01
The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.
Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.
2013-10-01
The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, yet the throughput they achieve remains far below the network's theoretical maximum. Moreover, the existing multicast stream algorithms do not provide the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput by NC and to determine the multicast stream distribution through a hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decoded multicast streams. The simulation results showed that the proposed method is far superior to the typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
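A toy demonstration of the phenomenon the report studies: a Runge-Kutta map applied to an ODE with a nonlinear source term can have fixed points that are not steady states of the ODE at all. The logistic equation and Heun's method below are stand-ins chosen for the example, not the report's model problem; for step sizes h > 2 the discrete map acquires extra fixed points where the source term is nonzero.

```python
"""Spurious fixed points of a Runge-Kutta map on u' = u(1-u) (toy demonstration)."""
import numpy as np

def f(u):
    return u * (1.0 - u)                          # nonlinear source term

def heun_step(u, h):
    predictor = u + h * f(u)
    return u + 0.5 * h * (f(u) + f(predictor))    # 2nd-order Runge-Kutta (Heun)

h = 3.0
# Fixed points of the Heun map satisfy f(u) + f(u + h*f(u)) = 0.  Besides the true
# roots u = 0 and u = 1, this yields h^2 u^2 - (h^2 + 2h) u + (h + 2) = 0.
spurious = np.roots([h**2, -(h**2 + 2.0 * h), h + 2.0]).real   # real for h > 2
for u in spurious:
    print(f"u = {u:.4f}  f(u) = {f(u):+.4f}  map residual = {heun_step(u, h) - u:+.2e}")
```

The printed residuals are at round-off level even though f(u) is clearly nonzero, i.e. the numerical scheme, not the differential equation, owns these "steady states"; this is exactly the kind of artifact the paper warns can mislead the interpretation of reacting-flow computations.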
MultiElec: A MATLAB Based Application for MEA Data Analysis.
Georgiadis, Vassilis; Stephanou, Anastasis; Townsend, Paul A; Jackson, Thomas R
2015-01-01
We present MultiElec, an open source MATLAB based application for data analysis of microelectrode array (MEA) recordings. MultiElec displays an extremely user-friendly graphic user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes and includes functions for activation-time determination, the production of activation-time heat maps with activation time and isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and for incomplete data sets to be analysed. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/.
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed; it requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
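Double delta coding is not defined in detail in this abstract; the sketch below shows the generic idea of first- and second-order differencing of a scan line, under the assumption that "double delta" means taking differences of the first-order differences. The reconstruction step confirms the transform is lossless before any entropy coding would be applied.

```python
import numpy as np

row = np.array([120, 121, 123, 126, 130, 135, 141], dtype=np.int64)  # one image scan line

# First-order delta: delta[0] = row[0], delta[i] = row[i] - row[i-1].
delta = np.diff(row, prepend=0)

# "Double delta" (assumed meaning): differences of the first-order differences.
double_delta = np.diff(delta, prepend=0)

# Reconstruction: two cumulative sums invert the two differencing passes (lossless).
recon = np.cumsum(np.cumsum(double_delta))
assert np.array_equal(recon, row)

print("row          :", row)
print("delta        :", delta)
print("double delta :", double_delta)
```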
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Given that sensors in wireless sensor networks are energy-limited and operate over imperfect wireless channels, there is a strong need for a low-complexity coding method that combines a high compression ratio with noise resilience. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing approaches, from theory to practice, to distributed joint source-channel coding over independent channels, multiple-access channels, and broadcast channels are introduced. We also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which it is required to interact.
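The generated C/C++/Python source is not reproduced in this record, so the following minimal Python sketch only illustrates the general shape of state-chart code an autocoder might emit: each state becomes a handler, and a dispatch loop consumes events. The state names and events are hypothetical and do not come from the NASA tool.

```python
# Minimal hand-written illustration of autocoded state-chart structure (hypothetical states/events).

class Machine:
    def __init__(self):
        self.state = self.idle          # the current state is a bound handler method

    # Each state handler maps an incoming event to the next state.
    def idle(self, event):
        if event == "arm":
            print("idle -> armed")
            return self.armed
        return self.idle

    def armed(self, event):
        if event == "fire":
            print("armed -> firing")
            return self.firing
        if event == "disarm":
            print("armed -> idle")
            return self.idle
        return self.armed

    def firing(self, event):
        if event == "done":
            print("firing -> idle")
            return self.idle
        return self.firing

    def dispatch(self, event):
        self.state = self.state(event)

m = Machine()
for ev in ["arm", "fire", "done", "disarm"]:
    m.dispatch(ev)
```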
Practices in source code sharing in astrophysics
NASA Astrophysics Data System (ADS)
Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly
2013-02-01
While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2014-12-01
Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed for code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections in solvers of the advection-diffusion-reaction (ADR) equation, exercising nonlinear advection, diffusion, and source terms as well as non-constant coefficients. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in test complexity. The test suite includes hundreds of unit tests and system tests that check the corresponding portions of the code. The tests start from a simple case of unidirectional advection, proceed to bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface and subsurface pollution transport. We also convey our experience in finding several errors that were not detectable with routine verification techniques.
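One core verification activity mentioned above is the mesh-convergence study against an exact solution. The sketch below is not the authors' test suite; it shows the standard way to compute an observed order of accuracy from errors on successively refined grids, using a simple analytical solution of the steady linear advection-diffusion equation as a stand-in benchmark.

```python
import numpy as np

# Exact solution of steady 1-D advection-diffusion u*c' = D*c'' on [0,1], with c(0)=0, c(1)=1.
u, D = 1.0, 0.1
Pe = u / D

def exact(x):
    return (np.exp(Pe * x) - 1.0) / (np.exp(Pe) - 1.0)

def solve(num_cells):
    """Second-order central-difference solution with num_cells uniform intervals."""
    h = 1.0 / num_cells
    n = num_cells - 1                    # interior unknowns
    x = np.linspace(h, 1.0 - h, n)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        aw = -D / h**2 - u / (2 * h)     # coefficient of c[i-1]
        ap = 2 * D / h**2                # coefficient of c[i]
        ae = -D / h**2 + u / (2 * h)     # coefficient of c[i+1]
        A[i, i] = ap
        if i > 0:
            A[i, i - 1] = aw
        else:
            b[i] -= aw * 0.0             # left boundary value c(0) = 0
        if i < n - 1:
            A[i, i + 1] = ae
        else:
            b[i] -= ae * 1.0             # right boundary value c(1) = 1
    return x, np.linalg.solve(A, b)

errors = []
for num_cells in (20, 40, 80):          # halve the grid spacing each time
    x, c = solve(num_cells)
    errors.append(np.max(np.abs(c - exact(x))))

for e_coarse, e_fine in zip(errors, errors[1:]):
    print("observed order of accuracy ~", np.log(e_coarse / e_fine) / np.log(2.0))
```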
Utilization of genetic tests: analysis of gene-specific billing in Medicare claims data.
Lynch, Julie A; Berse, Brygida; Dotson, W David; Khoury, Muin J; Coomer, Nicole; Kautter, John
2017-08-01
We examined the utilization of precision medicine tests among Medicare beneficiaries through analysis of gene-specific tier 1 and 2 billing codes developed by the American Medical Association in 2012. We conducted a retrospective cross-sectional study. The primary source of data was 2013 Medicare 100% fee-for-service claims. We identified claims billed for each laboratory test, the number of patients tested, expenditures, and the diagnostic codes indicated for testing. We analyzed variations in testing by patient demographics and region of the country. Pharmacogenetic tests were billed most frequently, accounting for 48% of the expenditures for new codes. The most common indications for testing were breast cancer, long-term use of medications, and disorders of lipid metabolism. There was underutilization of guideline-recommended tumor mutation tests (e.g., epidermal growth factor receptor) and substantial overutilization of a test discouraged by guidelines (methylenetetrahydrofolate reductase). Methodology-based tier 2 codes represented 15% of all claims billed with the new codes. The highest rate of testing per beneficiary was in Mississippi and the lowest rate was in Alaska. Gene-specific billing codes significantly improved our ability to conduct population-level research of precision medicine. Analysis of these data in conjunction with clinical records should be conducted to validate findings. Genet Med advance online publication 26 January 2017.
Comparison of simulation and experimental results for a gas puff nozzle on Ambiorix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnier, J-N.; Chevalier, J-M.; Dubroca, B.
One of the source terms for Z-pinch experiments is the gas puff density profile. In order to characterize the gas jet, an experiment based on interferometry has been performed. The first study was a point measurement (a section density profile), which led us to develop a global and instantaneous interferometric imaging method. In order to optimise the nozzle, we simulated the experiment with a flow calculation code (ARES). In this paper, the experimental results are compared with simulations. The different gas properties (He, Ne, Ar) and the flow duration led us to account, on the one hand, for the gas viscosity and, on the other, to modify the code for unsteady flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements to interface with DMS messages and data transfers relating to BWAS operations.
A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method
NASA Astrophysics Data System (ADS)
Skinner, M. A.; Ostriker, E. C.
2013-04-01
We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.
Modeling activities on the negative-ion-based Neutral Beam Injectors of the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostinetti, P.; Antoni, V.; Chitarin, G.
2011-09-26
At the National Institute for Fusion Science (NIFS), large-scale negative ion sources have been widely used for the Neutral Beam Injectors (NBIs) mounted on the Large Helical Device (LHD), the world's largest superconducting helical system. These injectors have achieved outstanding performance in terms of beam energy, negative-ion current, and optics, and represent a reference for the development of heating and current drive NBIs for ITER. In the framework of the support activities for the ITER NBIs, the PRIMA test facility, which includes an RF-driven ion source with a 100 keV accelerator (SPIDER) and a complete 1 MeV neutral beam system (MITICA), is under construction at Consorzio RFX in Padova. An experimental validation of the codes has been undertaken in order to prove the accuracy of the simulations and the soundness of the SPIDER and MITICA designs. To this purpose, the whole set of codes has been applied to the LHD NBIs in a joint activity between Consorzio RFX and NIFS, with the goal of comparing and benchmarking the codes against the experimental data. A description of these modeling activities and a discussion of the main results obtained are reported in this paper.
NASA Astrophysics Data System (ADS)
Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang
2018-06-01
Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model has limitations for addressing long-term N dynamics because it ignores temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and to incorporate N-leaching lag effects into the N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (the Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in the contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time lengths as well as with changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
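The N-leaching lag time described above is determined from the cross-correlation between annual N inputs and riverine N export. As an illustration of that step only (with synthetic data, not the Yongan watershed record), the sketch below picks the lag that maximizes the Pearson correlation between the shifted input series and the export series.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 31-year records: riverine export responds to N input with a 4-year delay (assumed).
years = np.arange(1980, 2011)
n_input = rng.normal(60.0, 8.0, years.size)                        # kg N / ha / yr (synthetic)
true_lag = 4
n_export = 0.3 * np.roll(n_input, true_lag) + rng.normal(0, 0.5, years.size)
n_export[:true_lag] = n_export[true_lag]                           # pad the first lagged years

def lagged_correlation(x, y, lag):
    """Pearson correlation between the input shifted forward by `lag` years and the export."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

lags = range(0, 11)
corrs = [lagged_correlation(n_input, n_export, k) for k in lags]
best = max(lags, key=lambda k: corrs[k])
print("correlation by lag:", np.round(corrs, 2))
print("selected lag (years):", best)
```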
Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes and detecting software plagiarism. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. The method begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, for a total of 19 dimensions. These metrics are then input to the neural network for supervised learning, with the network weights produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code shows that it outperforms the others overall, with an acceptable overhead. PMID:29095934
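The 19 lexical, layout, structure, and syntax metrics used as the network's input are not listed in this record. The sketch below computes a few generic metrics of that flavor (line length, indentation, keyword frequencies, comment ratio) from a Java source string, purely to illustrate the feature-extraction step that precedes the PSO-trained BP network; the specific metrics are assumptions, not the paper's feature set.

```python
import re

JAVA_KEYWORDS = ["if", "else", "for", "while", "switch", "try", "catch", "new", "return"]

def extract_features(source: str) -> dict:
    """Toy lexical/layout metrics for authorship attribution (illustrative, not the paper's 19 metrics)."""
    lines = source.splitlines()
    nonblank = [ln for ln in lines if ln.strip()]
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    features = {
        "avg_line_length": sum(len(ln) for ln in nonblank) / max(len(nonblank), 1),
        "blank_line_ratio": 1.0 - len(nonblank) / max(len(lines), 1),
        "avg_indent": sum(len(ln) - len(ln.lstrip()) for ln in nonblank) / max(len(nonblank), 1),
        "comment_ratio": sum(1 for ln in lines if ln.strip().startswith("//")) / max(len(lines), 1),
        "brace_on_own_line": sum(1 for ln in lines if ln.strip() == "{") / max(len(lines), 1),
    }
    for kw in JAVA_KEYWORDS:
        features[f"kw_{kw}"] = sum(1 for t in tokens if t == kw) / max(len(tokens), 1)
    return features

sample = """public class Demo {
    // simple loop
    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) { System.out.println(i); }
    }
}
"""
for name, value in extract_features(sample).items():
    print(f"{name:18s} {value:.3f}")
```

A feature vector like this would then be fed to a multilayer network whose weights are trained by the PSO-BP hybrid described in the abstract.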
Spatial correlation-based side information refinement for distributed video coding
NASA Astrophysics Data System (ADS)
Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin
2013-12-01
Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap remains when compared with prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, the decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for the subsequent groups. It is shown that the proposed progressive SIR method leads to significant improvements over the Discover DVC codec as well as other SIR schemes recently introduced in the literature.
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with compressing signal-source memory space, while processor coding deals with compressing signal-processor computational time. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low-Altitude VLF Transmitter
2007-08-31
[List-of-figures excerpt (captions 8-11): low- and high-altitude fields produced by 10-kHz and 20-kHz sources, computed using the FD and TD codes, for three different grid spacings; the agreement between the FD and TD results is excellent, validating the new FD code.]
NASA Astrophysics Data System (ADS)
Bauwe, Andreas; Eckhardt, Kai-Uwe; Lennartz, Bernd
2017-04-01
Eutrophication is still one of the main environmental problems in the Baltic Sea. Currently, agricultural diffuse sources constitute the major portion of phosphorus (P) fluxes to the Baltic Sea and have to be reduced to achieve the HELCOM targets and improve the ecological status. Eco-hydrological models are suitable tools to identify sources of nutrients and possible measures aiming at reducing nutrient loads into surface waters. In this study, the Soil and Water Assessment Tool (SWAT) was applied to the Warnow river basin (3300 km2), the second largest watershed in Germany discharging into the Baltic Sea. The Warnow river basin is located in northeastern Germany and characterized by lowlands with a high proportion of artificially drained areas. The aims of this study were (i) to estimate P loadings for individual flow fractions (point sources, surface runoff, tile flow, groundwater flow), spatially distributed at the sub-basin scale, and (ii), since the official version of SWAT does not allow for modeling P in tile drains, to test two different approaches of simulating P in tile drains by changing the SWAT source code. The SWAT source code was modified so that (i) the soluble P concentration of the groundwater was transferred to the tile water and (ii) the soluble P in the soil was transferred to the tiles. The SWAT model was first calibrated (2002-2011) and validated (1992-2001) for stream flow at 7 headwater catchments at a daily time scale. Based on this, the stream flow at the outlet of the Warnow river basin was simulated. Performance statistics indicated at least satisfactory model results for each sub-basin. Breaking down the discharge into flow constituents shows that stream flow is mainly governed by groundwater and tile flow. Due to the topographic situation with gentle slopes, surface runoff played only a minor role. Results further indicate that the prediction of soluble P loads was improved by the modified SWAT versions. Major sources of P in the rivers are groundwater and tile flow. P was also released by surface runoff during large storm events, when sediment was eroded into the rivers. The contribution of point sources (wastewater treatment plants) to the overall P loading was low. The modifications made to the SWAT source code should be considered a starting point for simulating P loads in artificially drained landscapes more precisely. Further testing and development of the code is required.
Numerical investigations of low-density nozzle flow by solving the Boltzmann equation
NASA Technical Reports Server (NTRS)
Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen
1995-01-01
A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in physical space for a given discrete ordinate, based on a time-marching technique with four-stage Runge-Kutta time integration, in which Roe's second-order upwind difference scheme is used to discretize the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Steps 2 and 3 are repeated until the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be validated experimentally.
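The abstract outlines a time-marching solution with four-stage Runge-Kutta integration, upwind differencing of the convective terms, and collision terms treated as sources. The sketch below is not the BGK code; it applies the same ingredients to a scalar 1-D model equation (advection with a relaxation source, first-order upwind instead of Roe's second-order scheme) purely to show how the pieces fit together.

```python
import numpy as np

# Scalar model problem: df/dt + a*df/dx = (f_eq - f)/tau  (advection + relaxation source).
a, tau = 1.0, 0.5
nx, L = 200, 10.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
f_eq = np.exp(-((x - 5.0)) ** 2)                 # assumed equilibrium profile
f = np.zeros(nx)                                 # initial condition

def rhs(f):
    """First-order upwind convective term (a > 0) plus relaxation source, periodic boundary."""
    dfdx = (f - np.roll(f, 1)) / dx
    return -a * dfdx + (f_eq - f) / tau

dt = 0.4 * dx / a                                # CFL-limited time step
for step in range(2000):                         # march toward an approximate steady state
    k1 = rhs(f)
    k2 = rhs(f + 0.5 * dt * k1)
    k3 = rhs(f + 0.5 * dt * k2)
    k4 = rhs(f + dt * k3)
    f_new = f + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    if np.max(np.abs(f_new - f)) < 1e-10:        # simple convergence criterion
        print("converged at step", step)
        break
    f = f_new

print("max f at steady state:", f.max())
```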
NASA Astrophysics Data System (ADS)
Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.
2018-05-01
Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Because of the very small time steps (on the order of the inverse plasma frequency) and mesh sizes used, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has strongly increased. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large RF-driven negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source. The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced, as well as their coupling to codes describing the whole source (PIC codes or fluid codes). Different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion are presented and discussed, along with selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. Recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
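To make the basic PIC cycle concrete (deposit charge, solve the field, gather, push), the following heavily simplified 1-D electrostatic sketch in normalized units is included; it is far from the 3D, magnetized, multi-species ONIX-class codes discussed above and only illustrates why the time step must resolve the inverse plasma frequency.

```python
import numpy as np

rng = np.random.default_rng(7)

# Normalized units: electron charge/mass = -1/1, ion background neutralizes the mean density.
nx, L = 64, 2 * np.pi
n_part = 20000
dx = L / nx
dt = 0.1                          # a fraction of the inverse plasma frequency (omega_p = 1 here)

xp = rng.uniform(0, L, n_part)
vp = 0.05 * np.sin(xp)            # small velocity perturbation -> Langmuir oscillation
q_per_particle = -L / n_part      # total electron charge chosen so that omega_p = 1

k = 2 * np.pi * np.fft.rfftfreq(nx, d=dx)

def fields(xp):
    """Nearest-grid-point charge deposit, then solve dE/dx = rho (Gauss's law) via FFT."""
    idx = (xp / dx).astype(int) % nx
    rho = np.bincount(idx, minlength=nx) * q_per_particle / dx
    rho += 1.0                    # neutralizing ion background (mean electron density is 1)
    rho_hat = np.fft.rfft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * k[1:])
    return np.fft.irfft(E_hat, n=nx), idx

for step in range(200):
    E, idx = fields(xp)
    vp += -1.0 * E[idx] * dt      # kick: electron charge/mass = -1
    xp = (xp + vp * dt) % L       # drift, periodic domain
    if step % 50 == 0:
        print(f"step {step:3d}  field energy = {0.5 * np.sum(E**2) * dx:.4e}")
```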
Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea
2010-04-01
The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.
Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.
Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D
2018-01-01
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning with a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against the term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
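As a generic illustration of the CNN-on-text approach described above (not the authors' architecture or data), the following Keras sketch classifies tokenized report text into topography-code classes with an embedding layer, a 1-D convolution, and global max pooling. The vocabulary size, sequence length, and number of code classes are placeholder values.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, N_CODES = 5000, 400, 12     # placeholder sizes (e.g., 12 topography codes)

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),            # word embeddings learned from the reports
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),                   # strongest n-gram response per filter
    layers.Dropout(0.5),
    layers.Dense(N_CODES, activation="softmax"),   # one probability per topography code
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()

# Tiny synthetic batch just to show the expected input/output shapes.
x_fake = np.random.randint(1, VOCAB_SIZE, size=(8, MAX_LEN))
y_fake = np.random.randint(0, N_CODES, size=(8,))
model.fit(x_fake, y_fake, epochs=1, verbose=0)
print(model.predict(x_fake[:2]).shape)            # -> (2, N_CODES)
```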
Development of a Chemically Reacting Flow Solver on the Graphic Processing Units
2011-05-10
[Fragmentary excerpt: related GPU work includes the GAMER code for astrophysical simulation by Schive et al. (2010) and the work of Thibault and …; for simplification, the one-dimensional Euler equations with no source terms are considered in discretized form; one of the candidate flux formulations is noted to be more diffusive than the others because of the large bounds on the numerical signal velocities b+ and b-; Section 3.4 covers time-marching methods.]
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle-emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. The data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.
Correlation of recent fission product release data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kress, T.S.; Lorenz, R.A.; Nakamura, T.
For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas-phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, which is the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data arise because the more recent data have better time resolution than the database that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
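CORSOR-M expresses the fractional release rate of a fission product as an Arrhenius function of fuel temperature. The sketch below implements that generic functional form with made-up coefficients (the real CORSOR-M coefficients are species-specific and are not given in this abstract) to show how such a simplified correlation is integrated over a fuel heat-up history.

```python
import numpy as np

R = 8.314e-3          # gas constant, kJ/(mol*K)

def release_rate(T, k0=2.0e5, Q=300.0):
    """Arrhenius-type fractional release rate (1/min); k0 and Q are illustrative, not CORSOR-M values."""
    return k0 * np.exp(-Q / (R * T))

# Simple fuel heat-up history: ramp from 1500 K to 2500 K over 60 minutes, then hold.
t = np.linspace(0.0, 120.0, 1201)                 # minutes
T = np.where(t < 60.0, 1500.0 + (2500.0 - 1500.0) * t / 60.0, 2500.0)

# Integrate dF/dt = k(T) * (1 - F): the un-released fraction decays at the instantaneous rate.
F = np.zeros_like(t)
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    F[i] = 1.0 - (1.0 - F[i - 1]) * np.exp(-release_rate(T[i]) * dt)

print(f"released fraction at 60 min:  {F[t.searchsorted(60.0)]:.3f}")
print(f"released fraction at 120 min: {F[-1]:.3f}")
```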
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in an environment where the packet size changes at each hop, with smaller energy consumption.
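μTESLA-style schemes, mentioned above as a baseline, authenticate broadcast packets with keys drawn from a one-way hash chain that is disclosed in reverse order of generation. The sketch below shows only that hash-chain idea (generation, commitment, and verification of a later-disclosed key); it is not one of the three schemes proposed in the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_key_chain(seed: bytes, length: int) -> list:
    """K[length-1] = H(seed), K[i] = H(K[i+1]); keys are disclosed in index order 0, 1, 2, ..."""
    chain = [h(seed)]
    for _ in range(length - 1):
        chain.append(h(chain[-1]))
    chain.reverse()
    return chain

def verify_disclosed_key(disclosed: bytes, last_authentic: bytes, interval_gap: int) -> bool:
    """Hash the disclosed key `interval_gap` times; it must map back to the last authentic key."""
    k = disclosed
    for _ in range(interval_gap):
        k = h(k)
    return k == last_authentic

chain = build_key_chain(b"base-station-secret", 10)
commitment = chain[0]                  # K0 is pre-distributed to the sensor nodes

# Later, the base station discloses K3; a node that only holds K0 checks it with 3 hashes.
print(verify_disclosed_key(chain[3], commitment, interval_gap=3))     # True
print(verify_disclosed_key(b"forged-key-0000", commitment, 3))        # False
```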
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 um. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, those validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
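The least squares reconstruction referred to above is not spelled out in this record; the toy sketch below shows the generic formulation: build a system matrix that maps the object to coded detector measurements (here a random, hypothetical matrix) and recover the object with a sparse iterative least-squares solver. In the real CSI system, the matrix encodes the magnified coded-mask geometry and the source model.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)

n_pixels, n_measurements = 16 * 16, 1024

# Hypothetical stand-in for the CSI system matrix (mask geometry + source model in the real code).
A = sparse_random(n_measurements, n_pixels, density=0.05, random_state=3, format="csr")

# Ground-truth object: a small bright square on a dark background.
truth = np.zeros((16, 16))
truth[6:10, 6:10] = 1.0
x_true = truth.ravel()

# Simulated noisy coded measurements.
b = A @ x_true + 0.01 * rng.normal(size=n_measurements)

# Least-squares reconstruction with LSQR (damp adds a small Tikhonov regularization term).
x_rec = lsqr(A, b, damp=1e-3)[0]

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```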
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.
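The specific statistics SAP reports are not enumerated here; the small Python sketch below merely illustrates the kind of per-module counting a static source analyzer performs on fixed-form FORTRAN (comment lines, continuation lines, and a few statement keywords). The classification rules are simplified assumptions, not SAP's actual measures.

```python
from collections import Counter

def analyze_fortran(source: str) -> Counter:
    """Count simple statement categories in fixed-form FORTRAN (simplified, illustrative rules)."""
    stats = Counter()
    for line in source.splitlines():
        if not line.strip():
            continue
        if line[:1].upper() in ("C", "*") or line.lstrip().startswith("!"):
            stats["comment"] += 1
            continue
        if len(line) > 5 and line[5] not in (" ", "0"):   # column 6 marks a continuation line
            stats["continuation"] += 1
            continue
        body = line[6:].strip().upper()
        for kw in ("SUBROUTINE", "FUNCTION", "COMMON", "IF", "DO", "CALL", "RETURN"):
            if body.startswith(kw):
                stats[kw.lower()] += 1
                break
        else:
            stats["other"] += 1
    return stats

sample = """C     COMPUTE MEAN
      SUBROUTINE MEAN(X, N, AVG)
      COMMON /BLK/ SCALE
      AVG = 0.0
      DO 10 I = 1, N
   10 AVG = AVG + X(I)
      AVG = AVG * SCALE / N
      RETURN
      END
"""
print(analyze_fortran(sample))
```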
Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim
2016-01-01
Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
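Late data integration as described above trains one model per data source and combines their outputs with a meta-learner. The sketch below shows that pattern with scikit-learn's StackingClassifier on synthetic "structured" and "text-like" feature blocks; the column split, models, and data are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: columns 0-19 play the role of structured data, 20-69 of text features.
X, y = make_classification(n_samples=600, n_features=70, n_informative=15, random_state=0)
structured_cols = list(range(20))
text_cols = list(range(20, 70))

def source_model(cols):
    """A per-source model: select that source's columns, then fit a simple classifier."""
    return make_pipeline(ColumnTransformer([("sel", "passthrough", cols)]),
                         LogisticRegression(max_iter=1000))

late_integration = StackingClassifier(
    estimators=[("structured", source_model(structured_cols)),
                ("text", source_model(text_cols))],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner over per-source predictions
    stack_method="predict_proba",
)

score = cross_val_score(late_integration, X, y, cv=5).mean()
print(f"late-integration CV accuracy: {score:.3f}")
```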
40 CFR Appendix A to Subpart A of... - Tables
Code of Federal Regulations, 2010 CFR
2010-07-01
[Table excerpt: data elements for reporting on emissions from nonpoint sources and nonroad mobile sources, where required by 40 CFR (citation truncated), include inventory start date, inventory end date, contact name, contact phone number, FIPS code, facility ID codes, unit ID code, and process ID code.]
SOURCELESS STARTUP. A MACHINE CODE FOR COMPUTING LOW-SOURCE REACTOR STARTUPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacMillan, D.B.
1960-06-01
A revision to the sourceless start-up code is presented. The code solves a system of differential equations encountered in computing the probability distribution of activity at an observed power level during reactor start-up from a very low source level. (J.R.D.)
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing their negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continue to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Craig, Elizabeth; Kerr, Neal; McDonald, Gabrielle
2017-03-01
In New Zealand, there is a paucity of information on children with chronic conditions and disabilities (CCD). One reason is that many are managed in hospital outpatient clinics, where diagnostic coding of health-care events does not occur. This study explores the feasibility of coding paediatric outpatient data to provide health planners with information on children with CCD. Thirty-seven clinicians from six District Health Boards (DHBs) trialled coding over 12 weeks. In five DHBs, the International Classification of Diseases and Related Health Problems, 10th Edition, Australian Modification (ICD-10-AM) and the Systematised Nomenclature of Medicine Clinical Terms (SNOMED-CT) were trialled for 6 weeks each. In one DHB, ICD-10-AM was trialled for 12 weeks. A random sample (30%) of ICD-10-AM coded events was also coded by clinical coders. A mix of paper and electronic methods was used. In total, 2,604 outpatient events were coded in ICD-10-AM and 693 in SNOMED-CT. Dual coding occurred for 770 (29.6%) ICD-10-AM events. Overall, 34% of ICD-10-AM and 40% of SNOMED-CT events were for developmental and behavioural disorders. Chronic medical conditions were also common. Clinicians were concerned about the workload impacts, particularly for paper-based methods. Coders were concerned about clinicians' adherence to coding guidelines and the poor quality of documentation in some notes. Coded outpatient data could provide planners with a rich source of information on children with CCD. However, coding is also resource intensive; thus, its costs need to be weighed against the costs of managing a much larger health budget using very limited information. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
Benchmarking kinetic calculations of resistive wall mode stability
NASA Astrophysics Data System (ADS)
Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.
2014-05-01
Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM-stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particles' precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes are identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by MISK calculations predicting near-marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
SolTrace | Concentrating Solar Power | NREL
SolTrace is available as an NREL packaged distribution or from source code at the SolTrace open source project website. The code uses a Monte Carlo ray-tracing methodology.
Code System for Performance Assessment Ground-water Analysis for Low-level Nuclear Waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KOZAK, MATTHEW W.
1994-02-09
Version 00. The PAGAN code system is part of the performance assessment methodology developed for use by the U.S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1, has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which provide semi-analytical solutions to the convection-dispersion equation. The system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
Development of an upwind, finite-volume code with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.
1994-01-01
Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques, and a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical, and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data.
Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code
NASA Astrophysics Data System (ADS)
Faghihi, F.; Mehdizadeh, S.; Hadad, K.
The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This combined simulation and experimental effort is a first such experience in Iran, intended to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated, and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and as computed by the MCNP code, are compared.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
NASA Astrophysics Data System (ADS)
Johnson, Ryan; Kercher, Andrew; Schwer, Douglas; Corrigan, Andrew; Kailasanath, Kazhikathra
2017-11-01
This presentation focuses on the development of a Discontinuous Galerkin (DG) method for application to chemically reacting flows. The in-house code, called Propel, was developed by the Laboratory of Computational Physics and Fluid Dynamics at the Naval Research Laboratory. It was designed specifically for developing advanced multi-dimensional algorithms to run efficiently on new and innovative architectures such as GPUs. For these results, Propel solves for convection and diffusion simultaneously with detailed transport and thermodynamics. Chemistry is currently solved in a time-split approach using Strang-splitting with finite element DG time integration of chemical source terms. Results presented here show canonical unsteady reacting flow cases, such as co-flow and splitter plate, and we report performance for higher order DG on CPU and GPUs.
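The Strang-split coupling of transport and stiff chemistry mentioned above can be sketched generically: a half step of chemistry, a full step of transport, and another half step of chemistry. The Python sketch below is only a schematic stand-in for the DG integrator in Propel; the toy right-hand sides, the cell-wise state vector, and the use of a stiff BDF ODE solver are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    def integrate_chem(u, dt, chem_rhs):
        # Stiff chemistry sub-step, integrated with an implicit BDF solver
        sol = solve_ivp(lambda t, y: chem_rhs(y), (0.0, dt), u, method="BDF")
        return sol.y[:, -1]

    def strang_step(u, dt, transport_rhs, chem_rhs):
        """One Strang-split step: half chemistry, full transport, half chemistry."""
        u = integrate_chem(u, 0.5 * dt, chem_rhs)
        u = u + dt * transport_rhs(u)      # explicit transport update (placeholder)
        u = integrate_chem(u, 0.5 * dt, chem_rhs)
        return u

    # Toy problem: mild "transport" damping plus stiff exponential chemistry
    u0 = np.array([1.0, 0.9, 0.7])
    u1 = strang_step(u0, dt=1.0e-3,
                     transport_rhs=lambda y: -0.1 * y,
                     chem_rhs=lambda y: -1.0e4 * y)
    print(u1)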
Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S
2016-03-08
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
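The binary-to-image step described above can be sketched in a few lines: read the raw bytes of a compiled program and reshape them into a two-dimensional grayscale array, one byte per pixel. The fixed width and zero padding below are assumptions for illustration, not the preprocessing actually used in the project.

    import numpy as np

    def binary_to_image(path, width=256):
        """Map a compiled binary to a 2-D uint8 array (one byte per pixel),
        padding the final row with zeros; width is an arbitrary choice."""
        data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
        rows = int(np.ceil(data.size / width))
        img = np.zeros(rows * width, dtype=np.uint8)
        img[:data.size] = data
        return img.reshape(rows, width)

    # img = binary_to_image("a.out")   # feed to the CNN as a single-channel image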
Gold emissivities for hydrocode applications
NASA Astrophysics Data System (ADS)
Bowen, C.; Wagon, F.; Galmiche, D.; Loiseau, P.; Dattolo, E.; Babonneau, D.
2004-10-01
The Radiom model [M. Busquet, Phys Fluids B 5, 4191 (1993)] is designed to provide a radiative-hydrodynamic code with non-local thermodynamic equilibrium (non-LTE) data efficiently by using LTE tables. Comparison with benchmark data [M. Klapisch and A. Bar-Shalom, J. Quant. Spectrosc. Radiat. Transf. 58, 687 (1997)] has shown Radiom to be inaccurate far from LTE and for heavy ions. In particular, the emissivity was found to be strongly underestimated. A recent algorithm, Gondor [C. Bowen and P. Kaiser, J. Quant. Spectrosc. Radiat. Transf. 81, 85 (2003)], was introduced to improve the gold non-LTE ionization and corresponding opacity. It relies on fitting the collisional ionization rate to reproduce benchmark data given by the Averroès superconfiguration code [O. Peyrusse, J. Phys. B 33, 4303 (2000)]. Gondor is extended here to gold emissivity calculations, with two simple modifications of the two-level atom line source function used by Radiom: (a) a larger collisional excitation rate and (b) the addition of a Planckian source term, fitted to spectrally integrated Averroès emissivity data. This approach improves the agreement between experiments and hydrodynamic simulations.
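For orientation, the classical two-level-atom line source function used by Radiom, and a schematic of the kind of modification described above (an enlarged collisional rate plus an added Planckian term), can be written as follows; stimulated emission is neglected, and the primed rate and the coefficient a are illustrative placeholders rather than the actual Gondor fit parameters:

    S_\nu = (1-\epsilon)\,\bar{J}_\nu + \epsilon\,B_\nu(T_e),
    \qquad \epsilon = \frac{C_{21}}{C_{21} + A_{21}}
    \;\longrightarrow\;
    S_\nu' = (1-\epsilon')\,\bar{J}_\nu + \epsilon'\,B_\nu(T_e) + a\,B_\nu(T_e),
    \qquad \epsilon' = \frac{C_{21}'}{C_{21}' + A_{21}},

where \bar{J}_\nu is the mean intensity, B_\nu the Planck function, A_{21} and C_{21} the spontaneous and collisional de-excitation rates, C_{21}' > C_{21} the enlarged collisional rate, and a a coefficient fitted to the spectrally integrated Averroès emissivities.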
Methodology for the nuclear design validation of an Alternate Emergency Management Centre (CAGE)
NASA Astrophysics Data System (ADS)
Hueso, César; Fabbri, Marco; de la Fuente, Cristina; Janés, Albert; Massuet, Joan; Zamora, Imanol; Gasca, Cristina; Hernández, Héctor; Vega, J. Ángel
2017-09-01
The methodology is devised by coupling different codes. The study of weather conditions as part of the site data determines the relative concentrations of radionuclides in the air using ARCON96. The activity in the air is characterized, depending on the source and release sequence specified in NUREG-1465, by the RADTRAD code, which provides results for the inner-cloud source term contribution. Once the activities are known, energy spectra are inferred using ORIGEN-S and are used as input for the models of the outer cloud, filters, and containment generated with MCNP5. The sum of the different contributions must meet the habitability conditions specified by the CSN (Spanish Nuclear Regulatory Body) (TEDE < 50 mSv and equivalent dose to the thyroid < 500 mSv within 30 days following the accident), so the dose is optimized by varying parameters such as CAGE location, flow filtering, need for recirculation, thicknesses and compositions of the walls, etc. The results for the most penalizing area meet the established criteria, and therefore the CAGE building design based on the presented methodology is radiologically validated.
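The final habitability check amounts to summing the 30-day dose contributions of the separate models and comparing them with the CSN limits quoted above. A minimal sketch, with placeholder numbers that are not results from the paper:

    def habitability_ok(tede_contributions_mSv, thyroid_contributions_mSv,
                        tede_limit=50.0, thyroid_limit=500.0):
        """Sum the contributions (inner cloud, outer cloud, filters, containment
        walls, ...) and test them against the CSN habitability criteria."""
        return (sum(tede_contributions_mSv) < tede_limit and
                sum(thyroid_contributions_mSv) < thyroid_limit)

    print(habitability_ok([12.0, 8.5, 3.1, 1.4], [120.0, 95.0, 22.0, 9.0]))  # True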
Tidal Response of Europa's Subsurface Ocean
NASA Astrophysics Data System (ADS)
Karatekin, O.; Comblen, R.; Deleersnijder, E.; Dehant, V. M.
2010-12-01
Time-variable tides in the subsurface oceans of icy satellites cause large periodic surface displacements, and tidal dissipation can become a major energy source that can affect long-term orbital and internal evolution. In the present study, we investigate the response of the subsurface ocean of Europa to a time-variable tidal potential. Two-dimensional nonlinear shallow water equations are solved on a sphere by means of a finite element code. The resulting ocean tidal flow velocities, dissipation and surface displacements will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sienicki, J.J.
A fast-running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design and fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.
Using the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, P. J.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Wallin, J. F.
2013-01-01
The Astrophysics Source Code Library (ASCL) is a free on-line registry of source codes that are of interest to astrophysicists; with over 500 codes, it is the largest collection of scientist-written astrophysics programs in existence. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. An advisory committee formed in 2011 provides input and guides the development and expansion of the ASCL, and since January 2012, all accepted ASCL entries are indexed by ADS. Though software is increasingly important for the advancement of science in astrophysics, these methods are still often hidden from view or difficult to find. The ASCL (ascl.net/) seeks to improve the transparency and reproducibility of research by making these vital methods discoverable, and to provide recognition and incentive to those who write and release programs useful for astrophysics research. This poster provides a description of the ASCL, an update on recent additions, and the changes in the astrophysics community we are starting to see because of the ASCL.
Classification scheme and prevention measures for caught-in-between occupational fatalities.
Chi, Chia-Fen; Lin, Syuan-Zih
2018-04-01
The current study analyzed 312 caught-in-between fatalities caused by machinery and vehicles. A comprehensive and mutually exclusive coding scheme was developed to analyze and code each caught-in-between fatality in terms of age, gender, experience of the victim, type of industry, source of injury, and causes for these accidents. Boolean algebra analysis was applied on these 312 caught-in-between fatalities to derive minimal cut set (MCS) causes associated with each source of injury. Eventually, contributing factors and common accident patterns associated with (1) special process machinery including textile, printing, packaging machinery, (2) metal, woodworking, and special material machinery, (3) conveyor, (4) vehicle, (5) crane, (6) construction machinery, and (7) elevator can be divided into three major groups through Boolean algebra and MCS analysis. The MCS causes associated with conveyor share the same primary causes as those of the special process machinery including textile, printing, packaging and metal, woodworking, and special material machinery. These fatalities can be eliminated by focusing on the prevention measures associated with lack of safeguards, working on a running machine or process, unintentional activation, unsafe posture or position, unsafe clothing, and defective safeguards. Other precise and effective intervention can be developed based on the identified groups of accident causes associated with each source of injury. Copyright © 2017 Elsevier Ltd. All rights reserved.
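The Boolean reduction to minimal cut sets can be illustrated with a small symbolic example: each fatality is written as a conjunction of its coded causes, the cases are OR-ed together, and the expression is reduced to a minimal disjunctive form whose remaining conjunctions are the minimal cut sets. The factor names below are generic stand-ins for the paper's coding scheme, and sympy's simplified DNF plays the role of the Boolean algebra/MCS analysis.

    from sympy import symbols
    from sympy.logic.boolalg import to_dnf

    # Illustrative contributing factors (not the paper's actual codes)
    NoGuard, RunningMachine, UnsafePosture, UnintentionalActivation = symbols(
        "NoGuard RunningMachine UnsafePosture UnintentionalActivation")

    # Each observed fatality is a conjunction of its coded causes; OR the cases
    top_event = ((NoGuard & RunningMachine) |
                 (NoGuard & RunningMachine & UnsafePosture) |
                 (UnintentionalActivation & UnsafePosture))

    # Minimal DNF: the absorbed three-factor case drops out, and each remaining
    # conjunction is a minimal cut set
    print(to_dnf(top_event, simplify=True))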
McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Stedry, M.H.
1994-07-01
McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
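The girth-4 condition on the exponent matrix can be checked directly: for circulant permutation blocks of size L, rows i1 < i2 and columns j1 < j2 form a length-4 cycle exactly when the alternating sum of the four shift exponents vanishes modulo L. The checker below is a small Python sketch of that standard test, not the paper's joint design procedure; the example matrix is illustrative.

    import numpy as np
    from itertools import combinations

    def has_girth4_cycle(E, L):
        """E holds circulant shift exponents (-1 marks an all-zero block).
        A length-4 cycle exists iff, for some rows i1<i2 and columns j1<j2,
        E[i1,j1] - E[i1,j2] + E[i2,j2] - E[i2,j1] == 0 (mod L)."""
        m, n = E.shape
        for i1, i2 in combinations(range(m), 2):
            for j1, j2 in combinations(range(n), 2):
                if -1 in (E[i1, j1], E[i1, j2], E[i2, j1], E[i2, j2]):
                    continue
                if (E[i1, j1] - E[i1, j2] + E[i2, j2] - E[i2, j1]) % L == 0:
                    return True
        return False

    E = np.array([[0, 1, 2],
                  [0, 2, 4]])
    print(has_girth4_cycle(E, L=5))   # False: this exponent matrix has no girth-4 cycles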
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsa, Z.
1988-06-16
This note presents the chromaticity calculations and code comparison results for the (x-ray lithography source) XLS (Chasman Green, XUV Cosy lattice) and (2 magnet 4T) SXLS lattices, with the standard beam optic codes, including programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.
The Astrophysics Source Code Library: Where Do We Go from Here?
NASA Astrophysics Data System (ADS)
Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J.
2014-05-01
The Astrophysics Source Code Library, started in 1999, has in the past three years grown from a repository for 40 codes to a registry of over 700 codes that are now indexed by ADS. What comes next? We examine the future of the ASCL, the challenges facing it, the rationale behind its practices, and the need to balance what we might do with what we have the resources to accomplish.
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until one of them returns the code in question as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
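The evaluation loop described above can be sketched as follows: take the words of a code's description, try progressively larger word combinations as queries, and stop when the scoring function ranks that code first. The toy in-memory scorer and the three-code corpus below are illustrative stand-ins for the actual full-text engines and the ICD-10 catalogue.

    from itertools import combinations

    def words_needed(code, descriptions, score):
        """Smallest word combination from descriptions[code] for which
        score(query, text) ranks `code` highest among all codes."""
        words = descriptions[code].lower().split()
        for k in range(1, len(words) + 1):
            for query in combinations(words, k):
                best = max(descriptions, key=lambda c: score(query, descriptions[c]))
                if best == code:
                    return query
        return tuple(words)

    # Toy corpus and a naive bag-of-words scorer (illustration only)
    icd10 = {"J45": "asthma",
             "J44": "chronic obstructive pulmonary disease",
             "J20": "acute bronchitis"}
    naive = lambda q, text: sum(w in text.lower().split() for w in q)
    print(words_needed("J44", icd10, naive))   # ('chronic',) suffices here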
NASA Astrophysics Data System (ADS)
Kwon, N.; Gentle, J.; Pierce, S. A.
2015-12-01
Software code developed for research is often used for a relatively short period of time before it is abandoned, lost, or becomes outdated. This unintentional abandonment of code is a valid problem in the 21st century scientific process, hindering widespread reusability and increasing the effort needed to develop research software. Potentially important assets, these legacy codes may be resurrected and documented digitally for long-term reuse, often with modest effort. Furthermore, the revived code may be openly accessible in a public repository for researchers to reuse or improve. For this study, the research team has begun to revive the codebase for Groundwater Decision Support System (GWDSS), originally developed for participatory decision making to aid urban planning and groundwater management, though it may serve multiple use cases beyond those originally envisioned. GWDSS was designed as a java-based wrapper with loosely federated commercial and open source components. If successfully revitalized, GWDSS will be useful for both practical applications as a teaching tool and case study for groundwater management, as well as informing theoretical research. Using the knowledge-sharing approaches documented by the NSF-funded Ontosoft project, digital documentation of GWDSS is underway, from conception to development, deployment, characterization, integration, composition, and dissemination through open source communities and geosciences modeling frameworks. Information assets, documentation, and examples are shared using open platforms for data sharing and assigned digital object identifiers. Two instances of GWDSS version 3.0 are being created: 1) a virtual machine instance for the original case study to serve as a live demonstration of the decision support tool, assuring the original version is usable, and 2) an open version of the codebase, executable installation files, and developer guide available via an open repository, assuring the source for the application is accessible with version control and potential for new branch developments. Finally, metadata about the software has been completed within the OntoSoft portal to provide descriptive curation, make GWDSS searchable, and complete documentation of the scientific software lifecycle.
Computational Infrastructure for Geodynamics (CIG)
NASA Astrophysics Data System (ADS)
Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.
2004-12-01
Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts and although this approach has proven successful, its strength for solving problems of interest is now starting to show its limitations as we try to share codes and algorithms or when we want to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to leverage and develop long-term strategic partnerships with open source development efforts within the larger thrusts of scientific computing and geoinformatics. These strategic partnerships are essential as the frontier has moved into multi-scale and multi-physics problems in which many investigators now want to use simulation software for data interpretation, data assimilation, and hypothesis testing.
Methods for Coding Tobacco-Related Twitter Data: A Systematic Review
Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai
2017-01-01
Background As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. Objective The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Methods Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. Results E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter’s Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Conclusions Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter’s databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. PMID:28363883
Source calibrations and SDC calorimeter requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.
Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.
Reliable video transmission over fading channels via channel state estimation
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-04-01
Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code ratio depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as the log-normal fading and the Rayleigh flat fading, respectively. Then, its state (mainly the long-term fading portion) is tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of the error probability and the throughput. It is then extended to incorporate variable size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
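The LMS-based tracking of the slowly varying channel state can be sketched with a one-step-ahead linear predictor; the filter order, step size, and synthetic shadowing sequence below are illustrative assumptions, not parameters from the paper.

    import numpy as np

    def lms_predict(samples, order=4, mu=0.01):
        """One-step-ahead LMS prediction of a slowly varying channel gain (dB).
        The weight vector w adapts to reduce the instantaneous squared
        prediction error."""
        w = np.zeros(order)
        preds = []
        for n in range(order, len(samples)):
            x = samples[n - order:n][::-1]   # most recent sample first
            y_hat = w @ x                    # predicted gain
            e = samples[n] - y_hat           # prediction error
            w += mu * e * x                  # LMS weight update
            preds.append(y_hat)
        return np.array(preds)

    rng = np.random.default_rng(0)
    shadowing_db = np.cumsum(0.1 * rng.standard_normal(500))   # slow log-normal trend
    print(lms_predict(shadowing_db)[-3:])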
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures are best performing in terms of statistical information criteria or prediction performances, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuga, Ken; Takahashi, Hiroyuki R.
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic, radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by the angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only the isotropic scattering but also the Thomson scattering. The numerical method for the MHD part is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas–radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in some numerical tests for propagating radiation and radiation hydrodynamics. Particularly, the correct solution is given even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
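The angular quadrature of the specific intensity that yields the radiation moments can be sketched directly; the crude longitude-latitude angle set and uniform solid-angle weights below are illustrative, not the quadrature actually used in the code.

    import numpy as np

    def radiation_moments(intensity, mu, phi, w, c=2.998e10):
        """Moments of I over angle: E = (1/c) sum w*I, F = sum w*n*I,
        P = (1/c) sum w*n*n*I, with n built from mu = cos(theta) and phi."""
        s = np.sqrt(1.0 - mu**2)
        n = np.stack([s * np.cos(phi), s * np.sin(phi), mu], axis=-1)
        E = np.sum(w * intensity) / c
        F = np.sum(w[:, None] * n * intensity[:, None], axis=0)
        P = np.einsum("a,ai,aj,a->ij", w, n, n, intensity) / c
        return E, F, P

    # Isotropic test intensity on a crude angular grid
    mu = np.repeat(np.linspace(-0.9, 0.9, 10), 8)
    phi = np.tile(np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False), 10)
    w = np.full(mu.size, 4.0 * np.pi / mu.size)      # uniform solid-angle weights
    E, F, P = radiation_moments(np.ones(mu.size), mu, phi, w)
    print(E, F, np.trace(P))   # E and trace(P) ~ 4*pi/c, F ~ 0 for isotropic I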
Image authentication using distributed source coding.
Lin, Yao-Chung; Varodayan, David; Girod, Bernd
2012-01-01
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
1991-05-31
Fragmentary report excerpt (table-of-contents and appendix listings) on the Aquarius Prolog compiler; the appendices list the source code of the C and Prolog benchmarks and the source code of the compiler itself.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
Understanding Accretion Disks through Three Dimensional Radiation MHD Simulations
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei
I study the structures and thermal properties of black hole accretion disks in the radiation pressure dominated regime. Angular momentum transfer in the disk is provided by the turbulence generated by the magneto-rotational instability (MRI), which is calculated self-consistently with a recently developed 3D radiation magneto-hydrodynamics (MHD) code based on Athena. This code, developed by my collaborators and myself, couples both the radiation momentum and energy source terms with the ideal MHD equations by modifying the standard Godunov method to handle the stiff radiation source terms. We solve the two momentum equations of the radiation transfer equations with a variable Eddington tensor (VET), which is calculated with a time independent short characteristic module. This code is well tested and accurate in both optically thin and optically thick regimes. It is also accurate for both radiation pressure and gas pressure dominated flows. With this code, I find that when photon viscosity becomes significant, the ratio between Maxwell stress and Reynolds stress from the MRI turbulence can increase significantly with radiation pressure. The thermal instability of the radiation pressure dominated disk is then studied with vertically stratified shearing box simulations. Unlike the previous results claiming that the radiation pressure dominated disk with MRI turbulence can reach a steady state without showing any unstable behavior, I find that the radiation pressure dominated disks always either collapse or expand until we have to stop the simulations. During the thermal runaway, the heating and cooling rates from the simulations are consistent with the general criterion of thermal instability. However, details of the thermal runaway are different from the predictions of the standard alpha disk model, as many assumptions in that model are not satisfied in the simulations. We also identify the key reasons why previous simulations do not find the instability. The thermal instability has many important implications for understanding the observations of both X-ray binaries and Active Galactic Nuclei (AGNs). However, direct comparisons between observations and the simulations require global radiation MHD simulations, which will be the main focus of my future work.
2007-10-01
Fragmentary report excerpt (list-of-figures entries) referencing an architecture diagram, the Eclipse Java Model, the Eclipse Java Model at the source code level, and Java source code.
Romero, Roberto; Tarca, Adi L; Chaemsaithong, Piya; Miranda, Jezid; Chaiworapongsa, Tinnakorn; Jia, Hui; Hassan, Sonia S; Kalita, Cynthia A; Cai, Juan; Yeo, Lami; Lipovich, Leonard
2014-09-01
To identify differentially expressed long non-coding RNA (lncRNA) genes in human myometrium in women with spontaneous labor at term. Myometrium was obtained from women undergoing cesarean deliveries who were not in labor (n = 19) and women in spontaneous labor at term (n = 20). RNA was extracted and profiled using an Illumina® microarray platform. We have used computational approaches to bound the extent of long non-coding RNA representation on this platform, and to identify co-differentially expressed and correlated pairs of long non-coding RNA genes and protein-coding genes sharing the same genomic loci. We identified co-differential expression and correlation at two genomic loci that contain coding-lncRNA gene pairs: SOCS2-AK054607 and LMCD1-NR_024065 in women in spontaneous labor at term. This co-differential expression and correlation was validated by qRT-PCR, an experimental method completely independent of the microarray analysis. Intriguingly, one of the two lncRNA genes differentially expressed in term labor had a key genomic structure element, a splice site, that lacked evolutionary conservation beyond primates. We provide, for the first time, evidence for coordinated differential expression and correlation of cis-encoded antisense lncRNAs and protein-coding genes with known as well as novel roles in pregnancy in the myometrium of women in spontaneous labor at term.
Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars
NASA Astrophysics Data System (ADS)
Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan
2013-04-01
The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which have their own (albeit overlapping) source regions. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.
2013-12-01
A goal of the Source Physics Experiments (SPE) is to develop explosion source models expanding monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects of signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling high-strain rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime where a linear code can take over. We developed an interface with the Spectral Element Method code, SPECFEM3D, that is an efficient implementation on parallel computers of a high-order finite element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distance at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping yield fixed we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them in regard to the impact of free-surface interactions and rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL to take into account the damage. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
Fragmentary excerpt: the thesis source-encodes video with the MPEG-4 codec and channel-encodes the bitstream with rate-compatible punctured convolutional (RCPC) codes; the visible fragments come from cited references on RCPC codes and their applications and on punctured convolutional codes of rate (n-1)/n with simplified maximum-likelihood decoding.
NASA Technical Reports Server (NTRS)
Davis, Kirsch; Bankieris, Derek
2016-01-01
As an intern at NASA Johnson Space Center (JSC), my job was to become familiar with and operate the Robot Operating System (ROS). The project converted existing software assets into ROS nodes, enabling a robotic Hexapod to be controlled by an existing PlayStation 3 (PS3) controller. When the internship started, the existing control algorithms and libraries in the Hexapod C++ source code had no ROS capabilities; that changed over the course of the internship. Converting the C++ code made it compatible with ROS, and the Hexapod is now driven by the PS3 controller. My assignment also included designing ROS messages and script programs that let these assets participate in the ROS ecosystem by subscribing to and publishing messages. The source code is organized in directories and written in C++. Testing included compiling the code in a Linux terminal and running it from a directory. Several compilation problems occurred, so the code was modified until it compiled; once it compiled and ran, it was uploaded to the Hexapod and controlled with the PS3 controller. The project outcome is a Hexapod that is fully functional, compatible with ROS, and operated with the PlayStation 3 controller. In addition, an open-source Arduino board will be integrated into the ecosystem, with circuitry designed on a breadboard to add behaviors through push buttons, potentiometers, and other simple electrical elements. Other Arduino projects will include a GPS module and a digital clock that uses signals from 22 satellites and an internal patch antenna to display accurate real time. This internship experience has pushed me to learn to code more efficiently and effectively so that I can write, subscribe to, and publish my own source code in different programming languages; this familiarity with software programming will strengthen my skills in the electrical engineering field. My experience at JSC with the Simulation and Graphics Branch (ER7) has also led me to become more proficient in coding and in ROS. I will take this knowledge back to my university and apply it to a school project on the PR2 robot, which is controlled by ROS software. The skills learned here will be used to subscribe to and publish ROS messages on the PR2, which will be controlled by an existing PS3 controller by adapting the C++ code. Overall, the skills obtained here will not be lost, but increased.
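Participating in the ROS ecosystem by subscribing to and publishing messages follows a small, fixed pattern. The sketch below is a minimal rospy node of that kind, not the Hexapod's actual C++ assets; the topic names, the command string format, and the use of the joy node for the PS3 controller are illustrative assumptions.

    #!/usr/bin/env python
    # Minimal ROS publish/subscribe sketch (rospy)
    import rospy
    from sensor_msgs.msg import Joy
    from std_msgs.msg import String

    def joy_callback(msg):
        # Translate two PS3 stick axes into a made-up Hexapod command string
        cmd = "walk x=%.2f y=%.2f" % (msg.axes[0], msg.axes[1])
        pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("hexapod_teleop")
        pub = rospy.Publisher("/hexapod/command", String, queue_size=10)
        rospy.Subscriber("/joy", Joy, joy_callback)   # PS3 pad published by the joy node
        rospy.spin()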
Reynolds-averaged Navier-Stokes based ice accretion for aircraft wings
NASA Astrophysics Data System (ADS)
Lashkajani, Kazem Hasanzadeh
This thesis addresses one of the current issues in flight safety towards increasing icing simulation capabilities for prediction of complex 2D and 3D glaze ice shapes over aircraft surfaces. During the 1980's and 1990's, the field of aero-icing was established to support design and certification of aircraft flying in icing conditions. The multidisciplinary technologies used in such codes were: aerodynamics (panel method), droplet trajectory calculations (Lagrangian framework), thermodynamic module (Messinger model) and geometry module (ice accretion). These are embedded in a quasi-steady module to simulate the time-dependent ice accretion process (multi-step procedure). The objectives of the present research are to upgrade the aerodynamic module from Laplace to Reynolds-Average Navier-Stokes equations solver. The advantages are many. First, the physical model allows accounting for viscous effects in the aerodynamic module. Second, the solution of the aero-icing module directly provides the means for characterizing the aerodynamic effects of icing, such as loss of lift and increased drag. Third, the use of a finite volume approach to solving the Partial Differential Equations allows rigorous mesh and time convergence analysis. Finally, the approaches developed in 2D can be easily transposed to 3D problems. The research was performed in three major steps, each providing insights into the overall numerical approaches. The most important realization comes from the need to develop specific mesh generation algorithms to ensure feasible solutions in very complex multi-step aero-icing calculations. The contributions are presented in chronological order of their realization. First, a new framework for RANS based two-dimensional ice accretion code, CANICE2D-NS, is developed. A multi-block RANS code from U. of Liverpool (named PMB) is providing the aerodynamic field using the Spalart-Allmaras turbulence model. The ICEM-CFD commercial tool is used for the iced airfoil remeshing and field smoothing. The new coupling is fully automated and capable of multi-step ice accretion simulations via a quasi-steady approach. In addition, the framework allows for flow analysis and aerodynamic performance prediction of the iced airfoils. The convergence of the quasi-steady algorithm is verified and identifies the need for an order of magnitude increase in the number of multi-time steps in icing simulations to achieve solver independent solutions. Second, a Multi-Block Navier-Stokes code, NSMB, is coupled with the CANICE2D icing framework. Attention is paid to the roughness implementation of the ONERA roughness model within the Spalart-Allmaras turbulence model, and to the convergence of the steady and quasi-steady iterative procedure. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases. The results of CANICE2D-NS show good agreement with experimental data both in terms of predicted ice shapes as well as aerodynamic analysis of predicted and experimental ice shapes. Third, an efficient single-block structured Navier-Stokes CFD code, NSCODE, is coupled with the CANICE2D-NS icing framework. Attention is paid to the roughness implementation of the Boeing model within the Spalart-Allmaras turbulence model, and to acceleration of the convergence of the steady and quasi-steady iterative procedures. 
Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases, including code to code comparisons with the same framework coupled with the NSMB Navier-Stokes solver. The efficiency of the J-multigrid approach to solve the flow equations on complex iced geometries is demonstrated. Since it was noted in all these calculations that the ICEM-CFD grid generation package produced a number of issues such as inefficient mesh quality and smoothing deficiencies (notably grid shocks), a fourth study proposes a new mesh generation algorithm. A PDE based multi-block structured grid generation code, NSGRID, is developed for this purpose. The study includes the developments of novel mesh generation algorithms over complex glaze ice shapes containing multi-curvature ice accretion geometries, such as single/double ice horns. The twofold approaches tackle surface geometry discretization as well as field mesh generation. An adaptive curvilinear curvature control algorithm is constructed solving a 1D elliptic PDE equation with periodic source terms. This method controls the arclength grid spacing so that high convex and concave curvature regions around ice horns are appropriately captured and is shown to effectively treat the grid shock problem. Then, a novel blended method is developed by defining combinations of source terms with 2D elliptic equations. The source terms include two common control functions, Sorenson and Spekreijse, and an additional third source term to improve orthogonality. This blended method is shown to be very effective for improving grid quality metrics for complex glaze ice meshes with RANS resolution. The performance in terms of residual reduction per non-linear iteration of several solution algorithms (Point-Jacobi, Gauss-Seidel, ADI, Point and Line SOR) are discussed within the context of a full Multi-grid operator. Details are given on the various formulations used in the linearization process. It is shown that the performance of the solution algorithm depends on the type of control function used. Finally, the algorithms are validated on standard complex experimental ice shapes, demonstrating the applicability of the methods. Finally, the automated framework of RANS based two-dimensional multi-step ice accretion, CANICE2D-NS is developed, coupled with a Multi-Block Navier-Stokes CFD code, NSCODE2D, a Multi-Block elliptic grid generation code, NSGRID2D, and a Multi-Block Eulerian droplet solver, NSDROP2D (developed at Polytechnique Montreal). The framework allows Lagrangian and Eulerian droplet computations within a chimera approach treating multi-elements geometries. The code was tested on public and confidential validation test cases including standard NATO cases. In addition, up to 10 times speedup is observed in the mesh generation procedure by using the implicit line SOR and ADI smoothers within a multigrid procedure. The results demonstrate the benefits and robustness of the new framework in predicting ice shapes and aerodynamic performance parameters.
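The simplest member of the elliptic family discussed above is Laplacian smoothing of the node coordinates in index space with zero control functions; the Python sketch below uses point-Jacobi sweeps and holds the boundary fixed. It is a drastic simplification for illustration only: the full generators add the metric cross terms and the Sorenson/Spekreijse source terms that control spacing and orthogonality near iced surfaces.

    import numpy as np

    def laplace_smooth(x, y, sweeps=200):
        """Point-Jacobi smoothing: each interior node is replaced by the average
        of its four index-space neighbours (zero-control-function limit)."""
        for _ in range(sweeps):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                    x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                    y[1:-1, 2:] + y[1:-1, :-2])
        return x, y

    # Example: relax a deliberately wavy interior toward a smooth grid
    xi, eta = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21), indexing="ij")
    x0 = xi + 0.1 * np.sin(3 * np.pi * eta)
    y0 = eta.copy()
    xs, ys = laplace_smooth(x0, y0)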
Runtime Detection of C-Style Errors in UPC Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirkelbauer, P; Liao, C; Panas, T
2011-09-29
Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
NASA Astrophysics Data System (ADS)
López-Coto, R.; Hahn, J.; BenZvi, S.; Dingus, B.; Hinton, J.; Nisa, M. U.; Parsons, R. D.; Greus, F. Salesa; Zhang, H.; Zhou, H.
2018-11-01
The positron excess measured by PAMELA and AMS can only be explained if there are one or several sources injecting positrons. Moreover, at the highest energies, it requires the presence of nearby (∼ hundreds of parsecs) and middle-aged (at most ∼ hundreds of kyr) sources. Pulsars, as factories of electrons and positrons, are one of the proposed candidates to explain the origin of this excess. To calculate the contribution of these sources to the electron and positron flux at the Earth, we developed EDGE (Electron Diffusion and Gamma rays to the Earth), a code to treat the propagation of electrons and compute their diffusion from a central source with a flexible injection spectrum. Using this code, we can derive the source's gamma-ray spectrum, spatial extension, the all-electron density in space, the electron and positron flux reaching the Earth and the positron fraction measured at the Earth. We present in this paper the foundations of the code and study how different parameters affect the gamma-ray spectrum of a source and the electron flux measured at the Earth. We also study the effect of several approximations usually performed in these studies. This code has been used to derive the results of the positron flux measured at the Earth in [1].
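As a point of reference for the kind of calculation such a propagation code automates, the snippet below evaluates the textbook Green's function of the 3D diffusion equation for a single impulsive injection, neglecting energy losses and any time or energy dependence of the diffusion coefficient. It is a simplified sketch, not the EDGE code; the source age, distance, injected number and diffusion coefficient are illustrative values only.

```python
import numpy as np

# Simplified sketch (not the EDGE code): electron number density a distance r
# from a source that injected n0 particles in a single burst a time t ago,
# assuming a constant diffusion coefficient and neglecting energy losses.
KPC_CM = 3.086e21        # centimetres per kiloparsec
KYR_S = 3.156e10         # seconds per kiloyear

def burst_density(r_kpc, t_kyr, n0=1.0e48, d_cm2_s=1.0e28):
    """Green's function of the 3D diffusion equation for impulsive injection."""
    r = r_kpc * KPC_CM
    t = t_kyr * KYR_S
    rdiff2 = 4.0 * d_cm2_s * t                  # (diffusion radius)^2 in cm^2
    return n0 * np.exp(-r * r / rdiff2) / (np.pi * rdiff2) ** 1.5

# Example: a source 0.25 kpc away that injected electrons 300 kyr ago.
print(burst_density(0.25, 300.0), "electrons per cm^3")
```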
Fortibuoni, Tomaso; Libralato, Simone; Raicevich, Saša; Giovanardi, Otello; Solidoro, Cosimo
2010-01-01
The understanding of fish communities' changes over the past centuries has important implications for conservation policy and marine resource management. However, reconstructing these changes is difficult because information on marine communities before the second half of the 20th century is, in most cases, anecdotal and merely qualitative. Therefore, historical qualitative records and modern quantitative data are not directly comparable, and their integration for long-term analyses is not straightforward. We developed a methodology that allows the coding of qualitative information provided by early naturalists into semi-quantitative information through an intercalibration with landing proportions. This approach allowed us to reconstruct and quantitatively analyze a 200-year-long time series of fish community structure indicators in the Northern Adriatic Sea (Mediterranean Sea). Our analysis provides evidence of long-term changes in fish community structure, including the decline of Chondrichthyes, large-sized and late-maturing species. This work highlights the importance of broadening the time-frame through which we look at marine ecosystem changes and provides a methodology to exploit, in a quantitative framework, historical qualitative sources. For this purpose, naturalists' eyewitness accounts proved to be useful for extending the analysis of the fish community back into the past, well before the onset of field-based monitoring programs. PMID:21103349
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...
2017-05-03
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning and a convolutional neural network (CNN) for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
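A minimal sketch of a one-convolutional-layer text CNN for this kind of report classification task is shown below using tf.keras. The vocabulary size, sequence length, filter settings and the 12-way softmax output are illustrative assumptions; the architecture and hyperparameters used in the study itself may differ.

```python
import tensorflow as tf

# Minimal text-CNN sketch for assigning one of 12 topography codes to a report.
# Vocabulary size, sequence length and filter settings are illustrative
# assumptions, not the values used in the study.
VOCAB_SIZE, MAX_LEN, NUM_CODES = 20000, 1500, 12

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),            # token id -> word vector
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),                   # strongest n-gram response
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CODES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(token_ids, code_labels, epochs=10)   # token_ids: (n_reports, MAX_LEN) ints
```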
Coding of Stimuli by Animals: Retrospection, Prospection, Episodic Memory and Future Planning
ERIC Educational Resources Information Center
Zentall, Thomas R.
2010-01-01
When animals code stimuli for later retrieval they can either code them in terms of the stimulus presented (as a retrospective memory) or in terms of the response or outcome anticipated (as a prospective memory). Although retrospective memory is typically assumed (as in the form of a memory trace), evidence of prospective coding has been found…
IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.
Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2018-02-21
The standard thermal neutron flux unit, TNF2, at the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. The fluence is still produced by moderating four 241Am-Be sources of 0.6 TBq each. The facility was again simulated and redesigned with a graphite core and paraffin-added graphite blocks surrounding it. Simulations using the MCNPX code on different geometric arrangements of moderator materials and neutron sources were performed. The resulting neutron fluence quality in terms of intensity, spectrum and cadmium ratio was evaluated. After this step, the system was assembled based on the results obtained from the simulations, and measurements were performed with equipment available at LNMRI/IRD as well as with simulated equipment. This work focuses on the characterization of a central chamber point and external points around the TNF2 in terms of neutron spectrum, fluence and ambient dose equivalent, H*(10). The system was validated with measurements of spectra, fluence and H*(10) to ensure traceability.
Simulation of a beam rotation system for a spallation source
NASA Astrophysics Data System (ADS)
Reiss, Tibor; Reggiani, Davide; Seidel, Mike; Talanov, Vadim; Wohlmuther, Michael
2015-04-01
With a nominal beam power of nearly 1 MW on target, the Swiss Spallation Neutron Source (SINQ) ranks among the world's most powerful spallation neutron sources. The proton beam transport to the SINQ target is carried out exclusively by means of linear magnetic elements. In the transport line to SINQ the beam is scattered in two meson production targets and, as a consequence, at the SINQ target entrance the beam shape can be described by Gaussian distributions in the transverse x and y directions with tails cut short by collimators. This leads to a highly nonuniform power distribution inside the SINQ target, giving rise to thermal and mechanical stresses. In view of a future proton beam intensity upgrade, the possibility of homogenizing the beam distribution by means of a fast beam rotation system is currently under investigation. Important aspects which need to be studied are the impact of a rotating proton beam on the resulting neutron spectra and spatial flux distributions, and additional, previously absent, proton losses causing unwanted activation of accelerator components. Hence a new source description method was developed for the radiation transport code MCNPX. This new feature makes direct use of the results from the proton beam optics code TURTLE. Its advantage over existing MCNPX source options is that all phase space information and correlations of each primary beam particle computed with TURTLE are preserved and transferred to MCNPX. Simulations of the different beam distributions together with their consequences in terms of neutron production are presented in this publication. Additionally, a detailed description of the coupling method between TURTLE and MCNPX is provided.
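The essence of the coupling, carrying every beam particle with its full set of correlated phase-space coordinates into the Monte Carlo source rather than fitting independent distributions, can be sketched generically. The column layout and file formats below are hypothetical placeholders, not the actual TURTLE or MCNPX interfaces.

```python
import csv

# Generic sketch of per-particle phase-space transfer between a beam-optics code
# and a Monte Carlo transport code. The column layout and output format are
# hypothetical placeholders; the real TURTLE/MCNPX interface differs.
def read_phase_space(path):
    """Read one particle per row: x, x', y, y', energy (hypothetical layout)."""
    particles = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            x, xp, y, yp, e = map(float, row)
            particles.append({"x": x, "xp": xp, "y": y, "yp": yp, "E": e})
    return particles

def write_source_records(particles, path):
    """Write each particle unchanged, preserving all correlations between
    position, direction and energy (no independent-distribution fitting)."""
    with open(path, "w") as f:
        for p in particles:
            f.write(f"{p['x']:.6e} {p['xp']:.6e} {p['y']:.6e} "
                    f"{p['yp']:.6e} {p['E']:.6e}\n")

# write_source_records(read_phase_space("turtle_out.csv"), "mc_source.dat")
```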
Evaluating a Dental Diagnostic Terminology in an Electronic Health Record
White, Joel M.; Kalenderian, Elsbeth; Stark, Paul C.; Ramoni, Rachel L.; Vaderhobli, Ram; Walji, Muhammad F.
2011-01-01
Standardized treatment procedure codes and terms are routinely used in dentistry. Utilization of a diagnostic terminology is common in medicine, but there is not a satisfactory or commonly standardized dental diagnostic terminology available at this time. Recent advances in dental informatics have provided an opportunity for inclusion of diagnostic codes and terms as part of treatment planning and documentation in the patient treatment history. This article reports the results of the use of a diagnostic coding system in a large dental school’s predoctoral clinical practice. A list of diagnostic codes and terms, called Z codes, was developed by dental faculty members. The diagnostic codes and terms were implemented into an electronic health record (EHR) for use in a predoctoral dental clinic. The utilization of diagnostic terms was quantified. The validity of Z code entry was evaluated by comparing the diagnostic term entered to the procedure performed, where valid diagnosis-procedure associations were determined by consensus among three calibrated academically based dentists. A total of 115,004 dental procedures were entered into the EHR during the year sampled. Of those, 43,053 were excluded from this analysis because they represent diagnosis or other procedures unrelated to treatments. Among the 71,951 treatment procedures, 27,973 had diagnoses assigned to them with an overall utilization of 38.9 percent. Of the 147 available Z codes, ninety-three were used (63.3 percent). There were 335 unique procedures provided and 2,127 procedure/diagnosis pairs captured in the EHR. Overall, 76.7 percent of the diagnoses entered were valid. We conclude that dental diagnostic terminology can be incorporated within an electronic health record and utilized in an academic clinical environment. Challenges remain in the development of terms and implementation and ease of use that, if resolved, would improve the utilization. PMID:21546594
Admiralty Inlet Advanced Turbulence Measurements: final data and code archive
Kilcher, Levi (ORCID:0000000183851131); Thomson, Jim (ORCID:0000000289290088); Harding, Samuel
2011-02-01
Data and code used in Kilcher, Thomson, Harding, and Nylund (2017), "Turbulence Measurements from Compliant Moorings - Part II: Motion Correction" (doi: 10.1175/JTECH-D-16-0213.1), that are not already available in a public location. The links point to Python source code used in the publication. All other files are source data used in the publication.
Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual
1979-09-01
Describes imaging of the source axes for a magnetic source: the source axis vector x̂ VSOURC(1,1) + ŷ VSOURC(1,2) + ẑ VSOURC(1,3) is imaged as x̂ VIMAG(1,1) + ŷ VIMAG(1,2) + ẑ VIMAG(1,3). VNC gives the x, y, and z components of the end cap unit normal. OUTPUT VARIABLE VIMAG: x, y, and z components defining the source image coordinate system axes.
Java Source Code Analysis for API Migration to Embedded Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Victor; McCoy, James A.; Guerrero, Jonathan
Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java's APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.
Upwind MacCormack Euler solver with non-equilibrium chemistry
NASA Technical Reports Server (NTRS)
Sherer, Scott E.; Scott, James N.
1993-01-01
A computer code, designated UMPIRE, is currently under development to solve the Euler equations in two dimensions with non-equilibrium chemistry. UMPIRE employs an explicit MacCormack algorithm with dissipation introduced via Roe's flux-difference split upwind method. The code also has the capability to employ a point-implicit methodology for flows where stiffness is introduced through the chemical source term. A technique consisting of diagonal sweeps across the computational domain from each corner is presented, which is used to reduce storage and execution requirements. Results depicting one dimensional shock tube flow for both calorically perfect gas and thermally perfect, dissociating nitrogen are presented to verify current capabilities of the program. Also, computational results from a chemical reactor vessel with no fluid dynamic effects are presented to check the chemistry capability and to verify the point implicit strategy.
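The point-implicit idea, dividing the nodal update by a factor containing the source-term Jacobian so that chemical stiffness does not restrict the time step, can be illustrated on a 1D model problem. The sketch below applies a MacCormack predictor-corrector to u_t + a u_x = -k u with a stiff decay constant k; it is a model problem only, not the UMPIRE solver, and all parameter values are illustrative.

```python
import numpy as np

# 1D model problem (not the UMPIRE code): u_t + a u_x = -k u with a stiff decay
# constant k. Convection is advanced with the explicit MacCormack
# predictor-corrector; the source term is treated point-implicitly by dividing
# each nodal update by (1 + k*dt), so stiffness does not force a tiny time step.
a, k = 1.0, 500.0                  # advection speed, stiff source coefficient
nx = 200
dx = 1.0 / nx
dt = 0.8 * dx / a                  # CFL-limited by convection only
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-((x - 0.3) / 0.05) ** 2)          # initial Gaussian pulse, periodic domain

for _ in range(200):
    # Predictor: forward difference for convection, point-implicit source.
    conv_fwd = -(a / dx) * (np.roll(u, -1) - u)
    u_star = u + dt * (conv_fwd - k * u) / (1.0 + k * dt)
    # Corrector: backward difference, source again treated point-implicitly.
    conv_bwd = -(a / dx) * (u_star - np.roll(u_star, 1))
    u = 0.5 * (u + u_star + dt * (conv_bwd - k * u_star) / (1.0 + k * dt))
```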
Spectral analysis of variable-length coded digital signals
NASA Astrophysics Data System (ADS)
Cariolaro, G. L.; Pierobon, G. L.; Pupolin, S. G.
1982-05-01
A spectral analysis is conducted for the variable-length word sequence produced by an encoder driven by a stationary memoryless source. A finite-state sequential machine is considered as a model of the line encoder, and the spectral analysis of the encoded message is performed under the assumption that the sourceword sequence is composed of independent identically distributed words. Closed-form expressions for both the continuous and discrete parts of the spectral density are derived in terms of the encoder law and sourceword statistics. The jump part exhibits jumps at integer multiples of 1/(lambda_0 T), where lambda_0 is the greatest common divisor of the possible codeword lengths and T is the symbol period. The derivation of the continuous part can be conveniently factorized, and the theory is applied to the spectral analysis of BnZS and HDBn codes.
Determination of near and far field acoustics for advanced propeller configurations
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Jaeger, S. M.; Kim, J. H.
1989-01-01
A method has been studied for predicting the acoustic field of the SR-3 transonic propfan using flow data generated by two versions of the NASPROP-E computer code. Since the flow fields calculated by the solvers include the shock-wave system of the propeller, the nonlinear quadrupole noise source term is included along with the monopole and dipole noise sources in the calculation of the acoustic near field. Acoustic time histories in the near field are determined by transforming the azimuthal coordinate in the rotating, blade-fixed coordinate system to the time coordinate in a nonrotating coordinate system. Fourier analysis of the pressure time histories is used to obtain the frequency spectra of the near-field noise.
AMIDE: a free software tool for multimodality medical image analysis.
Loening, Andreas Markus; Gambhir, Sanjiv Sam
2003-07-01
Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
A comprehensive experimental characterization of the iPIX gamma imager
NASA Astrophysics Data System (ADS)
Amgarou, K.; Paradiso, V.; Patoz, A.; Bonnet, F.; Handley, J.; Couturier, P.; Becker, F.; Menaa, N.
2016-08-01
The results of more than 280 different experiments aimed at exploring the main features and performances of a newly developed gamma imager, called iPIX, are summarized in this paper. iPIX is designed to quickly localize radioactive sources while estimating the ambient dose equivalent rate at the measurement point. It integrates a 1 mm thick CdTe detector directly bump-bonded to a Timepix chip, a tungsten coded-aperture mask, and a mini RGB camera. It also represents a major technological breakthrough in terms of lightness, compactness, usability, response sensitivity, and angular resolution. As an example of its key strengths, an 241Am source with a dose rate of only few nSv/h can be localized in less than one minute.
Inexpensive, Low Power, Open-Source Data Logging in the Field
NASA Astrophysics Data System (ADS)
Sandell, C. T.; Wickert, A. D.
2016-12-01
Collecting a robust data set of environmental conditions with commercial equipment is often cost prohibitive. I present the ALog, a general-purpose, inexpensive, low-power, open-source data logger that has proven its durability on long-term deployments in the harsh conditions of high altitude glaciers and humid river deltas. The ALog was developed to fill the need for capable, rugged, easy-to-use, inexpensive, open-source hardware targeted at long-term remote deployment in nearly any environment. Building on the popular Arduino platform, the hardware features a high-precision clock, a full-size SD card slot for high-volume data storage, screw terminals, six analog inputs, two digital inputs, one digital interrupt, 3.3V and 5V power outputs, and SPI and I2C communication capability. The design is focused on extremely low power consumption, allowing the ALog to be deployed for years on a single set of common alkaline batteries. The power efficiency of the ALog eliminates the difficulties associated with field power collection, including additional hardware and installation costs, dependence on weather conditions, possible equipment failure, and the transport of bulky/heavy equipment to a remote site. Battery power increases the number of suitable data collection sites (e.g., those too shaded for photovoltaics) and allows for low-profile installation options (including underground). The ALog has gone through continuous development, with over four years of successful data collection in hydrologic field research. Over this time, software support for a wide range of sensors has been made available, such as ultrasonic rangefinders (for water level, snow accumulation and glacial melt), temperature sensors (air and groundwater), humidity sensors, pyranometers, inclinometers, rain gauges, soil moisture and water potential sensors, resistance-based tools to measure frost heave, and cameras that trigger on events. The software developed for use with the ALog allows simple integration of established commercial sensors, including example implementation code so users with limited programming knowledge can get up and running with ease. All development files, including design schematics, circuit board layouts, and source code files, are open-source to further eliminate barriers to its use and allow community development contribution.
Making your code citable with the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2016-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
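The bookkeeping that the Automator replaces, generating one input file per assembly from tabulated assembly data, is the kind of task a short script can illustrate. In the sketch below, the CSV column names and the input template are hypothetical placeholders and do not reflect the real ORIGAMI input syntax.

```python
import csv
from pathlib import Path

# Illustrative bookkeeping sketch only: generate one input file per assembly
# from a table of assembly data. The CSV columns and the TEMPLATE text are
# hypothetical placeholders, not the actual ORIGAMI input format.
TEMPLATE = """! assembly {assembly_id}
enrichment = {enrichment}
heavy_metal_mass = {hm_mass}
burnup = {burnup}
pool_residence_time = {pool_years}
"""

def generate_inputs(table_csv, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(table_csv, newline="") as f:
        for row in csv.DictReader(f):
            text = TEMPLATE.format(**row)
            (out / f"{row['assembly_id']}.inp").write_text(text)

# generate_inputs("assemblies.csv", "origami_inputs")
```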
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-07
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
Hyperbolic and semi-hyperbolic surface codes for quantum storage
NASA Astrophysics Data System (ADS)
Breuckmann, Nikolas P.; Vuillot, Christophe; Campbell, Earl; Krishna, Anirudh; Terhal, Barbara M.
2017-09-01
We show how a hyperbolic surface code could be used for overhead-efficient quantum storage. We give numerical evidence for a noise threshold of 1.3% for the {4,5}-hyperbolic surface code in a phenomenological noise model (as compared with 2.9% for the toric code). In this code family, parity checks are of weight 4 and 5, while each qubit participates in four different parity checks. We introduce a family of semi-hyperbolic codes that interpolate between the toric code and the {4,5}-hyperbolic surface code in terms of encoding rate and threshold. We show how these hyperbolic codes outperform the toric code in terms of qubit overhead for a target logical error probability. We show how Dehn twists and lattice code surgery can be used to read and write individual qubits to this quantum storage medium.
Hernando, Victoria; Sobrino-Vegas, Paz; Burriel, M Carmen; Berenguer, Juan; Navarro, Gemma; Santos, Ignacio; Reparaz, Jesús; Martínez, M Angeles; Antela, Antonio; Gutiérrez, Félix; del Amo, Julia
2012-09-10
To compare causes of death (CoDs) from two independent sources: the National Basic Death File (NBDF) and deaths reported to the Spanish HIV Research cohort [Cohorte de adultos con infección por VIH de la Red de Investigación en SIDA (CoRIS)], and to compare the two coding algorithms: International Classification of Diseases, 10th revision (ICD-10) and the revised version of Coding Causes of Death in HIV (revised CoDe). Between 2004 and 2008, CoDs were obtained from the cohort records (free text, multiple causes) and also from NBDF (ICD-10). CoDs from CoRIS were coded according to ICD-10 and revised CoDe by a panel. Deaths were compared by 13 disease groups: HIV/AIDS, liver diseases, malignancies, infections, cardiovascular, blood disorders, pulmonary, central nervous system, drug use, external, suicide, other causes and ill-defined. There were 160 deaths. Concordance for the 13 groups was observed in 111 (69%) cases for the two sources and in 115 (72%) cases for the two coding algorithms. According to revised CoDe, the commonest CoDs were HIV/AIDS (53%), non-AIDS malignancies (11%) and liver related (9%); these percentages were similar, 57, 10 and 8%, respectively, for NBDF (coded as ICD-10). When using ICD-10 to code deaths in CoRIS, wherein HIV infection was known in everyone, the proportion of non-AIDS malignancies was 13% and liver-related causes accounted for 3%, while HIV/AIDS reached 70% due to liver-related, infection and ill-defined causes being coded as HIV/AIDS. There is substantial variation in CoDs in HIV-infected persons according to sources and algorithms. ICD-10 in patients known to be HIV-positive overestimates HIV/AIDS-related deaths at the expense of underestimating liver-related diseases, infections and ill-defined causes. CoDe seems to be the best option for cohort studies.
Source Methodology for Turbofan Noise Prediction (SOURCE3D Technical Documentation)
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This report provides the analytical documentation for the SOURCE3D Rotor Wake/Stator Interaction Code. It derives the equations for the rotor scattering coefficients and stator source vector and scattering coefficients that are needed for use in the TFANS (Theoretical Fan Noise Design/Prediction System). SOURCE3D treats the rotor and stator as isolated source elements. TFANS uses this information, along with scattering coefficients for inlet and exit elements, and provides complete noise solutions for turbofan engines. SOURCE3D is composed of a collection of FORTRAN programs that have been obtained by extending the approach of the earlier V072 Rotor Wake/Stator Interaction Code. Similar to V072, it treats the rotor and stator as a collection of blades and vanes having zero thickness and camber contained in an infinite, hardwall annular duct. SOURCE3D adds important features to the V072 capability: a rotor element, swirl flow and vorticity waves, actuator disks for flow turning, and combined rotor/actuator disk and stator/actuator disk elements. These items allow reflections from the rotor, frequency scattering, and mode trapping, thus providing more complete noise predictions than previously. The code has been thoroughly verified through comparison with D.B. Hanson's CUP2D two-dimensional code using a narrow annulus test case.
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
Hansen, J H; Nandkumar, S
1995-01-01
The formulation of reliable signal processing algorithms for speech coding and synthesis requires the selection of a prior criterion of performance. Though coding efficiency (bits/second) or computational requirements can be used, a final performance measure must always include speech quality. In this paper, three objective speech quality measures are considered with respect to quality assessment for American English, noisy American English, and noise-free versions of seven languages. The purpose is to determine whether objective quality measures can be used to quantify changes in quality for a given voice coding method, with a known subjective performance level, as background noise or language conditions are changed. The speech coding algorithm chosen is regular-pulse excitation with long-term prediction (RPE-LTP), which has been chosen as the standard voice compression algorithm for the European Digital Mobile Radio system. Three areas are considered for objective quality assessment: (i) vocoder performance for American English in a noise-free environment, (ii) speech quality variation for three additive background noise sources, and (iii) noise-free performance for seven languages, which include English, Japanese, Finnish, German, Hindi, Spanish, and French. It is suggested that although existing objective quality measures will never replace subjective testing, they can be a useful means of assessing changes in performance, identifying areas for improvement in algorithm design, and augmenting subjective quality tests for voice coding/compression algorithms in noise-free, noisy, and/or non-English applications.
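Objective measures of this kind compare the coded signal with the original on a frame-by-frame basis. The snippet below computes a segmental SNR, a simple frame-averaged objective measure shown purely for illustration; it is not necessarily one of the three measures evaluated in the paper, and the frame length and clipping limits are arbitrary choices.

```python
import numpy as np

def segmental_snr(reference, coded, frame_len=240, floor_db=-10.0, ceil_db=35.0):
    """Frame-averaged SNR in dB between reference and coded speech signals.
    A simple objective quality measure, shown for illustration only."""
    n = min(len(reference), len(coded))
    snrs = []
    for start in range(0, n - frame_len + 1, frame_len):
        ref = np.asarray(reference[start:start + frame_len], dtype=float)
        err = ref - np.asarray(coded[start:start + frame_len], dtype=float)
        sig_energy, err_energy = np.sum(ref ** 2), np.sum(err ** 2)
        if sig_energy == 0.0:
            continue                                  # skip silent frames
        snr = ceil_db if err_energy == 0.0 else 10.0 * np.log10(sig_energy / err_energy)
        snrs.append(float(np.clip(snr, floor_db, ceil_db)))
    return sum(snrs) / len(snrs) if snrs else 0.0
```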
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
NASA Technical Reports Server (NTRS)
Meyer, H. D.
1993-01-01
The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding the use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models that recognizes waves crossing the interface in both directions has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.
Diffusive deposition of aerosols in Phebus containment during FPT-2 test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontautas, A.; Urbonavicius, E.
2012-07-01
At present, lumped-parameter codes are the main tools used to investigate the complex response of the containment of a Nuclear Power Plant in case of an accident. Continuous development and validation of the codes is required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with the measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from the molten fuel, transport through the cooling circuit and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2 performed in this facility is considered for analysis of processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls did not fit well with the measured results. A different model for diffusive deposition is implemented in the CPA module of the ASTEC code; therefore, the PHEBUS containment model was transferred from the COCOSYS code to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. The analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with the measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The performed investigations showed that the diffusive deposition model has an influence on the aerosol deposition distribution on different surfaces in the test facility. (authors)
Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.
Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai
2017-03-31
As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. ©Brianna A Lienemann, Jennifer B Unger, Tess Boley Cruz, Kar-Hai Chu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.03.2017.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
Astrophysics Source Code Library Enhancements
NASA Astrophysics Data System (ADS)
Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.
2015-09-01
The Astrophysics Source Code Library (ASCL)1 is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
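The inversion strategy described, encoding the source parameters as real numbers and scoring each candidate through the Mogi point-source displacement field, can be sketched compactly. The Mogi expression below assumes a Poisson's ratio of 0.25, the synthetic benchmark data are invented for the example, and the GA operators (tournament selection, blend crossover, Gaussian mutation, elitism) are generic choices rather than those of the original study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mogi_uz(xg, yg, xc, yc, depth, dvol):
    """Vertical surface displacement of a Mogi point source in an elastic
    half-space with Poisson's ratio 0.25: uz = (3 dV / 4 pi) * d / R^3."""
    r2 = (xg - xc) ** 2 + (yg - yc) ** 2
    return (3.0 * dvol / (4.0 * np.pi)) * depth / (r2 + depth ** 2) ** 1.5

# Synthetic "observed" uplift on a grid of benchmarks (illustrative values only).
xg, yg = np.meshgrid(np.linspace(-10e3, 10e3, 11), np.linspace(-10e3, 10e3, 11))
true_params = (1.0e3, -2.0e3, 6.0e3, 1.0e7)           # xc, yc, depth [m], dV [m^3]
obs = mogi_uz(xg, yg, *true_params)

def misfit(p):
    return float(np.sum((mogi_uz(xg, yg, *p) - obs) ** 2))

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation, elitism.
lo = np.array([-10e3, -10e3, 1.0e3, 1.0e5])
hi = np.array([ 10e3,  10e3, 15.0e3, 1.0e8])
pop = rng.uniform(lo, hi, size=(100, 4))
for _ in range(200):
    fit = np.array([misfit(p) for p in pop])

    def tournament():
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if fit[i] < fit[j] else pop[j]

    new = [pop[np.argmin(fit)].copy()]                 # keep the current best
    while len(new) < len(pop):
        parent_a, parent_b = tournament(), tournament()
        w = rng.uniform(-0.25, 1.25, 4)                # BLX-style blend crossover
        child = w * parent_a + (1.0 - w) * parent_b
        mutate = rng.random(4) < 0.2                   # Gaussian mutation, 20% rate
        child = child + mutate * rng.normal(0.0, 0.01 * (hi - lo))
        new.append(np.clip(child, lo, hi))
    pop = np.array(new)

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered (xc, yc, depth, dV):", best)
```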
Source term evaluation for accident transients in the experimental fusion facility ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virot, F.; Barrachin, M.; Cousin, F.
2015-03-15
We have studied the transport and chemical speciation of radio-toxic and toxic species for an event of water ingress into the vacuum vessel of the experimental fusion facility ITER with the ASTEC code. In particular, our evaluation takes into account assessed thermodynamic data for the gaseous beryllium species. This study shows that deposited beryllium dusts of atomic Be and Be(OH)2 are formed. It also shows that Be(OT)2 could exist under some conditions in the drain tank. (authors)
Annotare--a tool for annotating high-throughput biomedical investigations and resulting data.
Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J; Ball, Catherine A
2010-10-01
Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows.
Dictionary-Based Tensor Canonical Polyadic Decomposition
NASA Astrophysics Data System (ADS)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
Methodology of decreasing software complexity using ontology
NASA Astrophysics Data System (ADS)
Dąbrowska-Kubik, Katarzyna
2015-09-01
In this paper a model of a web application's source code, based on the OSD ontology (Ontology for Software Development), is proposed. This model is applied to the implementation and maintenance phases of the software development process through the DevOntoCreator tool [5]. The aim of this solution is to decrease the software complexity of that source code, using many different maintenance techniques, such as creation of documentation and elimination of dead code, cloned code, or previously known bugs [1][2]. With this approach, savings on the software maintenance costs of web applications will be possible.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
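The basic mechanism, writing each quantizer index as a fixed-length codeword and compressing the codeword bits with a binary arithmetic coder that treats each bit position as an independent source, can be demonstrated with a toy coder. The float-interval implementation below has no renormalization and only works for short sequences; it is a didactic sketch, not the coder analyzed in the article.

```python
from collections import Counter

# Toy bit-wise arithmetic coder: quantizer indices are written as fixed-length
# codewords, then each bit position is coded with a binary arithmetic coder
# using that position's empirical probability of a '1'. Float intervals, no
# renormalization -- didactic only, not the coder analyzed in the article.

def to_bits(indices, width):
    return [[(idx >> (width - 1 - b)) & 1 for b in range(width)] for idx in indices]

def bit_probabilities(words, width):
    """Estimate P(bit == 1) separately for each bit position."""
    ones = Counter()
    for word in words:
        for b, bit in enumerate(word):
            ones[b] += bit
    return [min(max(ones[b] / len(words), 0.001), 0.999) for b in range(width)]

def encode(words, p1, width):
    """Return a number inside the final interval (fine for short sequences)."""
    low, high = 0.0, 1.0
    for word in words:
        for b in range(width):
            split = low + (high - low) * (1.0 - p1[b])    # [low, split) codes a 0
            low, high = (split, high) if word[b] else (low, split)
    return 0.5 * (low + high)

def decode(value, n_words, p1, width):
    low, high, out = 0.0, 1.0, []
    for _ in range(n_words):
        word = []
        for b in range(width):
            split = low + (high - low) * (1.0 - p1[b])
            bit = 1 if value >= split else 0
            low, high = (split, high) if bit else (low, split)
            word.append(bit)
        out.append(word)
    return out

indices = [3, 3, 2, 3, 1, 3, 3, 0]                 # example 2-bit quantizer outputs
words = to_bits(indices, 2)
p1 = bit_probabilities(words, 2)
assert decode(encode(words, p1, 2), len(words), p1, 2) == words
```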
Astrophysics Source Code Library -- Now even better!
NASA Astrophysics Data System (ADS)
Allen, Alice; Schmidt, Judy; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2015-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. Indexed by ADS, it now contains nearly 1,000 codes and with recent major changes, is better than ever! The resource has a new infrastructure that offers greater flexibility and functionality for users, including an easier submission process, better browsing, one-click author search, and an RSS feeder for news. The new database structure is easier to maintain and offers new possibilities for collaboration. Come see what we've done!
Study of statistical coding for digital TV
NASA Technical Reports Server (NTRS)
Gardenhire, L. W.
1972-01-01
The results are presented for a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken, using redundancy removal techniques in which the picture is reproduced within a preset tolerance. A method of source encoding, which preliminary studies show to be encouraging, is statistical encoding. A pseudo-optimum code was defined and the associated performance of the code was determined. The format was fixed at 525 lines per frame, 30 frames per second, as per commercial standards.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
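A hedged sketch in the spirit of the threshold-driven selection described above: a block is approximated with a few low-frequency DCT coefficients and, if the resulting distortion exceeds a threshold, the block is split into quadrants and coded at a smaller size. The coefficient count, threshold, and quadtree splitting rule are assumptions for illustration, not the MBC design.

# Hedged sketch of threshold-driven, variable-blocksize DCT coding (assumed parameters).
import numpy as np
from scipy.fft import dctn, idctn

def approximate(block, keep):
    """Keep only the `keep` x `keep` lowest-frequency DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm="ortho")

def code_block(block, threshold, keep=2, min_size=4):
    """Approximate the block; recurse on quadrants if the error is too large."""
    recon = approximate(block, keep)
    mse = np.mean((block - recon) ** 2)
    if mse <= threshold or block.shape[0] <= min_size:
        return recon
    h = block.shape[0] // 2
    out = np.empty_like(block)
    for r in (0, h):
        for c in (0, h):
            out[r:r+h, c:c+h] = code_block(block[r:r+h, c:c+h], threshold, keep, min_size)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = rng.normal(size=(32, 32))          # stand-in for an image tile
    recon = code_block(image, threshold=0.5)
    print("reconstruction MSE:", round(float(np.mean((image - recon) ** 2)), 4))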
Nurses' attitudes toward the use of the bar-coding medication administration system.
Marini, Sana Daya; Hasman, Arie; Huijer, Huda Abu-Saad; Dimassi, Hani
2010-01-01
This study determines nurses' attitudes toward bar-coding medication administration system use. Some of the factors underlying the successful use of bar-coding medication administration systems, viewed as a connotative indicator of users' attitudes, were used to gather data that describe the attitudinal basis for system adoption and use decisions in terms of subjective satisfaction. Only 67 nurses in the United States had the chance to respond to the e-questionnaire posted on the CARING list server during June and July 2007. Participants rated their satisfaction with bar-coding medication administration system use based on system functionality, usability, and its positive or negative impact on nursing practice. Results showed a somewhat positive attitude, but the image profile draws attention to nurses' concerns about improving certain system characteristics. Nurses with high bar-coding medication administration system skills perceived the system more negatively. The reasons underlying dissatisfaction among skillful users are an important source of knowledge that can be helpful for system development as well as system deployment. Strengthening system usability by magnifying its ability to eliminate medication errors and their contributing factors, maximizing system functionality by establishing its role as an extra check in the medication administration process, and ensuring a positive impact on clinical nursing practice by being helpful to nurses, speeding up the medication administration process, and being user-friendly can together offer a congenial setting for establishing a positive attitude toward system use, which in turn leads to successful bar-coding medication administration system use.
NASA Astrophysics Data System (ADS)
Woolsey, L. N.; Cranmer, S. R.
2013-12-01
The study of solar wind acceleration has made several important advances recently due to improvements in modeling techniques. Existing code and simulations test the competing theories for coronal heating, which include reconnection/loop-opening (RLO) models and wave/turbulence-driven (WTD) models. In order to compare and contrast the validity of these theories, we need flexible tools that predict the emergent solar wind properties from a wide range of coronal magnetic field structures such as coronal holes, pseudostreamers, and helmet streamers. ZEPHYR (Cranmer et al. 2007) is a one-dimensional magnetohydrodynamics code that includes Alfven wave generation and reflection and the resulting turbulent heating to accelerate solar wind in open flux tubes. We present the ZEPHYR output for a wide range of magnetic field geometries to show the effect of the magnetic field profiles on wind properties. We also investigate the competing acceleration mechanisms found in ZEPHYR to determine the relative importance of increased gas pressure from turbulent heating and the separate pressure source from the Alfven waves. To do so, we developed a code that will become publicly available for solar wind prediction. This code, TEMPEST, provides an outflow solution based on only one input: the magnetic field strength as a function of height above the photosphere. It uses correlations found in ZEPHYR between the magnetic field strength at the source surface and the temperature profile of the outflow solution to compute the wind speed profile based on the increased gas pressure from turbulent heating. With this initial solution, TEMPEST then adds in the Alfven wave pressure term to the modified Parker equation and iterates to find a stable solution for the wind speed. This code, therefore, can make predictions of the wind speeds that will be observed at 1 AU based on extrapolations from magnetogram data, providing a useful tool for empirical forecasting of the solar wind.
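For orientation only, the classic isothermal Parker wind solution, the kind of starting solution that a wave-pressure term then modifies, can be obtained by root-finding Parker's transcendental relation at each radius. The sketch below is not TEMPEST; the coronal temperature and bracketing intervals are nominal assumptions.

# Hedged sketch: isothermal Parker wind speed on the transonic branch (nominal constants).
import numpy as np
from scipy.optimize import brentq

G, M_SUN = 6.674e-11, 1.989e30
KB, MP = 1.381e-23, 1.673e-27

def parker_speed(r, T=1.5e6):
    """Wind speed u(r) from Parker's relation (u/cs)^2 - ln(u/cs)^2 = 4 ln(r/rc) + 4 rc/r - 3."""
    cs = np.sqrt(2.0 * KB * T / MP)            # isothermal sound speed
    rc = G * M_SUN / (2.0 * cs**2)             # critical (sonic) radius

    def f(u):
        m = u / cs
        return m**2 - np.log(m**2) - 4.0 * np.log(r / rc) - 4.0 * rc / r + 3.0

    if r < rc:                                 # subsonic below the critical point
        return brentq(f, 1e-3 * cs, 0.999 * cs)
    return brentq(f, 1.001 * cs, 20.0 * cs)    # supersonic branch above it

if __name__ == "__main__":
    for r_au in (0.1, 0.5, 1.0):
        u = parker_speed(r_au * 1.496e11)
        print(f"r = {r_au:4.1f} AU  ->  u ~ {u/1e3:6.1f} km/s")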
Development of an upwind, finite-volume code with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.
1995-01-01
Under this grant, two numerical algorithms were developed to predict the flow of viscous, hypersonic, chemically reacting gases over three-dimensional bodies. Both algorithms take advantage of the benefits of upwind differencing, total variation diminishing techniques and of a finite-volume framework, but obtain their solution in two separate manners. The first algorithm is a zonal, time-marching scheme, and is generally used to obtain solutions in the subsonic portions of the flow field. The second algorithm is a much less expensive, space-marching scheme and can be used for the computation of the larger, supersonic portion of the flow field. Both codes compute their interface fluxes with a temporal Riemann solver and the resulting schemes are made fully implicit including the chemical source terms and boundary conditions. Strong coupling is used between the fluid dynamic, chemical and turbulence equations. These codes have been validated on numerous hypersonic test cases and have provided excellent comparison with existing data. This report summarizes the research that took place from August 1, 1994 to January 1, 1995.
NASA Astrophysics Data System (ADS)
Jara, Daniel; de Dreuzy, Jean-Raynald; Cochepin, Benoit
2017-12-01
Reactive transport modeling contributes to understanding geophysical and geochemical processes in subsurface environments. Operator splitting methods have been proposed as non-intrusive coupling techniques that optimize the use of existing chemistry and transport codes. In this spirit, we propose a coupler relying on external geochemical and transport codes with appropriate operator segmentation that enables possible development of additional splitting methods. We provide an object-oriented implementation in TReacLab, developed in the MATLAB environment as free, open-source software with an accessible repository. TReacLab contains classical coupling methods, template interfaces, and calling functions for two classical transport and geochemistry codes (COMSOL and PHREEQC). It is tested on four classical benchmarks with homogeneous and heterogeneous reactions at equilibrium or kinetically controlled. We show that full decoupling down to the implementation level has a cost in terms of accuracy compared to more integrated and optimized codes. Use of non-intrusive implementations like TReacLab is still justified for coupling independent transport and chemical software with minimal development effort, but should be systematically and carefully assessed.
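A minimal sketch of the sequential (non-iterative) operator splitting that such couplers orchestrate: within each time step a transport operator is applied first and a local chemistry operator second. Here transport is a 1-D explicit upwind advection step and "chemistry" is exact first-order decay; both are stand-ins for calls into external codes such as PHREEQC or COMSOL, and the grid, velocity, and rate constant are arbitrary.

# Hedged sketch of sequential operator splitting for reactive transport (toy operators).
import numpy as np

def transport_step(c, velocity, dx, dt):
    """Explicit upwind advection with a fixed inlet concentration."""
    cn = c.copy()
    cn[1:] = c[1:] - velocity * dt / dx * (c[1:] - c[:-1])
    cn[0] = 1.0                                # inlet boundary condition
    return cn

def chemistry_step(c, rate, dt):
    """Local first-order decay, solved exactly cell by cell."""
    return c * np.exp(-rate * dt)

def run(nx=100, nt=200, dx=0.01, dt=0.005, velocity=1.0, rate=0.5):
    c = np.zeros(nx)
    for _ in range(nt):
        c = transport_step(c, velocity, dx, dt)   # operator 1: transport
        c = chemistry_step(c, rate, dt)           # operator 2: chemistry
    return c

if __name__ == "__main__":
    profile = run()
    print("outlet concentration:", round(float(profile[-1]), 4))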
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully-coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres
NASA Astrophysics Data System (ADS)
Liu, Quanhua; Weng, Fuzhong
2006-12-01
The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
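The geometric doubling step at the heart of doubling-adding codes can be sketched compactly: starting from the reflection and transmission operators of a very thin homogeneous layer, two identical layers are combined repeatedly until the target thickness is reached. The toy two-stream-like operators below are assumptions, and the thermal source terms, the part that ADA replaces with an analytic expression, are omitted entirely.

# Hedged sketch of the doubling step for identical homogeneous layers (toy operators).
import numpy as np

def add_identical_layers(R, T):
    """Reflection/transmission of two identical layers stacked together."""
    eye = np.eye(R.shape[0])
    interreflection = np.linalg.inv(eye - R @ R)   # geometric sum of internal bounces
    T2 = T @ interreflection @ T
    R2 = R + T @ interreflection @ R @ T
    return R2, T2

def doubling(R_thin, T_thin, n_doublings):
    R, T = R_thin, T_thin
    for _ in range(n_doublings):               # layer thickness grows as 2**n
        R, T = add_identical_layers(R, T)
    return R, T

if __name__ == "__main__":
    # Invented operators for an optically very thin, slightly absorbing layer.
    R0 = np.array([[0.01, 0.005], [0.005, 0.01]])
    T0 = np.array([[0.97, 0.01], [0.01, 0.97]])
    R, T = doubling(R0, T0, n_doublings=10)
    print("reflection:\n", np.round(R, 3))
    print("transmission:\n", np.round(T, 3))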
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
Nordic Cancer Registries - an overview of their procedures and data comparability.
Pukkala, Eero; Engholm, Gerda; Højsgaard Schmidt, Lise Kristine; Storm, Hans; Khan, Staffan; Lambe, Mats; Pettersson, David; Ólafsdóttir, Elínborg; Tryggvadóttir, Laufey; Hakanen, Tiina; Malila, Nea; Virtanen, Anni; Johannesen, Tom Børge; Larønningen, Siri; Ursin, Giske
2018-04-01
The Nordic Cancer Registries are among the oldest population-based registries in the world, with more than 60 years of complete coverage of what is now a combined population of 26 million. However, despite being the source of a substantial number of studies, there is no published paper comparing the different registries. Therefore, we did a systematic review to identify similarities and dissimilarities of the Nordic Cancer Registries, which could possibly explain some of the differences in cancer incidence rates across these countries. We describe and compare here the core characteristics of each of the Nordic Cancer Registries: (i) data sources; (ii) registered disease entities and deviations from IARC multiple cancer coding rules; (iii) variables and related coding systems. Major changes over time are described and discussed. All Nordic Cancer Registries represent a high quality standard in terms of completeness and accuracy of the registered data. Even though the information in the Nordic Cancer Registries in general can be considered more similar than any other collection of data from five different countries, there are numerous differences in registration routines, classification systems and inclusion of some tumors. These differences are important to be aware of when comparing time trends in the Nordic countries.
A Particle-In-Cell Gun Code for Surface-Converter H- Ion Source Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon-Golcher, Edwin; Bowers, Kevin J.
2007-08-10
We present the current status of a particle-in-cell with Monte Carlo collisions (PIC-MCC) gun code under development at Los Alamos for the study of surface-converter H- ion sources. The program preserves a first-principles approach to a significant extent and simulates the production processes without ad hoc models within the plasma region. Some of its features include: solution of arbitrary electrostatic and magnetostatic fields in an axisymmetric (r,z) geometry to describe the self-consistent time evolution of a plasma; simulation of a multi-species (e-, H+, H2+, H3+, H-) plasma discharge from a neutral hydrogen gas and filament-originated seed electrons; full 2-dimensional (r,z), 3-velocity (vr, vz, vφ) dynamics for all species with exact conservation of the canonical angular momentum pφ; detailed collision physics between charged particles and neutrals and the ability to represent multiple smooth (not stair-stepped) electrodes of arbitrary shape and voltage whose surfaces may be secondary-particle emitters (H- and e-). The status of this development is discussed in terms of its physics content and current implementation details.
Acta Aeronautica et Astronautica Sinica.
1982-07-28
English translation, 212 pages. Source: Acta Aeronautica et Astronautica Sinica, Vol. 2, Nr. 4, December 1981.
Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M
2014-08-01
Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
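For reference, the κ statistic used above measures agreement beyond chance between two coding sources. The sketch below computes Cohen's kappa from a 2x2 table of "concurrent splenic procedure code present" in each data source; the counts are invented for illustration and are not the study's data.

# Hedged sketch: Cohen's kappa for two binary coding sources (made-up 2x2 counts).
def cohens_kappa(table):
    """table[i][j] = count with source A giving category i and source B giving j (0/1)."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(2)) / n
    row_totals = [sum(table[i]) for i in range(2)]
    col_totals = [sum(table[i][j] for i in range(2)) for j in range(2)]
    expected = sum(row_totals[i] * col_totals[i] for i in range(2)) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # rows: registry yes/no; columns: claims yes/no (illustrative counts only)
    counts = [[92, 10],
              [22, 11243]]
    print(f"kappa = {cohens_kappa(counts):.2f}")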
NASA Astrophysics Data System (ADS)
Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.
2008-02-01
Beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS baseline ion source and low-energy beam transport (LEBT) system. An investigation was then conducted using these codes to assist our ion source and LEBT development effort, which is directed at meeting the SNS operational and power-upgrade project goals. A high-efficiency H- extraction system as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100 mA is studied using these simulation tools.
ObsPy: Establishing and maintaining an open-source community package
NASA Astrophysics Data System (ADS)
Krischer, L.; Megies, T.; Barsch, R.
2017-12-01
Python's ecosystem has evolved into one of the most powerful and productive research environments across disciplines. ObsPy (https://obspy.org) is a fully community-driven, open-source project dedicated to providing a bridge for seismology into that ecosystem. It does so by offering read and write support for essentially every commonly used data format in seismology; integrated access to the largest data centers, web services, and real-time data streams; a powerful signal processing toolbox tuned to the specific needs of seismologists; and utility functionality such as travel time calculations, geodetic functions, and data visualizations. ObsPy has been in constant unfunded development for more than eight years and is developed and used by scientists around the world, with successful applications in all branches of seismology. By now around 70 people have directly contributed code to ObsPy, and we aim to make it a self-sustaining community project. This contribution focuses on several meta aspects of open-source software in science, in particular how we experienced them. During the panel we would like to discuss obvious questions like long-term sustainability with very limited to no funding, insufficient computer science training in many sciences, and gaining hard scientific credit for software development, but also the following questions: How to best deal with the fact that a lot of scientific software is very specialized, and thus solves a complex problem but can only ever reach a limited pool of developers and users by virtue of being so specialized? The "many eyes on the code" approach to developing and improving open-source software therefore only applies in a limited fashion. An initial publication for a significant new scientific software package is fairly straightforward, but how does one on-board and motivate potential new contributors when they can no longer be lured by a potential co-authorship? When is spending significant time and effort on reusable scientific open-source development a reasonable choice for young researchers? The effort to produce purpose-tailored code for a single application resulting in a scientific publication is significantly less than that needed to generalize and engineer it well enough to be used by others.
PD5: a general purpose library for primer design software.
Riley, Michael C; Aubrey, Wayne; Young, Michael; Clare, Amanda
2013-01-01
Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source-code and allow redistribution and modification. The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design.
Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Quanli; Watson, Leela
2014-01-01
Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
Particle-in-cell code library for numerical simulation of the ECR source plasma
NASA Astrophysics Data System (ADS)
Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.
2003-05-01
The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often consider both jointly, geodetic and seismic data. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programing languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimations, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programing environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples; especially the effect of including the model prediction uncertainty of the velocity model in following source optimizations: full moment tensor, Mogi source, moderate strike-slip earth-quake.
Phonological, visual, and semantic coding strategies and children's short-term picture memory span.
Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura
2012-01-01
Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.
Open Source Paradigm: A Synopsis of The Cathedral and the Bazaar for Health and Social Care.
Benson, Tim
2016-07-04
Open source software (OSS) is becoming more fashionable in health and social care, although the ideas are not new. However progress has been slower than many had expected. The purpose is to summarise the Free/Libre Open Source Software (FLOSS) paradigm in terms of what it is, how it impacts users and software engineers and how it can work as a business model in health and social care sectors. Much of this paper is a synopsis of Eric Raymond's seminal book The Cathedral and the Bazaar, which was the first comprehensive description of the open source ecosystem, set out in three long essays. Direct quotes from the book are used liberally, without reference to specific passages. The first part contrasts open and closed source approaches to software development and support. The second part describes the culture and practices of the open source movement. The third part considers business models. A key benefit of open source is that users can access and collaborate on improving the software if they wish. Closed source code may be regarded as a strategic business risk that that may be unacceptable if there is an open source alternative. The sharing culture of the open source movement fits well with that of health and social care.
Reflections on the role of open source in health information system interoperability.
Sfakianakis, S; Chronaki, C E; Chiarugi, F; Conforti, F; Katehakis, D G
2007-01-01
This paper reflects on the role of open source in health information system interoperability. Open source is a driving force in computer science research and the development of information systems. It facilitates the sharing of information and ideas, enables evolutionary development and open collaborative testing of code, and broadens the adoption of interoperability standards. In health care, information systems have been developed largely ad hoc following proprietary specifications and customized design. However, the wide deployment of integrated services such as Electronic Health Records (EHRs) over regional health information networks (RHINs) relies on interoperability of the underlying information systems and medical devices. This reflection is built on the experiences of the PICNIC project that developed shared software infrastructure components in open source for RHINs and the OpenECG network that offers open source components to lower the implementation cost of interoperability standards such as SCP-ECG, in electrocardiography. Open source components implementing standards and a community providing feedback from real-world use are key enablers of health care information system interoperability. Investing in open source is investing in interoperability and a vital aspect of a long term strategy towards comprehensive health services and clinical research.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
Schmitz, Matthew; Forst, Linda
2016-02-15
Inclusion of information about a patient's work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers' compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for "industry" and "occupation" based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. The objective of the study was to evaluate the intercoder reliability of NIOSH's Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. A total of 359 industry and occupation responses were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria levels. Kappa was .84 for agreement between hand coders and between the hand-coder consensus code versus NIOCCS high-confidence codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (i.e., to autocode) of 31%-36% of entered variables at the "high confidence" level and 49%-58% at the "medium confidence" level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is "substantial" at the 2-digit level, but only "fair" to "good" at the 4-digit level. This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy, and will clarify its value as inclusion of these occupational variables in the EHR is promoted.
Numerical simulations of the Cosmic Battery in accretion flows around astrophysical black holes
NASA Astrophysics Data System (ADS)
Contopoulos, I.; Nathanail, A.; Sądowski, A.; Kazanas, D.; Narayan, R.
2018-01-01
We implement the KORAL code to perform two sets of very long general relativistic radiation magnetohydrodynamic simulations of an axisymmetric optically thin magnetized flow around a non-rotating black hole: one with a new term in the electromagnetic field tensor due to the radiation pressure felt by the plasma electrons on the comoving frame of the electron-proton plasma, and one without. The source of the radiation is the accretion flow itself. Without the new term, the system evolves to a standard accretion flow due to the development of the magneto-rotational instability. With the new term, however, the system eventually evolves to a magnetically arrested disc state in which a large-scale jet-like magnetic field threads the black hole horizon. Our results confirm the secular action of the Cosmic Battery in accretion flows around astrophysical black holes.
Long-term dataset on aquatic responses to concurrent climate change and recovery from acidification
NASA Astrophysics Data System (ADS)
Leach, Taylor H.; Winslow, Luke A.; Acker, Frank W.; Bloomfield, Jay A.; Boylen, Charles W.; Bukaveckas, Paul A.; Charles, Donald F.; Daniels, Robert A.; Driscoll, Charles T.; Eichler, Lawrence W.; Farrell, Jeremy L.; Funk, Clara S.; Goodrich, Christine A.; Michelena, Toby M.; Nierzwicki-Bauer, Sandra A.; Roy, Karen M.; Shaw, William H.; Sutherland, James W.; Swinton, Mark W.; Winkler, David A.; Rose, Kevin C.
2018-04-01
Concurrent regional and global environmental changes are affecting freshwater ecosystems. Decadal-scale data on lake ecosystems that can describe processes affected by these changes are important as multiple stressors often interact to alter the trajectory of key ecological phenomena in complex ways. Due to the practical challenges associated with long-term data collections, the majority of existing long-term data sets focus on only a small number of lakes or few response variables. Here we present physical, chemical, and biological data from 28 lakes in the Adirondack Mountains of northern New York State. These data span the period from 1994-2012 and harmonize multiple open and as-yet unpublished data sources. The dataset creation is reproducible and transparent; R code and all original files used to create the dataset are provided in an appendix. This dataset will be useful for examining ecological change in lakes undergoing multiple stressors.
ERIC Educational Resources Information Center
Olsen, Florence
2003-01-01
Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)
Watterson, Dina; Cleland, Heather; Picton, Natalie; Simpson, Pam M; Gabbe, Belinda J
2011-03-01
The percentage of total body surface area burnt (%TBSA) is a critical measure of burn injury severity and a key predictor of burn injury outcome. This study evaluated the level of agreement between four sources of %TBSA using 120 cases identified through the Victorian State Trauma Registry. Expert clinician, ICD-10-AM, Abbreviated Injury Scale, and burns registry coding were compared using measures of agreement. There was near-perfect agreement (weighted Kappa statistic 0.81-1) between all sources of data, suggesting that ICD-10-AM is a valid source of %TBSA and use of ICD-10-AM codes could reduce the resource used by trauma and burns registries capturing this information.
Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide
NASA Astrophysics Data System (ADS)
Justus, C. G.; James, B. F.
1999-05-01
Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude, are computed. A check is performed to disallow temperatures below CO2 sublimation. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.
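To suggest what incorporating an atmospheric subroutine into a trajectory code looks like in outline, the sketch below integrates a simple planar ballistic entry and queries an atmosphere routine at every step. Mars-GRAM itself is a Fortran code and is not reproduced here; the exponential density model, the constants, and the entry state are placeholder assumptions standing in for the real subroutine call.

# Hedged sketch: trajectory loop calling an atmosphere routine (stand-in for Mars-GRAM).
import numpy as np

G_MARS = 3.71                       # m/s^2, near-surface gravity (nominal)
RHO_0, H_SCALE = 0.020, 11_100.0    # kg/m^3 and m, rough Mars values (assumed)

def atmosphere_density(altitude_m):
    """Stand-in for a Mars-GRAM density query."""
    return RHO_0 * np.exp(-altitude_m / H_SCALE)

def entry_trajectory(v0=5500.0, gamma0=np.radians(-12.0), h0=120e3,
                     beta=100.0, dt=0.1):
    """Planar ballistic entry; beta is the ballistic coefficient m/(Cd*A)."""
    v, gamma, h, t = v0, gamma0, h0, 0.0
    while h > 0.0 and t < 2000.0:
        rho = atmosphere_density(h)              # <-- the Mars-GRAM call would go here
        drag = 0.5 * rho * v**2 / beta
        v += (-drag - G_MARS * np.sin(gamma)) * dt
        gamma += (-G_MARS * np.cos(gamma) / v) * dt
        h += v * np.sin(gamma) * dt
        t += dt
    return t, v

if __name__ == "__main__":
    t_end, v_end = entry_trajectory()
    print(f"reached the surface after ~{t_end:.0f} s at ~{v_end:.0f} m/s")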
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, M.S.Y.
1990-12-01
The PAGAN code system is a part of the performance assessment methodology developed for use by the U.S. Nuclear Regulatory Commission in evaluating license applications for low-level waste disposal facilities. In this methodology, PAGAN is used as one candidate approach for analysis of the ground-water pathway. PAGAN, Version 1.1, has the capability to model the source term, vadose-zone transport, and aquifer transport of radionuclides from a waste disposal unit. It combines the two codes SURFACE and DISPERSE, which are used as semi-analytical solutions to the convective-dispersion equation. This system uses menu-driven input/output for implementing a simple ground-water transport analysis and incorporates statistical uncertainty functions for handling data uncertainties. The output from PAGAN includes a time- and location-dependent radionuclide concentration at a well in the aquifer, or a time- and location-dependent radionuclide flux into a surface-water body.
Alview: Portable Software for Viewing Sequence Reads in BAM Formatted Files.
Finney, Richard P; Chen, Qing-Rong; Nguyen, Cu V; Hsu, Chih Hao; Yan, Chunhua; Hu, Ying; Abawi, Massih; Bian, Xiaopeng; Meerzaman, Daoud M
2015-01-01
The name Alview is a contraction of the term Alignment Viewer. Alview is a compiled to native architecture software tool for visualizing the alignment of sequencing data. Inputs are files of short-read sequences aligned to a reference genome in the SAM/BAM format and files containing reference genome data. Outputs are visualizations of these aligned short reads. Alview is written in portable C with optional graphical user interface (GUI) code written in C, C++, and Objective-C. The application can run in three different ways: as a web server, as a command line tool, or as a native, GUI program. Alview is compatible with Microsoft Windows, Linux, and Apple OS X. It is available as a web demo at https://cgwb.nci.nih.gov/cgi-bin/alview. The source code and Windows/Mac/Linux executables are available via https://github.com/NCIP/alview.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide
NASA Technical Reports Server (NTRS)
Justus, C. G.; James, B. F.
1999-01-01
Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude, are computed. A check is performed to disallow temperatures below CO2 sublimation. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.
Development of Northeast Asia Nuclear Power Plant Accident Simulator.
Kim, Juyub; Kim, Juyoul; Po, Li-Chi Cliff
2017-06-15
A conclusion from the lessons learned after the March 2011 Fukushima Daiichi accident was that Korea needs a tool to estimate consequences from a major accident that could occur at a nuclear power plant located in a neighboring country. This paper describes a suite of computer-based codes to be used by Korea's nuclear emergency response staff for training and potentially operational support in Korea's national emergency preparedness and response program. The system of codes, the Northeast Asia Nuclear Accident Simulator (NANAS), consists of three modules: source-term estimation, atmospheric dispersion prediction, and dose assessment. To quickly assess potential doses to the public in Korea, NANAS includes specific reactor data from the nuclear power plants in China, Japan and Taiwan. The completed simulator is demonstrated using data for a hypothetical release. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Analysis of unmitigated large break loss of coolant accidents using MELCOR code
NASA Astrophysics Data System (ADS)
Pescarini, M.; Mascari, F.; Mostacci, D.; De Rosa, F.; Lombardo, C.; Giannetti, F.
2017-11-01
In the framework of severe accident research activity developed by ENEA, a MELCOR nodalization of a generic 900 MWe Pressurized Water Reactor has been developed. The aim of this paper is to present the analysis of MELCOR code calculations for two independent unmitigated large break loss of coolant accident transients occurring in this type of reactor. In particular, the analysis and comparison of the transients initiated by an unmitigated double-ended cold leg rupture and an unmitigated double-ended hot leg rupture in loop 1 of the primary cooling system is presented herein. This activity has been performed focusing specifically on the in-vessel phenomenology that characterizes this kind of accident. The analysis of the thermal-hydraulic transient phenomena and the core degradation phenomena is therefore presented here. The analysis of the calculated data shows the capability of the code to reproduce the phenomena typical of these transients and permits their phenomenological study. A first sequence of main events is presented and shows that the cold leg break transient evolves faster than the hot leg break transient because of the position of the break. Further analyses are in progress to quantitatively assess the results of the code nodalization for accident management strategy definition and fission product source term evaluation.
A Cooperative Downloading Method for VANET Using Distributed Fountain Code.
Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi
2016-10-12
Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the high quality of service (QoS) requirements of some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent encoding and decoding, as well as the single feedback mechanism of DFC, cannot adapt to the VANET environment. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using a hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of increasing the amount of downloaded data and the data receiving rate.
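The fountain-coding principle exploited above, namely that the client can rebuild the original data from any sufficiently large subset of encoded packets whichever ones survive, can be shown with a minimal LT-style encoder and peeling decoder. This is an assumption-laden toy, not the paper's concatenated or hierarchical construction; the degree distribution, packet count, and loss rate are invented.

# Hedged sketch of fountain (LT-style) encoding and peeling decoding over lossy delivery.
import random

def encode(source, rng):
    """Emit one encoded packet: (set of source indices, XOR of those packets)."""
    degree = rng.choice([1, 1, 2, 2, 2, 3, 4])          # crude, assumed degree distribution
    idx = set(rng.sample(range(len(source)), degree))
    payload = 0
    for i in idx:
        payload ^= source[i]
    return idx, payload

def decode(packets, k):
    """Peeling decoder: resolve degree-1 packets, substitute, repeat."""
    recovered = [None] * k
    packets = [(set(idx), val) for idx, val in packets]
    progress = True
    while progress:
        progress = False
        for idx, val in packets:                        # resolve degree-1 packets
            if len(idx) == 1:
                i = next(iter(idx))
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
        for n, (idx, val) in enumerate(packets):        # substitute known symbols
            for i in list(idx):
                if len(idx) > 1 and recovered[i] is not None:
                    idx.discard(i)
                    val ^= recovered[i]
            packets[n] = (idx, val)
    return recovered

if __name__ == "__main__":
    rng = random.Random(7)
    source = [rng.randrange(256) for _ in range(20)]    # 20 one-byte "packets"
    stream = [encode(source, rng) for _ in range(60)]
    received = [p for p in stream if rng.random() > 0.3]  # simulate 30% packet loss
    recovered = decode(received, len(source))
    print(sum(x is not None for x in recovered), "of", len(source), "source packets recovered")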
PARAVT: Parallel Voronoi tessellation code
NASA Astrophysics Data System (ADS)
González, R. E.
2016-10-01
In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementations are available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT computation uses the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks and includes periodic conditions. In addition, the code computes the neighbors list, Voronoi density, Voronoi cell volume, and density gradient for each particle, as well as densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
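As a serial illustration of the quantities such a code computes, the sketch below builds a Voronoi tessellation of random particle positions with SciPy (which, like PARAVT, relies on the Qhull library) and derives cell volumes and Voronoi densities. Cells touching the box edge are simply skipped, whereas a production code needs the boundary and periodic handling described above; the particle count and unit box are arbitrary.

# Hedged sketch: serial Voronoi densities for a toy particle set (SciPy/Qhull).
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(42)
points = rng.uniform(0.0, 1.0, size=(500, 3))   # toy "simulation box"

vor = Voronoi(points)
densities = np.full(len(points), np.nan)

for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:        # open cell at the box edge: skip
        continue
    volume = ConvexHull(vor.vertices[region]).volume
    densities[i] = 1.0 / volume                 # one particle per cell

interior = np.isfinite(densities)
print(f"{interior.sum()} interior cells, "
      f"median Voronoi density = {np.median(densities[interior]):.1f}")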
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)3 voxels) and eye plaque (with (1 mm)3 voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy.
Chamberland, Marc J P; Taylor, Randle E P; Rogers, D W O; Thomson, Rowan M
2016-12-07
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm) 3 voxels) and eye plaque (with (1 mm) 3 voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
Plouff, Donald
2000-01-01
Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first must be converted (compiled) into an executable form on the user's computer. Although program testing was done in a UNIX (tradename of American Telephone and Telegraph Company) computer environment, it is anticipated that only a system-dependent date-and-time function may need to be changed for adaptation to other computer platforms that accept standard Fortran code.
Maximum a posteriori joint source/channel coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Gibson, Jerry D.
1991-01-01
A maximum a posteriori probability (MAP) approach to joint source/channel coder design is presented. Rather than addressing how to distribute bits between source coders and channel coders, this method explores techniques for designing joint source/channel codes. For a nonideal source coder, MAP arguments are used to design a decoder that takes advantage of redundancy in the source coder output to perform error correction. Once the decoder is obtained, it is analyzed with the purpose of obtaining 'desirable properties' of the channel input sequence for improving overall system performance. Finally, an encoder design that incorporates these properties is proposed.
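As a rough illustration of MAP decoding that exploits residual source redundancy, the following Python sketch estimates a binary first-order Markov sequence observed through a binary symmetric channel using a Viterbi-style recursion. It is a generic textbook construction rather than the specific decoder designed in the paper; the transition matrix, crossover probability, and received sequence are illustrative.

import numpy as np

def map_sequence_decode(received, trans_prob, init_prob, crossover):
    # Viterbi-style MAP estimate of a binary first-order Markov sequence
    # observed through a binary symmetric channel (BSC).
    n = len(received)
    log_t = np.log(trans_prob)            # log P(x_t | x_{t-1})
    def log_lik(y, x):                    # log P(y_t | x_t) for the BSC
        return np.log(1.0 - crossover) if y == x else np.log(crossover)

    delta = np.log(init_prob) + np.array([log_lik(received[0], x) for x in (0, 1)])
    back = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        new_delta = np.empty(2)
        for x in (0, 1):
            cand = delta + log_t[:, x]
            back[t, x] = int(np.argmax(cand))
            new_delta[x] = cand[back[t, x]] + log_lik(received[t], x)
        delta = new_delta

    # Trace back the most probable source sequence.
    seq = np.empty(n, dtype=int)
    seq[-1] = int(np.argmax(delta))
    for t in range(n - 1, 0, -1):
        seq[t - 1] = back[t, seq[t]]
    return seq

# Example: a "sticky" source (P(stay) = 0.9) lets the decoder correct
# isolated channel errors that contradict the source statistics.
T = np.array([[0.9, 0.1], [0.1, 0.9]])
y = np.array([0, 0, 1, 0, 0, 1, 1, 1, 0, 1])
print(map_sequence_decode(y, T, np.array([0.5, 0.5]), crossover=0.1))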
Development of a computer code to couple PWR-GALE output and PC-CREAM input
NASA Astrophysics Data System (ADS)
Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.
2018-02-01
Radionuclide dispersion analysis is an important part of reactor safety analysis. From this analysis, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. Radionuclide dispersion analysis under normal operating conditions is carried out using the PC-CREAM code, which requires input data such as the source term and the population distribution. These inputs are derived from the output of another program, PWR-GALE, and from population distribution data written in a specific format. Compiling inputs for PC-CREAM manually requires high accuracy, since it involves large amounts of data in fixed formats, and manual preparation is prone to error. To minimize errors in input generation, a coupling program for PWR-GALE and PC-CREAM and a program for writing the population distribution in the PC-CREAM input format were developed. This work was conducted to create the coupling between PWR-GALE output and PC-CREAM input and to write population data in the required formats. The programs were written in the Python programming language, which has the advantages of being multiplatform, object-oriented, and interactive. The result of this work is software that couples the source term data and writes the population distribution data, so that inputs to the PC-CREAM program can be prepared easily and formatting errors are avoided. The source term coupling program for PWR-GALE and PC-CREAM has been completed, so that PC-CREAM source term and population distribution inputs can be created easily and in the desired format.
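A minimal Python sketch of the kind of format coupling described above. The PWR-GALE output layout and the PC-CREAM fixed-width input format shown here are hypothetical placeholders (the abstract does not specify the actual formats), as are the file names.

# Illustrative only: the assumed layouts below are NOT the real PWR-GALE or
# PC-CREAM formats; they simply show the read-reformat-write coupling pattern.

def read_gale_source_term(path):
    # Assume each line holds a nuclide name and an annual release value.
    source_term = {}
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2:
                source_term[fields[0]] = float(fields[1])
    return source_term

def write_cream_source_term(source_term, path):
    # Assume PC-CREAM expects "NUCLIDE  RELEASE" in fixed-width columns.
    with open(path, "w") as fh:
        for nuclide, release in sorted(source_term.items()):
            fh.write(f"{nuclide:<10s}{release:12.4E}\n")

if __name__ == "__main__":
    write_cream_source_term(read_gale_source_term("pwr_gale_output.txt"),
                            "pc_cream_source_term.inp")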
Hamilton, Clayon B; Wong, Ming-Kin; Gignac, Monique A M; Davis, Aileen M; Chesworth, Bert M
2017-01-01
To identify validated measures that capture illness perception and behavior and have been used to assess people who have knee pain/osteoarthritis. A scoping review was performed. Nine electronic databases were searched for records from inception through April 19, 2015. Search terms included illness perception, illness behavior, knee, pain, osteoarthritis, and their related terms. This review included English language publications of primary data on people with knee pain/osteoarthritis who were assessed with validated measures capturing any of 4 components of illness perception and behavior: monitor body, define and interpret symptoms, take remedial action, and utilize sources of help. Seventy-one publications included relevant measures. Two reviewers independently coded and analyzed each relevant measure within the 4 components. Sixteen measures were identified that capture components of illness perception and behavior in the target population. These measures were originally developed to capture constructs that include coping strategies/skills/styles, illness belief, illness perception, self-efficacy, and pain behavior. Coding results indicated that 5, 11, 12, and 5 of these measures included the monitor body, define and interpret symptoms, take remedial action, and utilize sources of help components, respectively. Several validated measures were interpreted as capturing some components, and only 1 measure was interpreted as capturing all of the components of illness perception and behavior in the target population. A measure that comprehensively captures illness perception and behavior could be valuable for informing and evaluating therapy for patients along a continuum of symptomatic knee osteoarthritis. © 2016 World Institute of Pain.
Grouping in Short-Term Memory: Do Oscillators Code the Positions of Items?
ERIC Educational Resources Information Center
Ng, Honey L. H.; Maybery, Murray T.
2005-01-01
According to several current models of short-term memory, items are retained in order by associating them with positional codes. The models differ as to whether temporal oscillators provide those codes. The authors examined errors in recall of sequences comprising 2 groups of 4 consonants. A critical manipulation was the precise timing of items…
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
Coding: In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's... [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389... a source coding rate, a channel coding rate, and a power level to all nodes in the
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
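A minimal NumPy sketch of the double-difference idea described above (a cross-delta between two correlated data sets followed by an adjacent-delta), with a matching recovery step. The array names and sample values are illustrative, and the subsequent entropy-coding stage is omitted.

import numpy as np

def double_difference(band_a, band_b):
    # Cross-delta between the two correlated data sets (e.g., adjacent
    # spectral bands), then adjacent-delta along the sample index.
    cross = band_b.astype(np.int64) - band_a.astype(np.int64)
    return np.diff(cross)                 # double-difference set, length M - 1

def recover_band_b(band_a, dd, first_cross):
    # Post-decoding: rebuild the cross-delta by cumulative sum, then add band_a.
    cross = np.concatenate(([first_cross], first_cross + np.cumsum(dd)))
    return band_a.astype(np.int64) + cross

a = np.array([10, 12, 15, 19, 24])
b = np.array([11, 14, 18, 23, 29])
dd = double_difference(a, b)
assert np.array_equal(recover_band_b(a, dd, b[0] - a[0]), b)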
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
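As a hedged illustration of applying a detector response matrix iteratively, the sketch below uses a generic multiplicative correction scheme. It is not the algorithm of the original FORTRAN code; it assumes a square response matrix with nonzero column sums, and all names are illustrative.

import numpy as np

def iterative_unfold(measured, response, n_iter=20):
    # Generic multiplicative unfolding. response[i, j] is the probability that
    # a photon emitted in source bin j is recorded in channel i (square matrix).
    estimate = measured.astype(float).copy()       # start from the measurement
    efficiency = response.sum(axis=0)              # column sums of the response
    for _ in range(n_iter):
        refolded = response @ estimate             # what we would measure
        ratio = np.where(refolded > 0, measured / refolded, 1.0)
        # Push the channel-wise ratio back to the source bins.
        estimate *= (response.T @ ratio) / efficiency
    return estimate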
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
Top ten reasons to register your code with the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; DuPrie, Kimberly; Berriman, G. Bruce; Mink, Jessica D.; Nemiroff, Robert J.; Robitaille, Thomas; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Teuben, Peter J.; Wallin, John F.; Warmels, Rein
2017-01-01
With 1,400 codes, the Astrophysics Source Code Library (ASCL, ascl.net) is the largest indexed resource for codes used in astronomy research in existence. This free online registry was established in 1999, is indexed by Web of Science and ADS, and is citable, with citations to its entries tracked by ADS. Registering your code with the ASCL is easy with our online submissions system. Making your software available for examination shows confidence in your research and makes your research more transparent, reproducible, and falsifiable. ASCL registration allows your software to be cited on its own merits and provides a citation that is trackable and accepted by all astronomy journals and journals such as Science and Nature. Registration also allows others to find your code more easily. This presentation covers the benefits of registering astronomy research software with the ASCL.
Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C
2017-10-17
Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source code program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated both with physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of image recognition tools provided by the Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, the advantages of open-source tool assistance were evident, mainly in terms of less operator intervention and cost savings.
National network television news coverage of contraception - a content analysis.
Patton, Elizabeth W; Moniz, Michelle H; Hughes, Lauren S; Buis, Lorraine; Howell, Joel
2017-01-01
The objective was to describe and analyze national network television news framing of contraception, recognizing that onscreen news can influence the public's knowledge and beliefs. We used the Vanderbilt Television News Archives and LexisNexis Database to obtain video and print transcripts of all relevant national network television news segments covering contraception from January 2010 to June 2014. We conducted a content analysis of 116 TV news segments covering contraception during the rollout of the Affordable Care Act. Segments were quantitatively coded for contraceptive methods covered, story sources used, and inclusion of medical and nonmedical content (intercoder reliability using Krippendorff's alpha ranged from 0.6 to 1 for coded categories). Most (55%) news stories focused on contraception in general rather than specific methods. The most effective contraceptive methods were rarely discussed (implant, 1%; intrauterine device, 4%). The most frequently used sources were political figures (40%), advocates (25%), the general public (25%) and Catholic Church leaders (16%); medical professionals (11%) and health researchers (4%) appeared in a minority of stories. A minority of stories (31%) featured medical content. National network news coverage of contraception frequently focuses on contraception in political and social terms and uses nonmedical figures such as politicians and church leaders as sources. This focus deemphasizes the public health aspects of contraception, so medical professionals and health content are rarely featured. Media coverage of contraception may influence patients' views about contraception. Understanding the content, sources and medical accuracy of current media portrayals of contraception may enable health care professionals to dispel popular misperceptions. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravishankar, C., Hughes Network Systems, Germantown, MD
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence, the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding; it is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
Two-step web-mining approach to study geology/geophysics-related open-source software projects
NASA Astrophysics Data System (ADS)
Behrends, Knut; Conze, Ronald
2013-04-01
Geology/geophysics is a highly interdisciplinary science, overlapping with, for instance, physics, biology and chemistry. In today's software-intensive work environments, geoscientists often encounter new open-source software from scientific fields that are only remotely related to their own field of expertise. We show how web-mining techniques can help to carry out systematic discovery and evaluation of such software. In a first step, we downloaded ~500 abstracts (each consisting of ~1 kb UTF-8 text) from agu-fm12.abstractcentral.com. This web site hosts the abstracts of all publications presented at AGU Fall Meeting 2012, the world's largest annual geology/geophysics conference. All abstracts belonged to the category "Earth and Space Science Informatics", an interdisciplinary label cross-cutting many disciplines such as "deep biosphere", "atmospheric research", and "mineral physics". Each publication was represented by a highly structured record with ~20 short data attributes, the largest being the unstructured "abstract" field. We processed the texts of the abstracts with the statistics software "R" to calculate a corpus and a term-document matrix. Using the R package "tm", we applied text-mining techniques to filter data and develop hypotheses about software-development activities happening in various geology/geophysics fields. Analyzing the term-document matrix with basic techniques (e.g., word frequencies, co-occurrences, weighting) as well as more complex methods (clustering, classification), several key pieces of information were extracted. For example, text-mining can be used to identify scientists who are also developers of open-source scientific software, and the names of their programming projects and codes can also be identified. In a second step, based on the intermediate results found by processing the conference abstracts, any new hypotheses can be tested in another web-mining subproject: by merging the dataset with open data from github.com and stackoverflow.com. These popular, developer-centric websites have powerful application-programmer interfaces, and follow an open-data policy. In this regard, these sites offer a web-accessible reservoir of information that can be tapped to study questions such as: which open source software projects are eminent in the various geoscience fields? What are the most popular programming languages? How are they trending? Are there any interesting temporal patterns in committer activities? How large are programming teams and how do they change over time? What free software packages exist in the vast realms of related fields? Does the software from these fields have capabilities that might still be useful to me as a researcher, or can help me perform my work better? Are there any open-source projects that might be commercially interesting? This evaluation strategy reveals programming projects that tend to be new. As many important legacy codes are not hosted on open-source code repositories, the presented search method might overlook some older projects.
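The study builds its corpus and term-document matrix with R's tm package; the following Python sketch shows the equivalent first step with scikit-learn, using toy stand-in abstracts, purely to illustrate the kind of word-frequency analysis described. The example texts are invented.

from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for conference abstracts (the real study used ~500 AGU abstracts).
abstracts = [
    "We release an open-source Python code for borehole data management.",
    "A Fortran model of mantle convection, source code available on github.",
    "Atmospheric aerosol retrieval using open source software and R.",
]

# Build the document-term matrix; its transpose is the term-document matrix.
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(abstracts)
print("term-document matrix shape:", dtm.T.shape)

# Simple word-frequency view, analogous to the basic techniques in the abstract.
freqs = dtm.sum(axis=0).A1
for term, count in sorted(zip(vectorizer.get_feature_names_out(), freqs),
                          key=lambda pair: -pair[1])[:5]:
    print(term, count)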
Some practical universal noiseless coding techniques
NASA Technical Reports Server (NTRS)
Rice, R. F.
1979-01-01
Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
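One simple code option of the kind selected among by such adaptive schemes is a Golomb-Rice code with parameter k. The sketch below is a generic illustration for non-negative integer symbols (e.g., after the preprocessing mentioned above) and is not taken from the paper.

def rice_encode(values, k):
    # Each value n is split into a quotient (n >> k), sent in unary,
    # and a remainder (the low k bits), sent in binary.
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0")
    return "".join(bits)

def rice_decode(bitstream, k, count):
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bitstream[pos] == "1":        # unary quotient
            q += 1
            pos += 1
        pos += 1                            # skip the terminating '0'
        r = int(bitstream[pos:pos + k], 2) if k else 0
        pos += k
        values.append((q << k) | r)
    return values

data = [0, 1, 3, 2, 7, 1, 0, 4]
coded = rice_encode(data, k=2)
assert rice_decode(coded, k=2, count=len(data)) == data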
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...
Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses
Preyra, Colin
2004-01-01
Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source, high-performance computing language; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications and improvement of execution times by incorporating an intelligent network change detection tool are currently underway, and preliminary results will be presented.
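A minimal sketch of the conversion pattern described above: a Numba-jitted time loop standing in for an HSPF-style routine, with inputs and results stored in HDF5 via h5py. The reservoir routine, dataset names, and parameters are illustrative and are not part of the actual converted HSPF code.

import numpy as np
import h5py
from numba import njit

@njit
def linear_reservoir(inflow, k, dt):
    # Toy stand-in for an HSPF-style storage routine: dS/dt = inflow - k * S.
    storage = np.zeros(inflow.size)
    s = 0.0
    for t in range(inflow.size):
        s += (inflow[t] - k * s) * dt
        storage[t] = s
    return storage

inflow = np.random.rand(10_000)
storage = linear_reservoir(inflow, k=0.05, dt=1.0)

# Store inputs and results in HDF5 rather than legacy binary files.
with h5py.File("hspf_demo.h5", "w") as f:
    f.create_dataset("timeseries/inflow", data=inflow)
    f.create_dataset("timeseries/storage", data=storage)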
Watkins, Sharon
2017-01-01
Objectives: The primary objective of this study was to identify patients with heat-related illness (HRI) using codes for heat-related injury diagnosis and external cause of injury in 3 administrative data sets: emergency department (ED) visit records, hospital discharge records, and death certificates. Methods: We obtained data on ED visits, hospitalizations, and deaths for Florida residents for May 1 through October 31, 2005-2012. To identify patients with HRI, we used codes from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to search data on ED visits and hospitalizations and codes from the International Classification of Diseases, Tenth Revision (ICD-10) to search data on deaths. We stratified the results by data source and whether the HRI was work related. Results: We identified 23 981 ED visits, 4816 hospitalizations, and 140 deaths in patients with non–work-related HRI and 2979 ED visits, 415 hospitalizations, and 23 deaths in patients with work-related HRI. The most common diagnosis codes among patients were for severe HRI (heat exhaustion or heatstroke). The proportion of patients with a severe HRI diagnosis increased with data source severity. If ICD-9-CM code E900.1 and ICD-10 code W92 (excessive heat of man-made origin) were used as exclusion criteria for HRI, 5.0% of patients with non–work-related deaths, 3.0% of patients with work-related ED visits, and 1.7% of patients with work-related hospitalizations would have been removed. Conclusions: Using multiple data sources and all diagnosis fields may improve the sensitivity of HRI surveillance. Future studies should evaluate the impact of converting ICD-9-CM to ICD-10-CM codes on HRI surveillance of ED visits and hospitalizations. PMID:28379784
On the optimality of code options for a universal noiseless coder
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner
1991-01-01
A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
Toward Intelligent Software Defect Detection
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2011-01-01
Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.
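As a toy illustration of learning line-level defect patterns by example, the sketch below trains a scikit-learn classifier on character n-grams of labeled source lines. The training lines, labels, and features are fabricated for illustration and do not reflect the study's data or models.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny fabricated training set: source lines labeled 1 (defective) or 0 (clean).
lines = [
    "if (ptr = NULL) return;",            # assignment instead of comparison
    "for (i = 0; i <= n; i++) a[i] = 0;", # off-by-one over the array bound
    "if (ptr == NULL) return;",
    "for (i = 0; i < n; i++) a[i] = 0;",
]
labels = [1, 1, 0, 0]

# Character n-grams capture operator-level patterns such as "=" vs "==".
vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = MultinomialNB().fit(vec.fit_transform(lines), labels)

suspect = ["while (p = head) p = p->next;"]
print(clf.predict(vec.transform(suspect)))   # flags the line for review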
The Study of High-Speed Surface Dynamics Using a Pulsed Proton Beam
NASA Astrophysics Data System (ADS)
Buttler, William; Stone, Benjamin; Oro, David; Dimonte, Guy; Preston, Dean; Cherne, Frank; Germann, Timothy; Terrones, Guillermo; Tupa, Dale
2011-06-01
Los Alamos National Laboratory is presently engaged in the development and implementation of ejecta source term and transport models for integration into LANL hydrodynamic computer codes. Experimental support for the effort spans a broad array of activities, including ejecta source term measurements from machine-roughened Sn surfaces shocked by HE or flyer plates. Because the underlying postulate for ejecta formation is that ejecta are characterized by Richtmyer-Meshkov instability (RMI) phenomena, a key element of the theory and modeling effort centers on validation and verification RMI experiments at the LANSCE Proton Radiography Facility (pRad) to compare with modeled ejecta measurements. Here we present experimental results used to define and validate a physics-based ejecta model, together with remarkable, unexpected results of Sn instability growth in vacuum and gases, and Sn and Cu RM growth that reveals the sensitivity of the RM instability to the yield strength of the material, Cu. The motivation of this last subject, RM growth linked to material strength, is to probe the shock pressure regions over which ejecta begin to form.
NASA Astrophysics Data System (ADS)
Hoffmann, T. L.; Lieb, S.; Pauldrach, A. W. A.; Lesch, H.; Hultzsch, P. J. N.; Birk, G. T.
2012-08-01
Aims: The aim of this work is to verify whether turbulent magnetic reconnection can provide the additional energy input required to explain the up to now only poorly understood ionization mechanism of the diffuse ionized gas (DIG) in galaxies and its observed emission line spectra. Methods: We use a detailed non-LTE radiative transfer code that does not make use of the usual restrictive gaseous nebula approximations to compute synthetic spectra for gas at low densities. Excitation of the gas is via an additional heating term in the energy balance as well as by photoionization. Numerical values for this heating term are derived from three-dimensional resistive magnetohydrodynamic two-fluid plasma-neutral-gas simulations to compute energy dissipation rates for the DIG under typical conditions. Results: Our simulations show that magnetic reconnection can liberate enough energy to by itself fully or partially ionize the gas. However, synthetic spectra from purely thermally excited gas are incompatible with the observed spectra; a photoionization source must additionally be present to establish the correct (observed) ionization balance in the gas.
Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations
NASA Astrophysics Data System (ADS)
Hopper, Seth
2018-03-01
Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.
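For reference, the master wave equation referred to above takes the generic form (up to sign and normalization conventions, which vary between authors):

\left[-\frac{\partial^2}{\partial t^2}+\frac{\partial^2}{\partial r_*^2}-V_\ell(r)\right]\Psi_{\ell m}(t,r)=S_{\ell m}(t,r),
\qquad r_*=r+2M\ln\!\left(\frac{r}{2M}-1\right),

where $V_\ell(r)$ is the Regge-Wheeler (odd-parity) or Zerilli (even-parity) potential and the source $S_{\ell m}$ is constructed from the point particle's stress-energy tensor; the higher-order master functions discussed here modify $\Psi_{\ell m}$ and $S_{\ell m}$ while leaving this wave-operator structure intact.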
NSRD-15: Computational Capability to Substantiate DOE-HDBK-3010 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Bignell, John; Dingreville, Remi Philippe Michel
Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook's bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-the-art multi-physics-based computer codes.
NASA Astrophysics Data System (ADS)
Di Lemma, F. G.; Nakajima, K.; Yamashita, S.; Osaka, M.
2017-02-01
Chemisorption phenomena can affect fission product (FP) retention in a nuclear reactor vessel during a severe accident (SA). Detailed information on the FP chemisorbed deposits, especially for Cs, is important for a rational decommissioning of the reactor following a SA, as is the case for the Fukushima Daiichi Power Station. Moreover, the retention of Cs will influence the source term assessment, and thus improved models for this phenomenon are needed in SA codes. This paper describes the influence of molybdenum contained in stainless steel (SS) type 316 on Cs chemisorption. In our experiments it was observed that Cs-Mo deposits (CsFe(MoO4)3, Cs2MoO4) were formed together with CsFeSiO4, which is the predominant compound formed by chemisorption. The Cs-Mo deposits were found to revaporize from the SS sample at 1000 °C, and thus could contribute to the source term. On the other hand, CsFeSiO4 will probably be retained in the reactor during a SA due to its stability.
PuLSE: Quality control and quantification of peptide sequences explored by phage display libraries.
Shave, Steven; Mann, Stefan; Koszela, Joanna; Kerr, Alastair; Auer, Manfred
2018-01-01
The design of highly diverse phage display libraries is based on the assumption that DNA bases are incorporated at similar rates within the randomized sequence. As library complexity increases and expected copy numbers of unique sequences decrease, the exploration of library space becomes sparser and the presence of truly random sequences becomes critical. We present the program PuLSE (Phage Library Sequence Evaluation) as a tool for assessing randomness and therefore diversity of phage display libraries. PuLSE runs on a collection of sequence reads in the fastq file format and generates tables profiling the library in terms of unique DNA sequence counts and positions, translated peptide sequences, and normalized 'expected' occurrences from base to residue codon frequencies. The output allows at-a-glance quantitative quality control of a phage library in terms of sequence coverage both at the DNA base and translated protein residue level, which has been missing from existing toolsets and the literature. The open-source program PuLSE is available in two formats, a C++ source code package for compilation and integration into existing bioinformatics pipelines and precompiled binaries for ease of use.
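PuLSE itself is implemented in C++; as a hedged Python sketch of the counting and translation step it describes, the following reads a fastq file, counts unique DNA sequences in an assumed randomized region, and tallies translated peptides using Biopython. The file name and region coordinates are placeholders.

from collections import Counter
from Bio.Seq import Seq

def count_library_sequences(fastq_path, start, end):
    # Count unique DNA sequences in the randomized region [start:end)
    # and tally their translated peptides.
    dna_counts, peptide_counts = Counter(), Counter()
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:                      # sequence line of each record
                region = line.strip()[start:end]
                dna_counts[region] += 1
                peptide_counts[str(Seq(region).translate())] += 1
    return dna_counts, peptide_counts

# Placeholder file name and a 21-base (7-codon) randomized region.
dna, peptides = count_library_sequences("library_reads.fastq", start=20, end=41)
print(len(dna), "unique DNA sequences;", len(peptides), "unique peptides")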
NASA Astrophysics Data System (ADS)
Konnik, Mikhail V.; Welsh, James
2012-09-01
Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. Inadequate documentation of astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and the mathematical descriptions implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for the description of a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LATEX, mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like syntax, Doxygen can be used to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of the code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
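The Perl step of the framework can be illustrated with a Python stand-in: a Doxygen input filter that re-emits MATLAB "%" comments as C++-style doc comments, wired in through Doxygen's FILTER_PATTERNS setting. This is a simplified sketch of the idea, not the framework's actual Perl script.

#!/usr/bin/env python3
# Illustrative Doxygen input filter for MATLAB M-files. Configure with, e.g.:
#   FILTER_PATTERNS = *.m=./mfilter.py
import sys

def filter_mfile(path):
    with open(path) as fh:
        for line in fh:
            stripped = line.lstrip()
            if stripped.startswith("%"):
                # Re-emit MATLAB comments as C++-style doc comments.
                sys.stdout.write("/// " + stripped.lstrip("% ").rstrip() + "\n")
            else:
                sys.stdout.write(line)

if __name__ == "__main__":
    filter_mfile(sys.argv[1])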
Toward a common language for biobanking.
Fransson, Martin N; Rial-Sebbag, Emmanuelle; Brochhausen, Mathias; Litton, Jan-Eric
2015-01-01
To encourage the process of harmonization, the biobank community should support and use a common terminology. Relevant terms may be found in general thesauri for medicine, legal instruments or specific glossaries for biobanking. A comparison of the use of these sources has so far not been conducted and would be a useful instrument to further promote harmonization and data sharing. Thus, the purpose of the present study was to investigate the preference of definitions important for sharing biological samples and data. Definitions for 10 terms - [human] biobank, sample/specimen, sample collection, study, aliquot, coded, identifying information, anonymised, personal data and informed consent - were collected from several sources. A web-based questionnaire was sent to 560 European individuals working with biobanks, asking them to select their preferred definition for the terms. A total of 123 people participated in the survey, giving a response rate of 23%. The results were evaluated from four aspects: scope of definitions, potential regional differences, differences in semantics, and definitions in the context of ontologies, guided by comments from respondents. The survey indicates a risk of definitions focusing only on the research aspect of biobanking. Hence, it is recommended that important terms should be formulated in such a way that all areas of biobanking are covered, to improve the bridges between research and clinical application. Since several of the terms investigated herein can also be found in a legal context, which may differ between countries, establishing how a proper definition adheres to the law is also crucial.
Supporting Source Code Comprehension during Software Evolution and Maintenance
ERIC Educational Resources Information Center
Alhindawi, Nouh
2013-01-01
This dissertation addresses the problems of program comprehension to support the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts along with categorizing changes within very large bodies of source code along with their versioned histories. More specifically, advanced Information…
Automating RPM Creation from a Source Code Repository
2012-02-01
...apps/usr --with-libpq=/apps/postgres
make
rm -rf $RPM_BUILD_ROOT
umask 0077
mkdir -p $RPM_BUILD_ROOT/usr/local/bin
mkdir -p $RPM_BUILD_ROOT...
from a source code repository.
%pre
%prep
%setup
%build
./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
make
Reliability of routinely collected hospital data for child maltreatment surveillance.
McKenzie, Kirsten; Scott, Debbie A; Waller, Garry S; Campbell, Margaret
2011-01-05
Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research to examine the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. The review of these cases where discrepancies existed revealed that all cases had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code, were recoded to a more or less definite strata. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder assigned (22% of the auditor coded cases had more than one maltreatment type assigned compared to only 6% of the original coded data). The maltreatment types which were the most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Given the increasing international attention being given to improving the uniformity of reporting of child-maltreatment related injuries and the emphasis on the better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting.
Reliability of Routinely Collected Hospital Data for Child Maltreatment Surveillance
2011-01-01
Background Internationally, research on child maltreatment-related injuries has been hampered by a lack of available routinely collected health data to identify cases, examine causes, identify risk factors and explore health outcomes. Routinely collected hospital separation data coded using the International Classification of Diseases and Related Health Problems (ICD) system provide an internationally standardised data source for classifying and aggregating diseases, injuries, causes of injuries and related health conditions for statistical purposes. However, there has been limited research to examine the reliability of these data for child maltreatment surveillance purposes. This study examined the reliability of coding of child maltreatment in Queensland, Australia. Methods A retrospective medical record review and recoding methodology was used to assess the reliability of coding of child maltreatment. A stratified sample of hospitals across Queensland was selected for this study, and a stratified random sample of cases was selected from within those hospitals. Results In 3.6% of cases the coders disagreed on whether any maltreatment code could be assigned (definite or possible) versus no maltreatment being assigned (unintentional injury), giving a sensitivity of 0.982 and specificity of 0.948. The review of these cases where discrepancies existed revealed that all cases had some indications of risk documented in the records. 15.5% of cases originally assigned a definite or possible maltreatment code, were recoded to a more or less definite strata. In terms of the number and type of maltreatment codes assigned, the auditor assigned a greater number of maltreatment types based on the medical documentation than the original coder assigned (22% of the auditor coded cases had more than one maltreatment type assigned compared to only 6% of the original coded data). The maltreatment types which were the most 'under-coded' by the original coder were psychological abuse and neglect. Cases coded with a sexual abuse code showed the highest level of reliability. Conclusion Given the increasing international attention being given to improving the uniformity of reporting of child-maltreatment related injuries and the emphasis on the better utilisation of routinely collected health data, this study provides an estimate of the reliability of maltreatment-specific ICD-10-AM codes assigned in an inpatient setting. PMID:21208411
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murata, K.K.; Williams, D.C.; Griffith, R.O.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratta, A.J.
1997-07-01
To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
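A schematic sketch of the per-time-step data exchange described above: the primary-system solver passes break mass and energy flows forward, and the containment solver feeds pressure and temperature back. Both solver functions below are placeholder stubs, not real RELAP/RETRAN or CONTAIN interfaces.

# Schematic coupling loop; primary_step() and containment_step() stand in for
# calls into system and containment codes and use invented physics.

def primary_step(t, dt, containment_pressure):
    # Placeholder: return break mass flow (kg/s) and energy flow (W)
    # evaluated against the current containment back-pressure.
    return 50.0 / (1.0 + t), 1.0e8 / (1.0 + t)

def containment_step(t, dt, mass_flow, energy_flow, state):
    # Placeholder: crude lumped-volume update of pressure and temperature.
    state["pressure"] += 0.5 * mass_flow * dt
    state["temperature"] += 1.0e-8 * energy_flow * dt
    return state

state = {"pressure": 101325.0, "temperature": 320.0}
t, dt, t_end = 0.0, 0.1, 10.0
while t < t_end:
    m_dot, q_dot = primary_step(t, dt, state["pressure"])      # forward data
    state = containment_step(t, dt, m_dot, q_dot, state)       # feedback data
    t += dt
print(state)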
Code of Federal Regulations, 2010 CFR
2010-07-01
... Offender Registration Act of 1999 (D.C. Official Code Section 22-4001). (b) The term “the Act” means the Sex Offender Registration Act of 1999 (D.C. Official Code Section 22-4001 et seq.). (c) The term “days” means business days unless otherwise specified. (d) In relation to a motor vehicle, the term “owns...
ERIC Educational Resources Information Center
Wisniewski, Janusz L.
1986-01-01
Discussion of a new method of index term dictionary compression in an inverted-file-oriented database highlights a technique of word coding, which generates short fixed-length codes obtained from the index terms themselves by analysis of monogram and bigram statistical distributions. Substantial savings in communication channel utilization are…
An adaptable binary entropy coder
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.
2001-01-01
We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.
Annotare—a tool for annotating high-throughput biomedical investigations and resulting data
Shankar, Ravi; Parkinson, Helen; Burdett, Tony; Hastings, Emma; Liu, Junmin; Miller, Michael; Srinivasa, Rashmi; White, Joseph; Brazma, Alvis; Sherlock, Gavin; Stoeckert, Christian J.; Ball, Catherine A.
2010-01-01
Summary: Computational methods in molecular biology will increasingly depend on standards-based annotations that describe biological experiments in an unambiguous manner. Annotare is a software tool that enables biologists to easily annotate their high-throughput experiments, biomaterials and data in a standards-compliant way that facilitates meaningful search and analysis. Availability and Implementation: Annotare is available from http://code.google.com/p/annotare/ under the terms of the open-source MIT License (http://www.opensource.org/licenses/mit-license.php). It has been tested on both Mac and Windows. Contact: rshankar@stanford.edu PMID:20733062
Plasma separation process. Betacell (BCELL) code, user's manual
NASA Astrophysics Data System (ADS)
Taherzadeh, M.
1987-11-01
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power-generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, a comparison is made between the Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison.
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating effects related to the accelerator such as changes in beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into the dynamics calculations, since the neutron source data derived from the database carry some errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the database-based method is appropriate.
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
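The key observation above, that Gray-coded bit planes correlate better with the side information, follows from a basic property of the Gray map: values that differ by one (the typical case for well-correlated source and side information) differ in exactly one bit. The short Python sketch below demonstrates that property; it is an illustration of the mapping itself, not of the paper's rate-allocation or Slepian-Wolf machinery.

```python
# Illustrative sketch: Gray-code mapping of quantizer indices. For values that
# differ by +/-1, Gray-coded representations differ in exactly one bit, which
# is why Gray mapping raises bit-plane correlation between source and side
# information in distributed coding.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = g
    while g := g >> 1:       # iteratively fold the higher bits back in
        n ^= g
    return n

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    for v in range(8):
        nb = hamming(v, v + 1)                                  # natural binary
        ng = hamming(binary_to_gray(v), binary_to_gray(v + 1))  # Gray code
        print(f"{v} -> {v + 1}: binary bits flipped = {nb}, Gray = {ng}")
    # Round-trip check of the inverse mapping.
    assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
```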
Characterization and Remediation of Contaminated Sites:Modeling, Measurement and Assessment
NASA Astrophysics Data System (ADS)
Basu, N. B.; Rao, P. C.; Poyer, I. C.; Christ, J. A.; Zhang, C. Y.; Jawitz, J. W.; Werth, C. J.; Annable, M. D.; Hatfield, K.
2008-05-01
The complexity of natural systems makes it impossible to estimate parameters at the required level of spatial and temporal detail. Thus, it becomes necessary to transition from spatially distributed parameters to spatially integrated parameters that are capable of adequately capturing the system dynamics, without always accounting for local process behavior. Contaminant flux across the source control plane is proposed as an integrated metric that captures source behavior and links it to plume dynamics. Contaminant fluxes were measured using an innovative technology, the passive flux meter at field sites contaminated with dense non-aqueous phase liquids or DNAPLs in the US and Australia. Flux distributions were observed to be positively or negatively correlated with the conductivity distribution, depending on the source characteristics of the site. The impact of partial source depletion on the mean contaminant flux and flux architecture was investigated in three-dimensional complex heterogeneous settings using the multiphase transport code UTCHEM and the reactive transport code ISCO3D. Source mass depletion reduced the mean contaminant flux approximately linearly, while the contaminant flux standard deviation reduced proportionally with the mean (i.e., coefficient of variation of flux distribution is constant with time). Similar analysis was performed using data from field sites, and the results confirmed the numerical simulations. The linearity of the mass depletion-flux reduction relationship indicates the ability to design remediation systems that deplete mass to achieve target reduction in source strength. Stability of the flux distribution indicates the ability to characterize the distributions in time once the initial distribution is known. Lagrangian techniques were used to predict contaminant flux behavior during source depletion in terms of the statistics of the hydrodynamic and DNAPL distribution. The advantage of the Lagrangian techniques lies in their small computation time and their inclusion of spatially integrated parameters that can be measured in the field using tracer tests. Analytical models that couple source depletion to plume transport were used for optimization of source and plume treatment. These models are being used for the development of decision and management tools (for DNAPL sites) that consider uncertainty assessments as an integral part of the decision-making process for contaminated site remediation.
Automated Patent Categorization and Guided Patent Search using IPC as Inspired by MeSH and PubMed.
Eisinger, Daniel; Tsatsaronis, George; Bundschus, Markus; Wieneke, Ulrich; Schroeder, Michael
2013-04-15
Document search on PubMed, the pre-eminent database for biomedical literature, relies on the annotation of its documents with relevant terms from the Medical Subject Headings ontology (MeSH) for improving recall through query expansion. Patent documents are another important information source, though they are considerably less accessible. One option to expand patent search beyond pure keywords is the inclusion of classification information: Since every patent is assigned at least one class code, it should be possible for these assignments to be automatically used in a similar way as the MeSH annotations in PubMed. In order to develop a system for this task, it is necessary to have a good understanding of the properties of both classification systems. This report describes our comparative analysis of MeSH and the main patent classification system, the International Patent Classification (IPC). We investigate the hierarchical structures as well as the properties of the terms/classes respectively, and we compare the assignment of IPC codes to patents with the annotation of PubMed documents with MeSH terms. Our analysis shows a strong structural similarity of the hierarchies, but significant differences of terms and annotations. The low number of IPC class assignments and the lack of occurrences of class labels in patent texts imply that current patent search is severely limited. To overcome these limits, we evaluate a method for the automated assignment of additional classes to patent documents, and we propose a system for guided patent search based on the use of class co-occurrence information and external resources.
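As a rough illustration of the guided-search idea based on class co-occurrence, the sketch below builds a co-occurrence table from per-patent IPC class assignments and suggests related classes for a query class. The patent IDs and IPC symbols are invented placeholders, and the scoring is deliberately simplistic compared with the system described in the paper.

```python
# Minimal sketch (hypothetical data): build an IPC class co-occurrence table
# from per-patent class assignments and use it to suggest related classes for
# a guided search, in the spirit of the approach described above.

from collections import Counter
from itertools import combinations

patents = {                      # patent id -> assigned IPC classes (made up)
    "EP1": {"A61K", "C07D"},
    "EP2": {"A61K", "A61P"},
    "EP3": {"C07D", "A61P", "A61K"},
    "EP4": {"G06F"},
}

cooc = Counter()
for classes in patents.values():
    for a, b in combinations(sorted(classes), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def suggest(ipc_class: str, k: int = 3):
    """Return the k classes that most often co-occur with ipc_class."""
    scores = Counter({b: n for (a, b), n in cooc.items() if a == ipc_class})
    return scores.most_common(k)

print(suggest("A61K"))   # e.g. [('C07D', 2), ('A61P', 2)]
```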
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S.; Abi, M. J. B.
ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences between results obtained using the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
Computations of steady-state and transient premixed turbulent flames using pdf methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulek, T.; Lindstedt, R.P.
1996-03-01
Premixed propagating turbulent flames are modeled using a one-point, single time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.
FEDEF: A High Level Architecture Federate Development Framework
2010-09-01
require code changes for operability between HLA specifications. Configuration of federate requirements such as publications, subscriptions, time ... management, and management protocol should occur outside of federate source code, allowing for federate reusability without code modification and re
Simulation study on ion extraction from electron cyclotron resonance ion sources
NASA Astrophysics Data System (ADS)
Fu, S.; Kitagawa, A.; Yamada, S.
1994-04-01
In order to study the beam optics of the NIRS-ECR ion source used in the HIMAC project, the EGUN code has been modified to make it capable of modeling ion extraction from a plasma. Two versions of the modified code are worked out with two different methods, in which 1D and 2D sheath theories are used, respectively. The convergence problem of the strongly nonlinear self-consistent equations is investigated. Simulations of the NIRS-ECR and HYPER-ECR ion sources are presented in this paper, exhibiting agreement with the experimental results.
User Interface Design in Medical Distributed Web Applications.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
2016-01-01
User interfaces are important for easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and tailored to the user's needs and mode of operation, and the underlying technology is an important tool to accomplish this. The present work aims to create a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized, responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed and its structure is designed using specific tools, on source files. In the second phase, a new graphic interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses a CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at once. The technique has the advantage of a simplified CSS code and a better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work shows that adaptive web interfaces can be developed by combining different design methods and technologies, such as HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services.
Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel
2014-01-01
High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons in terms of protection and dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators are measured with thermoluminescent dosimeter TLD600 and TLD700 detectors and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out in two procedures: by a standard 60Co source and by an 18 MV linac photon beam. For neutron calibration with a 241Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). The neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy-1 for the Varian and Elekta accelerators, respectively, in comparison with 50 and 44 mSv.Gy-1 obtained from the TLDs. All the results showed more photoneutron production in the Varian accelerator compared to the Elekta. According to the results, it seems that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field due to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production. PMID:24600167
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…
Automated Source-Code-Based Testing of Object-Oriented Software
NASA Astrophysics Data System (ADS)
Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten
2014-08-01
With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source- code-based testing for C++.
Particle model of a cylindrical inductively coupled ion source
NASA Astrophysics Data System (ADS)
Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.
2017-08-01
In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF-coupling of the plasma is still lacking so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is now about seven times larger than the average Debye length, because of the large computational demand of the code. It will be scaled down in the next phase of the development of the code. The filling gas is Xenon, in order to minimize the time lost by the MCC collision module in the first stage of development of the code. The results presented here are preliminary, with the code already showing a good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
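The per-block adaptation described above (one of three codes chosen for each block of 21 pixels) can be illustrated generically: compute the encoded length of a residual block under each candidate code and keep the cheapest. In the hedged Python sketch below, the candidates are unary and two Rice/Golomb codes, which are stand-ins rather than the actual code set of the 1971 system.

```python
# Hedged sketch of the per-block adaptation idea: for each block of prediction
# residuals, compute the encoded length under several candidate codes and keep
# the cheapest. The candidates here (unary, and Rice codes with k=1,2) are
# stand-ins, not the actual code options of the system described above.

def zigzag(e: int) -> int:
    """Map a signed residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_length(n: int, k: int) -> int:
    """Bits used by the Rice (Golomb power-of-2) code for n with parameter k."""
    return (n >> k) + 1 + k        # unary quotient + stop bit + k remainder bits

def best_option(block, ks=(0, 1, 2)):
    """Return the cheapest code option and its total cost for one block."""
    mapped = [zigzag(e) for e in block]
    costs = {k: sum(rice_length(n, k) for n in mapped) for k in ks}
    k_best = min(costs, key=costs.get)
    return k_best, costs[k_best]

if __name__ == "__main__":
    residuals = [0, -1, 2, 0, 1, -3, 0, 0, 4, -1, 0,
                 2, 1, 0, -2, 0, 1, 0, 0, -1, 3]     # one 21-sample block
    k, bits = best_option(residuals)                 # one decision per block
    print(f"chose code option k={k}, {bits} bits for {len(residuals)} samples")
```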
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
cncRNAs: Bi-functional RNAs with protein coding and non-coding functions
Kumari, Pooja; Sampath, Karuna
2015-01-01
For many decades, the major function of mRNA was thought to be to provide protein-coding information embedded in the genome. The advent of high-throughput sequencing has led to the discovery of pervasive transcription of eukaryotic genomes and opened the world of RNA-mediated gene regulation. Many regulatory RNAs have been found to be incapable of protein coding and are hence termed non-coding RNAs (ncRNAs). However, studies in recent years have shown that several previously annotated non-coding RNAs have the potential to encode proteins, and conversely, some coding RNAs have regulatory functions independent of the protein they encode. Such bi-functional RNAs, with both protein coding and non-coding functions, which we term ‘cncRNAs’, have emerged as new players in cellular systems. Here, we describe the functions of some cncRNAs identified from bacteria to humans. Because the functions of many RNAs across genomes remain unclear, we propose that RNAs be classified as coding, non-coding or both only after careful analysis of their functions. PMID:26498036
17 CFR 229.406 - (Item 406) Code of ethics.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Lin, S.
1984-01-01
Several error control coding techniques for reliable satellite communications were investigated to find algorithms for fast decoding of Reed-Solomon codes in terms of dual basis. The decoding of the (255,223) Reed-Solomon code, which is used as the outer code in the concatenated TDRSS decoder, was of particular concern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flacco, A.; Fairchild, M.; Reiche, S.
2004-12-07
The coherent radiation emitted by electrons in high-brightness beam-based experiments is important from the viewpoints of both radiation source development and the understanding and diagnosis of the basic physical processes at work in beam manipulations at high intensity. While much theoretical work has been developed to aid in calculating aspects of this class of radiation, these methods often do not produce accurate information concerning the experimentally relevant aspects of the radiation. At UCLA, we are particularly interested in coherent synchrotron radiation and the related phenomenon of coherent edge radiation, in the context of a fs-beam chicane compression experiment at the BNL ATF. To analyze this and related problems, we have developed a program, termed FieldEye, that acts as an extension to the Lienard-Wiechert-based 3D simulation code TREDI. This program allows the evaluation of electromagnetic fields in the time and frequency domains over an arbitrary planar 2D detector area. We discuss here the implementation of the FieldEye code and give examples of results relevant to the case of the ATF chicane compressor experiment.
Implementation of international code of marketing breast-milk substitutes in China.
Liu, Aihua; Dai, Yaohua; Xie, Xiaohua; Chen, Li
2014-11-01
Breastmilk is the best source of nourishment for infants and young children, and breastfeeding is one of the most effective ways to ensure child health and survival. In May 1981, the World Health Assembly adopted the International Code of Marketing Breast-Milk Substitutes. Since then, several subsequent resolutions have been adopted by the World Health Assembly, which both update and clarify the articles within the International Code (hereinafter the term "Code" refers to both the International Code and all subsequent resolutions). The Code is designed to regulate "inappropriate sales promotion" of breastmilk substitutes and instructs signatory governments to ensure the implementation of its aims through legislation. The Chinese Regulations of the Code were adopted by six government sectors in 1995. However, challenges in the promotion, protection, and support of breastfeeding remain. This study aimed to monitor the implementation of the Code in China. Six cities were selected with considerable geographic coverage. In each city three hospitals and six stores were surveyed. The International Baby Food Action Network Interview Form was adapted, and direct observations were made. Research assistants administered the questionnaires to a random sample of mothers of infants under 6 months old who were in the outpatient department of the hospitals. In total, 291 mothers of infants, 35 stores, 17 hospitals, and 26 companies were surveyed. From the whole sample of 291 mothers, the proportion who reported exclusively breastfeeding their infant was 30.9%; 69.1% of mothers reported feeding their infant with commercially available formula. Regarding violations of the Code, 40.2% of the mothers reported receiving free formula samples. Of these, 76.1% received the free samples in or near hospitals. Among the stores surveyed, 45.7% were found promoting products in a way that violates the Code. Also, 69.0% of the labeling on the formula products did not comply with the regulations set out in the Code. As social and economic development continues, an increasing number of interacting factors curb further progress in breastfeeding. Support from all sectors of society is needed in order to create a social environment that enables the promotion of breastfeeding, in addition to the efforts already made by the healthcare system.
Maund, Emma; Tendal, Britta; Hróbjartsson, Asbjørn; Lundh, Andreas; Gøtzsche, Peter C
2014-06-04
To assess the effects of coding and coding conventions on summaries and tabulations of adverse events data on suicidality within clinical study reports. Systematic electronic search for adverse events of suicidality in tables, narratives, and listings of adverse events in individual patients within clinical study reports. Where possible, for each event we extracted the original term reported by the investigator, the term as coded by the medical coding dictionary, medical coding dictionary used, and the patient's trial identification number. Using the patient's trial identification number, we attempted to reconcile data on the same event between the different formats for presenting data on adverse events within the clinical study report. 9 randomised placebo controlled trials of duloxetine for major depressive disorder submitted to the European Medicines Agency for marketing approval. Clinical study reports obtained from the EMA in 2011. Six trials used the medical coding dictionary COSTART (Coding Symbols for a Thesaurus of Adverse Reaction Terms) and three used MedDRA (Medical Dictionary for Regulatory Activities). Suicides were clearly identifiable in all formats of adverse event data in clinical study reports. Suicide attempts presented in tables included both definitive and provisional diagnoses. Suicidal ideation and preparatory behaviour were obscured in some tables owing to the lack of specificity of the medical coding dictionary, especially COSTART. Furthermore, we found one event of suicidal ideation described in narrative text that was absent from tables and adverse event listings of individual patients. The reason for this is unclear, but may be due to the coding conventions used. Data on adverse events in tables in clinical study reports may not accurately represent the underlying patient data because of the medical dictionaries and coding conventions used. In clinical study reports, the listings of adverse events for individual patients and narratives of adverse events can provide additional information, including original investigator reported adverse event terms, which can enable a more accurate estimate of harms. © Maund et al 2014.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
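For orientation, the sketch below minimizes a dimensionless total Gibbs energy subject to element-balance constraints, which is the general kind of problem TEA solves. It uses scipy's SLSQP rather than TEA's iterative Lagrangian scheme, and the three-species H/O system with its standard potentials is entirely made up for illustration.

```python
# Hedged sketch of Gibbs-free-energy minimization of the kind TEA performs,
# written with scipy rather than TEA's own iterative Lagrangian scheme. The
# species set and the dimensionless standard potentials mu0 below are made-up
# placeholders, not values from TEA or CEA.

import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0 = np.array([-20.0, -25.0, -60.0])      # mu0_i / RT, hypothetical values
A = np.array([[2, 0, 2],                   # H atoms per molecule of each species
              [0, 2, 1]])                  # O atoms per molecule
b = np.array([2.0, 1.0])                   # total elemental abundances (H, O)
P = 1.0                                    # pressure in bar

def gibbs(n):
    """Dimensionless total Gibbs energy G/RT for mole numbers n."""
    n = np.maximum(n, 1e-12)               # keep the logarithms finite
    return float(np.sum(n * (mu0 + np.log(P) + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.full(3, 0.5), method="SLSQP",
               bounds=[(1e-10, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})

for s, ni in zip(species, res.x):
    print(f"{s}: {ni:.4e} mol")            # nearly all H and O end up in H2O
```

With the strongly negative placeholder potential for H2O, the optimizer drives almost all of the hydrogen and oxygen into water, which is the qualitative behaviour one would expect from any equilibrium solver on such inputs.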
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
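The stiffness issue discussed above is easy to demonstrate on a standard stiff kinetics benchmark: an explicit integrator is forced to tiny steps by the fast modes, while an implicit (BDF-type) method is not. The Python sketch below uses scipy's solve_ivp on the classic Robertson system; it illustrates the method-choice argument and is unrelated to the specific solvers (EPISODE, LSODE) named in the report.

```python
# Sketch of why method choice matters for stiff kinetics: the classic Robertson
# reaction system solved with an explicit (RK45) and an implicit stiff (BDF)
# integrator from scipy.

import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Right-hand side of the Robertson kinetics benchmark."""
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
             3.0e7 * y2 ** 2]

y0, t_span = [1.0, 0.0, 0.0], (0.0, 100.0)

for method in ("RK45", "BDF"):
    sol = solve_ivp(robertson, t_span, y0, method=method, rtol=1e-4, atol=1e-8)
    print(f"{method}: success={sol.success}, steps taken={sol.t.size}")
# The explicit method needs orders of magnitude more steps because the fast
# modes force a tiny stable step size; the implicit BDF method takes large steps.
```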
The Astrophysics Source Code Library: Supporting software publication and citation
NASA Astrophysics Data System (ADS)
Allen, Alice; Teuben, Peter
2018-01-01
The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.
Self-consistent modeling of electron cyclotron resonance ion sources
NASA Astrophysics Data System (ADS)
Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.
2004-05-01
In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources precisely: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.
Some practical universal noiseless coding techniques, part 3, module PSl14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
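The built-in predictive preprocessor is described as a 1-D predictor followed by a mapping of prediction errors onto the non-negative integers of a standard source. The Python sketch below shows that general idea; the sign-interleaving map used here is the conventional one and is assumed rather than taken from the module specification.

```python
# Sketch of the kind of predictive preprocessing the module describes: a 1-D
# previous-sample predictor followed by a map from signed prediction errors to
# non-negative integers (small values most likely). The interleaving map is
# the conventional one and is assumed, not taken from the module specification.

def preprocess(samples):
    out = [samples[0]]                # reference sample passed through (assumption)
    prev = samples[0]
    for x in samples[1:]:
        e = x - prev                  # 1-D previous-sample prediction error
        out.append(2 * e if e >= 0 else -2 * e - 1)   # interleave +/- errors
        prev = x
    return out

if __name__ == "__main__":
    data = [100, 101, 101, 99, 102, 102, 103, 100]
    print(preprocess(data))           # mostly small integers, ready for the coder
```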
Glinos, Irene A
2015-12-01
The WHO Global Code of Practice on the International Recruitment of Health Personnel is a landmark in the health workforce migration debate. Yet its principles apply only partly within the European Union (EU) where freedom of movement prevails. The purpose of this article is to explore whether free mobility of health professionals contributes to "equitably strengthen health systems" in the EU. The article proposes an analytical tool (matrix), which looks at the effects of health professional mobility in terms of efficiency and equity implications at three levels: for the EU, for destination countries and for source countries. The findings show that destinations as well as sources experience positive and negative effects, and that the effects of mobility are complex because they change, overlap and are hard to pin down. The analysis suggests that there is a risk that free health workforce mobility disproportionally benefits wealthier Member States at the expense of less advantaged EU Member States, and that mobility may feed disparities as flows redistribute resources from poorer to wealthier EU countries. The article argues that the principles put forward by the WHO Code appear to be as relevant within the EU as they are globally. Copyright © 2015. Published by Elsevier Ireland Ltd.
A computational geometry approach to pore network construction for granular packings
NASA Astrophysics Data System (ADS)
van der Linden, Joost H.; Sufian, Adnan; Narsilio, Guillermo A.; Russell, Adrian R.; Tordesillas, Antoinette
2018-03-01
Pore network construction provides the ability to characterize and study the pore space of inhomogeneous and geometrically complex granular media in a range of scientific and engineering applications. Various approaches to the construction have been proposed; however, subtle implementation details are frequently omitted, open access to source code is limited, and few studies compare multiple algorithms in the context of a specific application. This study presents, in detail, a new pore network construction algorithm, and provides a comprehensive comparison with two other, well-established Delaunay triangulation-based pore network construction methods. Source code is provided to encourage further development. The proposed algorithm avoids the expensive non-linear optimization procedure in existing Delaunay approaches, and is robust in the presence of polydispersity. Algorithms are compared in terms of structural, geometrical and advanced connectivity parameters, focusing on the application of fluid flow characteristics. Sensitivity of the various networks to permeability is assessed through network (Stokes) simulations and finite-element (Navier-Stokes) simulations. Results highlight strong dependencies of pore volume, pore connectivity, throat geometry and fluid conductance on the degree of tetrahedra merging and the specific characteristics of the throats targeted by the merging algorithm. The paper concludes with practical recommendations on the applicability of the three investigated algorithms.
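For readers unfamiliar with the basic construction, the sketch below builds the simplest Delaunay-based pore network: each tetrahedron of the triangulation of sphere centres becomes a pore and each shared face a throat. It deliberately omits the tetrahedra merging and throat-geometry calculations that the paper focuses on, and the "packing" is just random points.

```python
# Minimal sketch of a Delaunay-based pore network: each tetrahedron of the
# triangulation of sphere centres is treated as a pore, and neighbouring
# tetrahedra sharing a face are connected by a throat. Tetrahedra merging and
# the geometric conductance calculations discussed in the paper are omitted;
# the packing here is random, not a real granular assembly.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
centres = rng.random((200, 3))            # stand-in sphere centres in a unit cube

tri = Delaunay(centres)
pores = tri.simplices                     # (n_tet, 4) vertex indices, one pore per tet

throats = set()
for i, nbrs in enumerate(tri.neighbors):  # -1 marks a missing neighbour (hull face)
    for j in nbrs:
        if j != -1:
            throats.add((min(i, j), max(i, j)))

print(f"{len(pores)} pores, {len(throats)} throats")
print(f"mean coordination number = {2 * len(throats) / len(pores):.2f}")
```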
A finite-volume ELLAM for three-dimensional solute-transport modeling
Russell, T.F.; Heberton, C.I.; Konikow, Leonard F.; Hornberger, G.Z.
2003-01-01
A three-dimensional finite-volume ELLAM method has been developed, tested, and successfully implemented as part of the U.S. Geological Survey (USGS) MODFLOW-2000 ground water modeling package. It is included as a solver option for the Ground Water Transport process. The FVELLAM uses space-time finite volumes oriented along the streamlines of the flow field to solve an integral form of the solute-transport equation, thus combining local and global mass conservation with the advantages of Eulerian-Lagrangian characteristic methods. The USGS FVELLAM code simulates solute transport in flowing ground water for a single dissolved solute constituent and represents the processes of advective transport, hydrodynamic dispersion, mixing from fluid sources, retardation, and decay. Implicit time discretization of the dispersive and source/sink terms is combined with a Lagrangian treatment of advection, in which forward tracking moves mass to the new time level, distributing mass among destination cells using approximate indicator functions. This allows the use of large transport time increments (large Courant numbers) with accurate results, even for advection-dominated systems (large Peclet numbers). Four test cases, including comparisons with analytical solutions and benchmarking against other numerical codes, are presented that indicate that the FVELLAM can usually yield excellent results, even if relatively few transport time steps are used, although the quality of the results is problem-dependent.
2013-01-01
Background The objective was to examine feasibility of using hospital discharge register data for studying fire-related injuries. Methods The Finnish National Hospital Discharge Register (FHDR) was the database used to select relevant hospital discharge data to study usability and data quality issues. Patterns of E-coding were assessed, as well as prominent challenges in defining the incidence of injuries. Additionally, the issue of defining the relevant amount of hospital days accounted for in injury care was considered. Results Directly after the introduction of the ICD-10 classification system, in 1996, the completeness of E-coding was found to be poor, but to have improved dramatically around 2000 and thereafter. The scale of the challenges to defining the incidence of injuries was found to be manageable. In counting the relevant hospital days, psychiatric and long-term care were found to be the obvious and possible sources of overestimation. Conclusions The FHDR was found to be a feasible data source for studying fire-related injuries so long as potential challenges are acknowledged and taken into account. Hospital discharge data can be a unique and powerful means for injury research as issues of representativeness and coverage of traditional probability samples can frequently be completely avoided. PMID:23496937
Universal Noiseless Coding Subroutines
NASA Technical Reports Server (NTRS)
Schlutsmeyer, A. P.; Rice, R. F.
1986-01-01
Software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. Purpose of this type of coding to achieve data compression in sense that coded data represents original data perfectly (noiselessly) while taking fewer bits to do so. Routines universal because they apply to virtually any "real-world" data source.
Calculation note for an underground leak which remains underground
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, H.J.
1997-05-20
This calculation note supports the subsurface leak accident scenario which remains subsurface. It is assumed that a single-walled pipe carrying waste from tank 106-C ruptures, releasing the liquid waste into the soil. In this scenario, the waste does not form a surface pool, but remains subsurface. However, above the pipe is a berm, 0.762 m (2.5 ft) high and 2.44 m (8 ft) wide, and the liquid released from the leak rises into the berm. The slurry line, which transports a source term of higher activity than the sluice line, leaks into the soil at a rate of 5% of the maximum flow rate of 28.4 L/s (450 gpm) for twelve hours. The dose recipient was placed a perpendicular distance of 100 m from the pipe. Two source terms were considered, a mitigated and an unmitigated release, as described in section 3.4.1 of UANF-SD-WM-BIO-001, Addendum 1. The unmitigated release consisted of two parts AWF liquid and one part AWF solid. The mitigated release consisted of two parts SST liquid, eighteen parts AWF liquid, nine parts SST solid, and one part AWF solid. The isotopic breakdown of the release in these cases is presented. Two geometries were considered in preliminary investigations: a disk source and a rectangular source. Since the rectangular source results from the assumption that the contamination is wicked up into the berm, only six inches of shielding from uncontaminated earth is present, while the disk source, which remains six inches below the level of the surface of the land, is often shielded by a thick shield due to the slant path to the dose point. For this reason, only the rectangular source was considered in the final analysis. The source model was a rectangle 2.134 m (7 ft) thick, 0.6096 m (2 ft) high, and 130.899 m (131 ft) long. The top and sides of this rectangular source were covered with earth of density 1.6 g/cm3 to a thickness of 15.24 cm (6 in). This soil is modeled as 40% void space. The source consisted of earth of the same density with the void spaces filled with the liquid waste, which added 0.56 g/cm3 to the density. The dose point was 100 m (328 ft) away from the berm in a perpendicular direction off the center. The computer code MICROSKYSHINE was used to calculate the skyshine from the source. This code calculates the exposure rate at the receptor point. The photon spectrum from 2 MeV to 0.15 MeV, obtained from ISOSHLD, was used as input, although this did not differ substantially from the results obtained from using Co, 137mBa, and 154Eu. However, this methodology allowed the bremsstrahlung contribution to be included in the skyshine calculation as well as in the direct radiation calculation.
Safe, Multiphase Bounds Check Elimination in Java
2010-01-28
production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds
2016-01-01
Background Inclusion of information about a patient’s work, industry, and occupation, in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers’ compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for “industry” and “occupation” based on 1990 Bureau of Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. Objective The objective of the study was to evaluate the intercoder reliability of NIOSH’s Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act; to determine the proportion of records that are autocoded using NIOCCS. Methods Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. There are 359 industry and occupation responses that were hand coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate criteria level. Results Kappa was .84 for agreement between hand coders and between the hand coder consensus code versus NIOCCS high confidence level codes for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (ie, to autocode) 31%-36% of entered variables at the “high confidence” level and 49%-58% at the “medium confidence” level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data are “substantial” at the 2-digit level, but only “fair” to “good” at the 4-digit level. Conclusions This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy and will clarify its value as inclusion of these occupational variables in the EHR is promoted. PMID:26878932
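The agreement figures quoted above are Cohen's kappa values. As a minimal, self-contained illustration of how such a statistic is computed between manual and automated 2-digit SOC assignments, the sketch below uses scikit-learn on a fabricated list of codes (not study data).

```python
# Sketch of the agreement statistic used above: Cohen's kappa between manually
# assigned and autocoded 2-digit SOC codes. The code lists are fabricated
# examples, not study data.

from sklearn.metrics import cohen_kappa_score

manual   = ["29", "29", "43", "47", "11", "43", "29", "53", "47", "43"]
autocode = ["29", "29", "43", "47", "43", "43", "29", "53", "47", "11"]

kappa = cohen_kappa_score(manual, autocode)
print(f"Cohen's kappa = {kappa:.2f}")   # 1.0 is perfect agreement, 0 is chance
```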
Application of CFD (Fluent) to LNG spills into geometrically complex environments.
Gavelli, Filippo; Bullister, Edward; Kytomaa, Harri
2008-11-15
Recent discussions on the fate of LNG spills into impoundments have suggested that the commonly used combination of SOURCE5 and DEGADIS to predict the flammable vapor dispersion distances is not accurate, as it does not account for vapor entrainment by wind. SOURCE5 assumes the vapor layer to grow upward uniformly in the form of a quiescent saturated gas cloud that ultimately spills over impoundment walls. The rate of spillage is then used as the source term for DEGADIS. A more rigorous approach to predict the flammable vapor dispersion distance is to use a computational fluid dynamics (CFD) model. CFD codes can take into account the physical phenomena that govern the fate of LNG spills into impoundments, such as the mixing between air and the evaporated gas. Before a CFD code can be proposed as an alternate method for the prediction of flammable vapor cloud distances, it has to be validated with proper experimental data. This paper describes the use of Fluent, a widely-used commercial CFD code, to simulate one of the tests in the "Falcon" series of LNG spill tests. The "Falcon" test series was the only series that specifically addressed the effects of impoundment walls and construction obstructions on the behavior and dispersion of the vapor cloud. Most other tests, such as the Coyote and the Burro series, involved spills onto water and relatively flat ground. The paper discusses the critical parameters necessary for a CFD model to accurately predict the behavior of a cryogenic spill in a geometrically complex domain, and presents comparisons between the gas concentrations measured during the Falcon-1 test and those predicted using Fluent. Finally, the paper discusses the effect vapor barriers have in containing part of the spill thereby shortening the ignitable vapor cloud and therefore the required hazard area. This issue was addressed by comparing the Falcon-1 simulation (spill into the impoundment) with the simulation of an identical spill without any impoundment walls, or obstacles within the impoundment area.
Roland, Carl L; Lake, Joanita; Oderda, Gary M
2016-12-01
We conducted a systematic review to evaluate the worldwide English-language literature on humans published from 2009 to 2014 on the prevalence of opioid misuse/abuse in retrospective databases where International Classification of Diseases (ICD) codes were used. Inclusion criteria for the studies were use of a retrospective database; measurement of abuse, dependence, and/or poisoning using ICD codes; stated or derivable prevalence; and a documented time frame. A meta-analysis was not performed. A qualitative narrative synthesis was used, and 16 studies were included for data abstraction. ICD code use varies; 10 studies used ICD codes that encompassed all three terms: abuse, dependence, or poisoning. Eight studies limited determination of misuse/abuse to an opioid user population. Abuse prevalence among opioid users in commercial databases using all three terms of ICD codes varied depending on the opioid: 21 per 1000 persons (reformulated extended-release oxymorphone; 2011-2012) to 113 per 1000 persons (immediate-release opioids; 2010-2011). Abuse prevalence in general populations using all three ICD code terms ranged from 1.15 per 1000 persons (commercial; 6 months 2010) to 8.7 per 1000 persons (Medicaid; 2002-2003). Prevalence increased over time. When similar ICD codes are used, the highest prevalence is in US government-insured populations. Limiting the population to continuous opioid users increases prevalence. Prevalence varies depending on the ICD codes used, the population, the time frame, and the years studied. Researchers using ICD codes to determine opioid abuse prevalence need to be aware of these cautions and limitations.
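To make the prevalence calculation concrete, the sketch below flags claims whose diagnosis codes fall in commonly cited ICD-9-CM families for opioid dependence, abuse, and poisoning and reports cases per 1000 enrollees. The claims table, the enrollee count, and the exact code list are placeholders and should be verified against a coding manual before any real analysis.

```python
# Minimal sketch (fabricated claims, hedged code list): estimating opioid
# misuse/abuse prevalence per 1000 enrollees from a claims table by flagging
# ICD-9-CM diagnosis families for opioid dependence, abuse, and poisoning.
# Verify the code list against the coding manual before any real analysis.

import pandas as pd

claims = pd.DataFrame({
    "member_id": [1, 1, 2, 3, 3, 4, 5],
    "dx_code":   ["30400", "4659", "3055", "25000", "96500", "4019", "30550"],
})
n_enrollees = 1000                       # denominator population (made up)

pattern = r"3040|3055|9650"              # dependence, abuse, poisoning families
flagged = claims[claims["dx_code"].str.match(pattern)]
cases = flagged["member_id"].nunique()   # count each member once

print(f"prevalence = {1000 * cases / n_enrollees:.1f} per 1000 enrollees")
```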
Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei
2018-05-15
Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most lncRNAs remain functionally uncharacterized at present. Computational approaches provide new insight into the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input of the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and the overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
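A minimal sketch of the level-by-level strategy described above is given below: one multi-label neural network per GO level, trained on topological features. The features, label matrices, and layer sizes are random placeholders, and the sketch omits the hierarchy-consistency step a real predictor would need.

```python
# Hedged sketch of the level-by-level strategy: one multi-label neural network
# per GO level, trained on topological features of the lncRNA similarity
# network. Features and labels here are random placeholders; the real system
# propagates predictions consistently down the hierarchy, which this omits.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_lncrnas, n_features = 300, 32
X = rng.normal(size=(n_lncrnas, n_features))       # network topology features

# Binary GO-term indicator matrices, one per ontology level (made-up sizes).
levels = {1: 5, 2: 12, 3: 20}                      # level -> number of GO terms
Y = {lvl: (rng.random((n_lncrnas, k)) < 0.2).astype(int)
     for lvl, k in levels.items()}

models = {}
for lvl, y in Y.items():                           # train the levels one by one
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X, y)                                  # multi-label fit per level
    models[lvl] = clf

new_x = X[:1]                                      # annotate one lncRNA
for lvl, clf in models.items():
    print(f"level {lvl}: predicted terms -> {np.flatnonzero(clf.predict(new_x)[0])}")
```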
Analysis of the influence of the heat transfer phenomena on the late phase of the ThAI Iod-12 test
NASA Astrophysics Data System (ADS)
Gonfiotti, B.; Paci, S.
2014-11-01
Iodine is one of the major contributors to the source term during a severe accident in a nuclear power plant because of its volatility and high radiological consequences. Therefore, large efforts have been made to describe iodine behaviour during an accident, especially in the containment system. Owing to the lack of experimental data, many attempts have been made in recent years to fill the gaps in the knowledge of iodine behaviour. In this framework, two tests (ThAI Iod-11 and Iod-12) were carried out inside a multi-compartment steel vessel. A quite complex transient characterizes these two tests; therefore, they are also suitable for thermal-hydraulic benchmarks. The two tests were originally released for a benchmark exercise during the SARNET2 EU Project. At the end of this benchmark a report covering the main findings was issued, stating that the common codes employed in severe accident (SA) studies were able to simulate the tests, but with large discrepancies. The present work concerns the application of the new versions of the ASTEC and MELCOR codes with the aim of carrying out a new code-to-code comparison against the ThAI Iod-12 experimental data, focusing on the influence of the heat exchanges with the outer environment, which appears to be one of the most challenging issues to cope with.
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Held, Eric D.
2015-09-01
Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.
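The mapping step can be pictured with a much simpler setting than the paper's finite-element/pseudospectral grids: the hedged sketch below deposits the power carried along a discrete ray onto a uniform structured (R, Z) grid and converts it to a volumetric source density. The ray trajectory, grid, and power values are invented for illustration; the actual NIMROD/GENRAY coupling uses computational-geometry searches on the extended-MHD mesh.

```python
# Minimal sketch: deposit RF power carried along a discrete ray trajectory onto a
# structured 2D (R, Z) grid to form a volumetric source term. Nearest-cell deposition
# on a uniform grid only illustrates the idea; ray points, power values and geometry
# are synthetic.
import numpy as np

r_edges = np.linspace(1.0, 2.0, 41)     # grid cell edges in R [m] (assumed geometry)
z_edges = np.linspace(-0.5, 0.5, 41)    # grid cell edges in Z [m]
source = np.zeros((len(r_edges) - 1, len(z_edges) - 1))

# One synthetic ray: positions along the trajectory and the power deposited per point.
ray_r = np.linspace(1.9, 1.4, 200)
ray_z = 0.1 * np.sin(np.linspace(0, 3 * np.pi, 200))
ray_dpower = np.full(200, 1.0e3 / 200)  # [W] deposited per ray point (assumed)

ir = np.searchsorted(r_edges, ray_r) - 1
iz = np.searchsorted(z_edges, ray_z) - 1
inside = (ir >= 0) & (ir < source.shape[0]) & (iz >= 0) & (iz < source.shape[1])
np.add.at(source, (ir[inside], iz[inside]), ray_dpower[inside])

# Convert deposited power per cell to a power density [W/m^3] using toroidal cell volumes.
dr = np.diff(r_edges)[:, None]
dz = np.diff(z_edges)[None, :]
r_centres = 0.5 * (r_edges[:-1] + r_edges[1:])[:, None]
source_density = source / (2 * np.pi * r_centres * dr * dz)
```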
SNODOG Glossary: Part 1, Introduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, C.R.
The SNODOG Glossary is used by the DOE-supported life-span beagle studies to describe medical observations in a standardized format. It is an adaptation of the human medical glossary, SNOMED, which lists 107,165 terms. Each of the five laboratories, Argonne National Laboratory, the Inhalation Toxicology Research Institute, the Pacific Northwest Laboratory, the University of California at Davis, and the University of Utah, has selected an appropriate subset from the published SNOMED glossary and added beagle and research-specific terms. The National Radiobiology Archives is the coordinator of these enhancements, and periodically distributes SNODOG to the respective laboratories. Information donated by Colorado State University and Oak Ridge National Laboratory has been related to SNODOG and is available in a standardized format. This document is designed for the database manager and the scientist who will be managing or coding medical observations. It is also designed for the scientist analyzing coded information. The document includes: an overview of the NRA and the SNODOG glossary, a discussion of hardware requirements, a review of the SNODOG code structure and printed lists of the 4,770 terms which have been used at least once. Instructions for obtaining electronic copies of the glossary and for nominating additional terms are provided. This document describes the origins and structure of the SNODOG codes, explains code usage at each participating institution, and presents a usage frequency tabulation of the terms for neoplasia. A diskette or magnetic tape containing 15,641 SNODOG codes and translations is available on request.
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency, and data aggregation has a significant impact on the energy efficiency of such networks. With massive deployment of sensor nodes and a limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages for data aggregation in wireless sensor networks: it is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research a progressive transmission and decoding scheme for Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio is achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated, and there is no waste of energy in the transmission; even when there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput and low-energy-consumption data collection for wireless sensor networks.
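As a rough illustration of the rate-compatible puncturing that underlies RCPC codes (not the paper's RSC/Viterbi distributed-coding scheme), the sketch below encodes a bit stream with a rate-1/2 convolutional mother code and then punctures it to higher rates; the generator polynomials and puncturing patterns are generic textbook choices, assumed here only for demonstration.

```python
# Minimal sketch of rate-compatible puncturing of a rate-1/2 convolutional mother code
# (constraint length 3, generators 7 and 5 octal). The puncturing patterns are
# illustrative, not those used in the paper.
import numpy as np

def conv_encode_r12(bits):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal, zero initial state."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)   # generator 7 = 1 + D + D^2
        out.append(b ^ s2)        # generator 5 = 1 + D^2
        s1, s2 = b, s1
    return np.array(out, dtype=np.uint8)

def puncture(coded, pattern):
    """Keep coded bits where the column-wise puncturing pattern is 1."""
    flat = np.asarray(pattern, dtype=bool).T.reshape(-1)   # interleave the two streams
    mask = np.resize(flat, coded.size)
    return coded[mask]

bits = np.random.default_rng(1).integers(0, 2, 100)
mother = conv_encode_r12(bits)                      # rate 1/2: 200 coded bits
r23 = puncture(mother, [[1, 1], [1, 0]])            # rate 2/3: drop every 4th coded bit
r34 = puncture(mother, [[1, 1, 0], [1, 0, 1]])      # rate 3/4
print(len(mother), len(r23), len(r34))              # 200, 150, 134 coded bits
```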
VizieR Online Data Catalog: ynogkm: code for calculating time-like geodesics (Yang+, 2014)
NASA Astrophysics Data System (ADS)
Yang, X.-L.; Wang, J.-C.
2013-11-01
Here we present the source file for a new public code named ynogkm, aimed at fast calculation of time-like geodesics in a Kerr-Newman spacetime. In the code the four Boyer-Lindquist coordinates and the proper time are expressed semi-analytically as functions of a parameter p, i.e., r(p), μ(p), φ(p), t(p), and σ(p), using Weierstrass' and Jacobi's elliptic functions and integrals. All of the elliptic integrals are computed by Carlson's elliptic integral method, which guarantees the fast speed of the code. The source Fortran file ynogkm.f90 contains four modules: constants, rootfind, ellfunction, and blcoordinates. (3 data files).
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations yields only extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme compared with conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.
NASA Astrophysics Data System (ADS)
Labate, Demetrio; Pieri, Silvano; Pili, Paolo
1994-09-01
The Interferometric Analysis Computer Code is a program developed to evaluate the performance of Fourier transform spectrometers. It was carried out in the frame of the IASI program. It is a stand-alone code which can use as input the optical system data set up by optical design software. The interference phenomenon is evaluated using the optical data of both interferometer arms by means of real ray tracing. The mathematical model used to obtain the output signal is based on the concept that, for a monochromatic source, this signal is quite similar to an ideal sine. This allows the calculation of three functions describing the difference between the ideal interferogram and the simulated one: the average level of the output irradiance, the modulation, and the phase of the oscillating term, each as a function of the optical path difference. These functions are quite smooth and therefore easily represented by fitting, so a good representation requires a number of points much smaller than that needed to represent an interferogram correctly. This yields a great advantage in computation time, especially when many signals have to be added to simulate the effect of a detector covering a large field of view. Furthermore, the possibility of introducing different kinds of manufacturing or assembly errors into the optical data files allows the sensitivity of the optical components with respect to these aspects to be estimated, making the calculation of an exhaustive tolerance budget possible.
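A hedged sketch of the representation described above: the interferogram is rebuilt from three slowly varying functions of optical path difference (mean level, modulation, phase), each fitted from a handful of sampled points. The functional forms, wavenumber, and sampling below are invented; the real code derives these functions from ray-traced optical data.

```python
# Minimal sketch: a simulated interferogram written as a mean level A(x), a modulation
# M(x) and a phase error phi(x), all slowly varying with optical path difference x, so
# that each can be fitted from few samples. The functional forms are illustrative only.
import numpy as np

sigma = 800.0                          # wavenumber of the monochromatic source [cm^-1] (assumed)
x_coarse = np.linspace(-2.0, 2.0, 15)  # few OPD samples where the three functions are known [cm]

A_c   = 1.0 + 0.02 * x_coarse**2             # average irradiance level (illustrative)
M_c   = 0.95 - 0.03 * np.abs(x_coarse)       # modulation (contrast) loss
phi_c = 0.05 * x_coarse                      # phase of the oscillating term [rad]

# Low-order polynomial fits stand in for the smooth representation of each function.
A_fit, M_fit, phi_fit = (np.polynomial.Polynomial.fit(x_coarse, y, deg=3)
                         for y in (A_c, M_c, phi_c))

# Reconstruct the interferogram on a fine OPD grid from the three fitted functions.
x = np.linspace(-2.0, 2.0, 20000)
interferogram = A_fit(x) + M_fit(x) * np.cos(2 * np.pi * sigma * x + phi_fit(x))
```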
Distributed single source coding with side information
NASA Astrophysics Data System (ADS)
Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.
2004-01-01
In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, the side information is created from a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols, where each symbol represents a particular edge shape. The codebook is image independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on this concept that demonstrates superior performance in the very low bit rate regime.
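The generic idea of source coding with decoder side information can be illustrated with a scalar toy example (this is not the authors' edge-codebook construction): the encoder sends only a coset index of a quantized value, and the decoder resolves the ambiguity using correlated side information.

```python
# Toy illustration of source coding with side information at the decoder (coset binning).
# The encoder quantizes x and sends only the coset (bin) index; the decoder resolves the
# ambiguity using correlated side information y. All parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
step, n_cosets = 0.25, 4                     # quantizer step and number of cosets (assumed)

x = rng.normal(size=10_000)                  # source
y = x + 0.05 * rng.normal(size=x.size)       # decoder side information, correlated with x

q_index = np.round(x / step).astype(int)     # fine quantization index
coset = np.mod(q_index, n_cosets)            # transmitted: log2(n_cosets) bits per sample

# Decoder: among the reconstruction levels in the received coset, pick the one closest to y.
base = np.round(y / step).astype(int)
candidates = base[:, None] + np.arange(-n_cosets, n_cosets + 1)[None, :]
valid = np.mod(candidates, n_cosets) == coset[:, None]
dist = np.abs(candidates * step - y[:, None]) + np.where(valid, 0.0, np.inf)
x_hat = candidates[np.arange(x.size), np.argmin(dist, axis=1)] * step

print("MSE with side information:", np.mean((x - x_hat) ** 2))
```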
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakao, N.; /SLAC; Taniguchi, S.
Neutron energy spectra were measured behind the lateral shield of the CERF (CERN-EU High Energy Reference Field) facility at CERN with a 120 GeV/c positive hadron beam (a mixture of mainly protons and pions) on a cylindrical copper target (7-cm diameter by 50-cm long). An NE213 organic liquid scintillator (12.7-cm diameter by 12.7-cm long) was located at various longitudinal positions behind shields of 80- and 160-cm thick concrete and 40-cm thick iron. The measurement locations cover an angular range with respect to the beam axis between 13° and 133°. Neutron energy spectra in the energy range between 32 MeV and 380 MeV were obtained by unfolding the measured pulse height spectra with the detector response functions, which have been verified in the neutron energy range up to 380 MeV in separate experiments. Since the source term and experimental geometry in this experiment are well characterized and simple, and results are given in the form of energy spectra, these experimental results are very useful as benchmark data to check the accuracies of simulation codes and nuclear data. Monte Carlo simulations of the experimental set up were performed with the FLUKA, MARS and PHITS codes. Simulated spectra for the 80-cm thick concrete often agree within the experimental uncertainties. On the other hand, for the 160-cm thick concrete and the iron shield, differences are generally larger than the experimental uncertainties, yet within a factor of 2. Based on source term simulations, the observed discrepancies among simulations of spectra outside the shield can be partially explained by differences in the high-energy hadron production in the copper target.
A European classification of services for long-term care—the EU-project eDESDE-LTC
Weber, Germain; Brehmer, Barbara; Zeilinger, Elisabeth; Salvador-Carulla, Luis
2009-01-01
Purpose and theory The eDESDE-LTC project aims at developing an operational system for coding, mapping and comparing services for long-term care (LTC) across the EU. The project's strategy is to improve EU listing of and access to relevant sources of healthcare information via the development of semantic interoperability in eHealth (coding and listing of services for LTC); to increase access to relevant sources of information on LTC services and to improve linkages between national and regional websites; and to foster cooperation with international organizations (OECD). Methods This operational system will include a standard classification of the main types of care for persons with LTC needs and an instrument for mapping and standard description of services. These instruments are based on previous classification systems for mental health services (ESMS), disability services (DESDE) and ageing services (DESDAE). A Delphi panel made up of seven partners developed a DESDE-LTC beta version, which was translated into six languages. The feasibility of DESDE-LTC is being tested in six countries using national focal groups. The final version will then be developed by the Delphi panel, and a webpage, training material and a course will be produced. Results and conclusions The eDESDE-LTC system will be piloted in two EU countries (Spain and Bulgaria). Evaluation will focus primarily on usability and impact analysis. Discussion The added value of this project is related to the right of “having access to high-quality healthcare when and where it is needed” by EU citizens. Due to semantic variability and service complexity, existing national listings of services do not provide an adequate framework for patient mobility.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final Technical Report, February 1981 - July 1983. [OCR-damaged record; recoverable text fragment: "... the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by ..."]
Simulations of the plasma dynamics in high-current ion diodes
NASA Astrophysics Data System (ADS)
Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.
Our time-implicit fluid/Particle-In-Cell (PIC) code DYNAID [1] is applied to problems relevant for applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test-bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.
Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part I. User’s Manual.
1979-09-01
[OCR-damaged record; recoverable fragments of the user's manual table of contents and body text: command entries with page numbers (RT, PG, GP, CG, SG, AM, PR, NP, ...), e.g. "RT: Translate and/or Rotate Coordinates", "SG: Source Geometry Input", "TO: Test Data Generation Options", "UN: Units of Input"; "... these points and confirm the validity of the solution. The source presently considered in the computer code is an electric ..."]
Modeling TAE Response To Nonlinear Drives
NASA Astrophysics Data System (ADS)
Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin
2012-10-01
Experiments have detected toroidal Alfven eigenmodes (TAE) with signals at twice the eigenfrequency. These harmonic modes arise from the second-order perturbation in amplitude of the MHD equations for the linear modes that are driven by the energetic-particle free energy. The structure of TAEs in realistic geometry can be calculated by generalizing the linear numerical solver (the AEGIS package). We have inserted all the nonlinear MHD source terms, which are quadratic in the linear amplitudes, into the AEGIS code. We then invert the linear MHD equation at the second harmonic frequency. The ratio of the amplitudes of the first and second harmonic terms is used to determine the internal field amplitude. The spatial structures of the energy and density distributions are investigated. The results can be directly employed to compare with experiments and determine the Alfven wave amplitude in the plasma region.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal over a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
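To make the joint role of the outage CSNR and the power split concrete, the toy simulation below applies the HDA idea to a memoryless Gaussian source over a bandwidth-matched Rayleigh channel: the digital rate is tied to an assumed outage CSNR, the quantization error is sent in analog with the remaining power, and a grid search picks the pair minimising the average MMSE. All modelling choices here are drastic simplifications of the paper's setup (which treats sources with memory, bandwidth expansion, and interference between layers).

```python
# Toy numerical sketch of the HDA idea for a memoryless Gaussian source over a
# bandwidth-matched Rayleigh fading channel. The digital rate is tied to an assumed
# outage CSNR, the quantization error is sent in analog with the remaining power, and
# a grid search over (power split, outage CSNR) minimises the average MMSE. Inter-layer
# interference and entropy coding are ignored; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)                      # unit-variance Gaussian source
csnr_mean = 10.0                                 # mean CSNR (linear), assumed
gamma = rng.exponential(csnr_mean, size=2_000)   # Rayleigh fading -> exponential CSNR samples

def quantize(v, n_levels):
    """Uniform scalar quantizer on [-4, 4]; returns the reconstruction values."""
    edges = np.linspace(-4.0, 4.0, n_levels + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(v, edges) - 1, 0, n_levels - 1)
    return centres[idx]

best = (np.inf, None)
for rho in np.linspace(0.5, 0.95, 10):           # fraction of power given to the digital layer
    for gamma0 in np.linspace(1.0, 30.0, 30):    # design (outage) CSNR of the channel code
        rate = np.log2(1.0 + rho * gamma0)       # digital rate supported at the outage CSNR
        xq = quantize(x, max(2, int(2 ** rate)))
        var_e = np.var(x - xq)
        # Analog layer: linear (MMSE) transmission of the quantization error.
        analog_mmse = var_e / (1.0 + (1.0 - rho) * gamma)
        # Digital layer decodes only when the instantaneous CSNR exceeds the outage value.
        distortion = np.where(gamma >= gamma0, analog_mmse, np.var(x))
        ammse = distortion.mean()
        if ammse < best[0]:
            best = (ammse, (rho, gamma0))

print("minimum AMMSE %.4f at (rho, outage CSNR) = %s" % best)
```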
Relay selection in energy harvesting cooperative networks with rateless codes
NASA Astrophysics Data System (ADS)
Zhu, Kaiyan; Wang, Fei
2018-04-01
This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information after collecting a number of coded bits that marginally surpasses the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
The FORTRAN static source code analyzer program (SAP) user's guide, revision 1
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Eslinger, S.
1982-01-01
The FORTRAN Static Source Code Analyzer Program (SAP) User's Guide (Revision 1) is presented. SAP is a software tool designed to assist Software Engineering Laboratory (SEL) personnel in conducting studies of FORTRAN programs. SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. This document is a revision of the previous SAP user's guide, Computer Sciences Corporation document CSC/TM-78/6045. SAP Revision 1 is the result of program modifications to provide several new reports, additional complexity analysis, and recognition of all statements described in the FORTRAN 77 standard. This document provides instructions for operating SAP and contains information useful in interpreting SAP output.
The Need for Vendor Source Code at NAS. Revised
NASA Technical Reports Server (NTRS)
Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The Numerical Aerodynamic Simulation (NAS) Facility has a long-standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain a smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS needs to maintain support capabilities when vendors are not able to; to diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel such support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Friedrich, Rainer
2015-05-01
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
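The approximate-deconvolution step can be illustrated in one dimension with a truncated van Cittert series and a Gaussian test filter; this is only a schematic, not the filter or code used in the simulations reported above.

```python
# Minimal 1D illustration of approximate deconvolution (a truncated van Cittert series):
# the unfiltered field is approximated as u* = sum_{k=0..N} (I - G)^k applied to the
# filtered field. A periodic Gaussian test filter stands in for the LES filter.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def G(u, sigma=8.0):
    """Test filter: periodic Gaussian, width of a few grid spacings (assumed)."""
    return gaussian_filter1d(u, sigma, mode="wrap")

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
u = np.sin(x) + 0.4 * np.sin(7 * x) + 0.2 * np.sin(15 * x)   # synthetic "exact" field
u_bar = G(u)                                                  # resolved (filtered) field

N = 5                               # number of deconvolution terms beyond the zeroth
term, u_star = u_bar.copy(), u_bar.copy()
for _ in range(N):
    term = term - G(term)           # apply (I - G) to the previous term
    u_star = u_star + term          # accumulate the van Cittert series

print("rms error of filtered field   :", np.sqrt(np.mean((u - u_bar) ** 2)))
print("rms error of deconvolved field:", np.sqrt(np.mean((u - u_star) ** 2)))
```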
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Somnath, E-mail: sghosh@aero.iitkgp.ernet.in; Friedrich, Rainer
2015-05-15
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour at 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.
Li, Le; Yip, Kevin Y
2016-12-15
Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a newer version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.
Distributed and Collaborative Software Analysis
NASA Astrophysics Data System (ADS)
Ghezzi, Giacomo; Gall, Harald C.
Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of
Power Balance and Impurity Studies in TCS
NASA Astrophysics Data System (ADS)
Grossnickle, J. A.; Pietrzyk, Z. A.; Vlases, G. C.
2003-10-01
A "zero-dimension" power balance model was developed based on measurements of absorbed power, radiated power, absolute D_α, temperature, and density for the TCS device. Radiation was determined to be the dominant source of power loss for medium to high density plasmas. The total radiated power was strongly correlated with the Oxygen line radiation. This suggests Oxygen is the dominant radiating species, which was confirmed by doping studies. These also extrapolate to a Carbon content below 1.5%. Determining the source of the impurities is an important question that must be answered for the TCS upgrade. Preliminary indications are that the primary sources of Oxygen are the stainless steel end cones. A Ti gettering system is being installed to reduce this Oxygen source. A field line code has been developed for use in tracking where open field lines terminate on the walls. Output from this code is also used to generate grids for an impurity tracking code.
Status report on the development of a tubular electron beam ion source
NASA Astrophysics Data System (ADS)
Donets, E. D.; Donets, E. E.; Becker, R.; Liljeby, L.; Rensfelt, K.-G.; Beebe, E. N.; Pikin, A. I.
2004-05-01
The theoretical estimations and numerical simulations of tubular electron beams in both beam and reflex modes of source operation, as well as off-axis ion extraction from a tubular electron beam ion source (TEBIS), are presented. Numerical simulations have been done using the IGUN and OPERA-3D codes. Simulations with the IGUN code show that the effective electron current can reach more than 100 A with a beam current density of about 300-400 A/cm2 and electron energies in the region of several keV, with a corresponding increase in the ion output. Off-axis ion extraction from the TEBIS, being a non-axially symmetric problem, was simulated with the OPERA-3D (SCALA) code. The conceptual design and main parameters of new tubular sources which are under consideration at JINR, MSL, and BNL are based on these simulations.
SDM - A geodetic inversion code incorporating with layered crust structure and curved fault geometry
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Diao, Faqi; Hoechner, Andreas
2013-04-01
Currently, inversion of geodetic data for earthquake fault ruptures is mostly based on a uniform half-space earth model because of its closed-form Green's functions. However, the layered structure of the crust can significantly affect the inversion results. The other effect, which is often neglected, is related to the curved fault geometry. In particular, the fault planes of most mega-thrust earthquakes vary their dip angle with depth from a few to several tens of degrees, and the strike directions of many large earthquakes are also variable. For simplicity, such curved fault geometry is usually approximated by several connected rectangular segments, leading to an artificial loss of slip resolution and data fit. In this presentation, we introduce a free FORTRAN code that incorporates the layered crust structure and curved fault geometry in a user-friendly way. The name SDM stands for Steepest Descent Method, the iterative algorithm used for the constrained least-squares optimization. The new code can be used for joint inversion of different datasets, which may include systematic offsets, as most geodetic data are obtained from relative measurements. These offsets are treated as unknowns to be determined simultaneously with the slip unknowns. In addition, a-priori and physical constraints are considered. The a-priori constraint includes the upper limit of the slip amplitude and the variation range of the slip direction (rake angle) defined by the user. The physical constraint is needed to obtain a smooth slip model, which is realized through a smoothing term to be minimized together with the misfit to the data. In contrast to most previous inversion codes, the smoothing can optionally be applied to slip or to stress drop. The code works with an input file, a well-documented example of which is provided with the source code. Application examples are demonstrated.
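A schematic of the kind of optimisation SDM performs is given below: a bounded, smoothness-regularised least-squares problem solved by projected steepest descent. The Green's-function matrix, data, smoothing operator, and bounds are synthetic placeholders, so this illustrates the problem formulation rather than the SDM source code itself.

```python
# Schematic of the constrained, smoothed least-squares problem solved by a
# steepest-descent inversion: minimise |G m - d|^2 + alpha^2 |L m|^2 subject to bounds
# on the slip m. G (Green's functions), d (data), L and the bounds are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_patches = 120, 60
G = rng.normal(size=(n_data, n_patches))                  # placeholder Green's functions
m_true = np.sin(np.linspace(0.0, np.pi, n_patches))       # smooth non-negative slip model
d = G @ m_true + 0.05 * rng.normal(size=n_data)           # synthetic data with noise

# First-difference operator as the smoothing (roughness) term applied to the slip.
L = np.eye(n_patches, k=1)[:-1] - np.eye(n_patches)[:-1]
alpha, m_max = 1.0, 2.0                                   # smoothing weight, upper slip bound

step = 1.0 / (np.linalg.norm(G, 2) ** 2 + alpha**2 * np.linalg.norm(L, 2) ** 2)
m = np.zeros(n_patches)
for _ in range(2000):
    grad = G.T @ (G @ m - d) + alpha**2 * (L.T @ (L @ m))
    m = np.clip(m - step * grad, 0.0, m_max)              # project onto the a-priori bounds

print("data misfit :", np.linalg.norm(G @ m - d))
print("model error :", np.linalg.norm(m - m_true))
```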
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated by automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and that accurate parameters could be retrieved. Although the nudging terms transform into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
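An identical-twin toy problem illustrates why the nudging terms help: the scalar model below is nudged toward synthetic observations, and a single process parameter is recovered by minimising the window-mean misfit. A quasi-Newton optimiser with finite-difference gradients stands in for PlaSim's automatically generated adjoint; the model, nudging strength, and window length are arbitrary choices for illustration.

```python
# Toy identical-twin sketch: a scalar model with one process parameter p is nudged
# toward observations, and p is estimated by minimising the window-mean misfit.
# A finite-difference quasi-Newton optimiser stands in for the adjoint.
import numpy as np
from scipy.optimize import minimize

dt, n_steps, gamma = 0.1, 3650, 0.2     # time step, ~1-year window, nudging strength (assumed)
forcing = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_steps) * dt / 36.5)

def run(p, obs=None):
    """Integrate dx/dt = -p*x + forcing (+ gamma*(obs - x) nudging when obs is given)."""
    x, traj = 0.0, np.empty(n_steps)
    for k in range(n_steps):
        tend = -p * x + forcing[k]
        if obs is not None:
            tend += gamma * (obs[k] - x)   # nudging suppresses growth of unstable directions
        x += dt * tend
        traj[k] = x
    return traj

p_true = 0.7
obs = run(p_true) + 0.01 * np.random.default_rng(0).normal(size=n_steps)

cost = lambda p: np.mean((run(p[0], obs) - obs) ** 2)
result = minimize(cost, x0=[0.2], method="L-BFGS-B", bounds=[(0.05, 3.0)])
print("estimated p = %.3f (true %.3f)" % (result.x[0], p_true))
```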
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balsa Terzic, Gabriele Bassi
In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
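The truncated-cosine-transform variant can be sketched in a few lines: particles are binned onto a grid and only the lowest cosine modes of the binned density are kept, which removes much of the binning noise. The particle distribution, grid, and mode cutoff below are illustrative and unrelated to the CSR code discussed above; the thresholded-wavelet variant would instead zero small wavelet coefficients.

```python
# Minimal sketch of grid-based density estimation with a truncated cosine transform:
# bin the particles onto a grid, transform, keep only the lowest modes, and invert.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
particles = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(200_000, 2))  # beam sample

# Bin the particles onto a finite grid (the raw histogram is noisy at this resolution).
hist, xedges, yedges = np.histogram2d(particles[:, 0], particles[:, 1],
                                      bins=128, range=[[-4, 4], [-2, 2]], density=True)

# Truncated cosine transform: keep only the lowest n_keep x n_keep modes.
n_keep = 16
coeffs = dctn(hist, norm="ortho")
mask = np.zeros_like(coeffs)
mask[:n_keep, :n_keep] = 1.0
density_tfct = idctn(coeffs * mask, norm="ortho")
```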
Nature Research journals reproducibility policies and initiatives in the Earth sciences
NASA Astrophysics Data System (ADS)
VanDecar, J. C.
2016-12-01
The Nature Research journals strongly support the long-term endeavour by funders, institutions, researchers and publishers toward increasing the reliability and reproducibility of published research. In the Earth, space and environmental sciences this mainly takes the form of ensuring that underlying data and methods in each manuscript are made as transparent and accessible as possible. Supporting data must be made available to editors and peer reviewers at the time of submission for the purposes of evaluating each manuscript. But the preferred way to share data sets is via public repositories. When appropriate community repositories are available, we strongly encourage authors to deposit their data prior to publication. We also now require that a statement be included in each manuscript, under the heading "Data availability", indicating whether and how the data can be accessed, including any restrictions to access. To allow authors to describe their experimental design and methods in as much detail as necessary, the Nature Research journals have effectively abolished space restrictions on online methods sections. To further increase transparency, we also encourage authors to provide tables of the data behind graphs and figures as Source Data. This builds on our established data-deposition policy for specific experiments and large data sets. The Source Data is made available directly from the figure legend, for easy access. We also require that details of geological samples and palaeontological specimens include clear provenance information to ensure full transparency of the research methods. Palaeontological and type specimens must be deposited in a recognised museum or collection to permit free access by other researchers in perpetuity. Finally, authors must make available upon request, to editors and reviewers, any previously unreported custom computer code used to generate results that are reported in the paper and central to its main claims. For all studies using custom code that is deemed central to the conclusions, a statement must be included, under the heading "Code availability", indicating whether and how the code can be accessed, including any restrictions to access.
Plasma Separation Process: Betacell (BCELL) code: User's manual. [Bipolar barrier junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taherzadeh, M.
1987-11-13
The emergence of clearly defined applications for (small or large) amounts of long-life and reliable power sources has given the design and production of betavoltaic systems a new life. Moreover, because of the availability of the Plasma Separation Program (PSP) at TRW, it is now possible to separate the most desirable radioisotopes for betacell power generating devices. A computer code, named BCELL, has been developed to model the betavoltaic concept by utilizing the available up-to-date source/cell parameters. In this program, attempts have been made to determine the betacell energy device maximum efficiency, the degradation due to the emitting source radiation, and source/cell lifetime power reduction processes. Additionally, a comparison is made between Schottky and PN junction devices for betacell battery design purposes. Certain computer code runs have been made to determine the JV distribution function and the upper limit of the betacell generated power for specified energy sources. A Ni beta-emitting radioisotope was used for the energy source and certain semiconductors were used for the converter subsystem of the betacell system. Some results for a Promethium source are also given here for comparison. 16 refs.
Coding Instead of Splitting - Algebraic Combinations in Time and Space
2016-06-09
sources message. For certain classes of two-unicast-Z networks, we show that the rate-tuple (N, 1) is achievable as long as the individual source...destination cuts for the two source-destination pairs are respectively at least as large as N and 1, and the generalized network sharing cut - a bound...previously defined by Kamath et al. - is at least as large as N + 1. We show this through a novel achievable scheme which is based on random linear coding at
A Flexible and Non-intrusive Approach for Computing Complex Structural Coverage Metrics
NASA Technical Reports Server (NTRS)
Whalen, Michael W.; Person, Suzette J.; Rungta, Neha; Staats, Matt; Grijincu, Daniela
2015-01-01
Software analysis tools and techniques often leverage structural code coverage information to reason about the dynamic behavior of software. Existing techniques instrument the code with the required structural obligations and then monitor the execution of the compiled code to report coverage. Instrumentation-based approaches often incur considerable runtime overhead for complex structural coverage metrics such as Modified Condition/Decision Coverage (MC/DC). Code instrumentation, in general, has to be approached with great care to ensure it does not modify the behavior of the original code. Furthermore, instrumented code cannot be used in conjunction with other analyses that reason about the structure and semantics of the code under test. In this work, we introduce a non-intrusive preprocessing approach for computing structural coverage information. It uses a static partial evaluation of the decisions in the source code and a source-to-bytecode mapping to generate the information necessary to efficiently track structural coverage metrics during execution. Our technique is flexible; the results of the preprocessing can be used by a variety of coverage-driven software analysis tasks, including automated analyses that are not possible for instrumented code. Experimental results in the context of symbolic execution show the efficiency and flexibility of our non-intrusive approach for computing code coverage information.
75 FR 14331 - Disaster Assistance Loan Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... meet current building code requirements. If your business is a major source of employment, SBA may..., granting tax exemption under sections 510(c), (d), or (e) of the Internal Revenue Code of 1954, or (2...
Building a Better Campus: An Update on Building Codes.
ERIC Educational Resources Information Center
Madden, Michael J.
2002-01-01
Discusses the implications for higher education institutions in terms of facility planning, design, construction, and renovation of the move from regionally-developed model-building codes to two international sets of codes. Also addresses the new performance-based design option within the codes. (EV)
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F.
2011-12-01
Almost all natural phenomena on Earth are highly nonlinear; even simplifications of the equations describing nature usually end up being nonlinear partial differential equations. The advection-diffusion-reaction (ADR) transport equation is a pivotal equation in the atmospheric sciences and in water quality, and for practical purposes this nonlinear equation must be solved numerically, so scientists and engineers rely heavily on numerical codes. Such codes therefore require verification before they are used for applications in science and engineering. Model verification is a mathematical procedure whereby a numerical code is checked to assure that the governing equation is properly solved, as described in the design document. CFD verification is not a straightforward and well-defined course: only a complete test suite can uncover all the limitations and bugs, and results need to be assessed to distinguish between bug-induced defects and the innate limitations of a numerical scheme. As Roache (2009) said, numerical verification is a state-of-the-art procedure, and sometimes novel tricks work out. This study summarizes the experience we gained during a comprehensive verification process carried out for a transport solver. A test suite was designed, including unit tests and algorithmic tests, with tests layered in complexity in several dimensions from simple to complex. Acceptance criteria were defined for the desirable capabilities of the transport code, such as order of accuracy, mass conservation, handling of stiff source terms, spurious oscillation, and initial shape preservation. At the beginning, a mesh convergence study, which is the main craft of verification, was performed. To that end, analytical solutions of the ADR equation were gathered and a new solution was derived. In more general cases, the lack of an analytical solution can be overcome through Richardson extrapolation and manufactured solutions. Two bugs that were concealed during the mesh convergence study were then uncovered by the method of false injection and by visualization of the results. Symmetry played a dual role: one bug was hidden by the symmetric nature of a test (it was detected afterwards using artificial false injection), while in another case self-symmetry was used to design a new test when the analytical solution of the ADR equation was unknown. Assisting subroutines were designed to check and post-process conservation of mass and oscillatory behavior. Finally, the capability of the solver was also checked for a stiff reaction source term. The test suite was not only a decent tool for error detection but also provided thorough feedback on the ADR solver's limitations; such information is the crux of any rigorous numerical modeling for a modeler who deals with surface/subsurface pollution transport.
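The heart of the mesh-convergence study mentioned above is the computation of an observed order of accuracy from errors on successively refined grids; a minimal sketch follows, with a placeholder error model standing in for the actual solver-versus-analytical-solution comparison.

```python
# Minimal sketch of an order-of-accuracy check: errors against an analytical (or
# manufactured) solution on successively refined grids give an observed order that is
# compared with the scheme's formal order. The solver call below is a placeholder whose
# error decays as h^2 plus a small floor, standing in for real solver output.
import numpy as np

def solver_error(h):
    """Placeholder for |numerical - analytical| on a grid of spacing h (assumed 2nd order)."""
    return 0.8 * h**2 + 1e-10

spacings = np.array([0.1, 0.05, 0.025, 0.0125])
errors = np.array([solver_error(h) for h in spacings])

# Observed order from successive refinements: p = log(e_coarse / e_fine) / log(r).
refinement_ratio = spacings[:-1] / spacings[1:]
observed_order = np.log(errors[:-1] / errors[1:]) / np.log(refinement_ratio)
print("observed order per refinement:", np.round(observed_order, 3))   # should approach 2
```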
Hoare, Karen J; Mills, Jane; Francis, Karen
2012-12-01
The terminology used to analyse data in a grounded theory study can be confusing. Different grounded theorists use a variety of terms which all have similar meanings. In the following study, we use terms adopted by Charmaz including: initial, focused and axial coding. Initial codes are used to analyse data with an emphasis on identifying gerunds, a verb acting as a noun. If initial codes are relevant to the developing theory, they are grouped with similar codes into categories. Categories become saturated when there are no new codes identified in the data. Axial codes are used to link categories together into a grounded theory process. Memo writing accompanies this data sifting and sorting. The following article explains how one initial code became a category providing a worked example of the grounded theory method of constant comparative analysis. The interplay between coding and categorization is facilitated by the constant comparative method. © 2012 Wiley Publishing Asia Pty Ltd.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present the SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, high-performance numerical wave solvers that simulate seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable both for forward propagation in complex media and for tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Marshall, Harry H; Griffiths, David J; Mwanguhya, Francis; Businge, Robert; Griffiths, Amber G F; Kyabulima, Solomon; Mwesige, Kenneth; Sanderson, Jennifer L; Thompson, Faye J; Vitikainen, Emma I K; Cant, Michael A
2018-01-01
Studying ecological and evolutionary processes in the natural world often requires research projects to follow multiple individuals in the wild over many years. These projects have provided significant advances but may also be hampered by the need to accurately and efficiently collect and store multiple streams of data from multiple individuals concurrently. The increase in the availability and sophistication of portable computers (smartphones and tablets) and the applications that run on them has the potential to address many of these data collection and storage issues. In this paper we describe the challenges faced by one such long-term, individual-based research project: the Banded Mongoose Research Project in Uganda. We describe a system we have developed, called Mongoose 2000, that utilises the potential of apps and portable computers to meet these challenges. We discuss the benefits and limitations of employing such a system in a long-term research project. The app and source code for the Mongoose 2000 system are freely available, and we detail how it might be used to aid data collection and storage in other long-term individual-based projects.
A structured approach to recording AIDS-defining illnesses in Kenya: A SNOMED CT based solution
Oluoch, Tom; de Keizer, Nicolette; Langat, Patrick; Alaska, Irene; Ochieng, Kenneth; Okeyo, Nicky; Kwaro, Daniel; Cornet, Ronald
2016-01-01
Introduction Several studies conducted in sub-Saharan Africa (SSA) have shown that routine clinical data in HIV clinics often have errors. Lack of structured and coded documentation of diagnosis of AIDS defining illnesses (ADIs) can compromise data quality and decisions made on clinical care. Methods We used a structured framework to derive a reference set of concepts and terms used to describe ADIs. The four sources used were: (i) CDC/Accenture list of opportunistic infections, (ii) SNOMED Clinical Terms (SNOMED CT), (iii) Focus Group Discussion (FGD) among clinicians and nurses attending to patients at a referral provincial hospital in western Kenya, and (iv) chart abstraction from the Maternal Child Health (MCH) and HIV clinics at the same hospital. Using the January 2014 release of SNOMED CT, concepts were retrieved that matched terms abstracted from approach iii & iv, and the content coverage assessed. Post-coordination matching was applied when needed. Results The final reference set had 1054 unique ADI concepts which were described by 1860 unique terms. Content coverage of SNOMED CT was high (99.9% with pre-coordinated concepts; 100% with post-coordination). The resulting reference set for ADIs was implemented as the interface terminology on OpenMRS data entry forms. Conclusion Different sources demonstrate complementarity in the collection of concepts and terms for an interface terminology. SNOMED CT provides a high coverage in the domain of ADIs. Further work is needed to evaluate the effect of the interface terminology on data quality and quality of care. PMID:26184057
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport
NASA Astrophysics Data System (ADS)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl
We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.
Evaluation of radiological dispersion/consequence codes supporting DOE nuclear facility SARs
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Kula, K.R.; Paik, I.K.; Chung, D.Y.
1996-12-31
Since the early 1990s, the authorization basis documentation of many U.S. Department of Energy (DOE) nuclear facilities has been upgraded to comply with DOE orders and standards. In this process, many safety analyses have been revised. Unfortunately, there has been nonuniform application of software, and the most appropriate computer and engineering methodologies often are not applied. A DOE Accident Phenomenology and Consequence (APAC) Methodology Evaluation Program was originated at the request of DOE Defense Programs to evaluate the safety analysis methodologies used in nuclear facility authorization basis documentation and to define future cost-effective support and development initiatives. Six areas, including source term development (fire, spills, and explosion analysis), in-facility transport, and dispersion/consequence analysis (chemical and radiological), are contained in the APAC program. The evaluation process, codes considered, key results, and recommendations for future model and software development of the Radiological Dispersion/Consequence Working Group are summarized in this paper.
NASA Astrophysics Data System (ADS)
Panda, Satyasen
2018-05-01
This paper proposes a modified artificial bee colony optimization (ABC) algorithm based on Levy flight swarm intelligence, referred to as artificial bee colony Levy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power, and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results prove the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
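As an illustration of the kind of update rule the abstract describes, the sketch below combines the standard employed-bee neighbour move of ABC with a Levy-flight perturbation. It is not the authors' implementation; the objective function snr_cost, the step scaling, and all parameter values are hypothetical placeholders standing in for the SNR-based cost of an optical code path.

```python
# Minimal sketch of an artificial bee colony step augmented with a Levy-flight
# walk, in the spirit of ABC-LFSW. The objective and parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step of index beta."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def snr_cost(x):
    # Placeholder objective standing in for the (negative) SNR of an optical code path.
    return np.sum((x - 0.5) ** 2)

def abc_levy(n_bees=20, dim=5, iters=200, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (n_bees, dim))
    cost = np.array([snr_cost(x) for x in pop])
    for _ in range(iters):
        for i in range(n_bees):
            # Employed-bee move: classic ABC neighbour step plus a Levy-flight walk.
            k = rng.integers(n_bees)
            while k == i:
                k = rng.integers(n_bees)
            phi = rng.uniform(-1, 1, dim)
            trial = np.clip(pop[i] + phi * (pop[i] - pop[k]) + 0.01 * levy_step(dim), lo, hi)
            c = snr_cost(trial)
            if c < cost[i]:            # greedy selection
                pop[i], cost[i] = trial, c
    best = np.argmin(cost)
    return pop[best], cost[best]

if __name__ == "__main__":
    x_best, c_best = abc_levy()
    print("best allocation:", x_best, "cost:", c_best)
```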
A new exact method for line radiative transfer
NASA Astrophysics Data System (ADS)
Elitzur, Moshe; Asensio Ramos, Andrés
2006-01-01
We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
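For context, the sketch below solves the standard single-zone escape-probability equations for a two-level system by fixed-point iteration; the CEP method described above augments exactly these algebraic equations with zone-coupling terms. The rate coefficients and the optical-depth scaling are illustrative values, not taken from the paper.

```python
# Minimal sketch of the single-zone escape-probability iteration for a two-level
# system. All rate values and the tau scaling below are illustrative only.
import numpy as np

A_ul = 1e-6               # Einstein A coefficient [s^-1] (illustrative)
C_lu, C_ul = 1e-9, 3e-9   # collisional excitation/de-excitation rates [s^-1] (illustrative)
N_col = 1e16              # column-density scale setting the optical depth (illustrative)

def beta_escape(tau):
    """Escape probability for a static slab, beta = (1 - exp(-tau)) / tau."""
    tau = np.maximum(tau, 1e-12)
    return (1.0 - np.exp(-tau)) / tau

def solve_two_level(max_iter=200, tol=1e-10):
    x_u = 0.0                                   # fractional upper-level population
    beta = 1.0
    for _ in range(max_iter):
        tau = N_col * (1.0 - 2.0 * x_u) * 1e-15  # schematic tau(populations)
        beta = beta_escape(tau)
        # statistical equilibrium: n_l * C_lu = n_u * (C_ul + beta * A_ul)
        x_new = C_lu / (C_lu + C_ul + beta * A_ul)
        if abs(x_new - x_u) < tol:
            x_u = x_new
            break
        x_u = x_new
    return x_u, beta

x_u, beta = solve_two_level()
print(f"upper-level fraction = {x_u:.4f}, escape probability = {beta:.4f}")
```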
An analysis of how The Irish Times portrayed Irish nursing during the 1999 strike.
Clarke, J; O'Neill, C S
2001-07-01
The aim of this article is to explore the images of nursing that were presented in the media during the recent industrial action by nurses and midwives in the Republic of Ireland. Although both nurses and midwives took industrial strike action, the strike was referred to as 'the nurses' strike' and both nurses and midwives were generally referred to by the generic term 'nurses'. Data were gathered from the printed news media of The Irish Times over a period of one month--4 October to 4 November 1999--which included the nine days of the strike. Although we limited the source of our data to just one newspaper, the findings do provide an image of how nurses and nursing care are viewed by both health professionals and the public. This image appeared to give a higher value to masculine cultural codes and the performance of technical skills, whereas acts associated with feminine cultural codes of caring were considered of lower value.
Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations
NASA Astrophysics Data System (ADS)
Husa, Sascha; Hinder, Ian; Lechner, Christiane
2006-06-01
We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations.
Program summary
Title of program: Kranc
Catalogue identifier: ADXS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Distribution format: tar.gz
Computer for which the program is designed and others on which it has been tested: general computers which run Mathematica (for code generation) and Cactus (for numerical simulations); tested under Linux
Programming language used: Mathematica, C, Fortran 90
Memory required to execute with typical data: depends on the number of variables and grid size; the included ADM example requires 4308 KB
Has the code been vectorized or parallelized: the code is parallelized based on the Cactus framework
Number of bytes in distributed program, including test data, etc.: 1 578 142
Number of lines in distributed program, including test data, etc.: 11 711
Nature of physical problem: solution of partial differential equations in three space dimensions, formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations.
Method of solution: finite differencing and method-of-lines time integration; the numerical code is generated through a high-level Mathematica interface.
Restrictions on the complexity of the program: typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500^3.
Typical running time: depends on the number of variables and the grid size; the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor.
Unusual features of the program: based on Mathematica and Cactus
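As a rough illustration of the numerical approach the generated solvers employ (finite differencing in space plus method-of-lines time integration), the sketch below evolves the 1D scalar wave equation. It is hand-written Python, not Kranc-generated code, and the grid parameters and initial data are arbitrary.

```python
# Illustrative method-of-lines solver for u_tt = u_xx: centred finite differences
# in space, classical RK4 in time. Not produced by Kranc; parameters are arbitrary.
import numpy as np

nx, L = 201, 1.0
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)

def rhs(state):
    """Reduce u_tt = u_xx to first order in time with state = (u, v = u_t)."""
    u, v = state
    u_xx = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # 2nd-order centred stencil
    return np.array([v, u_xx])                            # (u_t, v_t)

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Gaussian initial data, fixed boundaries, CFL-limited time step.
state = np.array([np.exp(-200 * (x - 0.5) ** 2), np.zeros(nx)])
dt = 0.5 * dx
for _ in range(400):
    state = rk4_step(state, dt)
print("max |u| after evolution:", np.abs(state[0]).max())
```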
Towards Holography via Quantum Source-Channel Codes.
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-14
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
1978-01-01
... complex applications of the code. NASCAP code description: the NASCAP code is a finite-element spacecraft-charging simulation that is written in FORTRAN. ... The transport code POEM (ref. 1) is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by ...
Technology Infusion of CodeSonar into the Space Network Ground Segment
NASA Technical Reports Server (NTRS)
Benson, Markland J.
2009-01-01
This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off-the-shelf system that analyzes programs written in C, C++, or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study is focused on large-scale software developed using formal processes. The systems studied are mission critical in nature, but some use commodity computer systems.
Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma
NASA Astrophysics Data System (ADS)
Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.
2017-10-01
For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that this input can be provided reliably by the NINJA code.
Mechanic: The MPI/HDF code framework for dynamical astronomy
NASA Astrophysics Data System (ADS)
Słonina, Mariusz; Goździewski, Krzysztof; Migaszewski, Cezary
2015-01-01
We introduce the Mechanic, a new open-source code framework. It is designed to reduce the development effort of scientific applications by providing a unified API (Application Programming Interface) for configuration, data storage, and task management. The communication layer is based on the well-established Message Passing Interface (MPI) standard, which is widely used on a variety of parallel computers and CPU clusters. The data storage is performed within the Hierarchical Data Format (HDF5). The design of the code follows a core-module approach, which reduces the user's codebase and makes it portable for single- and multi-CPU environments. The framework may be used in a local user's environment, without administrative access to the cluster, under the PBS or Slurm job schedulers. It may become a helper tool for a wide range of astronomical applications, particularly those focused on processing large data sets, such as dynamical studies of long-term orbital evolution of planetary systems with Monte Carlo methods, dynamical maps, or evolutionary algorithms. It has already been applied in numerical experiments conducted for the Kepler-11 (Migaszewski et al., 2012) and ν Octantis (Goździewski et al., 2013) planetary systems. In this paper we describe the basics of the framework, including code listings for the implementation of a sample user module. The code is illustrated on a model Hamiltonian introduced by Froeschlé et al. (2000) presenting the Arnold diffusion. The Arnold web is shown with the help of the MEGNO (Mean Exponential Growth of Nearby Orbits) fast indicator (Goździewski et al., 2008a) applied to the symplectic SABAn integrator family (Laskar and Robutel, 2001).
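The core-module, task-farm pattern described above can be sketched with mpi4py and h5py as follows. This is a hypothetical illustration of the pattern, not the Mechanic API; the user_module function, file names, and task counts are placeholders.

```python
# Hypothetical task-farm sketch: a master rank hands out task IDs, workers run a
# user "module" function, and results are stored in a single HDF5 file.
# Run with e.g.: mpiexec -n 4 python taskfarm.py
import numpy as np
from mpi4py import MPI
import h5py

def user_module(task_id):
    """Stand-in for a user's per-task computation, e.g. one cell of a dynamical map."""
    return np.sin(0.01 * task_id) ** 2

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_tasks = 1000

if rank == 0:                                  # master: farm out tasks
    results = np.zeros(n_tasks)
    next_task, active = 0, size - 1
    status = MPI.Status()
    # Seed every worker with one task (or a stop signal if none remain).
    for w in range(1, size):
        if next_task < n_tasks:
            comm.send(next_task, dest=w); next_task += 1
        else:
            comm.send(-1, dest=w); active -= 1
    while active > 0:
        task_id, value = comm.recv(source=MPI.ANY_SOURCE, status=status)
        results[task_id] = value
        src = status.Get_source()
        if next_task < n_tasks:
            comm.send(next_task, dest=src); next_task += 1
        else:
            comm.send(-1, dest=src); active -= 1   # -1 = stop signal
    with h5py.File("results.h5", "w") as f:        # unified HDF5 storage
        f.create_dataset("results", data=results)
else:                                           # workers: run the user module
    while True:
        task_id = comm.recv(source=0)
        if task_id < 0:
            break
        comm.send((task_id, user_module(task_id)), dest=0)
```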
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This second volume of Acoustic Scattering by Three-Dimensional Stators and Rotors Using the SOURCE3D Code provides the scattering plots referenced by Volume 1. There are 648 plots. Half are for the 8750 rpm "high speed" operating condition and the other half are for the 7031 rpm "mid speed" operating condition.
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
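A minimal sketch of the block transform-coding idea follows, assuming an orthonormal Hadamard transform on 8x8 blocks and a uniform quantizer; the test "band" is synthetic and the step size arbitrary, so it only illustrates the mechanics, not the specific encoders compared in the paper.

```python
# Minimal sketch of transform coding with block quantization on one 2-D band:
# 8x8 blocks are Hadamard-transformed, coefficients uniformly quantized, and the
# block reconstructed. The test data and step size are synthetic placeholders.
import numpy as np
from scipy.linalg import hadamard

B = 8
H = hadamard(B) / np.sqrt(B)        # orthonormal Hadamard transform matrix

def code_block(block, step=8.0):
    coeffs = H @ block @ H.T        # forward 2-D transform
    q = np.round(coeffs / step)     # uniform block quantization
    return H.T @ (q * step) @ H     # dequantize + inverse transform

rng = np.random.default_rng(1)
image = rng.normal(128, 20, (64, 64))      # stand-in for one spectral band

recon = np.zeros_like(image)
for i in range(0, 64, B):
    for j in range(0, 64, B):
        recon[i:i+B, j:j+B] = code_block(image[i:i+B, j:j+B])

mse = np.mean((image - recon) ** 2)
print(f"reconstruction MSE: {mse:.2f}")
```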
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper is intended to promote research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based simulation is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process requires modifying the FORTRAN 77 source code, compiling, and linking when creating a new combustor simulation executable file. The MATLAB-based simulation does not require making changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation, and it also includes simulation results for a default simulation included with the source code.
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
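To make the rate-adaptation idea concrete, the sketch below estimates how correlated each cluster member's samples are with the clusterhead's uncompressed side information and maps that correlation to a per-sample bit budget. This is an illustrative toy, not the authors' protocol; the Gaussian scaling rule, bit limits, and synthetic data are assumptions.

```python
# Illustrative sketch of correlation-driven rate assignment: higher correlation
# with the clusterhead's side information allows a smaller bit budget per sample.
import numpy as np

rng = np.random.default_rng(2)

def assign_rate(member, side_info, max_bits=12, min_bits=2):
    """Map the estimated correlation to a per-sample bit budget."""
    rho = np.corrcoef(member, side_info)[0, 1]
    # For jointly Gaussian sources, the residual uncertainty given the side
    # information scales with sqrt(1 - rho^2); use it to scale the bit budget.
    bits = min_bits + (max_bits - min_bits) * np.sqrt(max(1.0 - rho**2, 0.0))
    return int(np.ceil(bits)), rho

# Synthetic cluster: the clusterhead observes the event, members see noisy copies.
event = rng.normal(0, 1, 500)
members = [event + rng.normal(0, s, 500) for s in (0.1, 0.5, 2.0)]

for k, m in enumerate(members):
    bits, rho = assign_rate(m, event)
    print(f"member {k}: correlation {rho:+.2f} -> {bits} bits/sample")
```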
Code Analysis and Refactoring with Clang Tools, Version 0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Timothy M.
2016-12-23
Code Analysis and Refactoring with Clang Tools is a small set of example code that demonstrates techniques for applying tools distributed with the open source Clang compiler. Examples include analyzing where variables are used and replacing old data structures with standard structures.
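The package itself ships examples built on Clang's tooling libraries; as a loose analogue, the sketch below uses the libclang Python bindings (clang.cindex) to list where each variable is referenced in a C file. The file name is a placeholder, and depending on the installation clang.cindex.Config.set_library_file() may be needed to locate libclang.

```python
# Rough analogue of a "where is this variable used" analysis, using libclang's
# Python bindings rather than Clang's C++ tooling libraries.
import clang.cindex
from clang.cindex import CursorKind
from collections import defaultdict

def variable_uses(path):
    index = clang.cindex.Index.create()
    tu = index.parse(path, args=["-std=c11"])
    uses = defaultdict(list)
    for node in tu.cursor.walk_preorder():
        # DECL_REF_EXPR nodes are expressions that refer to a declared entity.
        if node.kind == CursorKind.DECL_REF_EXPR and node.spelling:
            uses[node.spelling].append(node.location.line)
    return uses

if __name__ == "__main__":
    for name, lines in variable_uses("example.c").items():   # placeholder file name
        print(f"{name}: used on lines {lines}")
```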
TEA: A Code Calculating Thermochemical Equilibrium Abundances
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
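The abstract's core computation, Gibbs free-energy minimization under elemental-balance constraints, can be sketched as below. TEA itself uses an iterative Lagrangian scheme; this sketch substitutes scipy's SLSQP solver, and the g/RT values, stoichiometry, and abundances are illustrative placeholders rather than real thermodynamic data.

```python
# Minimal sketch: minimize the dimensionless Gibbs energy of an ideal-gas mixture
# subject to element balance. Chemical-potential values below are placeholders.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O", "H", "OH"]
g_RT = np.array([-15.0, -20.0, -40.0, -5.0, -18.0])   # placeholder g_i/(RT) values

# Element-stoichiometry matrix (rows: H, O) and elemental abundances b.
A = np.array([[2, 0, 2, 1, 1],     # H atoms per molecule
              [0, 2, 1, 0, 1]])    # O atoms per molecule
b = np.array([2.0, 1.0])           # total H and O (arbitrary units)

def gibbs(n):
    n = np.maximum(n, 1e-12)
    # G/RT = sum_i n_i [ g_i/(RT) + ln(n_i / n_tot) ] for an ideal mixture at 1 bar
    return np.sum(n * (g_RT + np.log(n / n.sum())))

cons = [{"type": "eq", "fun": lambda n, i=i: A[i] @ n - b[i]} for i in range(2)]
x0 = np.full(len(species), 0.1)
res = minimize(gibbs, x0, method="SLSQP", constraints=cons,
               bounds=[(1e-12, None)] * len(species))

for name, ni in zip(species, res.x):
    print(f"{name}: {ni:.4e} mol")
```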
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
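The "random chance" scenario described above can be sketched as a short simulation: sources are drawn at random, each revealing the codes it holds, until every code in a hypothetical population has been observed once. All population parameters below are made up for illustration.

```python
# Minimal sketch of the "random chance" sampling scenario and the sample size
# needed to reach theoretical saturation. Population parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

def sample_until_saturation(n_codes=30, n_sources=500, p_hold=0.15):
    """p_hold is the mean probability that a given source holds a given code."""
    # Which codes each source holds (rows: sources, columns: codes).
    holds = rng.random((n_sources, n_codes)) < p_hold
    observed = np.zeros(n_codes, dtype=bool)
    order = rng.permutation(n_sources)
    for step, src in enumerate(order, start=1):
        observed |= holds[src]
        if observed.all():          # theoretical saturation reached
            return step
    return None                     # saturation not reached within this population

sizes = [sample_until_saturation() for _ in range(200)]
sizes = [s for s in sizes if s is not None]
print(f"median sample size to saturation: {int(np.median(sizes))}")
```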
Seligmann, Hervé
2018-05-01
Genetic codes mainly evolve by reassigning punctuation codons, starts and stops. Previous analyses assuming that undefined amino acids translate stops showed greater divergence between nuclear and mitochondrial genetic codes. Here, three independent methods converge on which amino acids translated stops at split between nuclear and mitochondrial genetic codes: (a) alignment-free genetic code comparisons inserting different amino acids at stops; (b) alignment-based blast analyses of hypothetical peptides translated from non-coding mitochondrial sequences, inserting different amino acids at stops; (c) biases in amino acid insertions at stops in proteomic data. Hence short-term protein evolution models reconstruct long-term genetic code evolution. Mitochondria reassign stops to amino acids otherwise inserted at stops by codon-anticodon mismatches (near-cognate tRNAs). Hence dual function (translation termination and translation by codon-anticodon mismatch) precedes mitochondrial reassignments of stops to amino acids. Stop ambiguity increases coded information, compensates endocellular mitogenome reduction. Mitochondrial codon reassignments might prevent viral infections. Copyright © 2018 Elsevier B.V. All rights reserved.
Almansa, Julio F; Guerrero, Rafael; Torres, Javier; Lallena, Antonio M
60Co sources have been commercialized as an alternative to 192Ir sources for high-dose-rate (HDR) brachytherapy. One of them is the Flexisource Co-60 HDR source manufactured by Elekta. The only available dosimetric characterization of this source is that of Vijande et al. [J Contemp Brachytherapy 2012; 4:34-44], whose results were not included in the AAPM/ESTRO consensus document. In that work, the dosimetric quantities were calculated as averages of the results obtained with the Geant4 and PENELOPE Monte Carlo (MC) codes, though for other sources, significant differences have been quoted between the values obtained with these two codes. The aim of this work is to perform the dosimetric characterization of the Flexisource Co-60 HDR source using PENELOPE. The MC simulation code PENELOPE (v. 2014) has been used. Following the recommendations of the AAPM/ESTRO report, the radial dose function, the anisotropy function, the air-kerma strength, the dose rate constant, and the absorbed dose rate in water have been calculated. The results we have obtained exceed those of Vijande et al. In particular, the absorbed dose rate constant is ∼0.85% larger. A similar difference is also found in the other dosimetric quantities. The effect of the electrons emitted in the decay of 60Co, usually neglected in this kind of simulation, is significant up to distances of 0.25 cm from the source. The systematic and significant differences we have found between the PENELOPE results and the average values found by Vijande et al. point out that the dosimetric characterizations carried out with the various MC codes should be provided independently. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Asymptotic/numerical analysis of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Myers, M. K.; Wydeven, R.
1989-01-01
An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high-speed propeller blades. The theory utilizes an integral representation of the Ffowcs Williams-Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.
clusterProfiler: an R package for comparing biological themes among gene clusters.
Yu, Guangchuang; Wang, Li-Gen; Han, Yanyan; He, Qing-Yu
2012-05-01
Increasing quantitative data generated from transcriptomics and proteomics require integrative strategies for analysis. Here, we present an R package, clusterProfiler that automates the process of biological-term classification and the enrichment analysis of gene clusters. The analysis module and visualization module were combined into a reusable workflow. Currently, clusterProfiler supports three species, including humans, mice, and yeast. Methods provided in this package can be easily extended to other species and ontologies. The clusterProfiler package is released under Artistic-2.0 License within Bioconductor project. The source code and vignette are freely available at http://bioconductor.org/packages/release/bioc/html/clusterProfiler.html.
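clusterProfiler is an R/Bioconductor package; as a language-neutral illustration of the statistical test this class of enrichment analysis is built on, the sketch below computes a hypergeometric over-representation p-value. The gene counts are hypothetical.

```python
# Hypergeometric over-representation test underlying typical gene-set enrichment
# analysis: how surprising is the overlap between a gene cluster and a term?
from scipy.stats import hypergeom

def enrichment_p(total_genes, term_genes, cluster_size, overlap):
    """P(observing >= overlap term genes in a cluster drawn from the background)."""
    return hypergeom.sf(overlap - 1, total_genes, term_genes, cluster_size)

# Hypothetical example: 20,000 background genes, a term annotating 150 of them,
# a 300-gene cluster of interest, 12 of which carry the annotation.
p = enrichment_p(total_genes=20000, term_genes=150, cluster_size=300, overlap=12)
print(f"enrichment p-value: {p:.3e}")
```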
Zipf's Law in Short-Time Timbral Codings of Speech, Music, and Environmental Sound Signals
Haro, Martín; Serrà, Joan; Herrera, Perfecto; Corral, Álvaro
2012-01-01
Timbre is a key perceptual feature that allows discrimination between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustical frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed Zipfian distribution with an exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent code-words tend to have a more homogeneous structure. We also find that the speech and music databases have specific, distinctive code-words while, in the case of the environmental sounds, such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation for our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. PMID:22479497
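The rank-frequency analysis described above can be sketched as follows; real timbral code-words would come from encoded short-time power spectra, so the synthetic Zipfian token stream and the simple log-log least-squares fit below are stand-ins for the paper's data and its more careful heavy-tail fitting.

```python
# Minimal sketch of a rank-frequency analysis and Zipf-exponent estimate on a
# synthetic stream of code-word tokens (placeholders for real timbral code-words).
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

# Synthetic corpus: 100,000 code-word tokens drawn from a Zipf-like source.
vocab = 2000
ranks_true = np.arange(1, vocab + 1)
probs = 1.0 / ranks_true
probs /= probs.sum()
tokens = rng.choice(vocab, size=100_000, p=probs)

# Rank-frequency distribution of the observed code-words.
freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Fit the exponent on the upper part of the distribution (least squares in
# log-log space; heavy-tail fitting in practice needs more care than this).
cut = len(freqs) // 4
slope, _ = np.polyfit(np.log(ranks[:cut]), np.log(freqs[:cut]), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")
```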