Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that encodes the source information into multiple descriptions and transmits them over different channels in a packet network or an error-prone wireless environment, achieving graceful degradation if some descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If part of a description is lost but its correlation information is still available, the proposed Wyner-Ziv decoder can recover the description by using the correlation information and the error-corrupted description as side information. Second, within each description, single-bitstream wavelet zero-tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they were correctly received. We therefore integrate multiple description scalar quantization (MDSQ) with multiple wavelet-tree image coding to reduce error propagation. We first group wavelet coefficients into multiple trees according to parent-child relationships and then code them separately with the SPIHT algorithm to form multiple bitstreams. This decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder.
Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also degrades gracefully as the packet loss rate increases.
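The MDSQ step mentioned in the abstract can be illustrated with a minimal sketch (this is an illustrative staggered index assignment, not the paper's actual quantizer design): a central quantizer index is split into two side indices along two diagonals, so that receiving either description alone still localizes the value to two adjacent cells.

```python
def mdsq_encode(n):
    # Staggered two-diagonal index assignment: central index n -> (i, j).
    # Adjacent central indices share a side index, so one description alone
    # narrows n to two candidates.
    return (n + 1) // 2, n // 2

def mdsq_decode(i=None, j=None):
    if i is not None and j is not None:
        return i + j                # central decoder: exact index
    if i is not None:               # side decoder: n is 2*i - 1 or 2*i;
        return max(2 * i - 1, 0)    # pick one representative candidate
    return 2 * j                    # side decoder for j: n is 2*j or 2*j + 1

# Quantize a sample and split it into two descriptions.
step = 0.5
x = 1.37
n = round(x / step)                 # central quantizer index
i, j = mdsq_encode(n)
x_central = mdsq_decode(i, j) * step    # both descriptions received
x_side = mdsq_decode(i=i) * step        # description carrying j was lost
```

With both indices the central decoder is exact; with one index the side decoder's error is bounded by one quantizer cell, which is the graceful-degradation property MDC relies on.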
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
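Because the paper optimizes over discrete sets of source and channel coding rates, the allocation can be sketched as an exhaustive search. The operating points below (rate-distortion values, residual error probabilities, and the rate budget) are hypothetical numbers for illustration, not the paper's measurements:

```python
from itertools import product

# Hypothetical operating points.
source_points = {128: 60.0, 256: 35.0, 512: 20.0}    # R_s (kbps) -> distortion (MSE)
channel_points = {1/2: 0.01, 2/3: 0.05, 4/5: 0.15}   # code rate r_c -> residual error prob.
D_FAIL = 200.0      # distortion when the protected layer is lost
BUDGET = 700.0      # total transmitted kbps

best = None
for R_s, r_c in product(source_points, channel_points):
    R_total = R_s / r_c                  # channel coding expands the bitstream
    if R_total > BUDGET:
        continue                         # infeasible under the rate budget
    p = channel_points[r_c]
    # Expected distortion: decode success vs. loss of the layer.
    D = (1 - p) * source_points[R_s] + p * D_FAIL
    if best is None or D < best[0]:
        best = (D, R_s, r_c)
```

With these numbers the search trades source quality against protection: the heavily protected middle source rate wins, which mirrors the unequal-error-protection intuition in the abstract.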
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
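FEC-based multiple description codes of the kind studied here typically protect layer k with an RS(n, k) erasure code so it is recoverable from any k of the n descriptions. A small sketch (assuming independent description erasures with a chosen loss probability; not the paper's OFDM simulation) computes the recovery probability per layer:

```python
from math import comb

def p_layer_ok(n, k, p_loss):
    # RS(n, k) erasure code: layer k is recoverable iff at least k of the
    # n description packets arrive (binomial tail probability).
    return sum(comb(n, m) * (1 - p_loss) ** m * p_loss ** (n - m)
               for m in range(k, n + 1))

n = 8            # number of descriptions
p_loss = 0.2     # assumed i.i.d. description erasure probability
# Layer k gets k information symbols and n - k parity symbols: lower layers
# survive more erasures, giving the unequal error protection profile.
profile = [p_layer_ok(n, k, p_loss) for k in range(1, n + 1)]
expected_layers = sum(profile)
```

The monotonically decreasing profile is what the optimal parity allocation in the paper shapes; here the distribution of description erasures is binomial by assumption rather than derived from iterative decoding.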
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
In this paper we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique, designed to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Because wavelet coefficients retain a direct relationship to the image content they represent after decomposition, we can reorganize the spatial orientation trees to generate multiple bitstreams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple-bitstream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove inter-frame redundancy along the temporal direction using motion-compensated temporal filtering, so the scheme provides high coding performance and flexible scalability. To make the compressed video resilient to channel errors and to guarantee robust transmission over mobile wireless channels, we add redundancy to each bitstream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bitstreams, which may be better suited to multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
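The error-containment benefit of coding spatial orientation trees into separate bitstreams can be sketched in a few lines (a toy round-robin assignment; the paper's actual tree reorganization follows the wavelet parent-child structure):

```python
# Round-robin assignment of spatial-orientation trees to S bitstreams.
def assign_trees(n_trees, n_streams):
    return {t: t % n_streams for t in range(n_trees)}

assignment = assign_trees(n_trees=64, n_streams=4)

# If one bitstream is corrupted by a channel error, only its trees are lost;
# the other streams still decode independently.
lost_stream = 2
surviving = [t for t, s in assignment.items() if s != lost_stream]
frac = len(surviving) / 64     # fraction of trees unaffected = 1 - 1/S
```

A bit error in one stream is confined to 1/S of the image trees instead of invalidating a single monolithic zero-tree bitstream, which is the property the abstract exploits.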
Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.
Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella
2010-07-01
Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-the-art MDC techniques.
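The multirate blending principle can be sketched abstractly (a two-description toy over generic "blocks"; the actual method operates on JPEG 2000 codestream segments and H.264/AVC slices): each description carries half the blocks at the high rate and the other half at a lower, redundant rate.

```python
# Two-description multirate blend: description 0 carries even blocks at the
# high rate, odd blocks at the low rate; description 1 is the mirror image.
blocks = list(range(8))

def make_descriptions(blocks):
    d0 = {b: ('high' if b % 2 == 0 else 'low') for b in blocks}
    d1 = {b: ('low' if b % 2 == 0 else 'high') for b in blocks}
    return d0, d1

def decode(received):
    # Keep the best available version of every block.
    best = {}
    for desc in received:
        for b, q in desc.items():
            if b not in best or (q == 'high' and best[b] == 'low'):
                best[b] = q
    return best

d0, d1 = make_descriptions(blocks)
both = decode([d0, d1])     # every block at the high rate
only0 = decode([d0])        # half high, half low: quality depends only on count
```

Losing either single description yields the same mixed-rate quality, matching the requirement that quality depend on the number of received descriptions, not their identity; the low rate is the encoder parameter that tunes redundancy.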
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background: Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods: Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert-assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results: For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions: Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
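The scoring step can be sketched as a logistic combination of classifier outputs. The weights, feature names, and SOC candidates below are hypothetical illustrations; SOCcer's real weights are fit on the expert-coded training jobs:

```python
from math import exp

# Hypothetical logistic-model weights over three classifier scores
# (job-title, task, and industry based), standing in for SOCcer's
# empirically fitted weights.
WEIGHTS = {'intercept': -4.0, 'title': 3.0, 'task': 2.0, 'industry': 1.0}

def soc_score(features):
    # Logistic score for one SOC/job-description pair.
    z = WEIGHTS['intercept'] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + exp(-z))

def best_soc(candidates):
    # candidates: {soc_code: classifier-score dict}; assign the top-scoring code.
    scored = {soc: soc_score(f) for soc, f in candidates.items()}
    top = max(scored, key=scored.get)
    return top, scored[top]

candidates = {
    '47-2061': {'title': 0.9, 'task': 0.7, 'industry': 0.8},
    '53-7062': {'title': 0.4, 'task': 0.5, 'industry': 0.6},
}
code, score = best_soc(candidates)
```

Because the score is a calibrated probability-like quantity, low-scoring assignments can be flagged for manual review, which is the mechanism the abstract describes.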
NASA Astrophysics Data System (ADS)
Menthe, R. W.; McColgan, C. J.; Ladden, R. M.
1991-05-01
The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The users manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.
CFL3D User's Manual (Version 5.0)
NASA Technical Reports Server (NTRS)
Krist, Sherrie L.; Biedron, Robert T.; Rumsey, Christopher L.
1998-01-01
This document is the User's Manual for the CFL3D computer code, a thin-layer Reynolds-averaged Navier-Stokes flow solver for structured multiple-zone grids. Descriptions of the code's input parameters, non-dimensionalizations, file formats, boundary conditions, and equations are included. Sample 2-D and 3-D test cases are also described, and many helpful hints for using the code are provided.
Manual for obscuration code with space station applications
NASA Technical Reports Server (NTRS)
Marhefka, R. J.; Takacs, L.
1986-01-01
The Obscuration Code, referred to as SHADOW, is a user-oriented computer code to determine the cast shadow of an antenna in a complex environment onto the far-zone sphere. The surrounding structure can be composed of multiple composite cone frustums and multiple-sided flat plates. These structural pieces are ideal for modeling space station configurations. The means of describing the geometry input is compatible with the NEC-BASIC Scattering Code. In addition, an interactive mode of operation has been provided for DEC VAX computers. The first part of this document is a user's manual designed to give a description of the method used to obtain the shadow map, to provide an overall view of the operation of the computer code, to instruct a user in how to model structures, and to give examples of inputs and outputs. The second part is a code manual that details how to set up the interactive and non-interactive modes of the code and provides a listing and brief description of each of the subroutines.
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying quantization parameter (QP) values across different video contents to deduce their influence on the transmitted sequence; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we examine the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 remains sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score (MOS).
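PSNR, the objective metric behind the reported 2-3 dB gain, is straightforward to compute. The sketch below uses a synthetic frame and additive noise purely for illustration (no claim about the paper's test sequences):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    # Peak signal-to-noise ratio in dB between a reference and a
    # reconstructed 8-bit frame.
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64))                  # stand-in frame
noisy = np.clip(frame + rng.normal(0, 4, size=frame.shape),  # distorted copy
                0, 255)
quality = psnr(frame, noisy)
```

A 2-3 dB PSNR improvement corresponds to roughly a 37-50% reduction in mean squared error, which gives a sense of scale for the gains quoted in the abstract.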
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements and are placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; and 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
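The encoder side of this scheme can be sketched compactly: convolve with a local random ±1 kernel, then keep one pixel per polyphase cell. The kernel size, normalization, and padding choices below are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

def random_binary_downsample(img, factor=2, ksize=3, seed=0):
    """Polyphase down-sampling preceded by a local random +/-1 convolution
    kernel (instead of a low-pass filter); each kept pixel is a local random
    measurement, laid out as a conventional smaller image."""
    rng = np.random.default_rng(seed)
    k = rng.choice([-1.0, 1.0], size=(ksize, ksize)) / ksize ** 2
    pad = ksize // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.empty((img.shape[0] // factor, img.shape[1] // factor))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            rr, cc = r * factor, c * factor
            patch = padded[rr:rr + ksize, cc:cc + ksize]
            out[r, c] = np.sum(patch * k)   # one local random measurement
    return out

img = np.arange(64, dtype=np.float64).reshape(8, 8)
meas = random_binary_downsample(img)
```

Different seeds generate different kernels, and hence different measurement sets of the same image: the multiple-description property noted in the abstract.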
Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. 
The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and the estimates of basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as providing training applications.
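Among the biophysical quantities listed above, the Poisson distribution of ion hits per cell is simple enough to sketch directly (toy fluence and area values; GERM itself derives these from the beam line and track-structure physics):

```python
from math import exp, factorial

def hit_probabilities(fluence, area, max_hits=5):
    """Poisson distribution of ion traversals for a cell of the given area
    (um^2) under a particle fluence (particles/um^2); the mean number of
    hits is fluence * area."""
    lam = fluence * area
    return [exp(-lam) * lam ** n / factorial(n) for n in range(max_hits + 1)]

# Example: a fluence of 0.01 particles/um^2 over a 100 um^2 nucleus
# gives a mean of one hit per cell.
probs = hit_probabilities(fluence=0.01, area=100.0)
```

At a mean of one hit per cell, about 37% of cells receive no hit at all, which is why event-based (rather than dose-averaged) descriptions matter for heavy-ion radiobiology.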
Nucleon interaction data bases for background estimates
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.
1989-01-01
Nucleon interaction data bases available in the open literature are examined for potential use in a recently developed nucleon transport code. Particular attention is given to secondary particle penetration and the multiple charged ion products. A brief description of the transport algorithm is given.
Adaptive partially hidden Markov models with application to bilevel image coding.
Forchhammer, S; Rasmussen, T S
1999-01-01
Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
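The adaptive, context-conditioned coding idea can be illustrated with a toy code-length calculation (a plain adaptive context model with a KT-style estimator, not the paper's PHMM): each bit is coded with a probability conditioned on the preceding bits, and the counts adapt as coding proceeds.

```python
from math import log2

def adaptive_code_length(bits, context_order=2):
    """Adaptive context-model code length (in bits) for a binary sequence:
    each symbol is coded with a Krichevsky-Trofimov-style estimate
    conditioned on the previous `context_order` bits."""
    counts = {}
    total_bits = 0.0
    history = (0,) * context_order
    for b in bits:
        c0, c1 = counts.get(history, (0.5, 0.5))   # KT prior pseudo-counts
        p = (c1 if b else c0) / (c0 + c1)
        total_bits += -log2(p)
        counts[history] = (c0 + (b == 0), c1 + (b == 1))
        history = history[1:] + (b,)
    return total_bits

periodic = [0, 1] * 64          # a highly predictable 128-bit sequence
cost = adaptive_code_length(periodic)
```

The model quickly learns the alternating pattern, so the code length falls far below the 128 bits a memoryless coder would need; the minimum description length view in the abstract uses exactly this kind of code-length comparison to guide model selection.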
Liquid rocket combustor computer code development
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1985-01-01
The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector/combustion-chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous-phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamics models from earlier-generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can easily be substituted.
C2M: Configurable Chemical Middleware
Roosendaal, Hans E.; Geurts, Peter A. T. M.
2001-01-01
One of the vexing problems that besets concurrent use of multiple, heterogeneous resources is format multiplicity. C2M aims to equip scientists with a wrapper generator on their desktop. The wrapper generator can build wrappers, or converters that can convert data from or into different formats, from a high-level description of the formats. The language in which such a high-level description is expressed is easy enough for scientists to be able to write format descriptions at minimal cost. In C2M, wrappers and documentation for human reading are automatically obtained from the same user-supplied specifications. Initial experiments demonstrate that the idea can, indeed, lead to the advent of user-governed wrapper generators. Future research will consolidate the code and extend the approach to a realistic variety of formats. PMID:18628869
GSE, data management system programmers/User' manual
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Dolerhie, B. D., Jr.; Ghiglieri, F. J.
1974-01-01
The GSE data management system is a computerized program which provides for a central storage source for key data associated with the mechanical ground support equipment (MGSE). Eight major sort modes can be requested by the user. Attributes that are printed automatically with each sort include the GSE end item number, description, class code, functional code, fluid media, use location, design responsibility, weight, cost, quantity, dimensions, and applicable documents. Multiple subsorts are available for the class code, functional code, fluid media, use location, design responsibility, and applicable document categories. These sorts and how to use them are described. The program and GSE data bank may be easily updated and expanded.
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2010 CFR
2010-10-01
50 Wildlife and Fisheries, Part 679 (ALASKA), Table 15: Gear Codes, Descriptions, and Use (2010-10-01). Table 15 to Part 679 lists gear codes, descriptions, and use (X indicates where each code is used); the alphabetic code is used to complete the gear record.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simakov, Andrei N., E-mail: simakov@lanl.gov; Molvig, Kim
2016-03-15
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. These results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Simakov, Andrei Nikolaevich; Molvig, Kim
2016-03-17
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. We find that these results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Afshar, Parvaneh; Saravi, Benyamin Mohseni; Nehmati, Ebrahim; Farahabbadi, Ebrahim Bagherian; Yazdanian, Azadeh; Siamian, Hasan; Vahedi, Mohammad
2013-01-01
One of the issues in a health care delivery system is resistance to antibiotics. Much research has been done to identify the causes of resistance and the antibiotics to which bacteria are resistant. In most studies the methods of classifying and reporting this resistance were devised by the researchers, so in this study we examined the International Classification of Diseases, 10th edition (ICD-10). This is a descriptive cross-sectional study; data were collected from the laboratory of Boo Ali Sina hospital during 2011-2012. The checklist was designed according to the aim of the study. Variables were age, bacterial agent, specimen, and antibiotics. The bacteria and resistance were classified with ICD-10. The data were analyzed with SPSS 16 software and descriptive statistics. Results showed that of the 10,198 requests for culture and antibiogram, 1,020 (10%) showed resistance. The specimens were urine 648 (63.5%), blood 127 (12.5%), other secretions 125 (12.3%), sputum 102 (10%), lumbar puncture 8 (0.8%), stool 6 (0.6%), and bone marrow 4 (0.4%). E. coli, coded B96.2, was the most frequent cause of resistance to antibiotics (413; 40.5%), and resistance was most often to multiple antibiotics (885; 86.8%), coded U88. The results showed that by using ICD-10 codes, the study of multiple causes of resistance is possible. Routine use of ICD-10 coding would yield an up-to-date bank of antibiotic resistance data in every hospital, useful for physicians, other health care providers, and health administrators.
From Theory to Practice: Measuring end-of-life communication quality using multiple goals theory.
Van Scoy, L J; Scott, A M; Reading, J M; Chuang, C H; Chinchilli, V M; Levi, B H; Green, M J
2017-05-01
To describe how multiple goals theory can be used as a reliable and valid measure (i.e., coding scheme) of the quality of conversations about end-of-life issues. We analyzed 17 conversations in which 68 participants (mean age=51 years) played a game that prompted discussion in response to open-ended questions about end-of-life issues. Conversations (mean duration=91 min) were audio-recorded and transcribed. Communication quality was assessed by three coders who assigned numeric scores rating how well individuals accomplished task, relational, and identity goals in the conversation. The coding measure, which yields a quantifiable outcome, showed strong reliability (intra-class correlation range=0.73-0.89 and Cronbach's alpha range=0.69-0.89 for each of the coded domains) and validity (using multilevel nonlinear modeling, we detected significant variability in scores between games for each of the coded domains, all p-values <0.02). Our coding scheme provides a theory-based measure of end-of-life conversation quality that is superior to other methods of measuring communication quality. Our description of the coding method enables researchers to adapt and apply this measure to communication interventions in other clinical contexts.
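The inter-coder reliability statistic reported above can be reproduced in a few lines. The ratings below are made-up numbers for three hypothetical coders, purely to show the Cronbach's alpha computation:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for inter-coder reliability.
    ratings: one list of scores per coder, aligned across conversations."""
    k = len(ratings)                       # number of coders
    n = len(ratings[0])                    # number of conversations

    def var(xs):                           # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(coder) for coder in ratings)
    totals = [sum(r[i] for r in ratings) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Hypothetical scores from three coders over six conversations.
coder1 = [4, 5, 3, 4, 2, 5]
coder2 = [4, 4, 3, 5, 2, 5]
coder3 = [5, 5, 3, 4, 2, 4]
alpha = cronbach_alpha([coder1, coder2, coder3])
```

Values in the 0.69-0.89 range reported in the abstract indicate acceptable-to-good agreement; the highly concordant toy ratings here produce an alpha above 0.9.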
1979-09-01
KEY WORDS: Target Descriptions; GIFT Code; COMGEOM Descriptions; FASTGEN Code ... The code which accepts the COMGEOM target description and produces the shotline data is the GIFT code. The GIFT code evolved from and has ... the COMGEOM/GIFT methodology, while the Navy and Air Force use the PATCH/SHOTGEN-FASTGEN methodology. Lawrence W. Bain, Mathew J. Heisinger
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain Electromagnetic Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file, a discussion of radar cross section computations, a discussion of some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
A Wideband Satcom Based Avionics Network with CDMA Uplink and TDM Downlink
NASA Technical Reports Server (NTRS)
Agrawal, D.; Johnson, B. S.; Madhow, U.; Ramchandran, K.; Chun, K. S.
2000-01-01
The purpose of this paper is to describe some key technical ideas behind our vision of a future satcom based digital communication network for avionics applications. The key features of our design are as follows: (a) Packetized transmission to permit efficient use of system resources for multimedia traffic; (b) A time division multiplexed (TDM) satellite downlink whose physical layer is designed to operate the satellite link at maximum power efficiency. We show how powerful turbo codes (invented originally for linear modulation) can be used with nonlinear constant envelope modulation, thus permitting the satellite amplifier to operate in a power efficient nonlinear regime; (c) A code division multiple access (CDMA) satellite uplink, which permits efficient access to the satellite from multiple asynchronous users. Closed loop power control is difficult for bursty packetized traffic, especially given the large round trip delay to the satellite. We show how adaptive interference suppression techniques can be used to deal with the ensuing near-far problem; (d) Joint source-channel coding techniques are required both at the physical and the data transport layer to optimize the end-to-end performance. We describe a novel approach to multiple description image encoding at the data transport layer in this paper.
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version D is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version D code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMOND.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain (FDTD) Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain technique. The supplied version of the code is one version of our current three dimensional FDTD code set. The manual provides a description of the code and the corresponding results for the default scattering problem. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version A code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONA.FOR), a section briefly discussing radar cross section (RCS) computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three-dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain (FDTD) technique. The supplied version of the code is one version of our current three-dimensional FDTD code set. The manual given here provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing radar cross section computations, a section discussing some scattering results, a new problem checklist, references, and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONB.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data which was used in the MAGIC computer code to the target description data which can be used in the GIFT computer code.
Two way time transfer results at NRL and USNO
NASA Technical Reports Server (NTRS)
Galysh, Ivan J.; Landis, G. Paul
1993-01-01
The Naval Research Laboratory (NRL) has developed a two way time transfer modem system for the United States Naval Observatory (USNO). Two modems in conjunction with a pair of Very Small Aperture Terminal (VSAT) and a communication satellite can achieve sub nanosecond time transfer. This performance is demonstrated by the results of testing at and between NRL and USNO. The modems use Code Division Multiple Access (CDMA) methods to separate their signals through a single path in the satellite. Each modem transmitted a different Pseudo Random Noise (PRN) code and received the others PRN code. High precision time transfer is possible with two way methods because of reciprocity of many of the terms of the path and hardware delay between the two modems. The hardware description was given in a previous paper.
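The code-division separation described above can be illustrated with a toy simulation: each modem transmits its own ±1 chip sequence through the shared satellite path, and a receiver correlates against the code it expects. The random sequences and the 1023-chip length below are stand-ins chosen for illustration, not the actual modem PRN codes:

```python
# Toy CDMA separation: two modems share one path; correlation against each
# modem's own pseudo-random code recovers its contribution.
import random

random.seed(7)
N = 1023  # chips per code period (an arbitrary illustrative choice)
prn_a = [random.choice((-1, 1)) for _ in range(N)]
prn_b = [random.choice((-1, 1)) for _ in range(N)]

received = [a + b for a, b in zip(prn_a, prn_b)]  # signals superposed in the satellite

def correlate(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

print(correlate(received, prn_a))  # close to 1: modem A's signal recovered
print(correlate(received, prn_b))  # close to 1: modem B's signal recovered
print(correlate(prn_a, prn_b))     # close to 0: the two codes barely interfere
```

The near-zero cross-correlation between distinct codes is what lets both signals traverse a single satellite transponder simultaneously.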
Contributions to HiLiftPW-3 Using Structured, Overset Grid Methods
NASA Technical Reports Server (NTRS)
Coder, James G.; Pulliam, Thomas H.; Jensen, James C.
2018-01-01
The High-Lift Common Research Model (HL-CRM) and the JAXA Standard Model (JSM) were analyzed computationally using both the OVERFLOW and LAVA codes for the third AIAA High-Lift Prediction Workshop. Geometry descriptions and the test cases simulated are described. With the HL-CRM, the effects of surface smoothness during grid projection and the effect of partially sealing a flap gap were studied. Grid refinement studies were performed at two angles of attack using both codes. For the JSM, simulations were performed with and without the nacelle/pylon. Without the nacelle/pylon, evidence of multiple solutions was observed when a quadratic constitutive relation is used in the turbulence modeling; however, using time-accurate simulation seemed to alleviate this issue. With the nacelle/pylon, no evidence of multiple solutions was observed. Laminar-turbulent transition modeling was applied to both JSM configurations, and had an overall favorable impact on the lift predictions.
COMOC: Three dimensional boundary region variant, programmer's manual
NASA Technical Reports Server (NTRS)
Orzechowski, J. A.; Baker, A. J.
1974-01-01
The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description on the construction and operation of the program, and instructions on utilization of the many features of the code are presented.
42 CFR Appendix A to Part 81 - Glossary of ICD-9 Codes and Their Cancer Descriptions 1
Code of Federal Regulations, 2010 CFR
2010-10-01
42 Public Health 1, 2010-10-01, false. Pt. 81, App. A: Appendix A to Part 81, Glossary of ICD-9 Codes and Their Cancer Descriptions. ICD-9 code / Cancer description: 140 Malignant neoplasm of lip. 141 Malignant neoplasm of tongue. 142 Malignant ...
Computer Description of Black Hawk Helicopter
1979-06-01
Key words: Combinatorial Geometry Models; Black Hawk Helicopter; GIFT Computer Code; Geometric Description of Targets. ABSTRACT: ... The description was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code, which generates ... The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents
The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; descriptive questionnaire.
The National Human Exposure Assessment...
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Finite Difference Time Domain Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). This manual provides a description of the code and corresponding results for the default scattering problem. In addition to this description, the manual covers the operation, resource requirements, Version A code capabilities, a description of each subroutine, a brief discussion of the radar cross section computations, and a discussion of the scattering results.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional numerical electromagnetic scattering codes based upon the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied version of the codes are two versions of our current two dimensional FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional electromagnetic scattering codes based on the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied version of the codes are two versions of our current FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.
The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the Border study. Keywords: data; coding; descriptive questionnaire.
The U.S.-Mexico Border Program is sponso...
Functional Requirements of a Target Description System for Vulnerability Analysis
1979-11-01
called GIFT. Together the COMGEOM description model and GIFT codes make up the BRL's target description system. The significance of a target ... and modifying target descriptions are described. Lawrence W. Bain, Jr. and Mathew J. Reisinger, "The GIFT Code User Manual; Volume 1 ..."; "The GIFT Code User Manual; Volume II, The Output Options," unpublished draft of BRL report. II. UNDERLYING PHILOSOPHY: The BRL has a computer
A Combinatorial Geometry Computer Description of the M9 ACE (Armored Combat Earthmover) Vehicle
1984-12-01
The program requires as input the M9 target descriptions as processed by the Geometric Information for Targets (GIFT) computer code. The first step is ... model of the target. This COM-GEOM target description is used as input to the Geometric Information For Targets (GIFT) computer code. Among other things, the GIFT code traces shotlines through a COM-GEOM description from any specified aspect, listing pertinent information about each component hit
1985-11-01
ASME Boiler and Pressure Vessel Code; HEI Heat Exchanger Institute; Heat and Material Balance. c. System Description: (1) Condenser ... ASME Boiler and Pressure Vessel Code; ANSI B31.1 Power Piping. d. System Description: (1) Deaerator: The deaerator will be a direct contact feedwater heater, and ... vent, and drain piping. b. Applicable Codes: ASME Boiler and Pressure Vessel Code; ANSI B31.1 Power Piping Code
Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.
2002-01-01
This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data are provided (namely aircraft and blade motion, blade airloads). The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motion (including multiple rotors and full aircraft motion). The relative locations and number of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver are demonstrated.
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
NASA Technical Reports Server (NTRS)
Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.
1992-01-01
The Penn State Finite Difference Time Domain Electromagnetic Scattering Code version D is a 3-D numerical electromagnetic scattering code based upon the finite difference time domain technique (FDTD). The manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction; description of the FDTD method; operation; resource requirements; version D code capabilities; a brief description of the default scattering geometry; a brief description of each subroutine; a description of the include file; a section briefly discussing Radar Cross Section computations; a section discussing some scattering results; a sample problem setup section; a new problem checklist; references and figure titles. The FDTD technique models transient electromagnetic scattering and interactions with objects of arbitrary shape and/or material composition. In the FDTD method, Maxwell's curl equations are discretized in time-space and all derivatives (temporal and spatial) are approximated by central differences.
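The central-difference discretization described above can be sketched in one dimension. This is an illustrative Yee-style update in normalized units, not the Penn State code itself; the grid size, Courant number of 0.5, and Gaussian source are arbitrary choices:

```python
# One-dimensional FDTD sketch: Maxwell's curl equations on a staggered grid,
# with temporal and spatial derivatives approximated by central differences.
import math

nx, nsteps = 200, 150
ez = [0.0] * nx   # electric field samples
hy = [0.0] * nx   # magnetic field samples, staggered half a cell

for t in range(nsteps):
    for i in range(nx - 1):               # H update from the curl of E
        hy[i] += 0.5 * (ez[i + 1] - ez[i])
    for i in range(1, nx):                # E update from the curl of H
        ez[i] += 0.5 * (hy[i] - hy[i - 1])
    ez[nx // 2] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source

peak = max(abs(v) for v in ez)
print(peak)  # a bounded pulse has propagated outward from the source
```

The leapfrog ordering (H from E, then E from H) and the 0.5 coefficient keep the scheme within the one-dimensional Courant stability limit, which is why the field stays bounded over the run.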
Huang, Emily; Wyles, Susannah M; Chern, Hueylan; Kim, Edward; O'Sullivan, Patricia
2016-07-01
A developmental and descriptive approach to assessing trainee intraoperative performance was explored. Semistructured interviews with 20 surgeon educators were recorded, transcribed, deidentified, and analyzed using a grounded theory approach to identify emergent themes. Two researchers independently coded the transcripts. Emergent themes were also compared to existing theories of skill acquisition. Surgeon educators characterized intraoperative surgical performance as an integrated practice of multiple skill categories and included anticipating, planning for contingencies, monitoring progress, self-efficacy, and "working knowledge." Comments concerning progression through stages, broadly characterized as "technician," "anatomist," "anticipator," "strategist," and "executive," formed a narrative about each stage of development. The developmental trajectory with narrative, descriptive profiles of surgeons working toward mastery provide a standardized vocabulary for communicating feedback, while fostering reflection on trainee progress. Viewing surgical performance as integrated practice rather than the conglomerate of isolated skills enhances authentic assessment. Copyright © 2015 Elsevier Inc. All rights reserved.
Computer Description of the Field Artillery Ammunition Supply Vehicle
1983-04-01
Key words: Combinatorial Geometry (COM-GEOM); GIFT Computer Code; Computer Target Description. ABSTRACT: A ... input to the GIFT computer code to generate target vulnerability data. ... Combinatorial Geometry (COM-GEOM) description. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description and
A Combinatorial Geometry Computer Description of the MEP-021A Generator Set
1979-02-01
Key words: Generator Computer Description; Gasoline Generator; GIFT; MEP-021A. ABSTRACT: This ... GIFT code is also stored on magnetic tape for future vulnerability analysis. ... the Geometric Information for Targets (GIFT) computer code. The GIFT code traces shotlines through a COM-GEOM description from any specified attack
NASA Technical Reports Server (NTRS)
Furlong, K. L.; Fearn, R. L.
1983-01-01
A method is proposed to combine a numerical description of a jet in a crossflow with a lifting surface panel code to calculate the jet/aerodynamic-surface interference effects on a V/STOL aircraft. An iterative technique is suggested that starts with a model for the properties of a jet/flat plate configuration and modifies these properties based on the flow field calculated for the configuration of interest. The method would estimate the pressures, forces, and moments on an aircraft out of ground effect. A first-order approximation to the method suggested is developed and applied to two simple configurations. The first-order approximation is a noniterative procedure which does not allow for interactions between multiple jets in a crossflow and also does not account for the influence of lifting surfaces on the jet properties. The jet/flat plate model utilized in the examples presented is restricted to a uniform round jet injected perpendicularly into a uniform crossflow for a range of jet-to-crossflow velocity ratios from three to ten.
Exploring Trilingual Code-Switching: The Case of "Hokaglish"
ERIC Educational Resources Information Center
Gonzales, Wilkinson Daniel Wong
2016-01-01
This paper presents findings of an initial study on a trilingual code-switching (CS) phenomenon called "Hokaglish" in Binondo, Manila, The Philippines. Beginning with descriptions of multiculturalism and multilingualism in the Philippines, the discussion eventually leads to the description and survey of the code-switching phenomenon…
A Combinatorial Geometry Computer Description of the XR311 Vehicle
1978-04-01
cards or magnetic tape. The shot line output of the GRID subroutine of the GIFT code is also stored on magnetic tape for future vulnerability ... descriptions as processed by the Geometric Information For Targets (GIFT) computer code. This report documents the COM-GEOM target description for all ... 72, March 1974. L.W. Bains and M.J. Reisinger, "The GIFT Code User Manual, Vol. 1, Introduction and Input Requirements," Ballistic Research
State Dependency of Chemosensory Coding in the Gustatory Thalamus (VPMpc) of Alert Rats
Liu, Haixin
2015-01-01
The parvicellular portion of the ventroposteromedial nucleus (VPMpc) is the part of the thalamus that processes gustatory information. Anatomical evidence shows that the VPMpc receives ascending gustatory inputs from the parabrachial nucleus (PbN) in the brainstem and sends projections to the gustatory cortex (GC). Although taste processing in PbN and GC has been the subject of intense investigation in behaving rodents, much less is known on how VPMpc neurons encode gustatory information. Here we present results from single-unit recordings in the VPMpc of alert rats receiving multiple tastants. Thalamic neurons respond to taste with time-varying modulations of firing rates, consistent with those observed in GC and PbN. These responses encode taste quality as well as palatability. Comparing responses to tastants either passively delivered, or self-administered after a cue, unveiled the effects of general expectation on taste processing in VPMpc. General expectation led to an improvement of taste coding by modulating response dynamics, and single neuron ability to encode multiple tastants. Our results demonstrate that the time course of taste coding as well as single neurons' ability to encode for multiple qualities are not fixed but rather can be altered by the state of the animal. Together, the data presented here provide the first description that taste coding in VPMpc is dynamic and state-dependent. SIGNIFICANCE STATEMENT Over the past years, a great deal of attention has been devoted to understanding taste coding in the brainstem and cortex of alert rodents. Thanks to this research, we now know that taste coding is dynamic, distributed, and context-dependent. Alas, virtually nothing is known on how the gustatory thalamus (VPMpc) processes gustatory information in behaving rats. This manuscript investigates taste processing in the VPMpc of behaving rats. 
Our results show that thalamic neurons encode taste and palatability with time-varying patterns of activity and that thalamic coding of taste is modulated by general expectation. Our data will appeal not only to researchers interested in taste, but also to a broader audience of sensory and systems neuroscientists interested in the thalamocortical system. PMID:26609147
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) Computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas, and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle.
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
Animations Need Narrations: An Experimental Test of a Dual-Coding Hypothesis.
ERIC Educational Resources Information Center
Mayer, Richard E.; Anderson, Richard B.
1991-01-01
In two experiments, 102 mechanically naive college students viewed an animation on bicycle tire pump operation with a verbal description before or during the animation or without description. Improved performance of those receiving description during the animation supports a dual-coding hypothesis of connections between visual and verbal stimuli.…
A framework for streamlining research workflow in neuroscience and psychology
Kubilius, Jonas
2014-01-01
Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to rely on ad hoc workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology community, and introduce psychopy_ext, a unifying framework that seamlessly integrates popular experiment-building, analysis, and manuscript-preparation tools by choosing reasonable defaults and implementing relatively rigid workflow patterns. This structure allows for automation of multiple tasks, such as generating user interfaces, unit testing, control analyses of stimuli, single-command access to descriptive statistics, and publication-quality plotting. Taken together, psychopy_ext opens an exciting possibility for faster, more robust code development and collaboration among researchers. PMID:24478691
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R.
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS, while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
The language parallel Pascal and other aspects of the massively parallel processor
NASA Technical Reports Server (NTRS)
Reeves, A. P.; Bruner, J. D.
1982-01-01
A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.
MuSim, a Graphical User Interface for Multiple Simulation Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland
2016-06-01
MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simakov, Andrei Nikolaevich; Molvig, Kim
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three-ion-species plasma made of deuterium, tritium, and gold. We find that these results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
Kerr, Catherine E.; Josyula, Krishnapriya; Littenberg, Ronnie
2011-01-01
Mindfulness-based stress reduction (MBSR) is an 8-week training that is designed to teach participants mindful awareness of the present moment. In randomized clinical trials (RCTs), MBSR has demonstrated efficacy in various conditions, including reducing chronic pain-related distress and improving quality of life in healthy individuals. There have, however, been no qualitative studies investigating participants’ descriptions of changes experienced over multiple time-points during the course of the program. This qualitative study of an MBSR cohort (N=8 healthy individuals) within a larger RCT examined participants’ daily diary descriptions of their home-practice experiences. The study used a two-part method, combining grounded theory with a closed-ended coding approach. The grounded theory analysis revealed that during the trial, all participants, to varying degrees, described moments of distress related to practice; at the end of the course, all participants who completed the training demonstrated greater detail and clarity in their descriptions, improved affect, and the emergence of an observing self. The closed-ended coding, carried out to shed light on the development of an observing self, revealed that the emergence of an observing self was not related to the valence of participants’ experiential descriptions: even participants whose diaries contained predominantly negative characterizations of their experience throughout the trial were able, by the end of the trial, to demonstrate an observing, witnessing attitude towards their own distress. In conclusion, progress in MBSR may rely less on the valence of participants’ experiences and more on the way participants describe and relate to their own inner experience. PMID:21226129
NASA Astrophysics Data System (ADS)
Buzulukova, Natalia; Fok, Mei-Ching; Glocer, Alex; Moore, Thomas E.
2013-04-01
We report studies of the storm time ring current and its influence on the radiation belts, the plasmasphere, and global magnetospheric dynamics. The near-Earth space environment is described by multiscale physics that reflects a variety of processes and conditions that occur in magnetospheric plasma. A successful description of such a plasma requires a complex solution that allows multiple physics domains to be described using multiple physical models. A key population of the inner magnetosphere is the ring current plasma. Ring current dynamics affects magnetic and electric fields in the entire magnetosphere, the distribution of cold ionospheric plasma (the plasmasphere), and radiation belt particles. To study the electrodynamics of the inner magnetosphere, we present an MHD model (BATSRUS code) coupled with an ionospheric electric field solver and with a ring current-radiation belt model (CIMI code). The model will be used as a tool to reveal details of the coupling between different regions of the Earth's magnetosphere. Model validation will also be presented, based on comparison with data from the THEMIS, POLAR, GOES, and TWINS missions. INVITED TALK
NASA Astrophysics Data System (ADS)
Konnik, Mikhail V.; Welsh, James
2012-09-01
Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes the code itself difficult to support. Inadequate documentation of astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and the mathematical descriptions implemented in the software. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, they are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the comments in MATLAB M-files into C-like syntax, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for deploying the framework. Examples of code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
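The comment-translation step described above can be sketched in a few lines. This is a minimal illustration of the idea only, not the framework's actual Perl filter; the function name and the `///` Doxygen markup convention are assumptions.

```python
import re

def matlab_comments_to_doxygen(src: str) -> str:
    """Rewrite MATLAB '%' comment lines as C-style '///' Doxygen lines."""
    out = []
    for line in src.splitlines():
        m = re.match(r"^(\s*)%+\s?(.*)$", line)
        if m:  # a comment line: keep indentation, swap the marker
            out.append(f"{m.group(1)}/// {m.group(2)}")
        else:  # code lines pass through untouched
            out.append(line)
    return "\n".join(out)
```

Doxygen can then be pointed at the filtered output (for instance via an input filter) to build and update the documentation.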
NASA Technical Reports Server (NTRS)
Maskew, B.
1979-01-01
The description of the modified code includes details of a doublet subpanel technique in which panels that are close to a velocity calculation point are replaced by a subpanel set. This treatment gives the effect of a higher panel density without increasing the number of unknowns. In particular, the technique removes the close approach problem of the earlier singularity model in which distortions occur in the detailed pressure calculation near panel corners. Removal of this problem allowed a complete wake relaxation and roll-up iterative procedure to be installed in the code. The geometry package developed for the new technique and also for the more general configurations is based on a multiple patch scheme. Each patch has a regular array of panels, but arbitrary relationships are allowed between neighboring panels at the edges of adjacent patches. This provides great versatility for treating general configurations.
Formally specifying the logic of an automatic guidance controller
NASA Technical Reports Server (NTRS)
Guaspari, David
1990-01-01
The following topics are covered in viewgraph form: (1) the Penelope Project; (2) the logic of an experimental automatic guidance control system for a 737; (3) Larch/Ada specification; (4) some failures of informal description; (5) description of mode changes caused by switches; (6) intuitive description of window status (chosen vs. current); (7) design of the code; (8) and specifying the code.
COSMOS: Python library for massively parallel workflows
Gafni, Erik; Luquette, Lovelace J.; Lancaster, Alex K.; Hawkins, Jared B.; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P.; Tonellato, Peter J.
2014-01-01
Summary: Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Availability and implementation: Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. Contact: dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24982428
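COSMOS's own API is not reproduced here, but the core idea of a formally described pipeline, a DAG of jobs executed in dependency order, can be sketched with the standard library. The stage names below are hypothetical, and the scheduling is simplified to a topological sort (requires Python 3.9+ for `graphlib`).

```python
from graphlib import TopologicalSorter

# Hypothetical NGS pipeline: each stage maps to the set of stages it
# depends on (its predecessors in the DAG).
pipeline = {
    "align": {"fastqc"},
    "sort": {"align"},
    "call_variants": {"sort"},
    "annotate": {"call_variants"},
}

def run_order(dag):
    """Return one valid execution order for the pipeline DAG."""
    return list(TopologicalSorter(dag).static_order())
```

A real workflow manager would additionally partition each stage into jobs, submit them to a queuing system, and track their progress; the dependency ordering above is the part that a "formal description of pipelines" makes possible.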
Automatic NEPHIS Coding of Descriptive Titles for Permuted Index Generation.
ERIC Educational Resources Information Center
Craven, Timothy C.
1982-01-01
Describes a system for the automatic coding of most descriptive titles which generates Nested Phrase Indexing System (NEPHIS) input strings of sufficient quality for permuted index production. A series of examples and an 11-item reference list accompany the text. (JL)
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
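As an illustration of the probabilistic-analysis idea (not the PSAM code itself), a buckling reliability check can be sketched as a Monte Carlo estimate of the probability that a random load exceeds a random critical buckling load. The normal distributions, independence assumption, and parameter values are all assumptions made for the sketch.

```python
import math
import random

def buckling_failure_probability(mu_cap, sd_cap, mu_load, sd_load,
                                 n=200_000, seed=1):
    """Monte Carlo estimate of P(load > buckling capacity), assuming
    independent normal capacity and load (a toy model, not PSAM)."""
    rng = random.Random(seed)
    fails = sum(
        rng.gauss(mu_load, sd_load) > rng.gauss(mu_cap, sd_cap)
        for _ in range(n)
    )
    return fails / n

def analytic_failure_probability(mu_cap, sd_cap, mu_load, sd_load):
    """Closed-form check for the normal-normal case via the
    reliability index beta."""
    beta = (mu_cap - mu_load) / math.hypot(sd_cap, sd_load)
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

For normal inputs the Monte Carlo estimate can be verified against the closed form; for the nonlinear, correlated cases treated in the program, sampling-based methods are the practical route.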
Relapse and Craving in Alcohol-Dependent Individuals: A Comparison of Self-Reported Determinants.
Snelleman, Michelle; Schoenmakers, Tim M; van de Mheen, Dike
2018-06-07
Negative affective states and alcohol-related stimuli increase the risk of relapse in alcohol dependence. In research and in clinical practice, craving is often used as another important indicator of relapse, but this lacks a firm empirical foundation. The goal of the present study is to explore and compare determinants of relapse and craving, using Marlatt's (1996) taxonomy of high-risk situations as a template. We conducted semi-structured interviews with 20 alcohol-dependent patients about their most recent relapse and craving episodes. Interview transcripts were carefully reviewed for their thematic content, and codes capturing the thematic content were formulated. In total, we formulated 42 relapse-related codes and 33 craving-related codes. Descriptions of craving episodes revealed that these episodes vary in frequency and intensity. The presence of alcohol-related stimuli (n = 11) and experiencing a negative emotional state (n = 11) were frequently occurring determinants of craving episodes. Both negative emotional states (n = 17) and testing personal control (n = 11) were viewed as important determinants of relapse. Craving was seldom mentioned as a determinant of relapse. Additionally, participants reported multiple determinants preceding a relapse, whereas craving episodes were preceded by only one determinant. Patient reports do not support the claim that craving by itself is an important proximal determinant of relapse. In addition, multiple determinants were present before a relapse. Therefore, future research should focus on the complexity of different determinants.
Reactive transport codes for subsurface environmental simulation
Steefel, C. I.; Appelo, C. A. J.; Arora, B.; ...
2014-09-26
A general description of the mathematical and numerical formulations used in modern numerical reactive transport codes relevant for subsurface environmental simulations is presented. The formulations are followed by short descriptions of commonly used and available subsurface simulators that consider continuum representations of flow, transport, and reactions in porous media. These formulations are applicable to most of the subsurface environmental benchmark problems included in this special issue. The list of codes described briefly here includes PHREEQC, HPx, PHT3D, OpenGeoSys (OGS), HYTEC, ORCHESTRA, TOUGHREACT, eSTOMP, HYDROGEOCHEM, CrunchFlow, MIN3P, and PFLOTRAN. The descriptions include a high-level list of capabilities for each of the codes, along with a selective list of applications that highlight their capabilities and historical development.
Operational manual for two-dimensional transonic code TSFOIL
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1978-01-01
This code solves the two-dimensional, transonic, small-disturbance equations for flow past lifting airfoils in both free air and various wind-tunnel environments by using a variant of the finite-difference method. A description of the theoretical and numerical basis of the code is provided, together with complete operating instructions and sample cases for the general user. In addition, a programmer's manual is also presented to assist the user interested in modifying the code. Included in the programmer's manual are a dictionary of subroutine variables in common and a detailed description of each subroutine.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values in restricting the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially via the given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which has proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. 
Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures from the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a parameter subset, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is therefore recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
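The Sobol' analysis used in the study can be illustrated with a pick-and-freeze (Saltelli-type) estimator of first-order indices. This is a generic sketch on a toy linear model, not the authors' Noah-MP setup; function and variable names are chosen for illustration.

```python
import random

def sobol_first_order(f, k, n=20_000, seed=0):
    """Estimate first-order Sobol' indices of f over k uniform(0,1)
    inputs with the Saltelli pick-and-freeze estimator:
        S_i = E[f(B) * (f(A_B^i) - f(A))] / Var[f]
    where A_B^i takes column i from sample matrix B and the rest from A.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(k):
        # Freeze all columns from A except column i, taken from B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        num = sum(yb * (yab - ya) for yb, yab, ya in zip(fB, fABi, fA)) / n
        indices.append(num / var)
    return indices
```

For the linear test model 3*x1 + x2 with uniform inputs, the true first-order indices are 0.9 and 0.1, which the estimator recovers to within sampling noise.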
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.
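One ingredient of the sectional particle dynamics, coagulation, can be illustrated in zero dimensions. The sketch below integrates the monodisperse Smoluchowski coagulation equation dN/dt = -0.5*k*N^2 with explicit Euler and compares it against the analytic solution; the rate constant and initial number density are arbitrary assumptions, and this is not CoFlame's sectional model.

```python
def coagulation_decay(n0, k, t_end, dt=1e-4):
    """Explicit-Euler integration of monodisperse coagulation,
    dN/dt = -0.5 * k * N**2 (zero-dimensional toy)."""
    n, t = n0, 0.0
    while t < t_end:
        n += -0.5 * k * n * n * dt
        t += dt
    return n

def coagulation_exact(n0, k, t):
    """Analytic solution of the same equation."""
    return n0 / (1.0 + 0.5 * k * n0 * t)
```

A sectional model divides the particle size range into bins and solves a coupled equation of this kind per section, with kernels for coagulation, growth, oxidation, and fragmentation moving mass between sections.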
Real-time operating system for selected Intel processors
NASA Technical Reports Server (NTRS)
Pool, W. R.
1980-01-01
The rationale for system development is given along with reasons for not using vendor supplied operating systems. Although many system design and performance goals were dictated by problems with vendor supplied systems, other goals surfaced as a result of a design for a custom system able to span multiple projects. System development and management problems and areas that required redesign or major code changes for system implementation are examined as well as the relative successes of the initial projects. A generic description of the actual project is provided and the ongoing support requirements and future plans are discussed.
A novel Monte Carlo algorithm for simulating crystals with McStas
NASA Astrophysics Data System (ADS)
Alianelli, L.; Sánchez del Río, M.; Felici, R.; Andersen, K. H.; Farhi, E.
2004-07-01
We developed an original Monte Carlo algorithm for the simulation of Bragg diffraction by mosaic, bent and gradient crystals. It has practical applications, as it can be used for simulating imperfect crystals (monochromators, analyzers and perhaps samples) in neutron ray-tracing packages like McStas. The code we describe here provides a detailed description of the particle interaction with the microscopic homogeneous regions composing the crystal; it can therefore also be used for the calculation of quantities of conceptual interest, such as multiple scattering, or for the interpretation of experiments aiming at characterizing crystals, such as diffraction topographs.
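Two building blocks of such a simulation, the Bragg condition and the sampling of mosaic-block misorientations from a Gaussian mosaic distribution, can be sketched as follows. This is a schematic illustration under simplified assumptions, not the McStas component.

```python
import math
import random

def bragg_angle(wavelength, d_spacing, order=1):
    """Bragg angle theta (radians) from n * lambda = 2 * d * sin(theta)."""
    return math.asin(order * wavelength / (2.0 * d_spacing))

def sample_mosaic_tilt(mosaicity_fwhm, rng):
    """Sample one mosaic-block misorientation (radians) from a Gaussian
    mosaic distribution given as FWHM; a sketch of the block-sampling
    idea only."""
    sigma = mosaicity_fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return rng.gauss(0.0, sigma)
```

In a ray-tracing step, each incident neutron would see a block tilted by such a sample, and reflection occurs when the tilted lattice satisfies the Bragg condition within the reflectivity width.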
Multiple Trellis Coded Modulation (MTCM): An MSAT-X report
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.
1986-01-01
Conventional trellis coding outputs one channel symbol per trellis branch. The notion of multiple trellis coding is introduced, wherein more than one channel symbol per trellis branch is transmitted. It is shown that the combination of multiple trellis coding with M-ary modulation yields a performance gain with a symmetric signal set comparable to that previously achieved only with signal constellation asymmetry. The advantage of multiple trellis coding over the conventional trellis coded asymmetric modulation technique is that the potential for code catastrophe associated with the latter has been eliminated at no additional cost in complexity (as measured by the number of states in the trellis diagram).
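The defining feature, more than one channel symbol per trellis branch, can be illustrated with a toy two-state encoder that emits two 4-PSK symbol indices per input bit. The branch table is entirely hypothetical, chosen only to show the mechanics, and is not the code from this report.

```python
# Toy multiple trellis code: 2 states, 1 input bit per step, and two
# 4-PSK symbol indices emitted per trellis branch (multiplicity 2).
BRANCHES = {
    # (state, input_bit): (next_state, (symbol_1, symbol_2))
    (0, 0): (0, (0, 0)),
    (0, 1): (1, (1, 3)),
    (1, 0): (0, (2, 2)),
    (1, 1): (1, (3, 1)),
}

def mtcm_encode(bits):
    """Encode a bit sequence; returns two 4-PSK symbol indices per bit."""
    state, out = 0, []
    for b in bits:
        state, syms = BRANCHES[(state, b)]
        out.extend(syms)
    return out
```

A conventional trellis code would emit a single symbol per branch; here each branch label is a symbol pair, which is what gives the code designer the extra distance between parallel paths.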
Quality of outpatient clinical notes: a stakeholder definition derived through qualitative research.
Hanson, Janice L; Stephens, Mark B; Pangaro, Louis N; Gimbel, Ronald W
2012-11-19
There are no empirically-grounded criteria or tools to define or benchmark the quality of outpatient clinical documentation. Outpatient clinical notes document care, communicate treatment plans and support patient safety, medical education, medico-legal investigations and reimbursement. Accurately describing and assessing quality of clinical documentation is a necessary improvement in an increasingly team-based healthcare delivery system. In this paper we describe the quality of outpatient clinical notes from the perspective of multiple stakeholders. Using purposeful sampling for maximum diversity, we conducted focus groups and individual interviews with clinicians, nursing and ancillary staff, patients, and healthcare administrators at six federal health care facilities between 2009 and 2011. All sessions were audio-recorded, transcribed and qualitatively analyzed using open, axial and selective coding. The 163 participants included 61 clinicians, 52 nurse/ancillary staff, 31 patients and 19 administrative staff. Three organizing themes emerged: 1) characteristics of quality in clinical notes, 2) desired elements within the clinical notes and 3) system supports to improve the quality of clinical notes. We identified 11 codes to describe characteristics of clinical notes, 20 codes to describe desired elements in quality clinical notes and 11 codes to describe clinical system elements that support quality when writing clinical notes. While there was substantial overlap between the aspects of quality described by the four stakeholder groups, only clinicians and administrators identified ease of translation into billing codes as an important characteristic of a quality note. Only patients rated prioritization of their medical problems as an aspect of quality. Nurses included care and education delivered to the patient, information added by the patient, interdisciplinary information, and infection alerts as important content. 
Perspectives of these four stakeholder groups provide a comprehensive description of quality in outpatient clinical documentation. The resulting description of characteristics and content necessary for quality notes provides a research-based foundation for assessing the quality of clinical documentation in outpatient health care settings.
DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval
2012-11-01
description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient’s admission or discharge status [1, 5]; treating negation (e.g. negative test results or...codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes
NASA Technical Reports Server (NTRS)
Hartle, M.; McKnight, R. L.
2000-01-01
This manual is a combination of a user manual, theory manual, and programmer manual. The reader is assumed to have some previous exposure to the finite element method. This manual is written with the idea that the CSTEM (Coupled Structural Thermal Electromagnetic-Computer Code) user needs to have a basic understanding of what the code is actually doing in order to properly use the code. For that reason, the underlying theory and methods used in the code are described to a basic level of detail. The manual gives an overview of the CSTEM code: how the code came into existence, a basic description of what the code does, and the order in which it happens (a flowchart). Appendices provide a listing and very brief description of every file used by the CSTEM code, including the type of file it is, what routine regularly accesses the file, and what routine opens the file, as well as special features included in CSTEM.
Description and Simulation of a Fast Packet Switch Architecture for Communication Satellites
NASA Technical Reports Server (NTRS)
Quintana, Jorge A.; Lizanich, Paul J.
1995-01-01
The NASA Lewis Research Center has been developing the architecture for a multichannel communications signal processing satellite (MCSPS) as part of a flexible, low-cost meshed-VSAT (very small aperture terminal) network. The MCSPS architecture is based on a multifrequency, time-division-multiple-access (MF-TDMA) uplink and a time-division multiplex (TDM) downlink. There are eight uplink MF-TDMA beams, and eight downlink TDM beams, with eight downlink dwells per beam. The information-switching processor, which decodes, stores, and transmits each packet of user data to the appropriate downlink dwell onboard the satellite, has been fully described by using VHSIC (Very High Speed Integrated-Circuit) Hardware Description Language (VHDL). This VHDL code, which was developed in-house to simulate the information switching processor, showed that the architecture is both feasible and viable. This paper describes a shared-memory-per-beam architecture, its VHDL implementation, and the simulation efforts.
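The routing idea of the information-switching processor, queueing each uplink packet for its destination downlink beam and dwell, can be sketched in software. This is a behavioral toy model, not the VHDL design; the class and method names are invented.

```python
from collections import defaultdict, deque

class SharedMemorySwitch:
    """Toy shared-memory packet switch: packets carry a (beam, dwell)
    destination and are queued per downlink dwell until their TDM slot."""

    def __init__(self, beams=8, dwells=8):
        self.beams, self.dwells = beams, dwells
        self.queues = defaultdict(deque)

    def enqueue(self, packet, beam, dwell):
        """Store an uplink packet for its destination downlink dwell."""
        if not (0 <= beam < self.beams and 0 <= dwell < self.dwells):
            raise ValueError("invalid destination")
        self.queues[(beam, dwell)].append(packet)

    def transmit(self, beam, dwell):
        """Pop the next packet for a downlink dwell, or None if idle."""
        q = self.queues[(beam, dwell)]
        return q.popleft() if q else None
```

The hardware architecture additionally partitions the memory per beam and services the eight dwells of each downlink beam in a fixed TDM schedule; the FIFO-per-dwell behavior above is the part the queues model.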
Development and application of CATIA-GDML geometry builder
NASA Astrophysics Data System (ADS)
Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Schetinin, V.
2014-06-01
Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. The paper presents an update on the functionality and application practice of the CATIA-GDML geometry builder, first introduced at CHEP2010. This set of CATIAv5 tools has been developed for building an MC-optimized GEANT4/ROOT compatible geometry based on an existing CAD model. The model can be exported via Geometry Description Markup Language (GDML). The builder also allows import and visualization of GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning and quality checks have been implemented. Several use cases are discussed.
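The export side of such a builder comes down to emitting GDML markup from a geometry model. The sketch below generates a schematic GDML-like fragment for a box solid using the standard library; it illustrates the markup-generation idea only, and a real GDML file also needs define/materials/structure/setup sections conforming to the GDML schema.

```python
import xml.etree.ElementTree as ET

def make_box_gdml(name, x, y, z):
    """Emit a schematic GDML-like snippet containing one box solid.

    Illustrative only: element and attribute names follow the common
    GDML conventions, but the fragment is not a complete, valid file.
    """
    gdml = ET.Element("gdml")
    solids = ET.SubElement(gdml, "solids")
    ET.SubElement(solids, "box", name=name, lunit="mm",
                  x=str(x), y=str(y), z=str(z))
    return ET.tostring(gdml, encoding="unicode")
```

A CAD-side exporter walks the part specification tree and emits one such element per solid, plus logical/physical volume entries that reference the solids by name.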
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2012 CFR
2012-10-01
... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2014 CFR
2014-10-01
... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2013 CFR
2013-10-01
... following: Alpha gear code NMFS logbooks Electronic check-in/ check-out Use numeric code to complete the following: Numeric gear code IERS eLandings ADF&G COAR NMFS AND ADF&G GEAR CODES Hook-and-line HAL X X 61 X...
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; and ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independently of the HR one, but each can also help the other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also offers comparable decoded quality, lower startup delay, and a lower probability of pause relative to a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses, and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames from the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the reconstruction performed at the upstream nodes.
An open-access CMIP5 pattern library for temperature and precipitation: description and methodology
NASA Astrophysics Data System (ADS)
Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben
2017-05-01
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess the performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output were smaller for patterns generated by the linear regression method than for those generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that the pattern generation methodologies were able to approximate the forced signal of change to within 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
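The two pattern-generation methods can be sketched as follows. The function names and the synthetic test data are illustrative, not the library's actual generation code:

```python
import numpy as np

def delta_pattern(local, tglobal, base, future):
    """Delta method: epoch-mean local change divided by epoch-mean
    global warming. `local` is (time, cells); base/future are slices."""
    dT = tglobal[future].mean() - tglobal[base].mean()
    return (local[future].mean(0) - local[base].mean(0)) / dT

def regression_pattern(local, tglobal):
    """Least squares method: per-cell slope of local response regressed
    on global-mean temperature over the full scenario."""
    tg = tglobal - tglobal.mean()
    loc = local - local.mean(0)
    return tg @ loc / (tg @ tg)      # ordinary least-squares slope per cell
```

On noise-free synthetic data where each cell warms as a fixed multiple of the global mean, both methods recover the same pattern; they diverge on real model output with internal variability, which is the comparison the paper quantifies.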
1985-10-01
Key words: regions, Com-Geom region identification, GIFT, materials. The combinatorial geometry (Com-Geom) data are used as input to the Geometric Information for Targets (GIFT) computer code. This report documents the combinatorial geometry (Com-Geom) target description data, which is the input data for the GIFT code.
The SIFT hardware/software systems. Volume 2: Software listings
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1985-01-01
This document contains software listings of the SIFT operating system and application software. The software is coded for the most part in a variant of the Pascal language, Pascal*. Pascal* is a cross-compiler running on the VAX and Eclipse computers. The output of Pascal* is BDX-390 assembler code. When necessary, modules are written directly in BDX-390 assembler code. The listings in this document supplement the description of the SIFT system found in Volume 1 of this report, A Detailed Description.
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have uncorrelated fission emissions. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes, and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
Burstyn, Igor; Slutsky, Anton; Lee, Derrick G; Singer, Alison B; An, Yuan; Michael, Yvonne L
2014-05-01
Epidemiologists typically collect narrative descriptions of occupational histories because these are less prone to recall bias than self-reported exposures to specific hazards. However, the task of coding these narratives can be daunting and prohibitively time-consuming in some settings. The aim of this manuscript is to evaluate the performance of a computer algorithm that translates narrative descriptions of occupations into a standard classification of jobs (2010 Standard Occupational Classification) in an epidemiological context. The fundamental question we address is whether exposure assignment resulting from manual (presumed gold standard) coding of the narratives is materially different from that arising from the application of automated coding. We pursued our work through three motivating examples: assessment of physical demands in the Women's Health Initiative observational study, evaluation of predictors of exposure to coal tar pitch volatiles in the US Occupational Safety and Health Administration's (OSHA) Integrated Management Information System, and assessment of exposure to agents known to cause occupational asthma in a pregnancy cohort. In these diverse settings, we demonstrate that automated coding of occupations results in assignment of exposures that is in reasonable agreement with results that can be obtained through manual coding. The correlation between physical demand scores based on manual and automated job classification schemes was reasonable (r = 0.5). The agreement between predictive probability of exceeding OSHA's permissible exposure level for polycyclic aromatic hydrocarbons, using coal tar pitch volatiles as a surrogate, based on manual and automated coding of jobs was modest (Kendall rank correlation = 0.29).
In the case of binary assignment of exposure to asthmagens, we observed that fair to excellent agreement in classifications can be reached, depending on the presence of ambiguity in the assigned job classification (κ = 0.5-0.8). Thus, the success of automated coding appears to depend on the setting and the type of exposure being assessed. Our overall recommendation is that automated translation of short narrative descriptions of jobs for exposure assessment is feasible in some settings and essential for large cohorts, especially if combined with manual coding, both to assess the reliability of coding and to further refine the coding algorithm.
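The core idea of automated occupation coding can be sketched with a toy matcher. The mini-crosswalk, exemplar texts, and token-overlap scoring below are hypothetical stand-ins; the algorithm evaluated in the study is considerably more sophisticated:

```python
def jaccard(a, b):
    """Token-overlap similarity between two free-text strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

# Hypothetical mini-crosswalk; the real 2010 SOC has ~840 detailed codes.
SOC_EXAMPLES = {
    "29-1141": "registered nurse hospital patient care",
    "47-2061": "construction laborer building site",
    "15-1131": "computer programmer software code",
}

def auto_code(narrative, threshold=0.1):
    """Assign the SOC code whose exemplar text best overlaps the narrative;
    return None when no candidate clears the threshold (left uncoded)."""
    code, score = max(((c, jaccard(narrative, t))
                       for c, t in SOC_EXAMPLES.items()), key=lambda x: x[1])
    return code if score >= threshold else None
```

Leaving low-scoring narratives uncoded, rather than forcing a match, is one way an automated coder can flag the ambiguous cases that the abstract notes drive agreement down.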
Morse Monte Carlo Radiation Transport Code System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emmett, M.B.
1975-02-01
The report contains descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Moment Tensor Descriptions for Simulated Explosions of the Source Physics Experiment (SPE)
NASA Astrophysics Data System (ADS)
Yang, X.; Rougier, E.; Knight, E. E.; Patton, H. J.
2014-12-01
In this research we seek to understand damage mechanisms governing the behavior of geo-materials in the explosion source region, and the role they play in seismic-wave generation. Numerical modeling tools can be used to describe these mechanisms through the development and implementation of appropriate material models. Researchers at Los Alamos National Laboratory (LANL) have been working on a novel continuum-based, strain-rate-dependent viscoplastic fracture material model, AZ_Frac, in an effort to improve the description of these damage sources. AZ_Frac has the ability to describe continuum fracture processes and, at the same time, to handle pre-existing anisotropic material characteristics. The introduction of fractures within the material generates further anisotropic behavior that is also accounted for within the model. The material model has been calibrated to a granitic medium and has been applied in a number of modeling efforts under the SPE project. In our modeling, we use a 2D, axisymmetric layered earth model of the SPE site consisting of a weathered layer on top of a half-space. We couple the hydrodynamic simulation code with a seismic simulation code and propagate the signals to distances of up to 2 km. The signals are inverted for time-dependent moment tensors using a modified inversion scheme that accounts for multiple sources at different depths. The inversion scheme is evaluated for its resolving power to determine a centroid depth and a moment tensor description of the damage source. The capabilities of the inversion method to retrieve such information from waveforms recorded on three SPE tests conducted to date are also being assessed.
NASA Astrophysics Data System (ADS)
Kar, Somnath; Choudhury, Subikash; Muhuri, Sanjib; Ghosh, Premomoy
2017-01-01
Satisfactory description of data by hydrodynamics-motivated models, as has been reported recently by experimental collaborations at the LHC, confirms "collectivity" in high-multiplicity proton-proton (pp) collisions. Notwithstanding this, a detailed study of high-multiplicity pp data in other approaches or models is essential for better understanding of the specific phenomenon. In this study, the focus is on a pQCD-inspired multiparton interaction (MPI) model, including a color reconnection (CR) scheme as implemented in the Monte Carlo code PYTHIA8 tune 4C. The MPI with the color reconnection reproduces the dependence of the mean transverse momentum ⟨pT⟩ on the charged particle multiplicity Nch in pp collisions at the LHC, providing an alternate explanation of the signature of "hydrodynamic collectivity" in pp data. It is, therefore, worth exploring how this model responds to other related features of high-multiplicity pp events. This comparative study with recent experimental results demonstrates the limitations of the model in explaining some of the prominent features of the final-state charged particles up to the intermediate-pT (pT < 2.0 GeV/c) range in high-multiplicity pp events.
Sweeney, Angela; Greenwood, Kathryn E; Williams, Sally; Wykes, Til; Rose, Diana S
2013-12-01
Health research is frequently conducted in multi-disciplinary teams, with these teams increasingly including service user researchers. Whilst it is common for service user researchers to be involved in data collection--most typically interviewing other service users--it is less common for service user researchers to be involved in data analysis and interpretation. This means that a unique and significant perspective on the data is absent. This study aims to use an empirical report of a study on Cognitive Behavioural Therapy for psychosis (CBTp) to demonstrate the value of multiple coding in enabling service users voices to be heard in team-based qualitative data analysis. The CBTp study employed multiple coding to analyse service users' discussions of CBT for psychosis (CBTp) from the perspectives of a service user researcher, clinical researcher and psychology assistant. Multiple coding was selected to enable multiple perspectives to analyse and interpret data, to understand and explore differences and to build multi-disciplinary consensus. Multiple coding enabled the team to understand where our views were commensurate and incommensurate and to discuss and debate differences. Through the process of multiple coding, we were able to build strong consensus about the data from multiple perspectives, including that of the service user researcher. Multiple coding is an important method for understanding and exploring multiple perspectives on data and building team consensus. This can be contrasted with inter-rater reliability which is only appropriate in limited circumstances. We conclude that multiple coding is an appropriate and important means of hearing service users' voices in qualitative data analysis. © 2012 John Wiley & Sons Ltd.
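For contrast, the inter-rater reliability statistic that the authors argue is only appropriate in limited circumstances is typically computed as Cohen's kappa. This is a generic implementation for illustration, not code from the study:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' category labels:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa reduces a team's codings to a single agreement score, whereas multiple coding, as the abstract argues, treats disagreement itself as analytically informative.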
Metrics for comparing dynamic earthquake rupture simulations
Barall, Michael; Harris, Ruth A.
2014-01-01
Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
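One simple instance of such a quantitative metric is a normalized RMS misfit between two simulated waveforms (e.g., slip-rate records at matching fault locations). This is an illustrative choice of metric, not necessarily the one proposed in the paper:

```python
import numpy as np

def l2norm(x, dt):
    """Discrete L2 norm of a uniformly sampled time series."""
    return np.sqrt(dt * np.sum(x ** 2))

def misfit(a, b, dt=1.0):
    """Normalized RMS misfit between two waveforms: 0 for identical
    records, 1 when one record is identically zero."""
    return l2norm(a - b, dt) / (l2norm(a, dt) + l2norm(b, dt))
```

Normalizing by the combined signal energy lets misfits from codes run at different amplitudes or resolutions be compared on a common scale.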
A singularity free analytical solution of artificial satellite motion with drag
NASA Technical Reports Server (NTRS)
Scheifele, G.; Mueller, A. C.; Starke, S. E.
1977-01-01
The connection between the existing Delaunay-Similar and Poincare-Similar satellite theories in the true anomaly version is outlined for the J(2) perturbation and the new drag approach. An overall description of the concept of the approach is given, while the necessary expansions and the procedure to arrive at the computer program for the canonical forces are delineated. The procedure for the analytical integration of these developed equations is described. In addition, some numerical results are given. The computer program for the algebraic multiplication of the Fourier series, which creates the FORTRAN coding in an automatic manner, is described and documented.
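The algebraic multiplication of truncated Fourier series reduces to a convolution of their coefficient arrays. A minimal sketch in the complex-exponential form (the array layout and function name are illustrative; the actual program emitted FORTRAN for series in trigonometric form):

```python
import numpy as np

def fourier_product(c1, c2):
    """Coefficients of the product of two truncated complex Fourier
    series. Index k runs over -N..N, stored at array offset k + N."""
    n1, n2 = len(c1) // 2, len(c2) // 2
    n3 = n1 + n2                                # bandwidth of the product
    c3 = np.zeros(2 * n3 + 1, dtype=complex)
    for i, a in enumerate(c1):
        for j, b in enumerate(c2):
            # harmonic indices add under multiplication of exponentials
            c3[(i - n1) + (j - n2) + n3] += a * b
    return c3
```

For example, squaring cos(x) (coefficients 1/2 at k = ±1) yields 1/2 + (1/2)cos(2x), i.e., coefficients 1/2 at k = 0 and 1/4 at k = ±2.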
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
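As a sketch of the local-code layer only: a (7,4) Hamming component code with hard-decision single-error syndrome decoding, standing in for the soft MAP component decoder. A full GLDPC decoder would iterate such local decodings over the global bipartite graph; everything below is an illustrative assumption, not the paper's construction:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# representation of j+1, so the syndrome directly encodes the error position.
H = np.array([[int(b) for b in format(c, "03b")] for c in range(1, 8)]).T

def local_decode(word):
    """Correct any single bit error in a 7-bit word of the local code."""
    s = H @ word % 2
    pos = int("".join(str(int(bit)) for bit in s), 2)   # 0 means no error
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1                              # flip the flagged bit
    return word
```

In a GLDPC code, each global check node constrains its attached variable nodes to form a codeword of such a local code, rather than merely satisfying a single parity equation.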
Automated Coding Software: Development and Use to Enhance Anti-Fraud Activities*
Garvin, Jennifer H.; Watzlaf, Valerie; Moeini, Sohrab
2006-01-01
This descriptive research project identified characteristics of automated coding systems that have the potential to detect improper coding and to minimize improper or fraudulent coding practices in the setting of automated coding used with the electronic health record (EHR). Recommendations were also developed for software developers and users of coding products to maximize anti-fraud practices. PMID:17238546
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
Overview of the Graphical User Interface for the GERMcode (GCR Event-Based Risk Model)
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERMcode calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A), and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERMcode also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ions and nuclear secondaries are evaluated. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERMcode for application to thick target experiments.
The GERMcode provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and estimates of the basic physical and biological outputs of the designed experiments. We present an overview of the GERMcode GUI and provide training applications.
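The Poisson hit statistics mentioned above follow directly from the product of fluence and target area. A minimal sketch (parameter names are illustrative; units are assumed to be ions/μm² for fluence and μm² for area):

```python
from math import exp, factorial

def hit_probability(k, fluence, area):
    """Poisson probability of exactly k primary-ion traversals of a
    cellular target, given particle fluence and target cross-sectional
    area. The mean number of hits is fluence * area."""
    mean_hits = fluence * area
    return mean_hits ** k * exp(-mean_hits) / factorial(k)
```

At a fluence of 0.01 ions/μm² over a 100 μm² nucleus, the mean is one hit per cell, so roughly 37% of cells receive no hit at all, a key consideration when interpreting cell survival curves.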
Schroeder, Mary C; Chapman, Cole G; Nattinger, Matthew C; Halfdanarson, Thorvardur R; Abu-Hejleh, Taher; Tien, Yu-Yu; Brooks, John M
2016-07-18
An aging population, with its associated rise in cancer incidence and strain on the oncology workforce, will continue to motivate patients, healthcare providers and policy makers to better understand the existing and growing challenges of access to chemotherapy. Administrative data, and SEER-Medicare data in particular, have been used to assess patterns of healthcare utilization because of its rich information regarding patients, their treatments, and their providers. To create measures of geographic access to chemotherapy, patients and oncologists must first be identified. Others have noted that identifying chemotherapy providers from Medicare claims is not always straightforward, as providers may report multiple or incorrect specialties and/or practice in multiple locations. Although previous studies have found that specialty codes alone fail to identify all oncologists, none have assessed whether various methods of identifying chemotherapy providers and their locations affect estimates of geographic access to care. SEER-Medicare data was used to identify patients, physicians, and chemotherapy use in this population-based observational study. We compared two measures of geographic access to chemotherapy, local area density and distance to nearest provider, across two definitions of chemotherapy provider (identified by specialty codes or billing codes) and two definitions of chemotherapy service location (where chemotherapy services were proven to be or possibly available) using descriptive statistics. Access measures were mapped for three representative registries. In our sample, 57.2 % of physicians who submitted chemotherapy claims reported a specialty of hematology/oncology or medical oncology. These physicians were associated with 91.0 % of the chemotherapy claims. 
When providers were identified through billing codes instead of specialty codes, an additional 50.0 % of beneficiaries (from 23.8 % to 35.7 %) resided in the same ZIP code as a chemotherapy provider. Beneficiaries were also 1.3 times closer to a provider, in terms of driving time. Our access measures did not differ significantly across definitions of service location. Measures of geographic access to care were sensitive to definitions of chemotherapy providers; far more providers were identified through billing codes than specialty codes. They were not sensitive to definitions of service locations, as providers, regardless of how they are identified, generally provided chemotherapy at each of their practice locations.
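A minimal sketch of the distance-to-nearest-provider measure, using great-circle distance between point coordinates. The study's measure used driving time between ZIP-code locations; this simplified straight-line version is illustrative only:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearest_provider_km(beneficiary, providers):
    """Distance from a beneficiary location to the closest provider
    in a list of (lat, lon) practice locations."""
    return min(haversine_km(*beneficiary, *prov) for prov in providers)
```

Because the measure takes a minimum over provider locations, adding providers found via billing codes can only shrink it, which is consistent with the direction of the sensitivity the study reports.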
7 CFR 1485.13 - Application process and strategic plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... affiliated organizations; (D) A description of management and administrative capability; (E) A description of... code and the percentage of U.S. origin content by weight, exclusive of added water; (B) A description... and the percentage of U.S. origin content by weight, exclusive of added water; (C) A description of...
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSNs) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, the lower encoding rate for the LDPC code offers better error characteristics.
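An uncoded baseline for such link simulations can be sketched as a Monte Carlo BER estimate for BPSK over flat Nakagami-m fading (m = 1 reduces to Rayleigh). This toy omits the LDPC coding and MIMO cooperation studied in the paper; the function and its defaults are illustrative:

```python
import numpy as np

def ber_bpsk_nakagami(m=1.0, snr_db=10.0, nbits=200_000, seed=1):
    """Monte Carlo bit-error rate of BPSK over flat Nakagami-m fading
    with coherent detection and unit average channel power."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, nbits)
    symbols = 1 - 2 * bits                          # BPSK mapping: 0 -> +1, 1 -> -1
    gain = np.sqrt(rng.gamma(m, 1.0 / m, nbits))    # Nakagami-m amplitude, E[g^2] = 1
    noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), nbits)
    received = gain * symbols + noise
    detected = (received < 0).astype(int)           # sign detector (gain is positive)
    return np.mean(bits != detected)
```

At 10 dB SNR with m = 1 the estimate should land near the Rayleigh-fading theory value of about 2.3e-2, and it falls sharply as SNR increases.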
Construction of optimal resources for concatenated quantum protocols
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Briegel, H. J.; Dür, W.
2017-06-01
We consider the explicit construction of resource states for measurement-based quantum information processing. We concentrate on special-purpose resource states that are capable of performing a certain operation or task, where we consider unitary Clifford circuits as well as non-trace-preserving completely positive maps, more specifically probabilistic operations including Clifford operations and Pauli measurements. We concentrate on 1→m and m→1 operations, i.e., operations that map one input qubit to m output qubits or vice versa. Examples of such operations include encoding and decoding in quantum error correction, entanglement purification, or entanglement swapping. We provide a general framework to construct optimal resource states for complex tasks that are combinations of these elementary building blocks. All resource states contain only input and output qubits, and are hence of minimal size. We obtain a stabilizer description of the resulting resource states, which we also translate into a circuit pattern to experimentally generate these states. In particular, we derive recurrence relations at the level of stabilizers as a key analytical tool to generate explicit (graph) descriptions of families of resource states. This allows us to explicitly construct resource states for encoding, decoding, and syndrome readout for concatenated quantum error correction codes, code switchers, multiple rounds of entanglement purification, quantum repeaters, and combinations thereof (such as resource states for entanglement purification of encoded states).
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Alter, Stephen J.
1995-01-01
This document is a users' manual for a new three-dimensional structured multiple-block volume grid generator called 3DGRAPE/AL. It is a significantly improved version of the previously released and widely distributed programs 3DGRAPE and 3DMAGGS. It generates volume grids by iteratively solving the Poisson equations in three dimensions. The right-hand-side terms are designed so that user-specified grid cell heights and user-specified grid cell skewness near boundary surfaces result automatically, with little user intervention. The code is written in Fortran-77, and can be installed with or without a simple graphical user interface which allows the user to watch as the grid is generated. An introduction describing the improvements over the antecedent 3DGRAPE code is presented first. Then follows a chapter on the basic grid generator program itself, with comments on installing it. The input is then described in detail. After that is a description of the Graphical User Interface. Five example cases are shown next, with plots of the results. Following that is a chapter on two input filters which allow the use of input data generated elsewhere. Last is a treatment of the theory embodied in the code.
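The elliptic (Poisson-equation) grid generation that 3DGRAPE/AL performs in three dimensions can be illustrated with a minimal two-dimensional sketch; the relaxation below solves the homogeneous (Laplace) case with fixed boundaries, and the grid size and iteration count are illustrative assumptions:

```python
import numpy as np

# Minimal 2D analogue of elliptic grid generation: with a zero right-hand
# side, each interior grid point relaxes toward the average of its four
# neighbours (Laplace smoothing) while boundary points stay fixed.
# 3DGRAPE/AL adds forcing terms to this system to control cell height and
# skewness near boundaries; those are omitted here.
def relax_grid(x, y, iterations=200):
    """Jacobi-style relaxation of interior node coordinates."""
    for _ in range(iterations):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# A unit square whose interior has been perturbed relaxes back to a smooth grid.
n = 9
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
x[1:-1, 1:-1] += 0.05  # perturb interior nodes
x, y = relax_grid(x, y)
```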
The Development of the Ducted Fan Noise Propagation and Radiation Code CDUCT-LaRC
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Farassat, F.; Pope, D. Stuart; Vatsa, Veer
2003-01-01
The development of the ducted fan noise propagation and radiation code CDUCT-LaRC at NASA Langley Research Center is described. This code calculates the propagation and radiation of given acoustic modes ahead of the fan face or aft of the exhaust guide vanes in the inlet or exhaust ducts, respectively. This paper gives a description of the modules comprising CDUCT-LaRC. The grid generation module provides automatic creation of numerical grids for complex (non-axisymmetric) geometries that include single or multiple pylons. Files for performing automatic inviscid mean flow calculations are also generated within this module. The duct propagation is based on the parabolic approximation theory of R. P. Dougherty. This theory allows the handling of complex internal geometries and the ability to study the effect of non-uniform (i.e. circumferentially and axially segmented) liners. Finally, the duct radiation module is based on the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface. Refraction of sound through the shear layer between the external flow and bypass duct flow is included. Results for benchmark annular ducts, as well as other geometries with pylons, are presented and compared with available analytical data.
COGNATE: comparative gene annotation characterizer.
Wilbrandt, Jeanne; Misof, Bernhard; Niehuis, Oliver
2017-07-17
The comparison of gene and genome structures across species has the potential to reveal major trends of genome evolution. However, such a comparative approach is currently hampered by a lack of standardization (e.g., Elliott TA, Gregory TR, Philos Trans Royal Soc B: Biol Sci 370:20140331, 2015). For example, testing the hypothesis that the total amount of coding sequences is a reliable measure of potential proteome diversity (Wang M, Kurland CG, Caetano-Anollés G, PNAS 108:11954, 2011) requires the application of standardized definitions of coding sequence and genes to create both comparable and comprehensive data sets and corresponding summary statistics. However, such standard definitions either do not exist or are not consistently applied. These circumstances call for a standard at the descriptive level using a minimum of parameters, an undeviating use of standardized terms, and software that infers the required data under these strict definitions. The acquisition of a comprehensive, descriptive, and standardized set of parameters and summary statistics for genome publications and further analyses can thus greatly benefit from the availability of an easy-to-use standard tool. We developed a new open-source command-line tool, COGNATE (Comparative Gene Annotation Characterizer), which uses a given genome assembly and its annotation of protein-coding genes for a detailed description of the respective gene and genome structure parameters. Additionally, we revised the standard definitions of gene and genome structures and provide the definitions used by COGNATE as a working draft suggestion for further reference. Complete parameter lists and summary statistics are inferred using this set of definitions to allow downstream analyses and to provide an overview of the genome and gene repertoire characteristics.
COGNATE is written in Perl and freely available at the ZFMK homepage ( https://www.zfmk.de/en/COGNATE ) and on GitHub ( https://github.com/ZFMK/COGNATE ). The tool COGNATE allows comparing genome assemblies and structural elements on multiple levels (e.g., scaffold or contig sequence, gene). It clearly enhances comparability between analyses. Thus, COGNATE can provide the important standardization of both genome and gene structure parameter disclosure as well as data acquisition for future comparative analyses. With the establishment of comprehensive descriptive standards and the extensive availability of genomes, an encompassing database will become possible.
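A rough idea of the kind of standardized per-gene statistic COGNATE reports can be sketched as follows; this is not COGNATE's code, and the toy annotation and the length definition are assumptions made for illustration:

```python
# Illustrative sketch (not COGNATE's actual implementation) of a
# standardized summary statistic: summed coding-sequence length per gene,
# computed under one fixed, explicit length definition.
from statistics import median

# Hypothetical annotation: gene id -> list of (CDS start, CDS end),
# 1-based inclusive coordinates, as a stand-in for a parsed GFF file.
annotation = {
    "gene1": [(101, 200), (301, 450)],
    "gene2": [(10, 99)],
}

def cds_lengths(ann):
    """Summed CDS length per gene; inclusive coordinates give e - s + 1."""
    return {g: sum(e - s + 1 for s, e in parts) for g, parts in ann.items()}

lengths = cds_lengths(annotation)
print(lengths)                   # {'gene1': 250, 'gene2': 90}
print(median(lengths.values()))  # 170.0
```

The point of the sketch is the one COGNATE makes: comparability requires that every data set be summarized under the same explicit definition.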
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
Non-Coding RNA Analysis Using the Rfam Database.
Kalvari, Ioanna; Nawrocki, Eric P; Argasinska, Joanna; Quinones-Olvera, Natalia; Finn, Robert D; Bateman, Alex; Petrov, Anton I
2018-06-01
Rfam is a database of non-coding RNA families in which each family is represented by a multiple sequence alignment, a consensus secondary structure, and a covariance model. Using a combination of manual and literature-based curation and a custom software pipeline, Rfam converts descriptions of RNA families found in the scientific literature into computational models that can be used to annotate RNAs belonging to those families in any DNA or RNA sequence. Valuable research outputs that are often locked up in figures and supplementary information files are encapsulated in Rfam entries and made accessible through the Rfam Web site. The data produced by Rfam have a broad application, from genome annotation to providing training sets for algorithm development. This article gives an overview of how to search and navigate the Rfam Web site, and how to annotate sequences with RNA families. The Rfam database is freely available at http://rfam.org. © 2018 by John Wiley & Sons, Inc.
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-03
... description of comorbidity for chronic renal failure. In addition, we inadvertently omitted from Table 11 the comorbidity code ``V4511'' for chronic renal failure. These changes are not substantive changes to the... heading ``Diagnoses codes,'' for the renal failure, chronic diagnoses codes, replace code ``V451'' with...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non- informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...
The Social Interactive Coding System (SICS): An On-Line, Clinically Relevant Descriptive Tool.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
The Social Interactive Coding System (SICS) assesses the continuous verbal interactions of preschool children as a function of play areas, addressees, script codes, and play levels. This paper describes the 26 subjects and the setting involved in SICS development, coding definitions and procedures, training procedures, reliability, sample…
Hamming and Accumulator Codes Concatenated with MPSK or QAM
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel
2009-01-01
In a proposed coding-and-modulation scheme, a high-rate binary data stream would be processed as follows: 1. The input bit stream would be demultiplexed into multiple bit streams. 2. The multiple bit streams would be processed simultaneously into a high-rate outer Hamming code that would comprise multiple short constituent Hamming codes - a distinct constituent Hamming code for each stream. 3. The streams would be interleaved. The interleaver would have a block structure that would facilitate parallelization for high-speed decoding. 4. The interleaved streams would be further processed simultaneously into an inner two-state, rate-1 accumulator code that would comprise multiple constituent accumulator codes - a distinct accumulator code for each stream. 5. The resulting bit streams would be mapped into symbols to be transmitted by use of a higher-order modulation - for example, M-ary phase-shift keying (MPSK) or quadrature amplitude modulation (QAM). The novelty of the scheme lies in the concatenation of the multiple-constituent Hamming and accumulator codes and the corresponding parallel architectures of the encoder and decoder circuitry (see figure) needed to process the multiple bit streams simultaneously. As in the cases of other parallel-processing schemes, one advantage of this scheme is that the overall data rate could be much greater than the data rate of each encoder and decoder stream and, hence, the encoder and decoder could handle data at an overall rate beyond the capability of the individual encoder and decoder circuits.
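Steps 2 and 4 of the scheme can be sketched with a single-stream toy encoder; the Hamming(7,4) parity equations and bit ordering below are one conventional choice, not necessarily the constituent code used in the proposal:

```python
# Hedged sketch of the two constituent codes named in the scheme: a short
# Hamming(7,4) block encoder followed by a two-state, rate-1 accumulator
# (a running XOR). In the actual scheme each of the parallel streams would
# get its own instance of each encoder.
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword [d1..d4, p1, p2, p3]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

def accumulate(bits):
    """Rate-1 accumulator: each output bit is the XOR of all inputs so far."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

codeword = hamming74_encode([1, 0, 1, 1])
print(codeword)              # [1, 0, 1, 1, 0, 1, 0]
print(accumulate(codeword))  # [1, 1, 0, 1, 1, 0, 0]
```

The accumulator has only two states (the running XOR is 0 or 1), which is what keeps the inner decoder trellis simple.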
The kinetics of aerosol particle formation and removal in NPP severe accidents
NASA Astrophysics Data System (ADS)
Zatevakhin, Mikhail A.; Arefiev, Valentin K.; Semashko, Sergey E.; Dolganov, Rostislav A.
2016-06-01
Severe Nuclear Power Plant (NPP) accidents are accompanied by release of a massive amount of energy, radioactive products and hydrogen into the atmosphere of the NPP containment. A valid estimation of consequences of such accidents can only be carried out through the use of the integrated codes comprising a description of the basic processes which determine the consequences. A brief description of a coupled aerosol and thermal-hydraulic code to be used for the calculation of the aerosol kinetics within the NPP containment in case of a severe accident is given. The code comprises a KIN aerosol unit integrated into the KUPOL-M thermal-hydraulic code. Some features of aerosol behavior in severe NPP accidents are briefly described.
The kinetics of aerosol particle formation and removal in NPP severe accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zatevakhin, Mikhail A.; Arefiev, Valentin K.; Semashko, Sergey E.
2016-06-08
Severe Nuclear Power Plant (NPP) accidents are accompanied by release of a massive amount of energy, radioactive products and hydrogen into the atmosphere of the NPP containment. A valid estimation of consequences of such accidents can only be carried out through the use of the integrated codes comprising a description of the basic processes which determine the consequences. A brief description of a coupled aerosol and thermal-hydraulic code to be used for the calculation of the aerosol kinetics within the NPP containment in case of a severe accident is given. The code comprises a KIN aerosol unit integrated into the KUPOL-M thermal-hydraulic code. Some features of aerosol behavior in severe NPP accidents are briefly described.
Herrick, Cynthia J.; Yount, Byron W.; Eyler, Amy A.
2016-01-01
Objective Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of this study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. Design This was a retrospective cross-sectional analysis. Home environment variables were derived using employee zip code. Descriptive statistics were run on all individual and zip code level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Setting Data was collected from employee health fairs in a Midwestern health system 2009–2012. Subjects The dataset contains 25,227 unique individuals across four years of data. From this group, using an individual’s first entry into the database, 15,522 individuals had complete data for analysis. Results The prevalence of high diabetes risk in this population was 2.3%. There was significant variability in individual and zip code level variables across worksites. From the multivariable analysis, living in a zip code with higher percent poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Conclusions Our study underscores the important relationship between poverty, home neighborhood environment, and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health. PMID:26638995
Herrick, Cynthia J; Yount, Byron W; Eyler, Amy A
2016-08-01
Diabetes is a growing public health problem, and the environment in which people live and work may affect diabetes risk. The goal of the present study was to examine the association between multiple aspects of environment and diabetes risk in an employee population. This was a retrospective cross-sectional analysis. Home environment variables were derived using employees' zip code. Descriptive statistics were run on all individual- and zip-code-level variables, stratified by diabetes risk and worksite. A multivariable logistic regression analysis was then conducted to determine the strongest associations with diabetes risk. Data were collected from employee health fairs in a Midwestern health system, 2009-2012. The data set contains 25 227 unique individuals across four years of data. From this group, using an individual's first entry into the database, 15 522 individuals had complete data for analysis. The prevalence of high diabetes risk in this population was 2·3 %. There was significant variability in individual- and zip-code-level variables across worksites. From the multivariable analysis, living in a zip code with higher percentage of poverty and higher walk score was positively associated with high diabetes risk, while living in a zip code with higher supermarket density was associated with a reduction in high diabetes risk. Our study underscores the important relationship between poverty, home neighbourhood environment and diabetes risk, even in a relatively healthy employed population, and suggests a role for the employer in promoting health.
Description of the AILS Alerting Algorithm
NASA Technical Reports Server (NTRS)
Samanant, Paul; Jackson, Mike
2000-01-01
This document provides a complete description of the Airborne Information for Lateral Spacing (AILS) alerting algorithms. The purpose of AILS is to provide separation assurance between aircraft during simultaneous approaches to closely spaced parallel runways. AILS will allow independent approaches to be flown in such situations where dependent approaches were previously required (typically under Instrument Meteorological Conditions (IMC)). This is achieved by providing multiple levels of alerting for pairs of aircraft that are in parallel approach situations. This document's scope is comprehensive and covers everything from general overviews, definitions, and concepts down to algorithmic elements and equations. The entire algorithm is presented in complete and detailed pseudo-code format. This can be used by software programmers to program AILS into a software language. Additional supporting information is provided in the form of coordinate frame definitions, data requirements, and calling requirements, as well as all necessary pre-processing and post-processing requirements. This is important and required information for the implementation of AILS into an analysis, a simulation, or a real-time system.
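The flavor of such a pairwise alerting check can be sketched as follows; the straight-line projection, lookahead time, threshold, and single alert level are simplifying assumptions for illustration, not the actual AILS algorithm, which provides multiple alert levels:

```python
# Hedged sketch of a pairwise lateral-spacing check: project both aircraft
# forward along their current lateral velocities and alert when the
# predicted lateral separation falls below a threshold. All numbers here
# are illustrative, not AILS parameters.
def lateral_alert(y_own, vy_own, y_intruder, vy_intruder,
                  lookahead_s=30.0, threshold_m=600.0):
    """True if predicted lateral separation drops below threshold_m
    at any whole second within the lookahead window."""
    for t in range(int(lookahead_s) + 1):
        separation = abs((y_own + vy_own * t) - (y_intruder + vy_intruder * t))
        if separation < threshold_m:
            return True
    return False

# Parallel approaches 1000 m apart; intruder drifting toward ownship at 20 m/s.
print(lateral_alert(0.0, 0.0, 1000.0, -20.0))  # True: blunder predicted
print(lateral_alert(0.0, 0.0, 1000.0, 0.0))    # False: tracks stay parallel
```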
User's manual for the time-dependent INERTIA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, A.W.; Bennett, R.B.
1985-01-01
The time-dependent INERTIA code is described. This code models the effects of neutral beam momentum input in tokamaks as predicted by the time-dependent formulation of the Stacey-Sigmar formalism. The operation and architecture of the code are described, as are the supplementary plotting and impurity line radiation routines. A short description of the steady-state version of the INERTIA code is also provided.
REGIONAL-SCALE ATMOSPHERIC MERCURY MODELING
This PowerPoint presentation gives a short synopsis of the state of the science of atmospheric mercury modeling, including a description of recent publications of model codes by EPA, a description of a recent mercury model intercomparison study, and a description of a synthesis p...
A methodological pilot: parenting among women in substance abuse treatment.
Lewin, Linda; Farkas, Kathleen; Niazi, Maryam
2014-01-01
Mothers who abuse substances are likely to have insecure emotional attachment with their children, placing their children at risk for social-emotional and psychiatric conditions. Sobriety does not inevitably improve parenting. We tested recruitment methods, audiovisual (AV) recording procedures, the protocol for identifying child abuse risk, the coding of mother-child interactions, and retention of the sample for repeated measures as the first phase in examining mother-child relational quality of women in substance abuse treatment. This innovative study involved AV recordings to capture the in-vivo mother-child interactional behaviors that were later coded and analyzed for mean scores on the 64-item Parent-Child Relational Quality Assessment. Repeated measurement was planned during treatment and two months after discharge from treatment. The pilot involved a small sample (n = 11) of mother-child (<6 years) dyads. Highest and lowest ratings of interaction behaviors were identified. Mothers showed less enthusiasm and creativity but matched their child's emotional state. The children showed appropriate motor skill items and attachment behaviors. The dyad coding showed less mutual enjoyment between the mother and child. Eight of the participants could not be located for the second measurement despite multiple contact methods. AV recordings capture rich, descriptive information that can be coded for interactional quality analysis. Repeated measurement with this cohort was not feasible, suggesting the need for additional or more frequent contacts to maintain the sample.
Transport and equilibrium in field-reversed mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, J.K.
Two plasma models relevant to compact torus research have been developed to study transport and equilibrium in field-reversed mirrors. In the first model, for small Larmor radius and large collision frequency, the plasma is described as an adiabatic hydromagnetic fluid. In the second model, for large Larmor radius and small collision frequency, a kinetic theory description has been developed. Various aspects of the two models have been studied in five computer codes: ADB, AV, NEO, OHK, and RES. The ADB code computes two-dimensional equilibrium and one-dimensional transport in a flux coordinate. The AV code calculates orbit-average integrals in a harmonic oscillator potential. The NEO code follows particle trajectories in a Hill's vortex magnetic field to study stochasticity, invariants of the motion, and orbit-average formulas. The OHK code displays the analytic psi(r), B_Z(r), phi(r), and E_r(r) formulas developed for the kinetic theory description. The RES code calculates resonance curves to consider overlap regions relevant to stochastic orbit behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niita, K.; Matsuda, N.; Iwamoto, Y.
The paper presents a brief description of the models incorporated in PHITS and the present status of the code, showing some benchmarking tests of the PHITS code for accelerator facilities and space radiation.
A Review on Spectral Amplitude Coding Optical Code Division Multiple Access
NASA Astrophysics Data System (ADS)
Kaur, Navpreet; Goyal, Rakesh; Rani, Monika
2017-06-01
This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as a result of increased MAI. It is perceived that the number of users and the type of codes used directly decide the performance of the optical system. MAI can be restricted by efficient design of optical codes and by implementing them with a unique architecture to accommodate a larger number of users. Hence, it is necessary to design a technique like the spectral direct detection (SDD) technique with a modified double weight code, which can provide better cardinality and good correlation properties.
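The correlation property the abstract emphasizes can be made concrete with a small sketch; the unipolar codewords below are illustrative and are not an actual modified double weight (MDW) code set:

```python
# Hedged sketch: the in-phase cross-correlation that SAC-OCDMA code design
# tries to keep small. Each user is assigned a unipolar (0/1) spectral
# codeword; chips where two codewords overlap contribute to MAI.
def cross_correlation(a, b):
    """Number of chip positions where both unipolar codewords carry a 1."""
    return sum(x & y for x, y in zip(a, b))

user1 = [1, 1, 0, 1, 0, 0]  # illustrative codewords, weight 3
user2 = [0, 1, 1, 0, 1, 0]
print(cross_correlation(user1, user1))  # 3 (the code weight)
print(cross_correlation(user1, user2))  # 1 (overlap driving MAI)
```

Keeping this overlap low (ideally at most one chip) across all user pairs is what lets detection schemes such as SDD suppress MAI while still accommodating many users.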
NASA Astrophysics Data System (ADS)
Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni
2006-10-01
In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission functions for MACA and CMACA are derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, formulae for the point spread function (PSF) are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.
Telidon Videotex presentation level protocol: Augmented picture description instructions
NASA Astrophysics Data System (ADS)
Obrien, C. D.; Brown, H. G.; Smirle, J. C.; Lum, Y. F.; Kukulka, J. Z.; Kwan, A.
1982-02-01
The Telidon Videotex system is a method by which graphic and textual information and transactional services can be accessed from information sources by the general public. In order to transmit information to a Telidon terminal at minimum bandwidth, and in a manner independent of the type of communications channel, a coding scheme was devised which permits the encoding of a picture into the geometric drawing elements that compose it. These picture description instructions form an alphageometric coding model based on the primitives POINT, LINE, ARC, RECTANGLE, POLYGON, and INCREMENT. Text is encoded as ASCII characters along with a supplementary table of accents and special characters. A mosaic shape table is included for compatibility. A detailed specification of the coding scheme and a description of the principles which make it independent of the communications channel and display hardware are provided.
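The alphageometric idea, transmitting drawing instructions rather than pixels, can be sketched as follows; the in-memory representation is invented for illustration and is not the real picture description instruction byte encoding:

```python
# Hedged sketch of the alphageometric model: a picture travels as a short
# list of geometric drawing opcodes with normalized coordinates, not as a
# bitmap. Opcode names mirror the Telidon primitives; the data layout is
# an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str   # one of POINT, LINE, ARC, RECTANGLE, POLYGON, INCREMENT
    coords: list  # normalized (x, y) pairs in the unit display area

picture = [
    Instruction("RECTANGLE", [(0.1, 0.1), (0.9, 0.6)]),
    Instruction("LINE", [(0.1, 0.6), (0.5, 0.9)]),
]

# A terminal redraws the same few instructions at whatever resolution its
# display supports, which is what makes the scheme hardware independent.
print(len(picture), "instructions")
```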
Use of data description languages in the interchange of data
NASA Technical Reports Server (NTRS)
Pignede, M.; Real-Planells, B.; Smith, S. R.
1994-01-01
The Consultative Committee for Space Data Systems (CCSDS) is developing Standards for the interchange of information between systems, including those operating under different environments. The objective is to perform the interchange automatically, i.e. in a computer interpretable manner. One aspect of the concept developed by CCSDS is the use of a separate data description to specify the data being transferred. Using the description, data can then be automatically parsed by the receiving computer. With a suitably expressive Data Description Language (DDL), data formats of arbitrary complexity can be handled. The advantages of this approach are: (1) that the description need only be written and distributed once to all users, and (2) new software does not need to be written for each new format, provided generic tools are available to support writing and interpretation of descriptions and the associated data instances. Consequently, the effort of 'hard coding' each new format is eliminated, and problems of integrating multiple implementations of a given format by different users are avoided. The approach is applicable in any context where computer parsable description of data could enhance efficiency (e.g. within a spacecraft control system, a data delivery system or an archive). The CCSDS has identified several candidate DDL's: EAST (Extended Ada Subset), TSDN (Transfer Syntax Data Notation) and MADEL (Modified ASN.1 as a Data Description Language -- a DDL based on the Abstract Syntax Notation One - ASN.1 - specified in the ISO/IEC 8824). This paper concentrates on ESA's development of MADEL. ESA have also developed a 'proof of concept' prototype of the required support tools, implemented on a PC under MS-DOS, which has successfully demonstrated the feasibility of the approach, including the capability within an application of retrieving and displaying particular data elements, given its MADEL description (i.e. a data description written in MADEL).
This paper outlines the work done to date and assesses the applicability of this modified ASN.1 as a DDL. The feasibility of the approach is illustrated with several examples.
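The core idea, a generic reader driven by a separate data description, can be sketched as follows; the mini-DDL and record layout below are invented for illustration and are far simpler than EAST, TSDN, or MADEL:

```python
# Hedged sketch of description-driven parsing: a single generic reader
# interprets bytes according to a declarative field description, so no
# format-specific code needs to be written for each new format.
import struct

description = [          # hypothetical data description: (field, type code)
    ("sensor_id", "H"),  # 16-bit unsigned integer
    ("timestamp", "I"),  # 32-bit unsigned integer
    ("value", "f"),      # 32-bit IEEE float
]

def parse(data, desc):
    """Interpret a big-endian binary record field by field from its description."""
    fmt = ">" + "".join(code for _, code in desc)
    values = struct.unpack(fmt, data)
    return dict(zip((name for name, _ in desc), values))

record = struct.pack(">HIf", 7, 1_000_000, 3.5)
print(parse(record, description))  # {'sensor_id': 7, 'timestamp': 1000000, 'value': 3.5}
```

Shipping `description` alongside the data plays the role of the DDL: the receiver's generic tool can parse any record whose description it has been given.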
van Dijk, Margriet J H; Smorenburg, Nienke T A; Visser, Bart; Nijhuis-van der Sanden, Maria W G; Heerkens, Yvonne F
2017-03-01
As a first step to formulate a practical definition for movement quality (MQ), this study aims to explore how Dutch allied health care professionals (AHCPs) describe MQ of daily life activities in patients with low back pain (LBP). In this qualitative cross-sectional digital survey study, Dutch AHCPs (n = 91) described MQ in open text (n = 91) and with three keywords (n = 90). After exploratory qualitative content analysis, the ICF linking rules (International Classification of Functioning, Disability and Health) were applied to classify MQ descriptions and keywords. The identified meaningful concepts (MCs) of the descriptions (274) and keywords (239) were linked to ICF codes (87.5% and 80.3%, respectively), Personal factors (5.8% and 5.9%, respectively), and supplementary codes (6.6% and 13.8%, respectively). The MCs were linked to a total of 31 ICF codes, especially to b760 'control of voluntary movement functions', b7602 'coordination of voluntary movements', d4 'Mobility', and d230 'carry out daily routine'. Negative and positive formulated descriptions elucidated different MQ interpretations. Descriptions of MQ given by Dutch AHCPs in patients with LBP cover all ICF components. Coordination and functional movements are seen as the most elementary concepts of MQ. Variation in MQ descriptions and interpretations hinders defining MQ and indicates the necessity of additional steps.
Computer programs to predict induced effects of jets exhausting into a crossflow
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Mendenhall, M. R.
1984-01-01
A user's manual is presented for two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for the viscous or nonpotential effects. This program manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.
Effective International Medical Disaster Relief: A Qualitative Descriptive Study.
Broby, Nicolette; Lassetter, Jane H; Williams, Mary; Winters, Blaine A
2018-04-01
Purpose: The aim of this study was to assist organizations seeking to develop or improve their medical disaster relief efforts by identifying fundamental elements and processes that permeate high-quality, international, medical disaster relief organizations and the teams they deploy. A qualitative descriptive design was used. Data were gathered from interviews with key personnel at five international medical response organizations, as well as during field observations conducted at multiple sites in Jordan and Greece, including three refugee camps. Data were then reviewed by the research team and coded to identify patterns, categories, and themes. The analysis identified three themes that were key characteristics of success in effective, well-established, international medical disaster relief organizations. These characteristics were: first, ensuring an official invitation had been extended and the need for assistance had been identified; second, responding to that need in an effective and sustainable manner; and third, striving to obtain high-quality volunteers. By following the three key characteristics outlined in this research, organizations are more likely to improve the efficiency and quality of their work. In addition, they will be less likely to impede the overall recovery process. Broby N, Lassetter JH, Williams M, Winters BA. Effective international medical disaster relief: a qualitative descriptive study. Prehosp Disaster Med. 2018;33(2):119-126.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM Workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mokhov, Nikolai
MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark-gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have also been enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histogramming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles, including neutrinos, is built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with the possibility of producing composite shapes and assemblies and their 3D visualization, along with possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format).
The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package, which allows a very efficient and highly accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.
User's manual for the BNW-II optimization code for dry/wet-cooled power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Bamberger, J.A.
1978-05-01
This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I, which gives a narrative description of the code's algorithms as well as logic, input, and output information.
NASA Astrophysics Data System (ADS)
Sikder, Somali; Ghosh, Shila
2018-02-01
This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
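The transposed modified Walsh code construction itself is not given in the abstract, but such codes build on the classical Walsh family. As a hedged illustration only, the following sketch generates standard Walsh codes via Sylvester's Hadamard construction and their unipolar {0,1} form as used in incoherent OCDMA; the TMWC modifications are not reproduced here.

```python
# Sketch of classical Walsh code generation via Sylvester's Hadamard
# construction, the base family on which modified/transposed variants such
# as TMWC are typically built. Only the underlying Walsh codes and their
# unipolar {0,1} form are shown; the TMWC construction is an assumption
# left out here.

def hadamard(n):
    """Return the 2**n x 2**n Sylvester Hadamard matrix as lists of +/-1."""
    h = [[1]]
    for _ in range(n):
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

def unipolar_walsh(n):
    """Map the +/-1 Walsh rows to unipolar {0,1} codewords."""
    return [[(1 + x) // 2 for x in row] for row in hadamard(n)]

h = hadamard(2)
# Distinct bipolar Walsh rows are mutually orthogonal (zero inner product),
# which is what limits multiple-access interference between users.
assert all(sum(a * b for a, b in zip(h[i], h[j])) == 0
           for i in range(len(h)) for j in range(len(h)) if i != j)
```

The orthogonality check above is the property that a wavelength-hopping time-spreading system exploits to keep users separable.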
Downtown Waterfront Form-Based Code Workshop
This document is a description of a Smart Growth Implementation Assistance for Coastal Communities project in Marquette, Michigan, to develop a form-based code that would attract and support vibrant development.
Laser identification system based on acousto-optical barcode scanner principles
NASA Astrophysics Data System (ADS)
Khansuvarov, Ruslan A.; Korol, Georgy I.; Preslenev, Leonid N.; Bestugin, Aleksandr R.; Paraskun, Arthur S.
2016-09-01
The main purpose of the bar code in the modern world is the unique identification of a product, service, or any of their features, which is why personal and stationary barcode scanners are so widely used. Important parameters of bar code scanners include their reliability, accuracy of barcode recognition, response time, and performance. Nowadays, the most popular personal barcode scanners contain a mechanical part, which severely impairs their reliability. A group of SUAI engineers has proposed a bar code scanner based on the laser beam acoustic deflection effect in crystals [RU patent No. 156009, issued 4/16/2015]. Through the use of an acousto-optic deflector element, the barcode scanner described by the group of SUAI engineers can be implemented in both a handheld form factor and a stationary form factor. Being a wave electronic device, the acousto-optic element in an acousto-optic barcode scanner allows a clear mathematical link to be established between the encoded function of the bar code and the intensity function received at the input photodetector, which implies a high probability of unambiguous bar code recognition. This paper provides a description of the issued patent, the principles of operation based on mathematical analysis, and a description of the layout of the implemented scanner.
50 CFR Table 1c to Part 679 - Product Type Codes
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 11 2011-10-01 false Product Type Codes 1c Table 1c to Part..., Table 1c to Part 679—Product Type Codes. Description / Code: Ancillary product. A product, such as... the highest recovery rate. P Reprocessed or rehandled product. A product, such as meal, that results...
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives: To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods: Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results: Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion: The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion: Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
Federal Logistics Information System (FLIS) Procedures Manual, Volume 4. Item Identification.
1995-01-01
Legible fragments from garbled tabular content: DRMS (Defense Reutilization and Marketing Service); FDM, Full Descriptive Method; DPSC, Defense Personnel...; under DIC KRE, return code AU; for international cataloging, only one Output Data Request Code may be used per...; KMR (Matching Reference-Screening) with KFC (File Data...); NATO Maintenance and Supply Agency (NAMSA), the custodian for control.
Software quality and process improvement in scientific simulation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosiano, J.; Webster, R.
1997-11-01
This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.
Social Workers and the NASW "Code of Ethics": Belief, Behavior, Disjuncture
ERIC Educational Resources Information Center
DiFranks, Nikki Nelson
2008-01-01
A quantitative descriptive survey of a national sample of social workers (N = 206) examined discrepancies between belief in the NASW "Code of Ethics" and behavior in implementing the code and social workers' disjunctive distress (disjuncture) when belief and behavior are discordant. Relationships between setting and disjuncture and ethics…
NASA Astrophysics Data System (ADS)
Lickteig, Amanda D.
In the past, literacy was viewed solely as the basic, functional skills of reading and writing. However, with the New London Group's (1996) proposal of multiliteracies and the more recent push for a plurality of literacies (NCTE, 2011), teachers have been urged to expand their definitions of literacy. This qualitative study explores how secondary-level social studies and science teachers perceive literacies and identifies their instructional literacy practices. Data were collected through a pre- and post-questionnaire, three focus group sessions, classroom observations, field notes, and artifacts. This study solicited nearly one hundred secondary social studies and science teachers from three Midwestern school districts. Eight educators (four social studies and four science) participated in the study that took place in the spring of 2015. Furthermore, a generous grant from a local chapter of Phi Delta Kappa partially funded this research. After applying initial and holistic codes to the data, nine themes emerged: conventional, progressive, hesitant/emerging, collaborate, calibrate, perform, practice, interdisciplinary, and intradisciplinary. The nine themes were further classified by how they appeared in the data: dispositional themes, behavioral themes, and bridge themes. Throughout the data analysis, contemporary genre theory guided the study (Devitt, 2004). Descriptive codes, derived from contemporary genre theory, further revealed that the situational, social, historical, and individual aspects of genre influence teachers' pedagogical practices related to multiple literacies across disciplines. Therefore, the ways in which teachers perceived multiple literacies and implemented them into classroom instruction are multifaceted and vary depending on grade level, content area, and teaching location.
However, teachers' dispositions regarding literacy move beyond a traditional mindset of functional reading and writing as they engage in professional learning opportunities and collaborate within and across disciplines and grade levels. This study provides secondary educators insight into the prominence of multiple literacies present across content areas while also revealing the teaching methods and instructional strategies that foster multiple literacies.
2004-09-01
Required> </Equipment> <Equipment code="L44680"> <Description>LAUNCHER GRENADE SMOKE: SCREENING RP M250 </Description> <Required...EquipmentPiecesOnHand> </UnitEquipment> <UnitEquipment> <EquipmentDescription>LAUNCHER GRENADE SMOKE: SCREENING RP M250 </EquipmentDescription
1978-01-01
complex, applications of the code. NASCAP CODE DESCRIPTION: The NASCAP code is a finite-element spacecraft-charging simulation that is written in FORTRAN ...transport code POEM (ref. 1), is applicable to arbitrary dielectrics, source spectra, and current time histories. The code calculations are illustrated by...
Interface requirements to couple thermal hydraulics codes to severe accident codes: ICARE/CATHARE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camous, F.; Jacq, F.; Chatelard, P.
1997-07-01
In order to describe with the same code the whole sequence of severe LWR accidents, up to vessel failure, the Institute of Protection and Nuclear Safety has performed a coupling of the severe accident code ICARE2 to the thermal-hydraulics code CATHARE2. The resulting code, ICARE/CATHARE, is designed to be as pertinent as possible in all phases of the accident. This paper is mainly devoted to the description of the ICARE2-CATHARE2 coupling.
Coding and decoding for code division multiple user communication systems
NASA Technical Reports Server (NTRS)
Healy, T. J.
1985-01-01
A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
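The paper's pattern-based decoding algorithm for frequency-hopped signals is not detailed in the abstract. As a hedged sketch of the general principle only, the following shows the classical correlation receiver for code-division multiple-user signals: each user's bit is spread by a distinctive signature, the channel sums all users, and correlating against each signature separates them. Signatures and bits are invented for illustration.

```python
# Minimal sketch of a correlation receiver for code-division multiple-user
# communication. Each user's +/-1 bit is spread by an orthogonal signature
# sequence; the receiver recovers each bit from the composite signal by
# correlating with that user's signature, since orthogonality cancels the
# other users' contributions. All values here are illustrative.

signatures = {                       # orthogonal +/-1 signatures (length 4)
    "A": [1, 1, 1, 1],
    "B": [1, -1, 1, -1],
    "C": [1, 1, -1, -1],
}
bits = {"A": 1, "B": -1, "C": 1}     # each user's transmitted bit (+/-1)

# Composite channel signal: chip-wise sum of all users' spread signals.
composite = [sum(bits[u] * signatures[u][k] for u in signatures)
             for k in range(4)]

# Correlate against each signature; the sign of the correlation recovers
# the transmitted bit.
recovered = {}
for user, sig in signatures.items():
    corr = sum(c * s for c, s in zip(composite, sig))
    recovered[user] = 1 if corr > 0 else -1

assert recovered == bits
```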
NASA Technical Reports Server (NTRS)
Walowit, Jed A.; Shapiro, Wilbur
2005-01-01
The SPIRALI code predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures. A derivation of the equations governing the performance of turbulent, incompressible, spiral groove cylindrical and face seals along with a description of their solution is given. The computer codes are described, including an input description, sample cases, and comparisons with results of other codes.
User's manual for the Heat Pipe Space Radiator design and analysis Code (HEPSPARC)
NASA Technical Reports Server (NTRS)
Hainley, Donald C.
1991-01-01
A heat pipe space radiator code (HEPSPARC) was written for the NASA Lewis Research Center and is used for the design and analysis of a radiator that is constructed from a pumped fluid loop that transfers heat to the evaporator sections of heat pipes. This manual is designed to familiarize the user with this new code and to serve as a reference for its use. It documents the completed work and is intended to be the first step towards verification of the HEPSPARC code. Details are furnished to provide a description of all the requirements and variables used in the design and analysis of a combined pumped loop/heat pipe radiator system. A description of the subroutines used in the program is furnished for those interested in understanding its detailed workings.
An overview of data acquisition, signal coding and data analysis techniques for MST radars
NASA Technical Reports Server (NTRS)
Rastogi, P. K.
1986-01-01
An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, which are characterized on the basis of their disparate time scales. Signal coding, a brief description of frequently used codes, and their limitations are then discussed. Finally, several aspects of statistical data processing, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques, are discussed.
1981-01-01
Network Ports. In either case, the port voltage may be related to the applied field on the segment by the constant...
Jenkins, Adam M; Waterhouse, Robert M; Muskavitch, Marc A T
2015-04-23
Long non-coding RNAs (lncRNAs) have been defined as mRNA-like transcripts longer than 200 nucleotides that lack significant protein-coding potential, and many of them constitute scaffolds for ribonucleoprotein complexes with critical roles in epigenetic regulation. Various lncRNAs have been implicated in the modulation of chromatin structure, transcriptional and post-transcriptional gene regulation, and regulation of genomic stability in mammals, Caenorhabditis elegans, and Drosophila melanogaster. The purpose of this study is to identify the lncRNA landscape in the malaria vector An. gambiae and assess the evolutionary conservation of lncRNAs and their secondary structures across the Anopheles genus. Using deep RNA sequencing of multiple Anopheles gambiae life stages, we have identified 2,949 lncRNAs and more than 300 previously unannotated putative protein-coding genes. The lncRNAs exhibit differential expression profiles across life stages and adult genders. We find that across the genus Anopheles, lncRNAs display much lower sequence conservation than protein-coding genes. Additionally, we find that lncRNA secondary structure is highly conserved within the Gambiae complex, but diverges rapidly across the rest of the genus Anopheles. This study offers one of the first lncRNA secondary structure analyses in vector insects. Our description of lncRNAs in An. gambiae offers the most comprehensive genome-wide insights to date into lncRNAs in this vector mosquito, and defines a set of potential targets for the development of vector-based interventions that may further curb the human malaria burden in disease-endemic countries.
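The study's actual annotation pipeline is not described beyond the definitional criteria, so as a hedged toy sketch, the filter below applies those two criteria directly: transcript length over 200 nucleotides and lack of significant coding potential. A crude proxy (no forward-strand ORF of 100 or more codons) stands in for the statistical coding-potential tools a real pipeline would use; the function names and thresholds are assumptions.

```python
# Toy lncRNA candidate filter applying the two definitional criteria from
# the text: length > 200 nt and low protein-coding potential. The ORF-length
# proxy and the 100-codon threshold are illustrative assumptions, not the
# study's method.
import re

def longest_orf_codons(seq):
    """Length in codons of the longest forward-strand ORF (ATG..stop)."""
    best = 0
    for frame in range(3):
        codons = re.findall("...", seq[frame:])
        start = None
        for i, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = i
            elif codon in ("TAA", "TAG", "TGA") and start is not None:
                best = max(best, i - start)
                start = None
    return best

def is_lncrna_candidate(seq, min_len=200, max_orf_codons=100):
    return len(seq) > min_len and longest_orf_codons(seq) < max_orf_codons

assert is_lncrna_candidate("GC" * 150)                       # 300 nt, no ORF
assert not is_lncrna_candidate("ATG" + "GCA" * 120 + "TAA")  # long ORF
```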
Patient Self-Defined Goals: Essentials of Person-Centered Care for Serious Illness.
Schellinger, Sandra Ellen; Anderson, Eric Worden; Frazer, Monica Schmitz; Cain, Cindy Lynn
2018-01-01
This research, a descriptive qualitative analysis of self-defined serious illness goals, expands the knowledge of what goals are important beyond the physical, making existing disease-specific guidelines more holistic. Integration of goals-of-care discussions and documentation is standard for quality palliative care but not consistently executed in general and specialty practice. Over 14 months, lay health-care workers (care guides) provided monthly supportive visits for 160 patients with advanced heart failure, cancer, and dementia expected to die in 2 to 3 years. Care guides explored what was most important to patients and documented their self-defined goals on a medical record flow sheet. Using definitions of an expanded set of whole-person domains adapted from the National Consensus Project (NCP) Clinical Practice Guidelines for Quality Palliative Care, 999 goals and their associated plans were deductively coded and examined. Four themes were identified: medical, nonmedical, multiple, and global. Forty percent of goals were coded into the medical domain; 40% were coded to nonmedical domains: social (9%), ethical (7%), family (6%), financial/legal (5%), psychological (5%), housing (3%), legacy/bereavement (3%), spiritual (1%), and end-of-life care (1%). Sixteen percent of the goals were complex and reflected a mix of medical and nonmedical domains, "multiple" goals. The remaining goals (4%) were too global to attribute to an NCP domain. Self-defined serious illness goals express experiences beyond physical health and extend into all aspects of the whole person. It is feasible to elicit and record serious illness goals. This approach to goals can support meaningful person-centered care, decision-making, and planning that accords with individual preferences in late life.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
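The dummy coding discussed above can be sketched concretely: a k-level categorical predictor becomes k-1 indicator columns, with one level held out as the reference against which the others are compared. The group names and data below are invented for illustration.

```python
# Minimal sketch of dummy coding a categorical predictor for multiple
# regression. Group labels are illustrative; the k-1 indicator columns
# returned here would be appended to the regression design matrix.

def dummy_code(values, reference=None):
    """Return (coded_levels, rows): k-1 indicator columns per observation."""
    levels = sorted(set(values))
    if reference is None:
        reference = levels[0]
    coded_levels = [lv for lv in levels if lv != reference]
    rows = [[1 if v == lv else 0 for lv in coded_levels] for v in values]
    return coded_levels, rows

groups = ["control", "drug_a", "drug_b", "drug_a"]
cols, X = dummy_code(groups, reference="control")
# cols == ["drug_a", "drug_b"]; reference-level rows are all zeros, so each
# fitted coefficient estimates that group's mean difference from "control".
```

Because the reference level is coded as all zeros, each regression coefficient compares one group against the reference, which is exactly the interpretation that can go wrong when coding schemes are mixed, as the article cautions.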
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1993-01-01
The presentation begins with a brief description of the motivation and approach that has been taken for this research. This will be followed by a description of the Volterra Theory of Nonlinear Systems and the CAP-TSD code which is an aeroelastic, transonic CFD (Computational Fluid Dynamics) code. The application of the Volterra theory to a CFD model and, more specifically, to a CAP-TSD model of a rectangular wing with a NACA 0012 airfoil section will be presented.
2011-01-01
Introduction: Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Methods: Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. Results: The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps.
Conclusions: The newly developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available. PMID:21548991
Palmer, Cameron S; Franklyn, Melanie; Read-Allsopp, Christine; McLellan, Susan; Niggemeyer, Louise E
2011-05-08
Many trauma registries have used the Abbreviated Injury Scale 1990 Revision Update 98 (AIS98) to classify injuries. In the current AIS version (Abbreviated Injury Scale 2005 Update 2008 - AIS08), injury classification and specificity differ substantially from AIS98, and the mapping tools provided in the AIS08 dictionary are incomplete. As a result, data from different AIS versions cannot currently be compared. The aim of this study was to develop an additional AIS98 to AIS08 mapping tool to complement the current AIS dictionary map, and then to evaluate the completed map (produced by combining these two maps) using double-coded data. The value of additional information provided by free text descriptions accompanying assigned codes was also assessed. Using a modified Delphi process, a panel of expert AIS coders established plausible AIS08 equivalents for the 153 AIS98 codes which currently have no AIS08 map. A series of major trauma patients whose injuries had been double-coded in AIS98 and AIS08 was used to assess the maps; both of the AIS datasets had already been mapped to another AIS version using the AIS dictionary maps. Following application of the completed (enhanced) map with or without free text evaluation, up to six AIS codes were available for each injury. Datasets were assessed for agreement in injury severity measures, and the relative performances of the maps in accurately describing the trauma population were evaluated. The double-coded injuries sustained by 109 patients were used to assess the maps. For data conversion from AIS98, both the enhanced map and the enhanced map with free text description resulted in higher levels of accuracy and agreement with directly coded AIS08 data than the currently available dictionary map. Paired comparisons demonstrated significant differences between direct coding and the dictionary maps, but not with either of the enhanced maps. 
The newly-developed AIS98 to AIS08 complementary map enabled transformation of the trauma population description given by AIS98 into an AIS08 estimate which was statistically indistinguishable from directly coded AIS08 data. It is recommended that the enhanced map should be adopted for dataset conversion, using free text descriptions if available.
NASA Astrophysics Data System (ADS)
Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo
2007-11-01
This paper proposes an all-optical label processing scheme using multiple optical orthogonal code sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks, for the first time to the best of our knowledge. In this scheme, the multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS used as optical labels, which effectively enlarges the set of optical codes available for labels. Existing optical label processing (OLP) schemes are reviewed and analyzed, the principles of MOOCS-based optical labels for OPS networks are given and analyzed, and the MOOCS-OPS topology and the key realization units of the MOOCS-based optical label packets are then studied in detail. The performance of this novel all-optical label processing technology is analyzed, and the corresponding simulation is performed. The analysis and results show that the proposed scheme overcomes the shortage of available optical orthogonal code (OOC)-based optical labels caused by the limited number of single OOCs with short code lengths, and indicate that the MOOCS-OPS scheme is feasible.
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-08-15
The CH-TRU Waste Content Codes (CH-TRUCON) document describes the inventory of the U.S. Department of Energy (DOE) CH-TRU waste within the transportation parameters specified by the Contact-Handled Transuranic Waste Authorized Methods for Payload Control (CH-TRAMPAC). The CH-TRAMPAC defines the allowable payload for the Transuranic Package Transporter-II (TRUPACT-II) and HalfPACT packagings. This document is a catalog of TRUPACT-II and HalfPACT authorized contents and a description of the methods utilized to demonstrate compliance with the CH-TRAMPAC. A summary of currently approved content codes by site is presented in Table 1. The CH-TRAMPAC describes "shipping categories" that are assigned to each payload container. Multiple shipping categories may be assigned to a single content code. A summary of approved content codes and corresponding shipping categories is provided in Table 2, which consists of Tables 2A, 2B, and 2C. Table 2A provides a summary of approved content codes and corresponding shipping categories for the "General Case," which reflects the assumption of a 60-day shipping period as described in the CH-TRAMPAC and Appendix 3.4 of the CH-TRU Payload Appendices. For shipments to be completed within an approximately 1,000-mile radius, a shorter shipping period of 20 days is applicable as described in the CH-TRAMPAC and Appendix 3.5 of the CH-TRU Payload Appendices. For shipments to WIPP from Los Alamos National Laboratory (LANL), Nevada Test Site, and Rocky Flats Environmental Technology Site, a 20-day shipping period is applicable. Table 2B provides a summary of approved content codes and corresponding shipping categories for "Close-Proximity Shipments" (20-day shipping period). For shipments implementing the controls specified in the CH-TRAMPAC and Appendix 3.6 of the CH-TRU Payload Appendices, a 10-day shipping period is applicable.
Table 2C provides a summary of approved content codes and corresponding shipping categories for "Controlled Shipments" (10-day shipping period).
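The shipping-period rules stated in the record above can be restated as a small helper: 60 days for the general case, 20 days for close-proximity shipments (within roughly a 1,000-mile radius), and 10 days for controlled shipments per CH-TRAMPAC Appendix 3.6. This is purely an illustrative restatement; the CH-TRAMPAC itself is authoritative, and the function name is an invention.

```python
# Illustrative restatement of the CH-TRUCON shipping-period rules:
# Table 2A "General Case" -> 60 days, Table 2B "Close-Proximity
# Shipments" -> 20 days, Table 2C "Controlled Shipments" -> 10 days.

def shipping_period_days(distance_miles, controlled=False):
    if controlled:               # "Controlled Shipments" (Table 2C)
        return 10
    if distance_miles <= 1000:   # "Close-Proximity Shipments" (Table 2B)
        return 20
    return 60                    # "General Case" (Table 2A)

assert shipping_period_days(1500) == 60
assert shipping_period_days(600) == 20
assert shipping_period_days(600, controlled=True) == 10
```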
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-06-15
The CH-TRU Waste Content Codes (CH-TRUCON) document describes the inventory of the U.S. Department of Energy (DOE) CH-TRU waste within the transportation parameters specified by the Contact-Handled Transuranic Waste Authorized Methods for Payload Control (CH-TRAMPAC). The CH-TRAMPAC defines the allowable payload for the Transuranic Package Transporter-II (TRUPACT-II) and HalfPACT packagings. This document is a catalog of TRUPACT-II and HalfPACT authorized contents and a description of the methods utilized to demonstrate compliance with the CH-TRAMPAC. A summary of currently approved content codes by site is presented in Table 1. The CH-TRAMPAC describes "shipping categories" that are assigned to each payload container. Multiple shipping categories may be assigned to a single content code. A summary of approved content codes and corresponding shipping categories is provided in Table 2, which consists of Tables 2A, 2B, and 2C. Table 2A provides a summary of approved content codes and corresponding shipping categories for the "General Case," which reflects the assumption of a 60-day shipping period as described in the CH-TRAMPAC and Appendix 3.4 of the CH-TRU Payload Appendices. For shipments to be completed within an approximately 1,000-mile radius, a shorter shipping period of 20 days is applicable as described in the CH-TRAMPAC and Appendix 3.5 of the CH-TRU Payload Appendices. For shipments to WIPP from Los Alamos National Laboratory (LANL), Nevada Test Site, and Rocky Flats Environmental Technology Site, a 20-day shipping period is applicable. Table 2B provides a summary of approved content codes and corresponding shipping categories for "Close-Proximity Shipments" (20-day shipping period). For shipments implementing the controls specified in the CH-TRAMPAC and Appendix 3.6 of the CH-TRU Payload Appendices, a 10-day shipping period is applicable.
Table 2C provides a summary of approved content codes and corresponding shipping categories for "Controlled Shipments" (10-day shipping period).« less
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two-thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters.
It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
Large-Signal Code TESLA: Current Status and Recent Development
2008-04-01
K. Eppley, J. J. Petillo, “High-power four cavity S-band multiple-beam klystron design”, IEEE Trans. Plasma Sci., vol. 32, pp. 1119-1135, June 2004. … advances in the development of the large-signal code TESLA, mainly used for the modeling of high-power single-beam and multiple-beam klystron … amplifiers. Keywords: large-signal code; multiple-beam klystrons; serial and parallel versions. Introduction: The optimization and design of new high power …
2015-01-01
This research has the purpose of establishing a foundation for new classification and estimation of CDMA signals. Keywords: DS/CDMA signals, BPSK, QPSK … DEVELOPMENT OF THE AVERAGE LIKELIHOOD FUNCTION FOR CODE DIVISION MULTIPLE ACCESS (CDMA) USING BPSK AND QPSK SYMBOLS, JANUARY 2015 (reporting period OCT 2013 – OCT 2014).
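The direct-sequence spreading that underlies such CDMA signals is easy to sketch: each BPSK symbol is multiplied by a pseudo-random chip sequence, and the receiver correlates against the same sequence to recover the symbol with a processing gain. A minimal illustration (the code length and noise level are arbitrary, not from the report):

```python
import random

def spread(bits, code):
    """Spread each BPSK symbol (+/-1) by the chip sequence."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate received chips with the code to recover each symbol."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        out.append(1 if corr >= 0 else -1)
    return out

rng = random.Random(0)
code = [rng.choice((-1, 1)) for _ in range(31)]   # pseudo-random chips
bits = [1, -1, -1, 1]
rx = [c + rng.gauss(0.0, 1.0) for c in spread(bits, code)]  # noisy channel
decoded = despread(rx, code)
```

Correlating over 31 chips raises the decision statistic well above the noise floor, which is why the symbols survive a channel whose per-chip SNR is 0 dB.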
NASA Astrophysics Data System (ADS)
Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.
2018-01-01
Multiple ionization satellites are prominent features in X-ray spectra induced by MeV energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies. It makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.
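When the satellite's energy offset and peak shapes are fixed, incorporating it into the fit reduces to linear least squares for two amplitudes, which a 2x2 normal-equation solve handles directly. The sketch below is a generic illustration of that idea (all energies, widths, and amplitudes are made up; this is not the GUPIX peak model):

```python
import math

def gauss(e, mu, sigma=0.1):
    """Unit-amplitude Gaussian peak shape."""
    return math.exp(-0.5 * ((e - mu) / sigma) ** 2)

def fit_two_peaks(energies, counts, e0, de):
    """Linear least squares for the amplitudes of a parent peak at e0
    and a satellite at e0 + de (shapes and positions held fixed)."""
    g1 = [gauss(e, e0) for e in energies]
    g2 = [gauss(e, e0 + de) for e in energies]
    s11 = sum(a * a for a in g1)
    s22 = sum(b * b for b in g2)
    s12 = sum(a * b for a, b in zip(g1, g2))
    r1 = sum(a * y for a, y in zip(g1, counts))
    r2 = sum(b * y for b, y in zip(g2, counts))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

# Synthetic spectrum: parent amplitude 100, satellite amplitude 15.
es = [i * 0.01 for i in range(400)]
ys = [100 * gauss(e, 1.5) + 15 * gauss(e, 1.7) for e in es]
A, B = fit_two_peaks(es, ys, 1.5, 0.2)
```

On noiseless synthetic data the amplitudes are recovered exactly; on real spectra the same linearity keeps the satellite term cheap to add to an existing fit.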
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.
Angel, Vini M; Friedman, Marvin H; Friedman, Andrea L
This article describes an innovative project involving the integration of bar-code medication administration technology competencies in the nursing curriculum through interprofessional collaboration among nursing, pharmacy, and computer science disciplines. A description of the bar-code medication administration technology project and lessons learned are presented.
Coding and transmission of subband coded images on the Internet
NASA Astrophysics Data System (ADS)
Wah, Benjamin W.; Su, Xiao
2001-09-01
Subband-coded images can be transmitted over the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
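The core MDC idea, two descriptions that each allow a degraded reconstruction when the other is lost, can be shown with the simplest possible splitting: even- and odd-indexed samples as the two descriptions, with a lost description estimated from its received neighbours. This is only a sketch of the principle, not the paper's ORB-ST transform:

```python
def split_descriptions(samples):
    """Form two descriptions from even- and odd-indexed samples."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd=None, n=None):
    """Merge both descriptions, or interpolate a missing odd description."""
    if odd is not None:
        out = [0] * (len(even) + len(odd))
        out[0::2], out[1::2] = even, odd
        return out
    # Only the even description arrived: estimate each odd sample as the
    # average of its received neighbours (the correlation between
    # descriptions serves as side information).
    out = []
    for i, s in enumerate(even):
        out.append(s)
        nxt = even[i + 1] if i + 1 < len(even) else s
        out.append((s + nxt) / 2)
    return out[:n]

sig = [0, 1, 2, 3, 4, 5, 6, 7]
d0, d1 = split_descriptions(sig)
full = reconstruct(d0, d1)                 # both descriptions received
degraded = reconstruct(d0, n=len(sig))     # one description lost
```

With both descriptions the signal is exact; with one description the locally linear parts of the signal are still recovered, which is the graceful-degradation property MDC is designed for.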
Anti-diabetic activity of a mineraloid isolate, in vitro and in genetically diabetic mice.
Deneau, Joel; Ahmed, Taufeeq; Blotsky, Roger; Bojanowski, Krzysztof
2011-01-01
Type II diabetes is a metabolic disease mediated through multiple molecular pathways. Here, we report the anti-diabetic effect of a standardized isolate from a fossil material - a mineraloid leonardite - in in vitro tests and in genetically diabetic mice. The mineraloid isolate stimulated mitochondrial metabolism in human fibroblasts, and this stimulation correlated with enhanced expression of genes coding for mitochondrial proteins such as ATP synthases and ribosomal protein precursors, as measured by DNA microarrays. In the diabetic animal model, consumption of the Totala isolate resulted in decreased weight gain, blood glucose, and glycated hemoglobin. To the best of our knowledge, this is the first description of a fossil material having anti-diabetic activity in pre-clinical models.
2017-05-23
Systems and the NRL Code 5763 Radio Frequency (RF) Stimulator. It includes and covers system descriptions, setup, data collection, and test goals that … 4. Test Asset Descriptions … 4.1. Description of FOXTROT Anti-ship Missile (ASM) Simulator …
Identification of limit cycles in multi-nonlinearity, multiple path systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Barron, O. L.
1979-01-01
A method of analysis which identifies limit cycles in autonomous systems with multiple nonlinearities and multiple forward paths is presented. The FORTRAN code for implementing the Harmonic Balance Algorithm is reported. The FORTRAN code is used to identify limit cycles in multiple path and nonlinearity systems while retaining the effects of several harmonic components.
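The harmonic-balance idea behind such limit-cycle identification can be shown for a single-nonlinearity case: a limit cycle is predicted where the linear part's phase reaches -180 degrees and the describing function of the nonlinearity balances the gain. The plant and relay below are hypothetical examples, not the systems from the report:

```python
import math

# Hypothetical plant G(s) = K / (s (s+1) (s+2)).
def G(w, K=6.0):
    s = 1j * w
    return K / (s * (s + 1) * (s + 2))

def phase_G(w):
    """Unwrapped phase of G(jw) for this pole structure (monotone in w)."""
    return -(math.pi / 2 + math.atan(w) + math.atan(w / 2))

# Harmonic balance with an ideal relay (output +/-M): a limit cycle
# requires phase(G) = -180 deg and N(a) |G(jw)| = 1, with the relay
# describing function N(a) = 4 M / (pi a).
M = 1.0
lo, hi = 0.1, 10.0
for _ in range(80):                      # bisect the phase condition
    mid = math.sqrt(lo * hi)
    if phase_G(mid) > -math.pi:
        lo = mid
    else:
        hi = mid
w_lc = math.sqrt(lo * hi)                # limit-cycle frequency
a_lc = 4 * M * abs(G(w_lc)) / math.pi    # limit-cycle amplitude
```

For this plant the phase crossover is at w = sqrt(2) rad/s and, with K = 6, |G| = 1 there, so the predicted amplitude is 4M/pi; the report's algorithm generalizes this balance to multiple nonlinearities and forward paths.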
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, making it difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software involves (1) model equations, (2) boundary conditions, and (3) calculation schemes. A description-model file is useful for the first point and partly for the second, but the third is difficult to handle because various calculation schemes are required for simulation models constructed from two or more elementary models. We introduce a simulation-software generation system that uses a description-language-based specification of the coupling calculation scheme together with a cell-model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
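A coupling calculation scheme in this sense decides how elementary sub-models advance together within a time step. The simplest such scheme, sequential (Lie) splitting, can be sketched with two toy sub-models whose coupled system has a known solution (the sub-models and rates here are illustrative, not from the paper):

```python
import math

def step_split(y, h, k=2.0, s=1.0):
    """One Lie (sequential) splitting step: advance the decay sub-model
    exactly, then the source sub-model exactly."""
    y = y * math.exp(-k * h)   # sub-model 1: y' = -k*y
    y = y + s * h              # sub-model 2: y' = s
    return y

def solve(h, t_end=1.0, y0=0.0):
    """Integrate the coupled system y' = -k*y + s by splitting."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = step_split(y, h)
        t += h
    return y

exact = 0.5 * (1.0 - math.exp(-2.0))   # y' = -2y + 1, y(0) = 0, at t = 1
err_coarse = abs(solve(0.1) - exact)
err_fine = abs(solve(0.05) - exact)
```

The splitting error shrinks roughly linearly with the step size, which is exactly the kind of scheme-dependent trade-off a generator for coupling schemes must let the modeler vary without rewriting the simulation code.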
Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer
NASA Astrophysics Data System (ADS)
Armstrong, Wilbur C.
1992-06-01
The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code. The capability to modify the input on-line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. This program will handle the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, the code was broken into two segments: NYQUIST1.FOR and NYQUIST2.FOR. These are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).
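Feedline elements like the ones listed are commonly represented as frequency-domain two-port transmission matrices that are multiplied together along the flow path. The sketch below shows that cascading pattern for a lossless straight pipe (a textbook acoustics form; the NYQUIST code's actual element formulations and the parameter values here are assumptions for illustration):

```python
import cmath
import math

def pipe_matrix(w, L, c=1000.0, Z=1.0):
    """Lossless straight-pipe transmission matrix relating pressure and
    volume velocity at the two ends; k = w/c, Z is the characteristic
    impedance (illustrative units)."""
    k = w / c
    return [[cmath.cos(k * L), 1j * Z * cmath.sin(k * L)],
            [1j * cmath.sin(k * L) / Z, cmath.cos(k * L)]]

def cascade(ms):
    """Multiply 2x2 element matrices in flow order."""
    out = [[1, 0], [0, 1]]
    for m in ms:
        out = [[out[0][0] * m[0][0] + out[0][1] * m[1][0],
                out[0][0] * m[0][1] + out[0][1] * m[1][1]],
               [out[1][0] * m[0][0] + out[1][1] * m[1][0],
                out[1][0] * m[0][1] + out[1][1] * m[1][1]]]
    return out

w = 2 * math.pi * 50.0
line = cascade([pipe_matrix(w, 3.0), pipe_matrix(w, 2.0)])
```

A useful sanity check of the cascade is that two pipe segments of the same impedance must equal one pipe of the combined length; bends, accumulators, and resonators would simply contribute their own matrices to the same product.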
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsdell, J.V. Jr.; Simonen, C.A.; Burk, K.W.
1994-02-01
The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses that individuals may have received from operations at the Hanford Site since 1944. This report deals specifically with the atmospheric transport model, Regional Atmospheric Transport Code for Hanford Emission Tracking (RATCHET). RATCHET is a major rework of the MESOILT2 model used in the first phase of the HEDR Project; only the bookkeeping framework escaped major changes. Changes to the code include (1) significant changes in the representation of atmospheric processes and (2) incorporation of Monte Carlo methods for representing uncertainty in input data, model parameters, and coefficients. To a large extent, the revisions to the model are based on recommendations of a peer working group that met in March 1991. Technical bases for other portions of the atmospheric transport model are addressed in two other documents. This report has three major sections: a description of the model, a user's guide, and a programmer's guide. These sections discuss RATCHET from three different perspectives. The first provides a technical description of the code with emphasis on details such as the representation of the model domain, the data required by the model, and the equations used to make the model calculations. The technical description is followed by a user's guide to the model with emphasis on running the code. The user's guide contains information about the model input and output. The third section is a programmer's guide to the code. It discusses the hardware and software required to run the code. The programmer's guide also discusses program structure and each of the program elements.
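The Monte Carlo treatment of input uncertainty mentioned above amounts to sampling uncertain inputs and propagating each sample through the transport calculation. The sketch below illustrates the pattern with a textbook Gaussian-plume concentration formula and an uncertain wind speed (the model, distributions, and numbers are illustrative assumptions, not RATCHET's actual formulation):

```python
import math
import random

def plume_centerline(Q, u, H, sy, sz):
    """Ground-level centerline concentration for an elevated point
    source (Gaussian plume with ground reflection): Q source rate,
    u wind speed, H release height, sy/sz dispersion parameters."""
    return Q / (math.pi * sy * sz * u) * math.exp(-H * H / (2 * sz * sz))

# Propagate wind-speed uncertainty through the model by sampling.
rng = random.Random(42)
samples = [plume_centerline(Q=1.0, u=max(0.5, rng.gauss(4.0, 1.0)),
                            H=50.0, sy=80.0, sz=40.0)
           for _ in range(5000)]
mean = sum(samples) / len(samples)
spread = (sum((c - mean) ** 2 for c in samples) / len(samples)) ** 0.5
```

The ensemble of outputs, rather than a single deterministic value, is what allows a dose reconstruction to report uncertainty bounds alongside its estimates.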
Description and use of LSODE, the Livermore Solver for Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Hindmarsh, Alan C.
1993-01-01
LSODE, the Livermore Solver for Ordinary Differential Equations, is a package of FORTRAN subroutines designed for the numerical solution of the initial value problem for a system of ordinary differential equations. It is particularly well suited for 'stiff' differential systems, for which the backward differentiation formula method of orders 1 to 5 is provided. The code includes the Adams-Moulton method of orders 1 to 12, so it can be used for nonstiff problems as well. In addition, the user can easily switch methods to increase computational efficiency for problems that change character. For both methods a variety of corrector iteration techniques is included in the code. Also, to minimize computational work, both the step size and method order are varied dynamically. This report presents complete descriptions of the code and integration methods, including their implementation. It also provides a detailed guide to the use of the code, as well as an illustrative example problem.
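The order-1 member of LSODE's backward differentiation formula family is backward Euler, and its value for stiff problems is easy to demonstrate. The Python sketch below (LSODE itself is Fortran; this is only an illustration of the order-1 BDF on a linear problem where the implicit update can be solved in closed form) integrates a stiff test equation whose exact solution is cos t:

```python
import math

def backward_euler(f_lin, y0, t_end, h):
    """Backward Euler (the order-1 BDF) for y' = a*y + g(t);
    f_lin(t) returns (a, g(t)) and the implicit step is solved exactly."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        t_next = t + h
        a, g = f_lin(t_next)
        y = (y + h * g) / (1 - h * a)   # solves y_next = y + h*(a*y_next + g)
        t = t_next
    return y

# Stiff test problem: y' = -1000*(y - cos t) - sin t, exact solution cos t.
stiff = lambda t: (-1000.0, 1000.0 * math.cos(t) - math.sin(t))
y_num = backward_euler(stiff, y0=1.0, t_end=1.0, h=0.01)
```

With h = 0.01 an explicit method would be violently unstable here (it would need h below about 0.002), while the implicit step tracks the smooth solution accurately; higher-order BDF and the dynamic order/step selection in LSODE refine this same mechanism.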
User's manual: Subsonic/supersonic advanced panel pilot code
NASA Technical Reports Server (NTRS)
Moran, J.; Tinoco, E. N.; Johnson, F. T.
1978-01-01
Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and should not be construed to represent a finished production program. The pilot code is based on a higher-order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.
Development of a MELCOR Sodium Chemistry (NAC) Package - FY17 Progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
This report describes the status of the development of the MELCOR Sodium Chemistry (NAC) package. This development is based on the CONTAIN-LMR sodium physics and chemistry models to be implemented in MELCOR. In the past three years, the sodium equation of state as a working fluid from nuclear fusion safety research and from the SIMMER code has been implemented into MELCOR. The chemistry models from the CONTAIN-LMR code, such as the spray and pool fire models, have also been implemented into MELCOR. This report describes the implemented models and the issues encountered. Model descriptions and input descriptions are provided. Development testing of the spray and pool fire models is described, including the code-to-code comparison with CONTAIN-LMR. The report ends with an expected timeline for the remaining models to be implemented, such as the atmosphere chemistry, sodium-concrete interactions, and experimental validation tests.
Physical Model for the Evolution of the Genetic Code
NASA Astrophysics Data System (ADS)
Yamashita, Tatsuro; Narikiyo, Osamu
2011-12-01
Using the shape space of codons and tRNAs we give a physical description of the genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario we perform an evolutionary simulation that implements cost-based selection of amino acids and confirm a rapid transition in the code change. Such rapidness mitigates the main weakness of the scenario, namely the non-unique translation of the code at the intermediate state. In the case of the codon capture scenario, survival against mutations under a mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.
Trends and Characteristics of Pediatric Dentistry Patients Treated under General Anesthesia.
Rudie, Maxwell N; Milano, Michael M; Roberts, Michael W; Divaris, Kimon
2018-05-11
The aims of this study were to describe the demographic characteristics of pediatric dentistry patients undergoing dental rehabilitation under general anesthesia (DRGA) at UNC-Chapel Hill during the last 13 years and to identify factors associated with multiple (1 versus 2 or more) DRGA visits during that timeframe. Administrative claims data were used to identify children and adolescents (age <18 years) who underwent DRGA between 1/1/2002 and 12/31/2014 at the UNC Hospitals system. Information on children's age, sex, and all treatment-associated CDT codes were collected. Descriptive statistics and bivariate tests of association were used for data analyses. There were 4,413 DRGAs among 3,973 children (median age=4 years 8 months, males=55%) during the study period. The annual rate of DRGAs increased over time, peaking (n=447) in 2013. Overall, 9% of children had ≥2 visits, with repeat rates up to 18%. There was no association between children's sex and receipt of one versus multiple DRGAs; however, craniofacial cases were more likely (p<0.0005) to have multiple DRGAs compared to non-craniofacial ones. DRGAs are on the increase; with the exception of craniofacial and special health care needs patients, multiple DRGAs may reflect sub-optimal adherence to preventive and continuing-care recommendations.
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
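The predictive-coding principle behind such schemes can be illustrated with a minimal sketch. This is not the vector adaptive/predictive algorithm of the abstract: it quantizes the residual of a fixed first-order scalar predictor, and the step size and predictor coefficient are arbitrary assumptions.

```python
def adpcm_encode(samples, step=0.1, a=0.9):
    """Toy predictive coder (illustration only): quantize the residual
    of a fixed first-order predictor to integer levels in [-3, 3]."""
    prev, codes = 0.0, []
    for s in samples:
        prediction = a * prev
        residual = s - prediction
        q = max(-3, min(3, round(residual / step)))
        codes.append(q)
        prev = prediction + q * step  # reconstruction, mirrored by decoder
    return codes

def adpcm_decode(codes, step=0.1, a=0.9):
    """Decoder runs the same predictor on its own reconstructions."""
    prev, out = 0.0, []
    for q in codes:
        prev = a * prev + q * step
        out.append(prev)
    return out
```

Because the encoder predicts from its own reconstructions rather than the raw samples, the decoder's state stays synchronized and quantization error does not accumulate.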
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. By these three criteria, NASA's HZETRN/QMSFRG codes show a very high degree of accuracy.
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
NASTRAN hydroelastic modal studies. Volume 2: Programmer documentation
NASA Technical Reports Server (NTRS)
1977-01-01
The operational steps, data descriptions, and program code for the new NASTRAN hydroelastic analysis system are described. The overall flow of the system is described, followed by the descriptions of the individual modules and their subroutines.
Coding "Corrective Recasts": The Maintenance of Meaning and More Fundamental Problems
ERIC Educational Resources Information Center
Hauser, Eric
2005-01-01
A fair amount of descriptive research in the field of second language acquisition has looked at the presence of what have been labeled corrective recasts. This research has relied on the methodological practice of coding to identify particular turns as "corrective recasts." Often, the coding criteria make use of the notion of the maintenance of…
Micro PAVER, Version 1.0, User’s Guide, Airport Pavement Management System,
1986-10-01
repair data have been entered for the policy, the following prompts will appear on your screen. Policy Number: 1 Policy Description: PRIMARY RUNWAYS AND... Material Codes (those material codes entered by the Micro PAVER developers) cannot be modified or deleted. New material codes can be added, modified, or
ERIC Educational Resources Information Center
National Forum on Education Statistics, 2011
2011-01-01
In this handbook, "Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data" (SCED), the National Center for Education Statistics (NCES) and the National Forum on Education Statistics have extended the existing secondary course classification system with codes and descriptions for courses offered at…
Tail reconnection in the global magnetospheric context: Vlasiator first results
NASA Astrophysics Data System (ADS)
Palmroth, Minna; Hoilijoki, Sanni; Juusola, Liisa; Pulkkinen, Tuija I.; Hietala, Heli; Pfau-Kempf, Yann; Ganse, Urs; von Alfthan, Sebastian; Vainio, Rami; Hesse, Michael
2017-11-01
The key dynamics of the magnetotail have been researched for decades and have been associated with either three-dimensional (3-D) plasma instabilities and/or magnetic reconnection. We apply a global hybrid-Vlasov code, Vlasiator, to simulate reconnection self-consistently in the ion kinetic scales in the noon-midnight meridional plane, including both dayside and nightside reconnection regions within the same simulation box. Our simulation represents a numerical experiment, which turns off the 3-D instabilities but models ion-scale reconnection physically accurately in 2-D. We demonstrate that many known tail dynamics are present in the simulation without a full description of 3-D instabilities or without the detailed description of the electrons. While multiple reconnection sites can coexist in the plasma sheet, one reconnection point can start a global reconfiguration process, in which magnetic field lines become detached and a plasmoid is released. As the simulation run features temporally steady solar wind input, this global reconfiguration is not associated with sudden changes in the solar wind. Further, we show that lobe density variations originating from dayside reconnection may play an important role in stabilising tail reconnection.
Pastor, D; Amaya, W; García-Olcina, R; Sales, S
2007-07-01
We present a simple theoretical model of, and the experimental verification for, the vanishing of the autocorrelation peak due to wavelength detuning in the coding-decoding process of coherent direct-sequence optical code multiple access systems based on a superstructured fiber Bragg grating. Moreover, this detuning effect has been exploited to provide an additional degree of multiplexing and/or optical code tuning.
50 CFR Table 2d to Part 679 - Species Codes-Non-FMP Species
Code of Federal Regulations, 2011 CFR
2011-10-01
Species description and code: Arctic char, anadromous, 521; Dolly varden, anadromous, 531; Eels or eel-like fish, 210; Eel, wolf...; Arctic surf, 812; Cockle, 820; Eastern softshell, 842; Pacific geoduck, 815; Pacific littleneck, 840; Pacific razor...
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
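The mapping-coverage figures above come from a transformation pipeline; a minimal sketch of how such coverage might be measured follows. The record layout, the source codes, and the vocabulary contents are all hypothetical, invented for illustration.

```python
# Hypothetical source records and source-to-standard vocabulary map
drug_records = [
    {"source_code": "NDC:00093-7146"},
    {"source_code": "NDC:00002-3227"},
    {"source_code": "NDC:55111-0682"},
    {"source_code": "LOCAL:XR-17"},      # site-specific code, unmapped
    {"source_code": "NDC:00378-1805"},
]
standard_vocab = {  # source code -> standard concept id (made up)
    "NDC:00093-7146": 1112807,
    "NDC:00002-3227": 1545958,
    "NDC:55111-0682": 1308216,
    "NDC:00378-1805": 1331270,
}

def mapping_rate(records, vocab, key="source_code"):
    """Percent of records whose source code maps to a standard concept."""
    mapped = sum(1 for r in records if r[key] in vocab)
    return 100.0 * mapped / len(records)

print(mapping_rate(drug_records, standard_vocab))
```

In practice the unmapped remainder would be reviewed as a potential data-quality issue, mirroring the exclusions described in the abstract.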
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun
2008-05-01
This paper proposes an all-optical label processing scheme that uses multiple optical orthogonal codes sequences (MOOCS)-based optical labels for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of multiple optical orthogonal codes (MOOC) selected from multiple-group optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of the MOOCS-OPS network are given and analyzed. Firstly, theoretical analyses are used to prove that MOOCS is able to greatly enlarge the number of available optical labels when compared to the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of the MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction, and optical label rewriting, are given and studied. These results verify that the proposed MOOCS-OPS scheme is feasible.
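The label-space enlargement claimed for MOOCS over SOOC is a simple counting argument, sketched below. The code-set size and sequence length are assumptions for illustration, not values from the paper: with n distinct optical orthogonal codes, SOOC offers n labels, while sequences of L distinct codes offer C(n, L) combinations or P(n, L) permutations.

```python
import math

n = 8  # available optical orthogonal codes (assumed)
L = 3  # codes per label sequence (assumed)

sooc_labels = n                      # one code per label
moocs_comb_labels = math.comb(n, L)  # unordered selections of L codes
moocs_perm_labels = math.perm(n, L)  # ordered sequences of L distinct codes

print(sooc_labels, moocs_comb_labels, moocs_perm_labels)
```

Even at these small assumed sizes the ordered-sequence label space is over forty times larger than the single-code space, which is the effect the theoretical analysis formalizes.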
Utilizing Spectrum Efficiently (USE)
2011-02-28
4.8 Space-Time Coded Asynchronous DS-CDMA with Decentralized MAI Suppression: Performance and Spectral Efficiency. In [60] multiple... supported at a given signal-to-interference ratio in asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems was examined. It was
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe temporal and microspatial density functions to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling; these effects are ignored by, or impossible in, the prior art. The GERM code provides scientists with interpretation of experimental data; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material.
The GERM code makes the numerical estimates of basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) for the purpose of simulating space radiation biological effects. In the first option, properties of monoenergetic beams are treated. In the second option, the transport of beams in different materials is treated. Similar biophysical properties as in the first option are evaluated for the primary ion and its secondary particles. Additional properties related to the nuclear fragmentation of the beam are evaluated. The GERM code is a computationally efficient Monte-Carlo heavy-ion-beam model. It includes accurate models of LET, range, residual energy, and straggling, and the quantum multiple scattering fragmentation (QMSGRG) nuclear database.
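One of the biophysical quantities mentioned, the Poisson distribution of particle traversals for a specified cellular area, is simple to sketch. The area and fluence values below are illustrative assumptions, not outputs of the GERM code.

```python
import math

def traversal_pmf(k, fluence, area):
    """P(exactly k particle hits) for a Poisson process with mean
    lam = fluence * area traversals per cell."""
    lam = fluence * area
    return math.exp(-lam) * lam**k / math.factorial(k)

area = 100.0    # cell nucleus cross-section in um^2 (assumed)
fluence = 0.02  # particles per um^2 (assumed)

p_miss = traversal_pmf(0, fluence, area)  # fraction of cells never hit
print(p_miss)
```

At this assumed fluence the mean is two traversals per cell, yet a nontrivial fraction of cells receives no hit at all, which is why event-by-event stochastic modeling matters for sparse heavy-ion exposures.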
Becker, H; Bialasiewicz, A A; Schaudig, U; Schäfer, H; von Domarus, D
2002-05-01
A new data bank developed for ophthalmopathology using a computer-generated, multidigital data code is expected to be able to accomplish complex clinicopathologic correlations of diagnoses and signs, as provided by (multiple) clinical events and histopathologically proven etiologies, and to facilitate the documentation of new data. In the ophthalmopathology laboratory 2890 eyes were examined between January 20, 1975 and December 12, 1996. The main diagnoses and patient data from this 22-year period were recorded. To facilitate the presentation of data, a 10-year period with eyes of 976 patients enucleated from December, 1986 to December, 1996 was chosen. Principal and secondary diagnoses served for establishing the data bank. The frequencies of successive histologic and clinical diagnoses were evaluated by a descriptive computing program using an SPSS-multi-response mode with dummy variables and a categorical variable listing of the software (SPSS version 10.0) classified as (a) non-filtered random, (b) filtered by multiple etiologies, and (c) filtered by multiple events. The principal groups (e.g., histologic diagnoses concerning etiology) and subgroups (e.g., trauma, neoplasia, surgery, systemic diseases, and inflammations) were defined and correlated with 798 separate diagnoses. From 11 diagnoses/events ascribed to the clinical cases, 11,198 namings resulted. Thus, a comparative study of complex etiologies and events leading to enucleation in different hospitals of a specific area may be performed using this electronic ophthalmopathologic data bank system. The complexity of rare disease and integration into a superimposed structure can be managed with this custom-made data bank. A chronologically and demographically oriented consideration of reasons for enucleation is thus feasible.
Pérez-Milena, Alejandro; Redondo-Olmedilla, Manuel de Dios; Martínez-Fernández, María Luz; Jiménez-Pulido, Idoia; Mesa-Gallardo, Inmaculada; Leal-Helmling, Francisco Javier
2017-11-01
To determine the changes in hazardous drinking in adolescents in the last decade, as well as their motivations and experiences. Firstly, a descriptive design using a self-report questionnaire, and secondly an explanatory qualitative design, with video recordings of discussion groups with content analysis (coding, triangulation of categories, and verification of results). Pupils from an urban high school, administering a questionnaire every 3 years from 2004 to 2013. Purposive sampling was used to select groups in the qualitative design. Homogeneity criterion: education level; heterogeneity criteria: age, gender, and drug use. Questionnaire: age, gender, drug use, and the CAGE test. Interviews: semi-structured on a previous script, evaluating experiences and expectations. Descriptive design: a total of 1,558 questionnaires, age 14.2±0.3 years, 50% female. The prevalence of alcohol drinking decreases (13%), but its hazardous use increases (11%; P<.001, χ²). This is associated with being female (P<.01, χ²), higher alcohol consumption (>6 standard drink units weekly; P<.001, ANOVA), drinking during the weekend (56%; P<.01, χ²), and multiple drug use (P<.01, χ²). CAGE questionnaire: 37% gave ≥1 positive response (related to hazardous drinking, P<.05, χ²), and 18% gave ≥2. A total of 48 respondents, classified into 4 categories: personal factors (age, gender), social influences (family, friends), consumption standards (accessibility, nightlife), and addiction (risk, multiple drug use). Despite the decrease in the prevalence of alcohol drinking, the increase in hazardous drinking is a public health problem. It is related to being female, binge drinking, and multiple drug use. Nightlife and social standards are the main reasons given by adolescents, who have no perception of risk. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Trellis coding techniques for mobile communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Jedrey, T.
1988-01-01
A criterion for designing optimum trellis codes to be used over fading channels is given. A technique is shown for reducing certain multiple trellis codes, optimally designed for the fading channel, to conventional (i.e., multiplicity one) trellis codes. The computational cutoff rate R0 is evaluated for MPSK transmitted over fading channels. Examples of trellis codes optimally designed for the Rayleigh fading channel are given and compared with respect to R0. Two types of modulation/demodulation techniques are considered, namely coherent (using pilot tone-aided carrier recovery) and differentially coherent with Doppler frequency correction. Simulation results are given for end-to-end performance of two trellis-coded systems.
Etiology of work-related electrical injuries: a narrative analysis of workers' compensation claims.
Lombardi, David A; Matz, Simon; Brennan, Melanye J; Smith, Gordon S; Courtney, Theodore K
2009-10-01
The purpose of this study was to provide new insight into the etiology of primarily nonfatal, work-related electrical injuries. We developed a multistage, case-selection algorithm to identify electrical-related injuries from workers' compensation claims and a customized coding taxonomy to identify pre-injury circumstances. Workers' compensation claims routinely collected over a 1-year period from a large U.S. insurance provider were used to identify electrical-related injuries using an algorithm that evaluated: coded injury cause information, nature of injury, "accident" description, and injury description narratives. Concurrently, a customized coding taxonomy for these narratives was developed to abstract the activity, source, initiating process, mechanism, vector, and voltage. Among the 586,567 reported claims during 2002, electrical-related injuries accounted for 1283 (0.22%) of nonfatal claims and 15 fatalities (1.2% of electrical). Most (72.3%) were male, average age of 36, working in services (33.4%), manufacturing (24.7%), retail trade (17.3%), and construction (7.2%). Body part(s) injured most often were the hands, fingers, or wrist (34.9%); multiple body parts/systems (25.0%); and the lower/upper arm, elbow, shoulder, and upper extremities (19.2%). The leading activities were conducting manual tasks (55.1%); working with machinery, appliances, or equipment; working with electrical wire; and operating powered or nonpowered hand tools. Primary injury sources were appliances and office equipment (24.4%); wires, cables/cords (18.0%); machines and other equipment (11.8%); fixtures, bulbs, and switches (10.4%); and lightning (4.3%). No vector was identified in 85% of cases, and the work process was initiated by others in less than 1% of cases.
Injury narratives provide valuable information to overcome some of the limitations of precoded data, more specifically for identifying additional injury cases and for supplementing traditional epidemiologic data, furthering understanding of the etiology of work-related electrical injuries in ways that may lead to further prevention opportunities.
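A multistage case-selection algorithm of the kind described can be sketched as a precoded-field check backed by narrative keyword matching. The field names, cause codes, and keyword list below are invented for illustration; the study's actual algorithm and taxonomy are far more detailed.

```python
ELECTRICAL_KEYWORDS = ("shock", "electrocut", "voltage", "live wire", "arc flash")
ELECTRICAL_CAUSE_CODES = {"E-01", "E-02"}  # hypothetical precoded causes

def is_electrical_case(claim):
    """Stage 1: trust the precoded cause field; stage 2: fall back to
    scanning the free-text injury narrative for electrical terms."""
    if claim.get("cause_code") in ELECTRICAL_CAUSE_CODES:
        return True
    narrative = (claim.get("narrative") or "").lower()
    return any(kw in narrative for kw in ELECTRICAL_KEYWORDS)

claims = [
    {"cause_code": "E-01", "narrative": "burned hand on equipment"},
    {"cause_code": "F-07", "narrative": "received a shock from live wire"},
    {"cause_code": "F-03", "narrative": "slipped on wet floor"},
]
selected = [c for c in claims if is_electrical_case(c)]
print(len(selected))
```

The second stage is what lets narrative text surface cases the precoded cause field misses, which is the study's central methodological point.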
Single-channel voice-response-system program documentation volume I : system description
DOT National Transportation Integrated Search
1977-01-01
This report documents the design and implementation of a Voice Response System (VRS) using Adaptive Differential Pulse Code Modulation (ADPCM) voice coding. Implemented on a Digital Equipment Corporation PDP-11/20, this VRS system supports a single ...
UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.
A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
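The "accelerated Liebmann" iteration named above is what is now usually called successive over-relaxation (SOR). A minimal one-dimensional sketch on a model diffusion problem follows; the grid size, relaxation factor, and boundary conditions are assumptions for illustration, not taken from UFO.

```python
def sor_poisson_1d(source, h, omega=1.5, tol=1e-10, max_iter=20000):
    """Accelerated Liebmann (SOR) sweeps for -u'' = source with
    u = 0 at both ends, on a uniform grid of spacing h."""
    n = len(source)
    u = [0.0] * n
    for _ in range(max_iter):
        largest_update = 0.0
        for i in range(1, n - 1):
            gauss_seidel = 0.5 * (u[i - 1] + u[i + 1] + h * h * source[i])
            new = u[i] + omega * (gauss_seidel - u[i])  # over-relaxation
            largest_update = max(largest_update, abs(new - u[i]))
            u[i] = new
        if largest_update < tol:
            break
    return u

# Model problem: -u'' = 2 on [0, 1]; exact solution u = x(1 - x)
n = 51
h = 1.0 / (n - 1)
u = sor_poisson_1d([2.0] * n, h)
print(u[n // 2])  # should be close to 0.25
```

Choosing the relaxation factor omega is the "optimum parameters" problem the abstract mentions: the convergence rate is sensitive to it, and UFO computes suitable values itself.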
Code of Federal Regulations, 2014 CFR
2014-10-01
50 CFR Table 2c to Part 679 - Species Codes: FMP Forage Fish Species (all species of the following families). The table lists a species description and code for each family; entries begin with bristlemouths...
CELCAP: A Computer Model for Cogeneration System Analysis
NASA Technical Reports Server (NTRS)
1985-01-01
A description of the CELCAP cogeneration analysis program is presented. A detailed description of the methodology used by the Naval Civil Engineering Laboratory in developing the CELCAP code and the procedures for analyzing cogeneration systems for a given user are given. The four engines modeled in CELCAP are: gas turbine with exhaust heat boiler, diesel engine with waste heat boiler, single automatic-extraction steam turbine, and back-pressure steam turbine. Both the design-point and part-load performances are taken into account in the engine models. The load model describes how the hourly electric and steam demand of the user is represented by 24 hourly profiles. The economic model describes how the annual and life-cycle operating costs, which include the costs of fuel, purchased electricity, and operation and maintenance of engines and boilers, are calculated. The CELCAP code structure and principal functions are described to show how the various components of the code are related to each other. Three examples of the application of CELCAP are given to illustrate the versatility of the code. The examples represent cases of system selection, system modification, and system optimization.
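The economic model described (fuel, purchased electricity, and O&M costs accumulated over hourly demand profiles) can be sketched as follows. All unit prices, the load profile, and the engine capacity are hypothetical, not CELCAP data, and steam costs are omitted for brevity.

```python
# Hypothetical unit costs, $/kWh
FUEL = 0.03      # fuel cost per kWh generated on site
GRID = 0.09      # purchased electricity
O_AND_M = 0.01   # engine operation and maintenance

def daily_operating_cost(hourly_demand_kw, engine_capacity_kw):
    """One 24-hour profile: the engine serves load up to its capacity
    and the grid covers any shortfall."""
    cost = 0.0
    for demand in hourly_demand_kw:
        generated = min(demand, engine_capacity_kw)
        purchased = demand - generated
        cost += generated * (FUEL + O_AND_M) + purchased * GRID
    return cost

# Flat 500 kW overnight, 1200 kW daytime peak, against an 800 kW engine
profile = [500.0] * 8 + [1200.0] * 10 + [500.0] * 6
print(round(daily_operating_cost(profile, 800.0), 2))
```

Summing such daily costs over representative profiles for a year, and discounting over the study period, gives the annual and life-cycle figures the abstract refers to.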
NASA Technical Reports Server (NTRS)
Stoll, Frederick
1993-01-01
The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.
Carey, ML; Clinton-McHarg, T; Sanson-Fisher, RW; Campbell, S; Douglas, HE
2011-01-01
The psychosocial outcomes of cancer patients may be influenced by individual-level, social, and treatment centre predictors. This paper aimed to examine the extent to which individual, social, and treatment centre variables have been examined as predictors or targets of intervention for psychosocial outcomes of cancer patients. Medline was searched to find studies in which the psychological outcomes of cancer patients were primary variables. Papers published in English between 1999 and 2009 that reported primary data relevant to psychosocial outcomes for cancer patients were included, with 20% randomly selected for further coding. Descriptive studies were coded for inclusion of individual, social, or treatment centre variables. Intervention studies were coded to determine if the unit of intervention was the individual patient, social unit, or treatment centre. After random sampling, 412 publications meeting the inclusion criteria were identified; 169 were descriptive and 243 were interventions. Of the descriptive papers, 95.0% included individual predictors and 5.0% social predictors. None of the descriptive papers examined treatment centre variables as predictors of psychosocial outcomes. Similarly, none of the interventions evaluated the effectiveness of treatment centre interventions for improving psychosocial outcomes. Potential reasons for the overwhelming dominance of individual predictors and individual-focused interventions in the psychosocial literature are discussed. PMID:20646035
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, Scott Carlton; Roberts, Jesse D.
2014-03-01
This document describes the marine hydrokinetic (MHK) input file and subroutines for the Sandia National Laboratories Environmental Fluid Dynamics Code (SNL-EFDC), which is a combined hydrodynamic, sediment transport, and water quality model based on the Environmental Fluid Dynamics Code (EFDC) developed by John Hamrick [1], formerly sponsored by the U.S. Environmental Protection Agency, and now maintained by Tetra Tech, Inc. SNL-EFDC has been previously enhanced with the incorporation of the SEDZLJ sediment dynamics model developed by Ziegler, Lick, and Jones [2-4]. SNL-EFDC has also been upgraded to more accurately simulate algae growth with specific application to optimizing biomass in an open-channel raceway for biofuels production [5]. A detailed description of the input file containing data describing the MHK device/array is provided, along with a description of the MHK FORTRAN routine. Both a theoretical description of the MHK dynamics as incorporated into SNL-EFDC and an explanation of the source code are provided. This user manual is meant to be used in conjunction with the original EFDC [6] and sediment dynamics SNL-EFDC manuals [7]. Through this document, the authors provide information for users who wish to model the effects of an MHK device (or array of devices) on a flow system with EFDC and who also seek a clear understanding of the source code, which is available from staff in the Water Power Technologies Department at Sandia National Laboratories, Albuquerque, New Mexico.
Transient Ejector Analysis (TEA) code user's guide
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1993-01-01
A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing-region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in modeling the physics of each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA a research code (not a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.
ASA24 enables multiple automatically coded self-administered 24-hour recalls and food records
A freely available web-based tool for epidemiologic, interventional, behavioral, or clinical research from NCI that enables multiple automatically coded self-administered 24-hour recalls and food records.
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes
NASA Technical Reports Server (NTRS)
Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a fifth-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
Large Eddy Simulation of Flow in Turbine Cascades Using LEST and UNCLE Codes
NASA Technical Reports Server (NTRS)
Ashpis, David (Technical Monitor); Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a fifth-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
A predictive transport modeling code for ICRF-heated tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, C.K.; Hwang, D.Q.; Houlberg, W.
1992-02-01
In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.
Code of Federal Regulations, 2014 CFR
2014-07-01
... education, in scientific, professional, technical, mechanical, trade, clerical, fiscal, administrative, or... Data Elements for Federal Travel [Accounting & Certification] Group name Data elements Description Accounting Classification Accounting Code Agency accounting code. Non-Federal Source Indicator Per Diem...
Cognitive Architectures for Multimedia Learning
ERIC Educational Resources Information Center
Reed, Stephen K.
2006-01-01
This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…
Leduc, Y.; Cauchon, M.; Emond, J. G.; Ouellet, J.
1995-01-01
OBJECTIVE: To develop and implement a computerized version of the International Classification of Primary Care. To create a data bank and to conduct a descriptive study of our clinic's clientele. DESIGN: Testing a software program and creating a data bank. SETTING: Family Medicine Unit at Enfant-Jésus Hospital, Quebec City. PARTICIPANTS: All Family Medicine Unit doctors and patients seen between July 1, 1990, and June 30, 1993. MAIN OUTCOME MEASURE: Description of our clientele's health problems using the ICPC. RESULTS: During the study, 48,415 diagnostic codes for 33,033 visits were entered into the bank. For close to 50% of these visits, two or more health problems were coded. There was good correlation between the description of our clientele and descriptions in other studies in the literature. CONCLUSION: This article describes the development of a data bank in a family medicine unit using a software program based on the ICPC. Our 3-year experiment demonstrated that the method works well in family physicians' daily practice. A descriptive study of our clientele is presented, as well as a few examples of the many applications of such a data bank. PMID:7580382
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
Very little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability of 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multitasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single-processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.
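The partitioning strategy described above, in which an existing single-processor solve is mapped onto several processors that exchange only boundary values, can be sketched as follows. This is a minimal illustration using Python threads in place of Cray multitasking; `relax_block` is a hypothetical stand-in for the per-grid flow solver, not the MPMG code itself.

```python
from concurrent.futures import ThreadPoolExecutor

def relax_block(task):
    # One Jacobi-style averaging sweep over a block, given copies of the
    # neighbouring boundary values (stand-in for the per-processor solve)
    block, left, right = task
    padded = [left] + block + [right]
    return [(padded[i - 1] + padded[i + 1]) / 2.0
            for i in range(1, len(padded) - 1)]

def parallel_sweep(grid, nworkers=3):
    # Split the grid into contiguous blocks, one per worker (MPMG-style
    # mapping of grid partitions onto processors)
    n = len(grid)
    size = n // nworkers
    blocks = [grid[i * size:(i + 1) * size] for i in range(nworkers - 1)]
    blocks.append(grid[(nworkers - 1) * size:])
    tasks, pos = [], 0
    for b in blocks:
        # Each task carries frozen copies of its neighbours' edge values,
        # so blocks can be relaxed independently and in parallel
        left = grid[pos - 1] if pos > 0 else b[0]
        right = grid[pos + len(b)] if pos + len(b) < n else b[-1]
        tasks.append((b, left, right))
        pos += len(b)
    with ThreadPoolExecutor(max_workers=nworkers) as ex:
        out = list(ex.map(relax_block, tasks))
    return [v for blk in out for v in blk]
```

Because each block receives its neighbours' edge values before the sweep starts, the parallel result matches a serial sweep over the whole grid, which is the property that lets an existing algorithm be reused unchanged.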
NASA Technical Reports Server (NTRS)
Wade, Randall S.; Jones, Bailey
2009-01-01
A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
Allison, Kimberly H; Reisch, Lisa M; Carney, Patricia A; Weaver, Donald L; Schnitt, Stuart J; O’Malley, Frances P; Geller, Berta M; Elmore, Joann G
2015-01-01
Aims To gain a better understanding of the reasons for diagnostic variability, with the aim of reducing the phenomenon. Methods and results In preparation for a study on the interpretation of breast specimens (B-PATH), a panel of three experienced breast pathologists reviewed 336 cases to develop consensus reference diagnoses. After independent assessment, cases coded as diagnostically discordant were discussed at consensus meetings. By the use of qualitative data analysis techniques, transcripts of 16 h of consensus meetings for a subset of 201 cases were analysed. Diagnostic variability could be attributed to three overall root causes: (i) pathologist-related; (ii) diagnostic coding/study methodology-related; and (iii) specimen-related. Most pathologist-related root causes were attributable to professional differences in pathologists’ opinions about whether the diagnostic criteria for a specific diagnosis were met, most frequently in cases of atypia. Diagnostic coding/study methodology-related root causes were primarily miscategorizations of descriptive text diagnoses, which led to the development of a standardized electronic diagnostic form (BPATH-Dx). Specimen-related root causes included artefacts, limited diagnostic material, and poor slide quality. After re-review and discussion, a consensus diagnosis could be assigned in all cases. Conclusions Diagnostic variability is related to multiple factors, but consensus conferences, standardized electronic reporting formats and comments on suboptimal specimen quality can be used to reduce diagnostic variability. PMID:24511905
THE RETC CODE FOR QUANTIFYING THE HYDRAULIC FUNCTIONS OF UNSATURATED SOILS
This report describes the RETC computer code for analyzing the soil water retention and hydraulic conductivity functions of unsaturated soils. These hydraulic properties are key parameters in any quantitative description of water flow into and through the unsaturated zone of soil...
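The principal retention model fitted by RETC is the van Genuchten (1980) form. A minimal sketch of that function follows; the parameter names (`theta_r`, `theta_s`, `alpha`, `n`) are the conventional ones, and the values used in the usage note are illustrative only.

```python
def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten soil water retention function theta(h).

    h       : pressure head (negative in the unsaturated zone)
    theta_r : residual water content
    theta_s : saturated water content
    alpha, n: shape parameters fitted by codes such as RETC
    """
    if h >= 0.0:
        return theta_s                      # at or above saturation
    m = 1.0 - 1.0 / n                       # Mualem constraint m = 1 - 1/n
    Se = (1.0 + (alpha * abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se
```

As |h| grows the curve falls monotonically from theta_s toward theta_r, which is the behaviour RETC exploits when fitting observed retention data.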
Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun; Ma, Chunli
2009-11-01
In this paper, we use a new analysis method, based on the assumption of independent multiple optical orthogonal codes, to derive the probability function of MOOCS-OPS networks, discuss the performance characteristics for a variety of parameters, and compare some characteristics of systems employing optical labels based on a single optical orthogonal code with those based on multiple optical orthogonal code sequences. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, it is found that the performance of MOOCS-OPS networks is degraded compared with single-optical-orthogonal-code-based optical labels for optical packet switching (SOOC-OPS); however, MOOCS-OPS networks can greatly enlarge the scalability of optical packet switching networks.
Practices and Standards in the Construction of BRL-CAD Target Descriptions
1993-09-01
Coupled-cluster based R-matrix codes (CCRM): Recent developments
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Pradhan, Anil K.
2008-05-01
We report the ongoing development of the new coupled-cluster R-matrix codes (CCRM) for treating electron-ion scattering and radiative processes within the framework of the relativistic coupled-cluster method (RCC), interfaced with the standard R-matrix methodology. The RCC method is size consistent and in principle equivalent to an all-order many-body perturbation theory. The RCC method is one of the most accurate many-body theories, and has been applied to several systems. This project should enable the study of electron interactions with heavy atoms/ions, utilizing not only high-speed computing platforms but also an improved theoretical description of the relativistic and correlation effects for the target atoms/ions as treated extensively within the RCC method. Here we present a comprehensive outline of the newly developed theoretical method and a schematic representation of the new suite of CCRM codes. We begin with the flowchart and description of the various stages involved in this development. We retain notations and nomenclature for the different stages analogous to those of the standard R-matrix codes.
Mohajjel-Aghdam, Alireza; Hassankhani, Hadi; Zamanzadeh, Vahid; Khameneh, Saied; Moghaddam, Sara
2013-01-01
Introduction: The nursing profession requires knowledge of ethics to guide performance, and the nature of the profession necessitates ethical care beyond routine care. Today, professional ethics codes are defined worldwide on the basis of the human and ethical issues that arise in communication between nurse and patient. To improve all dimensions of nursing, these ethics codes must be respected. The aim of this study was to assess knowledge of, and performance regarding, nursing ethics codes from nurses' and patients' perspectives. Methods: A descriptive study was conducted of 345 nurses and 500 inpatients in six teaching hospitals of Tabriz in 2012. To investigate nurses' knowledge and performance, data were collected using structured questionnaires. Statistical analysis was done using descriptive and analytic statistics, independent t-test, ANOVA and the Pearson correlation coefficient, in SPSS 13. Results: Most of the nurses were female, married and educated to BS degree, and 86.4% of them were aware of the ethics codes; 91.9% of nurses, but only 41.8% of patients, reported that nurses respect the ethics codes. Nurses' and patients' perspectives on the ethics codes differed significantly. A significant relationship was found between nurses' knowledge of the ethics codes and both job satisfaction and complaints about ethical performance. Conclusion: According to the results, attention to teaching ethics codes in the nursing curriculum for students, and continuous education for staff, is proposed. In addition, recognizing failures of the health system, optimizing nursing care, informing patients about nursing ethics codes, promoting patient rights and achieving patient satisfaction can minimize the differences between the two perspectives. PMID:25276730
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
To meet the requirements of increasingly advanced optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite-field multiplicative group is proposed. This construction method effectively avoids the girth-4 phenomenon and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3780, 3540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10^-7, its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334, 4962) code constructed by the method based on inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3969, 3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640, 30592) code in ITU-T G.975.1, and the classic RS(255, 239) code widely used in optical transmission systems in ITU-T G.975. The constructed QC-LDPC(3780, 3540) code is therefore well suited to optical transmission systems.
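The general mechanics of a QC-LDPC construction can be sketched as follows: an exponent matrix is "lifted" into a binary parity-check matrix of circulant permutation blocks, and a standard condition on shift differences detects 4-cycles (the girth-4 phenomenon the paper's method avoids). This is an illustrative sketch of the generic machinery only; the paper's specific subgroup-based choice of exponents is not reproduced here.

```python
import numpy as np

def circulant(L, shift):
    # L x L identity matrix cyclically shifted by `shift` columns
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

def build_H(exponents, L):
    # Lift an exponent matrix into a binary QC-LDPC parity-check matrix:
    # each entry becomes an L x L circulant permutation block
    return np.block([[circulant(L, int(s)) for s in row] for row in exponents])

def girth4_free(exponents, L):
    # A length-4 cycle exists iff two rows of the exponent matrix have
    # equal shift differences (mod L) in two different columns, so the
    # code is 4-cycle-free iff all per-row-pair differences are distinct
    E = np.asarray(exponents)
    J, K = E.shape
    for i1 in range(J):
        for i2 in range(i1 + 1, J):
            diff = (E[i1] - E[i2]) % L
            if len(set(diff.tolist())) < K:
                return False
    return True
```

A well-chosen algebraic exponent matrix (such as one derived from a multiplicative subgroup) guarantees `girth4_free` by construction, which is what makes such codes attractive compared with random placements.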
How to describe a new fungal species
USDA-ARS?s Scientific Manuscript database
The formal and informal requirements for the publication of descriptions of new fungal species are discussed. This involves following the rules of the International Code of Botanical Nomenclature as well as meeting the standards set by the editorial board of the journals in which these descriptions ...
Public domain optical character recognition
NASA Astrophysics Data System (ADS)
Garris, Michael D.; Blue, James L.; Candela, Gerald T.; Dimmick, Darrin L.; Geist, Jon C.; Grother, Patrick J.; Janet, Stanley A.; Wilson, Charles L.
1995-03-01
A public domain document processing system has been developed by the National Institute of Standards and Technology (NIST). The system is a standard reference form-based handprint recognition system for evaluating optical character recognition (OCR), and it is intended to provide a baseline of performance on an open application. The system's source code, training data, performance assessment tools, and type of forms processed are all publicly available. The system recognizes the handprint entered on handwriting sample forms like the ones distributed with NIST Special Database 1. From these forms, the system reads hand-printed numeric fields, upper and lowercase alphabetic fields, and unconstrained text paragraphs comprised of words from a limited-size dictionary. The modular design of the system makes it useful for component evaluation and comparison, training and testing set validation, and multiple system voting schemes. The system contains a number of significant contributions to OCR technology, including an optimized probabilistic neural network (PNN) classifier that operates a factor of 20 times faster than traditional software implementations of the algorithm. The source code for the recognition system is written in C and is organized into 11 libraries. In all, there are approximately 19,000 lines of code supporting more than 550 subroutines. Source code is provided for form registration, form removal, field isolation, field segmentation, character normalization, feature extraction, character classification, and dictionary-based postprocessing. The recognition system has been successfully compiled and tested on a host of UNIX workstations. This paper gives an overview of the recognition system's software architecture, including descriptions of the various system components along with timing and accuracy statistics.
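The PNN classifier mentioned above works by placing a Gaussian kernel on every training exemplar, averaging the kernels per class, and picking the class with the largest response. A minimal sketch of that decision rule follows; it illustrates the textbook PNN only, not NIST's optimized factor-of-20-faster implementation.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    # Pattern layer: one Gaussian kernel centred on each training exemplar.
    # Summation layer: average the kernel responses within each class.
    # Output layer: choose the class with the largest averaged response.
    best_class, best_score = None, -1.0
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)          # squared distances to exemplars
        score = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
        if score > best_score:
            best_class, best_score = c, score
    return int(best_class)
```

The single smoothing parameter `sigma` controls how far each exemplar's influence extends, which is what makes the PNN attractive for component evaluation: there is essentially nothing else to train.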
Molony, D; Beame, C; Behan, W; Crowley, J; Dennehy, T; Quinlan, M; Cullen, W
2016-11-01
While considerable changes are happening in primary care in Ireland, and considerable potential exists in intelligence derived from practice-based data to inform these changes, relatively few large-scale general morbidity surveys have been published. We aimed to examine the most common reasons why people attend primary care, specifically 'reasons for encounter' (RFEs), among the general practice population and among specific demographic groups (i.e., young children and older adults). We retrospectively examined clinical encounters with a diagnostic code over a 4-year period. Descriptive analyses were conducted on anonymised data. 70,489 RFEs were recorded (a mean of 13.53 recorded RFEs per person per annum), and consultations involving multiple RFEs were common. The RFE categories for which codes were most commonly recorded were 'general/unspecified' (31.6%), 'respiratory' (15.4%) and 'musculoskeletal' (12.6%). The most commonly recorded codes were 'medication renewal' (6.8%), 'cough' (6.6%) and 'health maintenance/prevention' (5.8%). There was considerable variation in the number of RFEs recorded per age group: 6,239 RFEs (8.9%) were recorded for children under 6 years and 15,295 RFEs (21.7%) for adults aged over 70. RFEs recorded per calendar month increased consistently through the study period, and there was marked seasonal and temporal variation in the number of RFEs recorded. Practice databases can generate intelligence on morbidity and health service utilisation in the community. Future research to optimise diagnostic coding at practice level and to promote this activity in a more representative sample of practices is a priority.
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This tenth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Food Assembler, Injection Molder-Machine Operator, Data Entry Typist, Institutional Cook, and Clerk Typist. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE…
NASA Astrophysics Data System (ADS)
Kumar, Santosh; Chanderkanta; Amphawan, Angela
2016-04-01
Excess-3 code is one of the most important codes used for efficient data storage and transmission. It is a non-weighted code, also known as a self-complementing code. In this paper, a four-bit optical Excess-3 to BCD code converter is proposed using the electro-optic effect inside lithium-niobate-based Mach-Zehnder interferometers (MZIs). The MZI structures have a powerful capability to switch an optical input signal to a desired output port. The paper presents a mathematical description of the proposed device, followed by simulation in MATLAB. The study is verified using the beam propagation method (BPM).
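The logical mapping that such a converter implements is simple to state: an Excess-3 nibble encodes a decimal digit as its value plus three, so BCD is recovered by subtracting three. A small sketch of that mapping (the optical MZI implementation in the paper realizes the same truth table in hardware):

```python
def xs3_to_bcd(bits):
    # bits: 4-tuple, MSB first, of an Excess-3 nibble; BCD digit = value - 3
    value = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]
    if not 3 <= value <= 12:
        raise ValueError("not a valid Excess-3 code word")
    return value - 3
```

The self-complementing property mentioned above means that bitwise-complementing the code word for digit d yields the code word for 9 - d, which is easy to verify against this mapping.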
Project Summary. THE RETC CODE FOR QUANTIFYING THE HYDRAULIC FUNCTIONS OF UNSATURATED SOILS
This summary describes the RETC computer code for analyzing the soil water retention and hydraulic conductivity functions of unsaturated soils. These hydraulic properties are key parameters in any quantitative description of water flow into and through the unsaturated zone of soi...
Incorporation of coupled nonequilibrium chemistry into a two-dimensional nozzle code (SEAGULL)
NASA Technical Reports Server (NTRS)
Ratliff, A. W.
1979-01-01
A two-dimensional multiple shock nozzle code (SEAGULL) was extended to include the effects of finite rate chemistry. The basic code that treats multiple shocks and contact surfaces was fully coupled with a generalized finite rate chemistry and vibrational energy exchange package. The modified code retains all of the original SEAGULL features plus the capability to treat chemical and vibrational nonequilibrium reactions. Any chemical and/or vibrational energy exchange mechanism can be handled as long as thermodynamic data and rate constants are available for all participating species.
User's manual for three-dimensional analysis of propeller flow fields
NASA Technical Reports Server (NTRS)
Chaussee, D. S.; Kutler, P.
1983-01-01
A detailed operating manual is presented for the prop-fan computer code (in addition to supporting programs) recently developed by Kutler, Chaussee, Sorenson, and Pulliam while at NASA's Ames Research Center. This code solves the inviscid Euler equations using an implicit numerical procedure developed by Beam and Warming of Ames. A description of the underlying theory, numerical techniques, and boundary conditions, with equations, formulas, and methods for the mesh generation program (MGP), the three-dimensional prop-fan flow field program (3DPFP), and the data reduction program (DRP), is provided, together with complete operating instructions. In addition, a programmer's manual is provided to assist users interested in modifying the codes. Included in the programmer's manual for each program is a description of the input and output variables, flow charts, program listings, sample input and output data, and operating hints.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. SAP scans FORTRAN source code and produces reports presenting statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.
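The kind of statistics pass such an analyzer performs can be sketched in a few lines. This toy scanner (written in Python for brevity, not SAP's actual FORTRAN implementation, and counting only a few illustrative categories) classifies fixed-form FORTRAN lines and tallies statements, comments, and two structural keywords:

```python
import re

def scan_fortran(source):
    # Toy statistics pass in the spirit of SAP: classify each line of a
    # fixed-form FORTRAN file and count a few statement categories
    stats = {"comments": 0, "statements": 0, "subroutines": 0, "common_blocks": 0}
    for line in source.splitlines():
        if not line.strip():
            continue                                  # skip blank lines
        if line[:1].upper() in ("C", "*") or line.lstrip().startswith("!"):
            stats["comments"] += 1                    # comment line
            continue
        stats["statements"] += 1
        body = line.strip().upper()
        if re.match(r"SUBROUTINE\b", body):
            stats["subroutines"] += 1
        elif re.match(r"COMMON\b", body):
            stats["common_blocks"] += 1
    return stats
```

A real analyzer adds continuation-line handling, per-module grouping, and many more statement classes, but the structure, a single pass that buckets each statement, is the same.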
NASA Astrophysics Data System (ADS)
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping-based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched-filter receivers. In the investigated time-hopping code-division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver, considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple access interference has the dominant effect, performance improves with coding gain. But at low transmit power, where increasing the coding gain decreases the chip time and consequently increases corruption due to channel dispersion, there exists an optimum value of the coding gain. For the matched filter, however, performance always improves with coding gain. The results show that the matched-filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time-hopping schemes.
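The chip-selection step described above, coded bits plus a user-specific PN sequence jointly picking the pulse position within the Nc chips of each bit interval, can be sketched as follows. This is an illustrative stand-in only: a rate-1/2 repetition "encoder" replaces the convolutional encoder, and the exact combining rule (here, PN offset plus a coded-bit-dependent shift, mod Nc) is an assumption, not the mapping of [1].

```python
def chip_positions(data_bits, pn_seq, Nc, encode=lambda b: [b, b]):
    # Hypothetical TH-CDMA chip mapping: for each coded bit, the pulse goes
    # in chip (PN offset + coded_bit * Nc/2) mod Nc of the current interval
    positions = []
    k = 0
    for b in data_bits:
        for c in encode(b):              # stand-in for the convolutional code
            positions.append((pn_seq[k % len(pn_seq)] + c * (Nc // 2)) % Nc)
            k += 1
    return positions
```

Raising the coding gain adds coded bits per data bit and hence shrinks the chip time for a fixed bit interval, which is exactly the trade-off against channel dispersion discussed above.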
Capabilities overview of the MORET 5 Monte Carlo code
NASA Astrophysics Data System (ADS)
Cochet, B.; Jinaphanh, A.; Heulers, L.; Jacquet, O.
2014-06-01
The MORET code is a simulation tool that solves the transport equation for neutrons using the Monte Carlo method. It allows users to model complex three-dimensional geometrical configurations, describe the materials, and define their own tallies in order to analyse the results. The MORET code was initially designed to perform calculations for criticality safety assessments. New features have been introduced in the MORET 5 code to expand its use for reactor applications. This paper presents an overview of the MORET 5 code capabilities, going through the description of materials, the geometry modelling, the transport simulation and the definition of the outputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Kathleen; Lopez, Hugo; Cairns, Julie
An overview of the main North American codes and standards associated with hydrogen safety sensors is provided. The distinction between a code and a standard is defined, and the relationship between standards and codes is clarified, especially for those circumstances where a standard or a certification requirement is explicitly referenced within a code. The report identifies three main types of standards commonly applied to hydrogen sensors (interface and controls standards, shock and hazard standards, and performance-based standards). The certification process and a list and description of the main standards and model codes associated with the use of hydrogen safety sensors in hydrogen infrastructure are presented.
A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters
NASA Technical Reports Server (NTRS)
Mackowski, D. W.; Mishchenko, M. I.
2011-01-01
A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam, and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable use on distributed memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.
NASA Astrophysics Data System (ADS)
Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.
2012-12-01
Toward the understanding of fluid motions of planetary atmospheres and planetary interiors by performing multiple numerical experiments with multiple models, we are proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of varying complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program code assuring readability of the software, 2) release of the models' source code to the public, 3) scalability of the models assuring execution on various scales of computational resources, and 4) stress on the importance of documentation, together with a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the I/O format of Gtool5. The interfaces of the gtool5 library reduce the number of operation steps for data I/O in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data I/O and the addition of metadata for post-processing can be easily implemented in the program codes in a consolidated form, independent of the size and complexity of the models. ``ISPACK'' is a spectral transformation library, and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function naming rules, which enables us to write code in a form easily deduced from the mathematical expressions of the governing equations.
``SPMODEL'' (Takehiro et al., 2006) is a collection of various sample programs using ``SPML''. These sample programs provide a base kit for simple numerical experiments in geophysical fluid dynamics. For example, SPMODEL includes a 1-dimensional KdV equation model; 2-dimensional barotropic, shallow water, and Boussinesq models; and 3-dimensional MHD dynamo models in rotating spherical shells. These models are written in the common style, in harmony with the SPML functions. ``Deepconv'' (Sugiyama et al., 2010) and ``Dcpam'' are, respectively, a cloud resolving model and a general circulation model intended for application to planetary atmospheres. ``Deepconv'' includes several physical processes appropriate for simulations of the Jupiter and Mars atmospheres, while ``Dcpam'' does so for simulations of Earth, Mars, and Venus-like atmospheres. ``Rdoc-f95'' is an automatic generator of reference manuals for Fortran90/95 programs, an extension of the ruby documentation tool kit ``rdoc''. It analyzes the dependencies of modules, functions, and subroutines across multiple program source files. At the same time, it can list the namelist variables in the programs.
Propellant Chemistry for CFD Applications
NASA Technical Reports Server (NTRS)
Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.
1996-01-01
Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require an analytical technology that accurately accounts for the effects of real fluid properties, the combustion of large hydrocarbon fuel molecules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Once established, this fluid description will be simplified using the minimum of empiricism necessary to maintain accurate combustion analyses, and the resulting empirical models will be incorporated into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.
NASA Technical Reports Server (NTRS)
Degaudenzi, R.; Elia, C.; Viola, R.
1990-01-01
Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services, based on Band Limited Quasi Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip synchronized to reduce the contribution of self-interference, and makes use of voice activation to increase the satellite power efficiency. To achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors is analyzed.
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This fourth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Refrigerator Mechanic and Motorcycle Repairperson. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general educational…
23 CFR 710.201 - State responsibilities.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 23 Highways 1 2012-04-01 2012-04-01 false State responsibilities. 710.201 Section 710.201 Highways... interest acquired for all Federal-aid projects funded pursuant to title 23 of the United States Code shall... or acquisitions advanced under title 23 of the United States Code with a written description of its...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
... To Amend the Hearing Location Rules of the Codes of Arbitration Procedure for Customer and Industry... expand the criteria for selecting a hearing location for an arbitration proceeding. The proposed rule..., 2010. II. Description of the Proposed Rule Change Hearing Location Selection Under the Customer Code...
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... organization transmitting the data set, including first name, middle name or initial, and surname. Contact... unit's nameplate by the manufacturer. The data element is reported in megawatts or kilowatts. Method accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude...
50 CFR 679.94 - Economic data report (EDR) for the Amendment 80 sector.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: NMFS, Alaska Fisheries Science Center, Economic Data Reports, 7600 Sand Point Way NE, F/AKC2, Seattle... Operation Description of code Code NMFS Alaska region ADF&G FCP Catcher/processor Floating catcher processor. FLD Mothership Floating domestic mothership. IFP Stationary Floating Processor Inshore floating...
The "Motherese" of Mr. Rogers: A Description of the Dialogue of Educational Television Programs.
ERIC Educational Resources Information Center
Rice, Mabel L.; Haight, Patti L.
Dialogue from 30-minute samples from "Sesame Street" and "Mr. Rogers' Neighborhood" was coded for grammar, content, and discourse. Grammatical analysis used the LINGQUEST computer-assisted language assessment program (Mordecai, Palen, and Palmer 1982). Content coding was based on categories developed by Rice (1984) and…
Frequency Hopping, Multiple Frequency-Shift Keying, Coding, and Optimal Partial-Band Jamming.
1982-08-01
receivers appropriate for these two strategies. Each receiver is noncoherent (a coherent receiver is generally impractical) and implements hard... Advances in Coding and Modulation for Noncoherent Channels Affected by Fading, Partial Band, and Multiple-Access Interference, in A. J. Viterbi, ed., Advances in Communication
Isotopic Dependence of GCR Fluence behind Shielding
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.; Saganti, Premkumar; Kim, Myung-Hee Y.; Cleghorn, Timothy; Zeitlin, Cary; Tripathi, Ram K.
2006-01-01
In this paper we consider the effects of the isotopic composition of the primary galactic cosmic rays (GCR), nuclear fragmentation cross-sections, and the isotopic grid on the solution to transport models used for shielding studies. Satellite measurements are used to describe the isotopic composition of the GCR. For the nuclear interaction data-base and transport solution, we use the quantum multiple-scattering theory of nuclear fragmentation (QMSFRG) and the high-charge and energy (HZETRN) transport code, respectively. The QMSFRG model is shown to accurately describe existing fragmentation data, including a proper description of the odd-even effects as a function of the iso-spin of the projectile nucleus. The principal finding of this study is that large errors (+/-100%) will occur in the mass-fluence spectra when comparing transport models that use a complete isotopic grid (approx. 170 ions) to ones that use a reduced isotopic grid, for example the 59-ion grid used in the HZETRN code in the past; however, less significant errors (<+/-20%) occur in the elemental-fluence spectra. Because a complete isotopic grid is readily handled on small computer workstations and is needed for several applications studying GCR propagation and scattering, it is recommended that complete isotopic grids be used for future GCR studies.
On the Validity of Certain Approximations Used in the Modeling of Nuclear EMP
Farmer, William A.; Cohen, Bruce I.; Eng, Chester D.
2016-04-01
In the legacy codes developed for the modeling of EMP, multiple scattering of Compton electrons has typically been modeled by the obliquity factor. A recent publication examined this approximation in the context of the generated Compton current [W. A. Farmer and A. Friedman, IEEE Trans. Nucl. Sc. 62, 1695 (2015)]. Here, this previous analysis is extended to include the generation of the electromagnetic fields. Obliquity factor predictions are compared with Monte-Carlo models. In using a Monte-Carlo description of scattering, two distributions of scattering angles are considered: a Gaussian, and a Gaussian with a single-scattering tail. Additionally, the legacy codes neglect the radial derivative of the backward-traveling wave for computational efficiency. The neglect of this derivative improperly treats the backward-traveling wave. These approximations are examined in the context of a high-altitude burst, and it is shown that, in comparison to more complete models, the discrepancy between field amplitudes is roughly two to three percent, and between rise times, 10%. Finally, it is concluded that the biggest factor in determining the rise time of the signal is not the dynamics of the Compton current but the conductivity.
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
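A greedy allocation is one simple way to realize the kind of rate-distortion tradeoff this abstract describes; the sketch below is an illustrative stand-in, not the authors' actual RD optimization, and all names and the option format are assumptions:

```python
def assign_protection(units, budget):
    """units[i] is a list of (bits, expected_distortion) options for coding
    unit i, ordered from weakest to strongest protection. Greedily upgrade
    the unit offering the steepest distortion drop per extra bit until the
    rate budget is exhausted."""
    choice = [0] * len(units)
    spent = sum(opts[0][0] for opts in units)
    while True:
        best = None
        for i, opts in enumerate(units):
            if choice[i] + 1 < len(opts):
                extra_bits = opts[choice[i] + 1][0] - opts[choice[i]][0]
                dist_drop = opts[choice[i]][1] - opts[choice[i] + 1][1]
                if extra_bits > 0 and spent + extra_bits <= budget:
                    slope = dist_drop / extra_bits
                    if best is None or slope > best[0]:
                        best = (slope, i, extra_bits)
        if best is None:
            return choice, spent
        _, i, extra_bits = best
        choice[i] += 1
        spent += extra_bits
```

A unit whose loss would cost little distortion naturally ends up with weak (or no) protection, mirroring the paper's "no transmission / no protection / protection" spectrum.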
An Efficient Method for Verifying Gyrokinetic Microstability Codes
NASA Astrophysics Data System (ADS)
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful ``apples-to-apples'' comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
106-17 Telemetry Standards Metadata Configuration Chapter 23
2017-07-01
23-1 23.2 Metadata Description Language ... Chapter 23, July 2017. Acronyms: HTML, Hypertext Markup Language; MDL, Metadata Description Language; PCM, pulse code modulation; TMATS, Telemetry Attributes Transfer Standard; W3C, World Wide Web Consortium; XML, eXtensible Markup Language; XSD, XML schema document. Telemetry Network Standard
48 CFR 47.207-3 - Description of shipment, origin, and destination.
Code of Federal Regulations, 2014 CFR
2014-10-01
... contracting officer shall include in solicitations full details regarding the location from which the freight is to be shipped. For example, if a single location is shown, furnish the shipper's name, street..., including boundaries and ZIP codes. (c) Description of the freight. The contracting officer shall include in...
48 CFR 47.207-3 - Description of shipment, origin, and destination.
Code of Federal Regulations, 2010 CFR
2010-10-01
... contracting officer shall include in solicitations full details regarding the location from which the freight is to be shipped. For example, if a single location is shown, furnish the shipper's name, street..., including boundaries and ZIP codes. (c) Description of the freight. The contracting officer shall include in...
48 CFR 47.207-3 - Description of shipment, origin, and destination.
Code of Federal Regulations, 2013 CFR
2013-10-01
... contracting officer shall include in solicitations full details regarding the location from which the freight is to be shipped. For example, if a single location is shown, furnish the shipper's name, street..., including boundaries and ZIP codes. (c) Description of the freight. The contracting officer shall include in...
48 CFR 47.207-3 - Description of shipment, origin, and destination.
Code of Federal Regulations, 2011 CFR
2011-10-01
... contracting officer shall include in solicitations full details regarding the location from which the freight is to be shipped. For example, if a single location is shown, furnish the shipper's name, street..., including boundaries and ZIP codes. (c) Description of the freight. The contracting officer shall include in...
48 CFR 47.207-3 - Description of shipment, origin, and destination.
Code of Federal Regulations, 2012 CFR
2012-10-01
... contracting officer shall include in solicitations full details regarding the location from which the freight is to be shipped. For example, if a single location is shown, furnish the shipper's name, street..., including boundaries and ZIP codes. (c) Description of the freight. The contracting officer shall include in...
Maternal Label and Gesture Use Affects Acquisition of Specific Object Names
ERIC Educational Resources Information Center
Zammit, Maria; Schafer, Graham
2011-01-01
Ten mothers were observed prospectively, interacting with their infants aged 0 ; 10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three exclusive categories: label only; label plus…
Photoionization and High Density Gas
NASA Technical Reports Server (NTRS)
Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)
2002-01-01
We present results of calculations using the XSTAR version 2 computer code. This code is loosely based on the XSTAR v.1 code which has been available for public use for some time. However it represents an improvement and update in several major respects, including atomic data, code structure, user interface, and improved physical description of ionization/excitation. In particular, it now is applicable to high density situations in which significant excited atomic level populations are likely to occur. We describe the computational techniques and assumptions, and present sample runs with particular emphasis on high density situations.
Infrastructure for Rapid Development of Java GUI Programs
NASA Technical Reports Server (NTRS)
Jones, Jeremy; Hostetter, Carl F.; Wheeler, Philip
2006-01-01
The Java Application Shell (JAS) is a software framework that accelerates the development of Java graphical-user-interface (GUI) application programs by enabling the reuse of common, proven GUI elements, as distinguished from writing custom code for GUI elements. JAS is a software infrastructure upon which Java interactive application programs and graphical user interfaces (GUIs) for those programs can be built as sets of plug-ins. JAS provides an application-programming interface that is extensible by application-specific plug-ins that describe and encapsulate both specifications of a GUI and application-specific functionality tied to the specified GUI elements. The desired GUI elements are specified in Extensible Markup Language (XML) descriptions instead of in compiled code. JAS reads and interprets these descriptions, then creates and configures a corresponding GUI from a standard set of generic, reusable GUI elements. These elements are then attached (again, according to the XML descriptions) to application-specific compiled code and scripts. An application program constructed by use of JAS as its core can be extended by writing new plug-ins and replacing existing plug-ins. Thus, JAS solves many problems that Java programmers generally solve anew for each project, thereby reducing development and testing time.
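The pattern described — GUI structure declared in XML and interpreted at runtime rather than compiled in — can be illustrated with a minimal sketch. The element names and the dictionary-based "widget" records below are invented for illustration; JAS's actual API is not shown:

```python
import xml.etree.ElementTree as ET

def build_gui(xml_text):
    """Recursively interpret an XML GUI description into a nested tree of
    generic widget records (dicts standing in for real widget objects)."""
    def build(node):
        return {"type": node.tag, **node.attrib,
                "children": [build(child) for child in node]}
    return build(ET.fromstring(xml_text))
```

Changing the XML changes the resulting widget tree without recompiling, which is the core of the plug-in extensibility the abstract describes.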
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1, and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
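One common way to realize a QC-LDPC construction from a finite-field multiplicative group — shown here only as an illustrative sketch, not the paper's exact method — is to fill a base matrix with shift exponents a^i b^j mod p and expand each entry into a circulant permutation block:

```python
import numpy as np

def qc_shift_matrix(p, rows, cols, a, b):
    """Shift (exponent) matrix s(i,j) = a^i * b^j mod p, where a and b are
    elements of suitable multiplicative order in GF(p)."""
    return [[pow(a, i, p) * pow(b, j, p) % p for j in range(cols)]
            for i in range(rows)]

def expand(shifts, p):
    """Expand each shift into a p x p circulant permutation block, yielding
    the full quasi-cyclic parity-check matrix H."""
    H = np.zeros((len(shifts) * p, len(shifts[0]) * p), dtype=int)
    I = np.eye(p, dtype=int)
    for i, row in enumerate(shifts):
        for j, s in enumerate(row):
            H[i * p:(i + 1) * p, j * p:(j + 1) * p] = np.roll(I, s, axis=1)
    return H
```

Because every block is a circulant permutation, the resulting H is regular: each row has weight equal to the number of block columns, each column weight equal to the number of block rows.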
Extensions and Adjuncts to the BRL-COMGEOM Program
1974-08-01
MAGIC Code, GIFT Code, Computer Simulation, Target Description, Geometric Modeling Techniques, Vulnerability Analysis ... Arbitrary Quadric Surface ... III. BRITL: A Geometry Preprocessor Program for Input to the GIFT System ... the BRL-GIFT code. The tasks completed under this contract and described in the report are: A. The addition to the list of available body types
Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold
1997-01-01
The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals); some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.
Program for the analysis of time series. [by means of fast Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Brown, T. J.; Brown, C. G.; Hardin, J. C.
1974-01-01
A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
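The core of such a pipeline — windowing followed by FFT-based spectra and correlations — can be sketched in a few lines of modern code. The Hann window and the scaling conventions here are assumptions; the original FORTRAN program's choices may differ:

```python
import numpy as np

def auto_power_spectrum(x, fs=1.0):
    """Hann-windowed auto power spectral density of one channel."""
    w = np.hanning(len(x))
    X = np.fft.rfft(x * w)
    psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, psd

def autocorrelation(x):
    """Biased autocorrelation via the Wiener-Khinchin theorem."""
    X = np.fft.fft(x, n=2 * len(x))   # zero-pad to avoid circular wraparound
    r = np.fft.ifft(np.abs(X) ** 2).real[:len(x)]
    return r / len(x)
```

Cross spectra and cross correlations follow the same pattern with `X * np.conj(Y)` in place of `|X|^2`, and coherence and transfer functions are ratios of these averaged spectra.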
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birkholzer, J.T.; Barr, D.; Rutqvist, J.
2005-11-15
The DECOVALEX project is an international cooperative project initiated by SKI, the Swedish Nuclear Power Inspectorate, with participation of about 10 international organizations. The general goal of this project is to encourage multidisciplinary interactive and cooperative research on modelling coupled thermo-hydro-mechanical-chemical (THMC) processes in geologic formations in support of the performance assessment for underground storage of radioactive waste. One of the research tasks, initiated in 2004 by the U.S. Department of Energy (DOE), addresses the long-term impact of geomechanical and geochemical processes on the flow conditions near waste emplacement tunnels. Within this task, four international research teams conduct predictive analysis of the coupled processes in two generic repositories, using multiple approaches and different computer codes. Below, we give an overview of the research task and report its current status.
WebLogo: A Sequence Logo Generator
Crooks, Gavin E.; Hon, Gary; Chandonia, John-Marc; Brenner, Steven E.
2004-01-01
WebLogo generates sequence logos, graphical representations of the patterns within a multiple sequence alignment. Sequence logos provide a richer and more precise description of sequence similarity than consensus sequences and can rapidly reveal significant features of the alignment otherwise difficult to perceive. Each logo consists of stacks of letters, one stack for each position in the sequence. The overall height of each stack indicates the sequence conservation at that position (measured in bits), whereas the height of symbols within the stack reflects the relative frequency of the corresponding amino or nucleic acid at that position. WebLogo has been enhanced recently with additional features and options, to provide a convenient and highly configurable sequence logo generator. A command line interface and the complete, open WebLogo source code are available for local installation and customization. PMID:15173120
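The per-column arithmetic behind a sequence logo is compact enough to sketch; this illustrates the bits-of-conservation calculation (with the small-sample correction omitted) and is not WebLogo's actual source:

```python
import math
from collections import Counter

def column_heights(column, alphabet="ACGT"):
    """For one alignment column, the stack height is the information content
    in bits (log2 |alphabet| minus the column entropy); each letter's height
    is its relative frequency times that total."""
    n = len(column)
    freqs = {a: c / n for a, c in Counter(column).items()}
    entropy = -sum(f * math.log2(f) for f in freqs.values())
    info = math.log2(len(alphabet)) - entropy   # bits of conservation
    return {a: f * info for a, f in freqs.items()}
```

A perfectly conserved DNA column yields a 2-bit stack for a single letter, while a uniformly random column yields a stack of height zero.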
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Nounu, Hatem N.; Ponomarev, Artem L.; Cucinotta, Francis A.
2011-01-01
A new computer model, the GCR Event-based Risk Model code (GERMcode), was developed to describe biophysical events from high-energy protons and heavy ions that have been studied at the NASA Space Radiation Laboratory (NSRL) [1] for the purpose of simulating space radiation biological effects. In the GERMcode, the biophysical description of the passage of heavy ions in tissue and shielding materials is made with a stochastic approach that includes both ion track structure and nuclear interactions. The GERMcode accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model [2]. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding was adopted for interplanetary space missions. Concatenated coding was employed with a convolutional inner code and a Reed-Solomon (RS) outer code for spacecraft telemetry. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
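One ingredient of this formulation, the two-state Gilbert-Elliott loss model, can be simulated in a few lines. The transition and loss probabilities below are illustrative placeholders, not parameters taken from the paper.

```python
import random

def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.3,
                    loss_good=0.001, loss_bad=0.3, seed=1):
    """Simulate per-packet losses from a two-state Gilbert-Elliott channel.

    p_gb / p_bg are the good->bad and bad->good transition probabilities;
    loss_good / loss_bad are the loss rates within each state. All values
    here are illustrative placeholders, not parameters from the paper.
    """
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n_packets):
        # state transition first, then a loss draw in the current state
        if bad and rng.random() < p_bg:
            bad = False
        elif not bad and rng.random() < p_gb:
            bad = True
        losses.append(rng.random() < (loss_bad if bad else loss_good))
    return losses

losses = gilbert_elliott(100_000)
print(f"average loss rate: {sum(losses) / len(losses):.3f}")
```

The long-run loss rate is the steady-state mix of the two states (here roughly 0.04), but losses arrive in bursts while the chain sits in the bad state, which is what distinguishes this model from a Bernoulli channel.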
A Crack Growth Evaluation Method for Interacting Multiple Cracks
NASA Astrophysics Data System (ADS)
Kamaya, Masayuki
When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis, if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g., different relative position and length) that could not be studied by experiment only. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks.
Rozin, P; Lowery, L; Imada, S; Haidt, J
1999-04-01
It is proposed that 3 emotions--contempt, anger, and disgust--are typically elicited, across cultures, by violations of 3 moral codes proposed by R. A. Shweder and his colleagues (R. A. Shweder, N. C. Much, M. Mahapatra, & L. Park, 1997). The proposed alignment links anger to autonomy (individual rights violations), contempt to community (violation of communal codes including hierarchy), and disgust to divinity (violations of purity-sanctity). This is the CAD triad hypothesis. Students in the United States and Japan were presented with descriptions of situations that involve 1 of the types of moral violations and asked to assign either an appropriate facial expression (from a set of 6) or an appropriate word (contempt, anger, disgust, or their translations). Results generally supported the CAD triad hypothesis. Results were further confirmed by analysis of facial expressions actually made by Americans to the descriptions of these situations.
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) 2 dimensional computational fluid dynamic code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The application is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.
Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Berg, D. E.
Double streamtube codes model the curved blade (Darrieus-type) vertical axis wind turbine (VAWT) as a double actuator disk arrangement (one disk in tandem behind the other) and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects, and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double multiple streamtube code and a vortex filament code. The effects of neglecting dynamic stall and horizontal wind velocity distribution are also illustrated.
Multi-processing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1990-01-01
The MIMD concept is applied, through multitasking, with relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. An existing single processor algorithm is mapped without the need for developing a new algorithm. The procedure of designing a code utilizing this approach is automated with the Unix stream editor. A Multiple Processor Multiple Grid (MPMG) code is developed as a demonstration of this approach. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. This solver is applied to a generic, oblique-wing aircraft problem on a four-processor computer using one process for data management and nonparallel computations and three processes for pseudotime advance on three different grid systems.
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This first of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Child Care Attendant, Guard, and Medical Assistant. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general educational developmental…
Description and Evaluation of GDEM-V 3.0
2009-02-06
Description and Evaluation of GDEM-V 3.0. Michael R. Carnes, Ocean Sciences Branch, Oceanography Division, February 6, 2009. The GDEM (Generalized Digital Environment Model) has served as
ERIC Educational Resources Information Center
Gilpatrick, Eleanor
The fourth of four volumes in Research Report No. 7 of the Health Services Mobility Study (HSMS), this book contains the extended task names of all the tasks whose descriptions can be found in the three prior volumes. It serves as an index to all the tasks by listing the volume in which each task description appears. Chapter 1 of this volume…
Ly, Birama Apho; Bourgeault, Ivy Lynn; Labonté, Ronald; Niang, Mbayang Ndiaye
2017-09-18
Similar to many places, physicians in Senegal are unevenly distributed. Telemedicine is considered a potential solution to this problem. This study investigated the perceptions of Senegal's physicians of the impact of telemedicine on their recruitment to and retention in underserved areas. We conducted individual interviews with a random sample of 60 physicians in Senegal, including 30 physicians working in public hospitals and 30 physicians working in district health centres between January and June 2014, as part of a mixed methods study. Data were collected using a semi-structured interview guide comprising both open- and close-ended questions. Interviews were recorded, transcribed and coded thematically using NVivo 10 software using a priori and emergent codes. Participants' characteristics were analyzed descriptively using SPSS 23. The impact of telemedicine on physicians' recruitment and retention in underserved areas was perceived with some variability. Among the physicians who were interviewed, most (36) thought that telemedicine could have a positive impact on their recruitment and retention but many (24) believed the opposite. The advantages noted by the first included telemedicine's ability to break their professional isolation and reduce the stress related to this, facilitate their distance learning and improve their working conditions. They did acknowledge that it is not sufficient in itself, an opinion also shared by physicians who did not believe that telemedicine could affect their recruitment and retention. Both identified contextual, economic, educational, family, individual, organizational and professional factors as influential. Based on these opinions of physicians, telemedicine promotion is one intervention that, alongside others, could be promoted to assist in addressing the multiple factors that influence physicians' recruitment and retention in underserved areas.
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo W.
2017-01-01
The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. At the conclusion of the development, the software and hardware description language (HDL) code was delivered to JSC for their use in their iPAS test bed to get hands-on experience with the STRS standard, and for development of their own STRS waveforms on the now STRS compliant platform. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.4 operating system. Figure 1 shows the RIACS platform hardware. The result of this development is a very low cost STRS compliant platform that can be used for waveform developments for multiple applications. The purpose of this document is to describe the design of the HDL code for the FPGA portion of the iPAS STRS Radio, particularly the design of the FPGA wrapper and the test waveform.
Trellis Coding of Non-coherent Multiple Symbol Full Response M-ary CPFSK with Modulation Index 1/M
NASA Technical Reports Server (NTRS)
Lee, H.; Divsalar, D.; Weber, C.
1994-01-01
This paper introduces a trellis coded modulation (TCM) scheme for non-coherent multiple-symbol full response M-ary CPFSK with modulation index 1/M. A proper branch metric for the trellis decoder is obtained by employing a simple approximation of the modified Bessel function for large signal-to-noise ratio (SNR). Pairwise error probability of coded sequences is evaluated by applying a linear approximation to the Rician random variable.
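The "simple approximation" referred to here is the standard large-argument behavior ln I0(x) ≈ x of the modified Bessel function, which makes the noncoherent branch metric linear in the correlation magnitude. A quick numerical check, computing I0 from its textbook power series (an independent formula, not anything from the paper), shows the relative error of the linear approximation shrinking as the argument, i.e. the SNR, grows:

```python
import math

def log_i0(x):
    """ln I0(x) via the power series of the modified Bessel function I0."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / (2 * k)) ** 2   # ratio of consecutive series terms
        total += term
    return math.log(total)

# ln I0(x) ≈ x becomes more accurate as x (i.e. the SNR) grows:
for x in (5.0, 20.0, 100.0):
    rel_err = abs(log_i0(x) - x) / x
    print(f"x = {x:5.0f}   relative error of ln I0(x) ≈ x : {rel_err:.3f}")
```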
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Wornom, Stephen F.
1991-01-01
Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.
INDEX TO 16MM EDUCATIONAL FILMS.
ERIC Educational Resources Information Center
University of Southern California, Los Angeles. National Information Center for Educational Media.
SIXTEEN MILLIMETER EDUCATIONAL FILMS ARE LISTED WITH TITLE, DESCRIPTION, TIME, COLOR/BLACK AND WHITE, PRODUCER CODE NAME, DISTRIBUTOR CODE NAME, AND DATE OF PRODUCTION. FILMS ARE LISTED IN TWO WAYS--WITH TITLE ONLY BY SUBJECT IN A SUBJECT MATTER SECTION WHICH HAS AN OUTLINE AND INDEX, AND WITH ALL DATA IN A SECTION WHICH LISTS ALL FILMS…
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1995-01-01
The objective of this work is to develop, verify, and incorporate the baseline two-equation turbulence models which account for the effects of compressibility into the three-dimensional Reynolds averaged Navier-Stokes (RANS) code and to provide documented descriptions of the models and their numerical procedures so that they can be implemented into 3-D CFD codes for engineering applications.
Occupational Titles Including Job Descriptions for Health Occupations Education.
ERIC Educational Resources Information Center
East Texas State Univ., Commerce. Occupational Curriculum Lab.
This alphabetical compilation of 80 occupational titles for health occupations education is taken from the Dictionary of Occupational Titles, (DOT), 4th edition, 1977. An index shows the arrangement of the occupational titles (together with instructional program and DOT code) according to the United States Office of Education code numbers. For…
User's guide to the SEPHIS computer code for calculating the Thorex solvent extraction system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, S.B.; Rainey, R.H.
1979-05-01
The SEPHIS computer program was developed to simulate the countercurrent solvent extraction process. The code has now been adapted to model the Acid Thorex flow sheet. This report represents a practical user's guide to SEPHIS - Thorex containing a program description, user information, program listing, and sample input and output.
50 CFR Table 15 to Part 679 - Gear Codes, Descriptions, and Use
Code of Federal Regulations, 2011 CFR
2011-10-01
..., power gurdy TROLL; all other gear types OTH. ADF&G gear codes: Diving 11; Dredge 22; Dredge, hydro/mechanical 23; Fish ladder/raceway 77; Fish wheel 08; Gillnet, drift 03; ...
78 FR 72576 - Criteria for a Catastrophically Disabled Determination for Purposes of Enrollment
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-03
... Procedural Terminology (CPT[supreg]) codes. The revisions ensure that the regulation is not out of date when... trademark of the American Medical Association. CPT codes and descriptions are copyrighted by the American Medical Association. All rights reserved.) This approach will soon be outdated; the ICD-9-CM and CPT...
Reporting of occupational injury and illness in the semiconductor manufacturing industry.
McCurdy, S A; Schenker, M B; Samuels, S J
1991-01-01
In the United States, occupational illness and injury cases meeting specific reporting criteria are recorded on company Occupational Safety and Health Administration (OSHA) 200 logs; case description data are submitted to participating state agencies for coding and entry in the national Supplementary Data System (SDS). We evaluated completeness of reporting (the percentage of reportable cases that were recorded in the company OSHA 200 log) in the semiconductor manufacturing industry by reviewing company health clinic records for 1984 of 10 manufacturing sites of member companies of a national semiconductor manufacturing industry trade association. Of 416 randomly selected work-related cases, 101 met OSHA reporting criteria. Reporting completeness was 60 percent and was lowest for occupational illnesses (44 percent). Case-description data from 150 reported cases were submitted twice to state coding personnel to evaluate coding reliability. Reliability was high (kappa 0.82-0.93) for "nature," "affected body part," "source," and "type" variables. Coding for the SDS appears reliable; reporting completeness may be improved by use of a stepwise approach by company personnel responsible for reporting decisions.
MPEG4: coding for content, interactivity, and universal accessibility
NASA Astrophysics Data System (ADS)
Reader, Cliff
1996-01-01
MPEG4 is a natural extension of audiovisual coding, and yet from many perspectives breaks new ground as a standard. New coding techniques are being introduced, of course, but they will work on new data structures. The standard itself has a new architecture, and will use a new operational model when implemented on equipment that is likely to have innovative system architecture. The author introduces the background developments in technology and applications that are driving or enabling the standard, introduces the focus of MPEG4, and enumerates the new functionalities to be supported. Key applications in interactive TV and heterogeneous environments are discussed. The architecture of MPEG4 is described, followed by a discussion of the multiphase MPEG4 communication scenarios, and issues of practical implementation of MPEG4 terminals. The paper concludes with a description of the MPEG4 workplan. In summary, MPEG4 has two fundamental attributes. First, it is the coding of audiovisual objects, which may be natural or synthetic data in two or three dimensions. Second, the heart of MPEG4 is its syntax: the MPEG4 Syntactic Descriptive Language -- MSDL.
Salient features of MACA and CMACA systems and their applications
NASA Astrophysics Data System (ADS)
Ratnam, C.; Goud, S. L.; Rao, V. Lakshmana
2007-09-01
Results of a Fourier analytical investigation of the performance of the Multiple Annuli Coded Aperture (MACA) and Complementary Multiple Annuli Coded Aperture (CMACA) systems are summarised, and probable applications of these systems in astronomy, high-energy radiation imaging, optical filters, and metallurgy are suggested.
Leach, R; McNally, Donal; Bashir, Mohamad; Sastry, Priya; Cuerden, Richard; Richens, David; Field, Mark
2012-10-01
The severity and location of injuries resulting from vehicular collisions are normally recorded in Abbreviated Injury Scale (AIS) code; we propose a system to link AIS code to a description of acute aortic syndrome (AAS), thus allowing the hypothesis that aortic injury is progressive with collision kinematics to be tested. Standard AIS codes were matched with a clinical description of AAS. A total of 199 collisions that resulted in aortic injury were extracted from a national automotive collision database and the outcomes mapped onto AAS descriptions. The severity of aortic injury (AIS severity score) and stage of AAS progression were compared with collision kinematics and occupant demographics. Post hoc power analyses were used to estimate maximum effect size. The general demographic distribution of the sample represented that of the UK population in regard to sex and age. No significant relationship was observed between estimated test speed, collision direction, occupant location or seat belt use and clinical progression of aortic injury (once initiated). Power analysis confirmed that a suitable sample size was used to observe a medium effect in most of the cases. Similarly, no association was observed between injury severity and collision kinematics. There is sufficient information on AIS severity and location codes to map onto the clinical AAS spectrum. It was not possible, with this data set, to consider the influence of collision kinematics on aortic injury initiation. However, it was demonstrated that after initiation, further progression along the AAS pathway was not influenced by collision kinematics. This might be because the injury is not progressive, because the vehicle kinematics studied do not fully represent the kinematics of the occupants, or because an unknown factor, such as stage of cardiac cycle, dominates. Epidemiologic/prognostic study, level IV.
2012-03-01
... advanced antenna systems; AMC, adaptive modulation and coding; AWGN, additive white Gaussian noise; BPSK, binary phase shift keying; BS, base station; BTC, ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating
An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squared regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90°N/S). Bias and mean errors between modeled and pattern predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5°C, but choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. As a result, this paper describes our library of least squared regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns.
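The two pattern-generation methods the abstract compares can be sketched on synthetic data. The grid shapes, noise level, and "true" pattern below are invented for illustration, not drawn from CMIP5 output.

```python
import numpy as np

rng = np.random.default_rng(0)
years, nlat, nlon = 100, 4, 8

# Synthetic stand-in for GCM output: each grid cell warms linearly with
# global mean temperature, plus noise (shapes and noise level invented).
global_T = np.linspace(0.0, 3.0, years)              # global mean warming (K)
true_pattern = 1.0 + rng.random((nlat, nlon))        # local warming per K global
local_T = (global_T[:, None, None] * true_pattern
           + 0.1 * rng.standard_normal((years, nlat, nlon)))

# Least-squares method: regress every grid cell on global mean temperature.
X = np.column_stack([global_T, np.ones(years)])
coef, *_ = np.linalg.lstsq(X, local_T.reshape(years, -1), rcond=None)
regression_pattern = coef[0].reshape(nlat, nlon)     # slope per grid cell

# Delta method: epoch-mean difference normalized by the global-mean change.
early, late = slice(0, 20), slice(-20, None)
delta_pattern = ((local_T[late].mean(axis=0) - local_T[early].mean(axis=0))
                 / (global_T[late].mean() - global_T[early].mean()))
```

On this synthetic data both methods recover the underlying pattern, with the regression estimate using the whole time series and the delta estimate only the two epochs, which is one intuition for the smaller errors the abstract reports for the regression method.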
An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology
Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin; ...
2017-05-15
Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation
NASA Technical Reports Server (NTRS)
Divsalar, D.; Raphaeli, D.
2000-01-01
Recently, a method for combined noncoherent detection and decoding of trellis codes (noncoherent coded modulation) has been proposed which can practically approach the performance of coherent detection.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Lavelle, Thomas M.
1995-01-01
Modifications made to the axial-flow compressor conceptual design code CSPAN are documented in this report. Endwall blockage and stall margin predictions were added. The loss-coefficient model was upgraded. Default correlations for rotor and stator solidity and aspect-ratio inputs and for stator-exit tangential velocity inputs were included in the code along with defaults for aerodynamic design limits. A complete description of input and output along with sample cases are included.
Computer Description of the M561 Utility Truck
1984-10-01
Vulnerability analysis requires input from the Geometric Information for Targets (GIFT) computer code. This report documents the combinatorial geometry (Com-Geom) description of the M561 utility truck, which is used as input to the GIFT computer code to generate target vulnerability data.
Image Transmission via Spread Spectrum Techniques. Part A
1976-01-01
Edwin H. Wrench (Code 408) and Harper J. Whitehouse (Code 4002), Naval Undersea Center, San Diego, California. This progress report appears in two parts. Part A is a summary of work done in support of this program at the Naval Undersea Center. Part B contains a technical description of the bandwidth compression system developed at the Naval Undersea Center; this paper is an excerpt from the specifications
Garvin, Jennifer Hornung; Redd, Andrew; Bolton, Dan; Graham, Pauline; Roche, Dominic; Groeneveld, Peter; Leecaster, Molly; Shen, Shuying; Weiner, Mark G.
2013-01-01
Introduction International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes capture comorbidities that can be used to risk adjust nonrandom patient groups. We explored the accuracy of capturing comorbidities associated with one risk adjustment method, the Elixhauser Comorbidity Measure (ECM), in patients with chronic heart failure (CHF) at one Veterans Affairs (VA) medical center. We explored potential reasons for the differences found between the original codes assigned and conditions found through retrospective review. Methods This descriptive, retrospective study used a cohort of patients discharged with a principal diagnosis coded as CHF from one VA medical center in 2003. One admission per patient was used in the study; with multiple admissions, only the first admission was analyzed. We compared the assignment of original codes assigned to conditions found in a retrospective, manual review of the medical record conducted by an investigator with coding expertise as well as by physicians. Members of the team experienced with assigning ICD-9-CM codes and VA coding processes developed themes related to systemic reasons why chronic conditions were not coded in VA records using applied thematic techniques. Results In the 181-patient cohort, 388 comorbid conditions were identified; 305 of these were chronic conditions, originally coded at the time of discharge with an average of 1.7 comorbidities related to the ECM per patient. The review by an investigator with coding expertise revealed a total of 937 comorbidities resulting in 618 chronic comorbid conditions with an average of 3.4 per patient; physician review found 872 total comorbidities with 562 chronic conditions (average 3.1 per patient). The agreement between the original and the retrospective coding review was 88 percent. The kappa statistic for the original and the retrospective coding review was 0.375 with a 95 percent confidence interval (CI) of 0.352 to 0.398. 
The kappa statistic for the retrospective coding review and physician review was 0.849 (CI, 0.823–0.875). The kappa statistic for the original coding and the physician review was 0.340 (CI, 0.316–0.364). Several systemic factors were identified, including familiarity with inpatient VA and non-VA guidelines, the quality of documentation, and operational requirements to complete the coding process within short time frames and to identify the reasons for movement within a given facility. Conclusion Comorbidities within the ECM representing chronic conditions were significantly underrepresented in the original code assignment. Contributing factors potentially include prioritization of codes related to acute conditions over chronic conditions; coders’ professional training, educational level, and experience; and the limited number of codes allowed in initial coding software. This study highlights the need to evaluate systemic causes of underrepresentation of chronic conditions to improve the accuracy of risk adjustment used for health services research, resource allocation, and performance measurement. PMID:24159270
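The kappa statistics quoted above measure chance-corrected agreement between two coders. A minimal sketch of Cohen's kappa (the standard formula, not the study's statistical software) is:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected if raters labeled independently.
    """
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_exp = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Two coders agree on 4 of 5 cases, but much of that is expected by chance:
original = [1, 1, 0, 0, 1]      # e.g. comorbidity coded / not coded (toy data)
review = [1, 1, 0, 1, 1]
print(round(cohens_kappa(original, review), 3))  # → 0.545
```

This is why the study's 88 percent raw agreement can coexist with a kappa of only 0.375: raw agreement is not corrected for the agreement expected by chance.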
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.
A three-dimensional code for muon propagation through the rock: MUSIC
NASA Astrophysics Data System (ADS)
Antonioli, P.; Ghetti, C.; Korolkova, E. V.; Kudryavtsev, V. A.; Sartorelli, G.
1997-10-01
We present a new three-dimensional Monte Carlo code MUSIC (MUon SImulation Code) for muon propagation through rock. All processes of muon interaction with matter involving high energy loss (including knock-on electron production) are treated as stochastic processes. The angular deviation and lateral displacement of muons due to multiple scattering, as well as bremsstrahlung, pair production and inelastic scattering, are taken into account. The code has been applied to obtain the energy distribution and the angular and lateral deviations of single muons at different depths underground. The muon multiplicity distributions obtained with MUSIC and CORSIKA (an extensive air shower simulation code) are also presented. We discuss the systematic uncertainties of the results due to different muon bremsstrahlung cross-sections.
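The stochastic treatment described above can be illustrated with a toy one-dimensional sketch. This is not the MUSIC code: the hard-loss probability and magnitude below are invented for illustration, and only the a and b loss parameters are typical standard-rock figures.

```python
import random

# Toy 1-D muon propagation through rock: continuous ionization (A) and
# radiative (B*E) losses per step, plus a crude, invented stand-in for the
# occasional hard stochastic loss that MUSIC samples from real cross-sections.
A = 0.217   # GeV per m.w.e., approximate standard-rock ionization loss
B = 4.0e-4  # per m.w.e., approximate radiative-loss coefficient

def propagate(e0_gev, depth_mwe, step=10.0, seed=1):
    """Return the muon energy (GeV) after depth_mwe of rock; 0 if stopped."""
    rng = random.Random(seed)
    e, x = e0_gev, 0.0
    while x < depth_mwe and e > 0.0:
        e -= (A + B * e) * step             # mean continuous loss this step
        if rng.random() < 0.01:             # invented hard-loss probability
            e -= rng.uniform(0.0, 0.2) * e  # invented hard-loss magnitude
        x += step
    return max(e, 0.0)
```

With these parameters a 1 TeV muon ranges out roughly where the mean-range estimate (1/b)·ln(1 + bE/a) predicts, a few kilometres of water equivalent.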
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analyses, by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.
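The quoted 10^-9 threshold can be related to SNR through the Gaussian approximation commonly used in SAC-OCDMA analysis, BER = (1/2)·erfc(√(SNR/8)). A quick sketch (the scan below is illustrative and not taken from the paper):

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation BER used in SAC-OCDMA analysis."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# Scan for the SNR needed to reach the paper's minimum acceptable BER of 1e-9.
snr = 10.0
while ber_from_snr(snr) > 1e-9:
    snr += 1.0
# snr now holds the SNR requirement under this Gaussian approximation
```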
Efficient parallel simulation of CO2 geologic sequestration in saline aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Keni; Doughty, Christine; Wu, Yu-Shu
2007-01-01
An efficient parallel simulator for large-scale, long-term CO2 geologic sequestration in saline aquifers has been developed. The parallel simulator is a three-dimensional, fully implicit model that solves large, sparse linear systems arising from discretization of the partial differential equations for mass and energy balance in porous and fractured media. The simulator is based on the ECO2N module of the TOUGH2 code and inherits all the process capabilities of the single-CPU TOUGH2 code, including a comprehensive description of the thermodynamics and thermophysical properties of H2O-NaCl-CO2 mixtures, modeling single and/or two-phase isothermal or non-isothermal flow processes, two-phase mixtures, fluid phases appearing or disappearing, as well as salt precipitation or dissolution. The new parallel simulator uses MPI for parallel implementation, the METIS software package for simulation domain partitioning, and the iterative parallel linear solver package Aztec for solving linear equations by multiple processors. In addition, the parallel simulator has been implemented with an efficient communication scheme. Test examples show that a linear or super-linear speedup can be obtained on Linux clusters as well as on supercomputers. Because of the significant improvement in both simulation time and memory requirement, the new simulator provides a powerful tool for tackling larger scale and more complex problems than can be solved by single-CPU codes. A high-resolution simulation example is presented that models buoyant convection, induced by a small increase in brine density caused by dissolution of CO2.
Bhagavatula, Pradeep; Xiang, Qun; Szabo, Aniko; Eichmiller, Fredrick; Kuthy, Raymond A; Okunseri, Christopher E
2012-12-21
Studies on rural-urban differences in dental care have primarily focused on differences in utilization rates and preventive dental services. Little is known about rural-urban differences in the use of a wider range of dental procedures. This study examined patterns of preventive, restorative, endodontic, and extraction procedures provided to children enrolled in Delta Dental of Wisconsin (DDWI). We analyzed DDWI enrollment and claims data for children aged 0-18 years from 2002 to 2008. We modified and used a rural and urban classification based on ZIP codes developed by the Wisconsin Area Health Education Center (AHEC). We categorized the ZIP codes into 6 AHEC categories (3 rural and 3 urban). Descriptive and multivariable analyses using generalized linear mixed models (GLMM) were used to examine the patterns of dental procedures provided to children. Tukey-Kramer adjustment was used to control for multiple comparisons. Approximately 50%, 67%, and 68% of enrollees in inner-city Milwaukee, Rural 1 (less than 2500 people), and suburban-Milwaukee had at least one annual dental visit, respectively. Children in inner-city Milwaukee had the lowest utilization rates for all procedures examined, except for endodontic procedures. Compared to children from inner-city Milwaukee, children in other locations had significantly more preventive procedures. Children in Rural 1 ZIP codes had more restorative, endodontic and extraction procedures, compared to children from all other regions. We found significant geographic variation in dental procedures received by children enrolled in DDWI.
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This fifteenth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Bus Driver, General Loader, Forklift Operator, and Material Handler. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This third of sixteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Body Fender Mechanic and New Car Get-Ready Person. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general educational developmental…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This fifth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Household Appliance Mechanic; Lineworker; Painter Helper, Spray; Painter, Brush; and Carpenter Apprentice. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE…
Math Description Engine Software Development Kit
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Smith, Stephanie L.; Dexter, Dan E.; Hodgson, Terry R.
2010-01-01
The Math Description Engine Software Development Kit (MDE SDK) can be used by software developers to make computer-rendered graphs more accessible to blind and visually-impaired users. The MDE SDK generates alternative graph descriptions in two forms: textual descriptions and non-verbal sound renderings, or sonification. It also enables display of an animated trace of a graph sonification on a visual graph component, with color and line-thickness options for users having low vision or color-related impairments. A set of accessible graphical user interface widgets is provided for operation by end users and for control of accessible graph displays. Version 1.0 of the MDE SDK generates text descriptions for 2D graphs commonly seen in math and science curriculum (and practice). The mathematically rich text descriptions can also serve as a virtual math and science assistant for blind and sighted users, making graphs more accessible for everyone. The MDE SDK has a simple application programming interface (API) that makes it easy for programmers and Web-site developers to make graphs accessible with just a few lines of code. The source code is written in Java for cross-platform compatibility and to take advantage of Java's built-in support for building accessible software application interfaces. Compiled-library and NASA Open Source versions are available with API documentation and Programmer's Guide at http://prime.jsc.nasa.gov.
On the performance of a code division multiple access scheme with transmit/receive conflicts
NASA Astrophysics Data System (ADS)
Silvester, J. A.
One of the benefits of spread spectrum is that, by assigning each user a different orthogonal signal set, multiple transmissions can occur simultaneously. This possibility is exploited in access schemes called Code Division Multiple Access (CDMA). The present investigation is concerned with a particular CDMA implementation in which the transmit times for each symbol are exactly determined in a distributed manner such that both sender and receiver know them. Because a node must decide at each instant whether to transmit or receive, a symbol can be lost in one of the channels. The system therefore employs a coding technique, Reed-Solomon coding, which permits correct decoding of a codeword even if some constituent symbols are missing or in error. The performance of this system is analyzed, and attention is given to the optimum strategy for deciding whether to receive or transmit.
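The role of Reed-Solomon coding here is that an (n, k) code recovers the message from any k of its n symbols, so a symbol skipped because the node was busy transmitting acts as an erasure. A minimal sketch over the prime field GF(257) (real systems use GF(2^m), but the erasure-recovery principle is identical):

```python
P = 257  # prime modulus; illustrative stand-in for the GF(2^m) used in practice

def rs_encode(msg, n):
    """Evaluate the message polynomial at points x = 1..n (n coded symbols)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(1, n + 1)]

def _mul_by_linear(poly, root):
    """Multiply a coefficient list (low to high) by (x - root), mod P."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - c * root) % P
        out[i + 1] = (out[i + 1] + c) % P
    return out

def rs_decode_erasures(received, k):
    """Lagrange-interpolate the message from any k surviving (x, y) symbols."""
    pts = received[:k]
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pts):
        num, den = [1], 1
        for m, (xm, _) in enumerate(pts):
            if m != j:
                num = _mul_by_linear(num, xm)
                den = den * (xj - xm) % P
        scale = yj * pow(den, P - 2, P) % P   # division via Fermat inverse
        coeffs = [(c + nc * scale) % P for c, nc in zip(coeffs, num)]
    return coeffs

# A (6, 3) toy code: any 3 of the 6 symbols recover the message.
msg = [10, 20, 30]
symbols = rs_encode(msg, 6)
survivors = [(2, symbols[1]), (5, symbols[4]), (6, symbols[5])]
print(rs_decode_erasures(survivors, 3))  # -> [10, 20, 30]
```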
Distributed Joint Source-Channel Coding in Wireless Sensor Networks
Zhu, Xuqi; Liu, Yu; Zhang, Lin
2009-01-01
Given that sensors in wireless sensor networks are energy-limited and wireless channel conditions are harsh, there is an urgent need for a low-complexity coding method with a high compression ratio and noise resistance. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560
Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih
2016-04-21
Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
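The CDMA idea behind this decoding can be shown with a toy correlation receiver. Length-4 Walsh codes below stand in for the paper's electrode-defined code set; real microfluidic CODES waveforms are analog pulse trains, not clean ±1 chips.

```python
# Orthogonal Walsh codes: one per microfluidic channel (illustrative choice).
WALSH = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
    [1, 1, -1, -1],
    [1, -1, -1, 1],
]

def detect(summed, codes=WALSH):
    """Recover per-channel amplitudes by correlating against each code."""
    n = len(codes[0])
    return [sum(s * c for s, c in zip(summed, code)) / n for code in codes]

# Particles transit channels 1 and 3 at the same instant: their codes overlap
# on the single electrical output, yet correlation separates them cleanly.
overlap = [a + b for a, b in zip(WALSH[1], WALSH[3])]
print(detect(overlap))  # -> [0.0, 1.0, 0.0, 1.0]
```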
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
Multiple Codes, Multiple Impressions: An Analysis of Doctor-Client Encounters in Nigeria
ERIC Educational Resources Information Center
Odebunmi, Akin
2013-01-01
Existing studies on doctor-client interactions have largely focused on monolingual encounters and the interactional effects and functions of the languages used in the communication between doctors and their clients. They have neither, to a large extent, examined the several codes employed in single encounters and their pragmatic roles nor given…
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder. ...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data
Energy loss of argon in a laser-generated carbon plasma.
Frank, A; Blazević, A; Grande, P L; Harres, K; Hessling, T; Hoffmann, D H H; Knobloch-Maas, R; Kuznetsov, P G; Nürnberg, F; Pelka, A; Schaumann, G; Schiwietz, G; Schökel, A; Schollmeier, M; Schumacher, D; Schütrumpf, J; Vatulin, V V; Vinokurov, O A; Roth, M
2010-02-01
The experimental data presented in this paper address the energy loss determination for argon at 4 MeV/u projectile energy in laser-generated carbon plasma covering a huge parameter range in density and temperature. Furthermore, a consistent theoretical description of the projectile charge state evolution via a Monte Carlo code is combined with an improved version of the CasP code that allows us to calculate the contributions to the stopping power of bound and free electrons for each projectile charge state. This approach gets rid of any effective charge description of the stopping power. Comparison of experimental data and theoretical results allows us to judge the influence of different plasma parameters.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes are multiplications in finite fields (Galois Fields), where extension fields of prime fields are used. A lot of architectures for multiplications in finite fields have been published over the last decades. This paper examines four different multiplier architectures in detail that offer the potential for very high throughputs. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
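The bit-serial (shift-and-add) baseline against which such multiplier architectures are compared can be sketched in a few lines. The reduction polynomial 0x11D used here is one commonly chosen for GF(2^8) Reed-Solomon codes; it is an illustrative choice, not taken from the paper.

```python
POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a common Reed-Solomon field polynomial

def gf_mul(a, b):
    """Shift-and-add multiplication in GF(2^8); hardware versions unroll this loop."""
    r = 0
    while b:
        if b & 1:
            r ^= a        # conditional add (XOR) of the shifted multiplicand
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= POLY     # reduce modulo the field polynomial
    return r
```

For example, gf_mul(2, 128) reduces α^8 to α^4 + α^3 + α^2 + 1, i.e. the value 29; the high-throughput architectures in the paper compute the same product combinationally rather than bit-serially.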
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Hwang, H. C.; Bloomberg, H. W.; Lin, D. L.; Schmidt, M. J.; Strickland, D. J.
1979-01-01
The model consists of four phases: single electron dynamics, single electron avalanche, negative streamer development, and tree formation. Numerical algorithms and computer code implementations are presented for the first three phases. An approach to developing a code description of the fourth phase is discussed. Numerical results are presented for a crude material model of Teflon.
A Manual for Coding Descriptions, Interpretations, and Evaluations of Visual Art Forms.
ERIC Educational Resources Information Center
Acuff, Bette C.; Sieber-Suppes, Joan
This manual presents a system for categorizing stated esthetic responses to paintings. It is primarily a training manual for coders, but it may also be used for teaching reflective thinking skills and for evaluating programs of art education. The coding system contains 33 subdivisions of esthetic responses under three major categories: Cue…
50 CFR Table 2b to Part 679 - Species Codes: FMP Prohibited Species and CR Crab
Code of Federal Regulations, 2011 CFR
2011-10-01
Table 2b to Part 679, Wildlife and Fisheries, FISHERY CONSERVATION AND MANAGEMENT: Species Codes, FMP Prohibited Species and CR Crab. Sample entries (Species Description, Code, CR Crab, Groundfish PSC): Box, Lopholithodes mandtii, 900 ... aequispinus, 923, ✓, ✓; King, red, Paralithodes camtshaticus, 921, ✓, ✓; King, scarlet (deepsea), Lithodes couesi, 924...
The Teaching of the Code of Ethics and Standard Practices for Texas Educator Preparation Programs
ERIC Educational Resources Information Center
Davenport, Marvin; Thompson, J. Ray; Templeton, Nathan R.
2015-01-01
The purpose of this descriptive quantitative research study was to answer three basic informational questions: (1) To what extent ethics training, as stipulated in Texas Administrative Code Chapter 247, was included in the EPP curriculum; (2) To what extent Texas public universities with approved EPP programs provided faculty opportunities for…
Dependents' Educational Assistance Program (DEA), Chapter 35 of Title 38, U.S. Code
ERIC Educational Resources Information Center
US Department of Veterans Affairs, 2005
2005-01-01
This pamphlet provides a general description of the Dependents' Educational Assistance program, or DEA (chapter 35 of title 38, U. S. Code). The DEA program provides education and training opportunities to eligible dependents and survivors of certain veterans. It covers the main questions prospective participants may have about DEA benefits,…
The Impact of Bar Code Medication Administration Technology on Reported Medication Errors
ERIC Educational Resources Information Center
Holecek, Andrea
2011-01-01
The use of bar-code medication administration technology is on the rise in acute care facilities in the United States. The technology is purported to decrease medication errors that occur at the point of administration. How significantly this technology affects actual rate and severity of error is unknown. This descriptive, longitudinal research…
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
Convolutional Codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, Rate Compatible Punctured Convolutional Codes, in Proc. Int. Conf...achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code...investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Verification of the proteus two-dimensional Navier-Stokes code for flat plate and pipe flows
NASA Technical Reports Server (NTRS)
Conley, Julianne M.; Zeman, Patrick L.
1991-01-01
The Proteus Navier-Stokes Code is evaluated for 2-D/axisymmetric, viscous, incompressible, internal, and external flows. The particular cases to be discussed are laminar and turbulent flows over a flat plate, laminar and turbulent developing pipe flows, and turbulent pipe flow with swirl. Results are compared with exact solutions, empirical correlations, and experimental data. A detailed description of the code set-up, including boundary conditions, initial conditions, grid size, and grid packing is given for each case.
Infections in Combat Casualties During Operations Iraqi and Enduring Freedom
2009-04-01
bacteria, other: 39; 112.1 Vulva/vaginal candidiasis: 1; 381.4 Nonsuppurative otitis media: 1; 451.82 Superficial phlebitis, arm: 2; 451.83 Deep phlebitis, arm: 1...Coding by Pathogen (Pathogen, Code, Code Description, Number): Fungus: 112.1 Vulva/vaginal candidiasis: 1; 112.3 Candidiasis of skin/nails: 1; 112.5 Disseminated candidiasis: 3; 112.89 Candidiasis, site not available: 6; 112.9 Candidiasis, site unspecified: 13; 117.3 Aspergillus: 5; 117.9 Mycoses: 14; Gram-negative: 003.8
Computer Aided Design of Polyhedron Solids to Model Air in Com-Geom Descriptions
1983-08-01
"The GIFT Code User Manual, Volume I, Introduction and Input Requirements," BRL Report No. 1802, July 1975 (Unclassified). (AD B0060Z7LK 2G...Kuehl, L. Bain and M. Reisinger, "The GIFT Code User Manual, Volume II, The Output Options," BRL Report ARBRL-TR-02189, September 1979...is generated from the GIFT code under option XSECT. This option produces plot files which define cross-sectional views of the COM-GEOM
Hot zero power reactor calculations using the Insilico code
Hamilton, Steven P.; Evans, Thomas M.; Davidson, Gregory G.; ...
2016-03-18
In this paper we describe the reactor physics simulation capabilities of the Insilico code. A description of the various capabilities of the code is provided, including detailed discussion of the geometry, meshing, cross-section processing, and neutron transport options. Numerical results demonstrate that the Insilico SPN solver with pin-homogenized cross-section generation is capable of delivering highly accurate full-core simulation of various PWR problems. Comparison to both Monte Carlo calculations and measured plant data is provided.
NASA Technical Reports Server (NTRS)
Friend, J.
1971-01-01
A manual designed both as an instructional manual for beginning coders and as a reference manual for the coding language INSTRUCT is presented. The manual includes the major programs necessary to implement the teaching system and lists the limitations of the current implementation. A detailed description is given of how to code a lesson, what buttons to push, and what utility programs to use. Suggestions are given for debugging coded lessons, along with the error messages that may be received during assembly or while running a lesson.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viktor K. Decyk
The UCLA work on this grant was to design and help implement an object-oriented version of the GTC code, which is written in Fortran90. The GTC code is the main global gyrokinetic code used in this project, and over the years multiple, incompatible versions have evolved. The reason for this effort is to allow multiple authors to work together on GTC and to simplify future enhancements to GTC. The effort was designed to proceed incrementally. Initially, an upper layer of classes (derived types and methods) was implemented which called the original GTC code 'under the hood.' The derived types pointed to data in the original GTC code, and the methods called the original GTC subroutines. The original GTC code was modified only very slightly. This allowed one to define (and refine) a set of classes which described the important features of the GTC code in a new, more abstract way, with a minimum of implementation. Furthermore, classes could be added one at a time, and at the end of each day, the code continued to work correctly. This work was done in close collaboration with Y. Nishimura from UC Irvine and Stefan Ethier from PPPL. Ten classes were ultimately defined and implemented: gyrokinetic and drift kinetic particles, scalar and vector fields, a mesh, jacobian, FLR, equilibrium, interpolation, and particle species descriptors. In the second stage of this development, some of the scaffolding was removed. The constructors in the class objects now allocated the data, and the array data in the original GTC code was removed. This isolated the components and allowed multiple instantiations of the objects to be created, in particular, multiple ion species. Again, the work was done incrementally, one class at a time, so that the code was always working properly. This work was done in close collaboration with Y. Nishimura and W. Zhang from UC Irvine and Stefan Ethier from PPPL.
The third stage of this work was to integrate the capabilities of the various versions of the GTC code into one flexible and extensible version. To do this, we developed a methodology to implement Design Patterns in Fortran90. Design Patterns are abstract solutions to generic programming problems, which allow one to handle increased complexity. This work was done in collaboration with Henry Gardner, a computer scientist (and former plasma physicist) from the Australian National University. As an example, the Strategy Pattern is being used in GTC to support multiple solvers. This new code is currently being used in the study of energetic particles. A document describing the evolution of the GTC code to this new object-oriented version is available to users of GTC.
NASA Astrophysics Data System (ADS)
Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong
2015-10-01
We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR-code message is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
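The Gerchberg-Saxton iteration underlying the mask design can be sketched in one dimension. This is a toy version: plain GS with a naive DFT standing in for optical propagation, not the authors' modified multi-signal-window, multi-plane algorithm.

```python
import cmath

N = 8  # toy signal length

def dft(x, inverse=False):
    """Naive DFT; stands in for optical propagation between the two planes."""
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def gs(target_amp, iters=200):
    """Retrieve a phase-only source mask whose far field matches target_amp."""
    field = [1.0 + 0j] * N
    for _ in range(iters):
        far = dft(field)
        # far plane: keep the retrieved phase, impose the target amplitude
        far = [t * cmath.exp(1j * cmath.phase(f)) for t, f in zip(target_amp, far)]
        field = dft(far, inverse=True)
        # source plane: keep phase only (unit amplitude), i.e. a phase mask
        field = [cmath.exp(1j * cmath.phase(f)) for f in field]
    return field
```

Bouncing between the two amplitude constraints while keeping the retrieved phase is exactly the error-reduction step that the paper extends with a polarization dimension.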
VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)
NASA Astrophysics Data System (ADS)
Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.
2017-08-01
Language: Fortran 90. Code tested under the following compilers/operating systems: ifort/CentOS Linux. Description of input data: no input necessary. Description of output data: output files HZs.dat and HZ_coefficients.dat. System requirements: no major system requirement; a Fortran compiler is necessary. Calls to external routines: none. Additional comments: none. (1 data file).
ERIC Educational Resources Information Center
Recchia, Holly E.; Howe, Nina
2010-01-01
This study examined associations between children's descriptions of sibling conflicts and their resolutions during a structured negotiation task. A sample of 58 sibling dyads (older sibling M age = 8.39 years, younger sibling M = 6.06 years) were privately interviewed about an actual conflict. Each child provided a narrative that was coded for…
NASA Technical Reports Server (NTRS)
Reardon, John E.; Violett, Duane L., Jr.
1991-01-01
The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.
The dependence of frequency distributions on multiple meanings of words, codes and signs
NASA Astrophysics Data System (ADS)
Yan, Xiaoyong; Minnhagen, Petter
2018-01-01
The dependence of the frequency distributions due to multiple meanings of words in a text is investigated by deleting letters. By coding the words with fewer letters, the number of meanings per coded word increases. This increase is measured and used as an input in a predictive theory. For a text written in English, the word-frequency distribution is broad and fat-tailed, whereas if the words are only represented by their first letter the distribution becomes exponential. Both distributions are well predicted by the theory, as is the whole sequence obtained by consecutively representing the words by their first L = 6, 5, 4, 3, 2, 1 letters. Comparisons are made between texts written in Chinese characters and the same texts written in letter codes, and the similarity of the corresponding frequency distributions is interpreted as a consequence of the multiple meanings of Chinese characters. This further implies that the difference in shape between the word-frequency distribution of an English text written in letters and a Chinese text written in Chinese characters is due to the coding and not to the language per se.
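The coding experiment is easy to reproduce on any English text: represent each word by its first L letters and count how distinct words merge into shared codes (the sample sentence below is, of course, illustrative).

```python
from collections import Counter

def code_frequencies(text, L):
    """Frequency distribution of words represented by their first L letters."""
    words = [w.lower() for w in text.split() if w.isalpha()]
    return Counter(w[:L] for w in words)

text = ("the quick brown fox jumps over the lazy dog the fox and the dog "
        "then trot through the thick fog")
full = code_frequencies(text, 99)   # effectively the whole word
first = code_frequencies(text, 1)   # first letter only
# Shorter codes merge distinct words, so the distribution has fewer, fatter bins.
print(len(full), len(first))  # -> 14 9
```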
Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system
NASA Astrophysics Data System (ADS)
Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.
2017-11-01
This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for Optical Code Division Multiple Access (OCDMA) systems based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW system with an APD receiver achieved a minimum effective received power (Psr) of -71 dBm at the receiver side, compared with only -61 dBm for the 2-D MDW system with a PIN receiver. The results show that 2-D MDW (APD) has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also show that the MDW code has the capability to suppress PIIN and MAI.
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This eleventh of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Typist I, Grocery Checker, File Clerk, Receptionist; Bank Teller; and Clerk, General Office. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number,…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This twelfth of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Ward Clerk, Account Clerk, Mail Handler (Messenger), and Payroll Clerk. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T. general…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This fourteenth of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Meat Cutter, Shipping Clerk, Long Haul Truck Driver, and Truck Driver--Light. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This eighth of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Salesperson, Automotive; Salesperson, Men's Wear; Waiter/Waitress; Janitor; Porter; and Pressing Machine Operator. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This sixth of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Roofer Apprentice, Pipefitter, Medical Supply Clerk, Stock Clerk, and Warehouseperson. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This second of fifteen sets of Adult Competency Education (ACE) Based Job Descriptions in the ACE kit contains job descriptions for Groundskeeper, Animal Keeper, Tire Repairperson, Muffler Installer, and Garage Mechanic. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder, D.O.T.…
ERIC Educational Resources Information Center
San Mateo County Office of Education, Redwood City, CA. Career Preparation Centers.
This ninth of fifteen sets of Adult Competency Education (ACE) Competency Based Job Descriptions in the ACE kit contains job descriptions for Maid, Ticket Agent, Cosmetologist, Counterperson, Cook's Helper, and Kitchen Helper. Each begins with a fact sheet that includes this information: occupational title, D.O.T. code, ACE number, career ladder,…
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Method to predict external store carriage characteristics at transonic speeds
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1988-01-01
Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.
MaxReport: An Enhanced Proteomic Result Reporting Tool for MaxQuant.
Zhou, Tao; Li, Chuyu; Zhao, Wene; Wang, Xinru; Wang, Fuqiang; Sha, Jiahao
2016-01-01
MaxQuant is a proteomics software package widely used for large-scale tandem mass spectrometry data. We have designed and developed an enhanced result reporting tool for MaxQuant, named MaxReport. This tool can optimize the results of MaxQuant and provide additional functions for result interpretation. MaxReport can generate report tables for protein N-terminal modifications. It also supports isobaric labelling-based relative quantification at the protein, peptide, or site level. To provide an overview of the results, MaxReport performs general descriptive statistical analyses of both identification and quantification results. The output of MaxReport is well organized and therefore helps proteomic users better understand and share their data. MaxReport is written in Python, is compatible with multiple systems including Windows and Linux, and is freely available at http://websdoor.net/bioinfo/maxreport/.
Use of CDMA access technology in mobile satellite systems
NASA Technical Reports Server (NTRS)
Ramasastry, Jay; Wiedeman, Bob
1995-01-01
Use of Code Division Multiple Access (CDMA) technology in terrestrial wireless systems is fairly well understood. Similarly, the design and operation of power control in a CDMA-based system in a terrestrial environment is well established, as are terrestrial multipath characteristics and the optimum design of the CDMA receiver to deal with multipath and fading conditions. But the satellite environment is different. When CDMA technology is adapted to the satellite environment, other design features need to be incorporated (for example, interleaving, open-loop and closed-loop power control design, and diversity characteristics) to achieve a comparable level of system performance. In fact, the GLOBALSTAR LEO/MSS system has incorporated all these features. Contrary to some published reports, CDMA retains advantages in the satellite environment similar to those achieved in the terrestrial environment. This document gives a description of the CDMA waveform and other design features adopted for mobile satellite applications.
HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales
Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander; ...
2015-03-20
HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales.
Riccardi, Demian; Parks, Jerry M; Johs, Alexander; Smith, Jeremy C
2015-04-27
HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. The goal of the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
Experimental Criticality Benchmarks for SNAP 10A/2 Reactor Cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krass, A.W.
2005-12-19
This report describes computational benchmark models for nuclear criticality derived from descriptions of the Systems for Nuclear Auxiliary Power (SNAP) Critical Assembly (SCA)-4B experimental criticality program conducted by Atomics International during the early 1960's. The selected experimental configurations consist of fueled SNAP 10A/2-type reactor cores subject to varied conditions of water immersion and reflection under experimental control to measure neutron multiplication. SNAP 10A/2-type reactor cores are compact volumes fueled and moderated with the hydride of highly enriched uranium-zirconium alloy. Specifications for the materials and geometry needed to describe a given experimental configuration for a model using MCNP5 are provided. The material and geometry specifications are adequate to permit user development of input for alternative nuclear safety codes, such as KENO. A total of 73 distinct experimental configurations are described.
HackaMol: An Object-Oriented Modern Perl Library for Molecular Hacking on Multiple Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riccardi, Demian M.; Parks, Jerry M.; Johs, Alexander
HackaMol is an open source, object-oriented toolkit written in Modern Perl that organizes atoms within molecules and provides chemically intuitive attributes and methods. The library consists of two components: HackaMol, the core that contains classes for storing and manipulating molecular information, and HackaMol::X, the extensions that use the core. The core is well-tested, well-documented, and easy to install across computational platforms. Our goal for the extensions is to provide a more flexible space for researchers to develop and share new methods. In this application note, we provide a description of the core classes and two extensions: HackaMol::X::Calculator, an abstract calculator that uses code references to generalize interfaces with external programs, and HackaMol::X::Vina, a structured class that provides an interface with the AutoDock Vina docking program.
1984-06-01
exist for the same item, as opposed to separate budget and fund codes for separate but related items. Multiple procedures and fund codes can be used...funds. If some funds are marked for multiple years and others must be obligated or outlaid within one year, contracting for PDSS tasks must be partitioned...Experience: PDSS requires both varied experience factors in multiple disciplines and the sustaining of a critical mass of experience factors and
The multidimensional Self-Adaptive Grid code, SAGE, version 2
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1995-01-01
This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
ERIC Educational Resources Information Center
Kezar, Adrianna
The purpose of this paper is to explore avenues for achieving pluralistic leadership cultures and present three principles: (1) awareness of identity, positionality, and power conditions; (2) acknowledgment of multiple descriptions of campus leadership and personal philosophies of leadership; and (3) negotiation among multiple descriptions of…
Multitasking for flows about multiple body configurations using the chimera grid scheme
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Morgan, R. L.
1987-01-01
The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh approach, a multiple body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two dimensional computations are run on parallel processors on a CRAY-X/MP 48, usually with one mesh per processor. Flow field results are compared with single processor results to demonstrate the feasibility of running multiple mesh codes on parallel processors and to show the increase in efficiency.
REXOR 2 rotorcraft simulation model. Volume 1: Engineering documentation
NASA Technical Reports Server (NTRS)
Reaser, J. S.; Kretsinger, P. H.
1978-01-01
A rotorcraft nonlinear simulation called REXOR II, divided into three volumes, is described. The first volume is a development of rotorcraft mechanics and aerodynamics. The second is a development and explanation of the computer code required to implement the equations of motion. The third volume is a user's manual, and contains a description of code input/output as well as operating instructions.
Physical Models for Particle Tracking Simulations in the RF Gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shishlo, Andrei P.; Holmes, Jeffrey A.
2015-06-01
This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
Patterns of Revision in Online Writing: A Study of Wikipedia's Featured Articles
ERIC Educational Resources Information Center
Jones, John
2008-01-01
This study examines the revision histories of 10 Wikipedia articles nominated for the site's Featured Article Class (FAC), its highest quality rating, 5 of which achieved FAC and 5 of which did not. The revisions to each article were coded, and the coding results were combined with a descriptive analysis of two representative articles in order to…
Common Day Care Safety Renovations: Descriptions, Explanations and Cost Estimates.
ERIC Educational Resources Information Center
Spack, Stan
This booklet explains some of the day care safety features specified by the new Massachusetts State Building Code (January 1, 1975) which must be met before a new day care center can be licensed. The safety features described are those which most often require renovation to meet the building code standards. Best estimates of the costs involved in…
Turbulence modeling for hypersonic flight
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1993-01-01
The objective of the proposed work is to continue to develop, verify, and incorporate the baseline two-equation turbulence models, which account for the effects of compressibility at high speeds, into a three-dimensional Reynolds averaged Navier-Stokes (RANS) code. Additionally, we plan to provide documented descriptions of the models and their numerical procedures so that they can be implemented into the NASP CFD codes.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
COMPLETE DETERMINATION OF POLARIZATION FOR A HIGH-ENERGY DEUTERON BEAM (thesis)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Button, J
1959-05-01
The P/sub 1/ multigroup code was written for the IBM-704 in order to determine the accuracy of the few-group diffusion scheme with various imposed conditions and also to provide an alternate computational method when this scheme fails to be sufficiently accurate. The code solves for the spatially dependent multigroup flux, taking into account such nuclear phenomena as the slowing down of neutrons resulting from elastic and inelastic scattering, the removal of neutrons resulting from epithermal capture and fission resonances, and the regeneration of fast neutrons resulting from fissioning, which may occur in any of as many as 80 fast multigroups or in the one thermal group. The code will accept as input a physical description of the reactor (that is: slab, cylindrical, or spherical geometry; number of points and regions; composition description; group-dependent boundary conditions; transverse buckling; and mesh sizes) and a prepared library of nuclear properties of all the isotopes in each composition. The code will produce as output multigroup fluxes, currents, and isotopic slowing-down densities, in addition to pointwise and regionwise few-group macroscopic cross sections. (auth)
NASA Astrophysics Data System (ADS)
Fernandez, Eduardo; Borelli, Noah; Cappelli, Mark; Gascon, Nicolas
2003-10-01
Most current Hall thruster simulation efforts employ either 1D (axial) or 2D (axial and radial) codes. These descriptions crucially depend on the use of an ad-hoc perpendicular electron mobility. Several models for the mobility are typically invoked: classical, Bohm, empirically based, wall-induced, as well as combinations of the above. Experimentally, it is observed that fluctuations and electron transport depend on axial distance and operating parameters. Theoretically, linear stability analyses have predicted a number of unstable modes; yet the nonlinear character of the fluctuations and their contribution to electron transport remain poorly understood. Motivated by these observations, a 2D code in the azimuthal and axial coordinates has been written. In particular, the simulation self-consistently calculates the azimuthal disturbances resulting in fluctuating drifts, which in turn (if properly correlated with plasma density disturbances) result in fluctuation-driven electron transport. The characterization of the turbulence at various operating parameters and across the channel length is also an objective of this study. A description of the hybrid code used in the simulation as well as initial results will be presented.
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create the input file and to update or modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and to quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.
THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.
1984-07-01
The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis with capability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in the analysis and processing of two-dimensional discrete signals, enabling a large number of applications. This article shows the description and synthesis in VHDL code of the FFT 2D with fixed-point binary representation using the Simulink HDL Coder programming tool from Matlab, demonstrating a quick and easy way to handle overflow and underflow, to create registers, adders, and multipliers for complex data in VHDL, and to generate test benches for verification of the generated code in the ModelSim tool. The main objective of developing the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution, and correlation. The description and synthesis of the hardware architecture use a Spartan-3E family XC3S1200E FPGA from Xilinx.
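The image operations this architecture targets follow directly from the convolution theorem: a pointwise product in the frequency domain equals circular convolution in the spatial domain. The following software sketch (NumPy-based, illustrating only the mathematics, not the VHDL design) shows the idea; the function name is illustrative.

```python
import numpy as np

def fft2d_convolve(image, kernel):
    """Circular 2-D convolution via the convolution theorem:
    multiply the 2-D FFTs pointwise, then invert the transform."""
    K = np.fft.fft2(kernel, s=image.shape)  # zero-pad kernel to image size
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * K))

image = np.eye(4)
kernel = np.array([[1.0]])  # delta kernel: convolution leaves the image unchanged
out = fft2d_convolve(image, kernel)
```

Frequency filtering and correlation follow the same pattern, with the kernel spectrum replaced by a filter mask or a conjugated template spectrum, respectively.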
Applying Standard Interfaces to a Process-Control Language
NASA Technical Reports Server (NTRS)
Berthold, Richard T.
2005-01-01
A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.
Multiple grid problems on concurrent-processing computers
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.
1986-01-01
Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.
A Data Parallel Multizone Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)
1995-01-01
We have developed a data parallel multizone compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the "chimera" approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. The design choices can be summarized as: 1. finite differences on structured grids; 2. implicit time-stepping with either distributed solves or data motion and local solves; 3. sequential stepping through multiple zones with interzone data transfer via a distributed data structure. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran (HPF). One interesting feature is the issue of turbulence modeling, where the architecture of a parallel machine makes the use of an algebraic turbulence model awkward, whereas models based on transport equations are more natural. We will present some performance figures for the code on the CM-5, and consider the issues involved in transitioning the code to HPF for portability to other parallel platforms.
The Lake Tahoe Basin Land Use Simulation Model
Forney, William M.; Oldham, I. Benson
2011-01-01
This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.
An evaluation of medical knowledge contained in Wikipedia and its use in the LOINC database.
Friedlin, Jeff; McDonald, Clement J
2010-01-01
The logical observation identifiers names and codes (LOINC) database contains 55 000 terms composed of smaller atomic components called parts. LOINC carries more than 18 000 distinct parts. It is necessary to have definitions/descriptions for each of these parts to assist users in mapping local laboratory codes to LOINC. It is believed that much of this information can be obtained from the internet; the first effort was with Wikipedia. This project focused on 1705 laboratory analytes (the first part in the LOINC laboratory name). Of the 1705 parts queried, 1314 matching articles were found in Wikipedia. Of these, 1299 (98.9%) were perfect matches that exactly described the LOINC part, 15 (1.14%) were partial matches (the description in Wikipedia was related to the LOINC part, but did not describe it fully), and 102 (7.76%) were mismatches. The current release of RELMA and LOINC includes Wikipedia descriptions of LOINC parts obtained as a direct result of this project.
Safe, Multiphase Bounds Check Elimination in Java
2010-01-28
production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
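The high/low tracking the abstract refers to is the interval-narrowing step of binary arithmetic coding. A floating-point sketch of that step (a production coder renormalizes with integer arithmetic, and the probability-quantization improvement itself is not reproduced here):

```python
def encode(bits, p0):
    """Narrow [low, high) once per input bit, splitting the interval
    in proportion to p0, the probability of symbol 0."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + p0 * (high - low)
        if b == 0:
            high = split
        else:
            low = split
    return (low + high) / 2  # any value inside the final interval works

def decode(value, p0, n):
    """Invert the narrowing: recover n bits from the encoded value."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + p0 * (high - low)
        if value < split:
            out.append(0)
            high = split
        else:
            out.append(1)
            low = split
    return out
```

For short sequences this float version round-trips exactly; the worst-case efficiency question the abstract addresses arises when the split probabilities must be approximated to avoid multiplications.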
Quantized phase coding and connected region labeling for absolute phase retrieval.
Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian
2016-12-12
This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase-coding and connected region labeling. A specific code sequence is embedded into quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned with 3-digit-codes combining the current period and its neighbors. Wrapped phase, more than 36 periods, can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
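The recovery step the abstract describes, labeling each connected region with a 3-digit code built from the current period and its neighbours and then locating that triplet in the known code sequence to obtain the fringe order k, can be sketched as follows (the sequence and labels below are illustrative, not the paper's actual codes):

```python
import math

def fringe_orders(labels, code_seq):
    """Map each region's 3-digit label (previous, current, next code)
    to its period index k by locating the triplet in the code sequence.
    Assumes every interior triplet of code_seq is unique."""
    triplets = {(code_seq[k - 1], code_seq[k], code_seq[k + 1]): k
                for k in range(1, len(code_seq) - 1)}
    return [triplets[tuple(lab)] for lab in labels]

def unwrap(wrapped, k):
    """Absolute phase = wrapped phase + 2*pi*k, with k the fringe order."""
    return wrapped + 2 * math.pi * k
```

The uniqueness of interior triplets is what lets wrapped phase spanning many periods (36+ in the paper) be restored unambiguously.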
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware by designing a [[4, 2, 2]] detection code.
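As context for the [[4, 2, 2]] example: valid codewords of that code have even weight, so a single bit flip is flagged by the global parity check. The sketch below is only the classical shadow of the ZZZZ stabilizer measurement, not the quantum CPC circuit itself:

```python
def z_syndrome(bits):
    """Parity of all four bits: the classical analogue of measuring
    the ZZZZ stabilizer of the [[4, 2, 2]] detection code."""
    return sum(bits) % 2

def detects_single_flip(codeword):
    """Check that a valid (even-parity) codeword yields syndrome 0
    and that every single bit flip yields syndrome 1 (detected)."""
    if z_syndrome(codeword) != 0:
        return False
    for i in range(len(codeword)):
        err = list(codeword)
        err[i] ^= 1  # flip one bit
        if z_syndrome(err) != 1:
            return False
    return True
```

A distance-2 code such as this detects, but cannot locate (and hence cannot correct), a single error; that is exactly the d = 2 in [[4, 2, 2]].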
The Marshall Engineering Thermosphere (MET) Model. Volume 1; Technical Description
NASA Technical Reports Server (NTRS)
Smith, R. E.
1998-01-01
Volume 1 presents a technical description of the Marshall Engineering Thermosphere (MET) model atmosphere and a summary of its historical development. Various programs developed to augment the original capability of the model are discussed in detail. The report also describes each of the individual subroutines developed to enhance the model. Computer codes for these subroutines are contained in four appendices.
Analysis of a Radiation Model of the Shuttle Space Suit
NASA Technical Reports Server (NTRS)
Anderson, Brooke M.; Nealy, John E.; Kim, Myung-Hee; Qualls, Garry D.; Wilson, John W.
2003-01-01
The extravehicular activity (EVA) required to assemble the International Space Station (ISS) will take approximately 1500 hours with 400 hours of EVA per year in operations and maintenance. With the Space Station at an inclination of 51.6 deg the radiation environment is highly variable with solar activity being of great concern. Thus, it is important to study the dose gradients about the body during an EVA to help determine the cancer risk associated with the different environments the ISS will encounter. In this paper we are concerned only with the trapped radiation (electrons and protons). Two different scenarios are looked at: the first is the quiet geomagnetic periods in low Earth orbit (LEO) and the second is during a large solar particle event in the deep space environment. This study includes a description of how the space suit's computer aided design (CAD) model was developed along with a description of the human model. Also included is a brief description of the transport codes used to determine the total integrated dose at several locations within the body. Finally, the results of the transport codes when applied to the space suit and human model and a brief description of the results are presented.
A Methodology for Multiple Rule System Integration and Resolution Within a Singular Knowledge Base
NASA Technical Reports Server (NTRS)
Kautzmann, Frank N., III
1988-01-01
Expert Systems which support knowledge representation by qualitative modeling techniques experience problems, when called upon to support integrated views embodying description and explanation, especially when other factors such as multiple causality, competing rule model resolution, and multiple uses of knowledge representation are included. A series of prototypes are being developed to demonstrate the feasibility of automating the process of systems engineering, design and configuration, and diagnosis and fault management. A study involves not only a generic knowledge representation; it must also support multiple views at varying levels of description and interaction between physical elements, systems, and subsystems. Moreover, it will involve models of description and explanation for each level. This multiple model feature requires the development of control methods between rule systems and heuristics on a meta-level for each expert system involved in an integrated and larger class of expert system. The broadest possible category of interacting expert systems is described along with a general methodology for the knowledge representation and control of mutually exclusive rule systems.
Effects of Deployment on the Mental Health of Service Members at Fort Hood
2006-07-06
167) Once (n=1498) More Than Once (n=566) Characteristic n Percent n Percent n Percent Gender Male 120 71.9 1150 76.8 439 77.6 Female 47 28.1 348...4,5,6,7,9,10,13,14,15,16) and item 2 in the provider section. Gender was coded as "0" for female and "1" for male. The remainder of the nominal level variables...Appendix C Variables, Measures, and Coding of Data VARIABLE DESCRIPTION SPSS DATA CODE & SPSS CODE Male Gender Female FEMALE=0, MALE=1 E1 01 W1 E2 02 W2 E3
CIFOG: Cosmological Ionization Fields frOm Galaxies
NASA Astrophysics Data System (ADS)
Hutter, Anne
2018-03-01
CIFOG is a versatile MPI-parallelised semi-numerical tool to perform simulations of the Epoch of Reionization. From a set of evolving cosmological gas density and ionizing emissivity fields, it computes the time and spatially dependent ionization of neutral hydrogen (HI), neutral (HeI) and singly ionized helium (HeII) in the intergalactic medium (IGM). The code accounts for HII, HeII, HeIII recombinations, and provides different descriptions for the photoionization rate that are used to calculate the residual HI fraction in ionized regions. This tool has been designed to be coupled to semi-analytic galaxy formation models or hydrodynamical simulations. The modular fashion of the code allows the user to easily introduce new descriptions for recombinations and the photoionization rate.
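The residual HI fraction in ionized regions mentioned above is commonly obtained from photoionization equilibrium; the sketch below is the textbook relation under simplifying assumptions, not necessarily the exact description implemented in CIFOG:

```python
def residual_hi_fraction(n_H, Gamma, alpha_B=2.59e-13):
    """Residual neutral-hydrogen fraction in an ionized region from
    photoionization equilibrium, n_HI * Gamma = alpha_B * n_e * n_HII,
    assuming x_HI << 1 and a pure-hydrogen gas so n_e ~ n_HII ~ n_H.

    n_H     : hydrogen number density [cm^-3]
    Gamma   : photoionization rate [s^-1]
    alpha_B : case-B recombination coefficient at ~10^4 K [cm^3 s^-1]
    """
    return alpha_B * n_H / Gamma
```

At typical IGM densities and post-reionization photoionization rates this gives the familiar highly ionized residual fractions of order 10^-4.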
Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi)
NASA Technical Reports Server (NTRS)
Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.
2016-01-01
Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparing to finite element analysis using a commercial code.
Iparraguirre, Leire; Muñoz-Culla, Maider; Prada-Luengo, Iñigo; Castillo-Triviño, Tamara; Olascoaga, Javier; Otaegui, David
2017-09-15
Multiple sclerosis is an autoimmune disease, with higher prevalence in women, in whom the immune system is dysregulated. This dysregulation has been shown to correlate with changes in transcriptome expression as well as in gene-expression regulators, such as non-coding RNAs (e.g. microRNAs). Indeed, some of these have been suggested as biomarkers for multiple sclerosis, even though few biomarkers have reached clinical practice. Recently, a novel family of non-coding RNAs, circular RNAs, has emerged as a new player in the complex network of gene-expression regulation. A microRNA-regulation function through a 'sponge system' and an RNA-splicing regulation function have been proposed for circular RNAs. This regulatory role, together with their high stability in biofluids, makes them seemingly good candidates as biomarkers. Given the dysregulation of both the protein-coding and non-coding transcriptome reported in multiple sclerosis patients, we hypothesised that circular RNA expression may also be altered. We therefore carried out expression profiling of 13,617 circular RNAs in peripheral blood leucocytes from multiple sclerosis patients and healthy controls, finding 406 differentially expressed (P-value < 0.05, fold change > 1.5). After validation, we demonstrate that circ_0005402 and circ_0035560 are underexpressed in multiple sclerosis patients and could be used as biomarkers of the disease.
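The selection criterion quoted above (P < 0.05, fold change > 1.5) can be sketched as a simple filter; the entry names and numbers below are hypothetical, not the study's data:

```python
def differentially_expressed(results, p_max=0.05, fc_min=1.5):
    """Keep entries whose p-value is below p_max and whose fold change
    exceeds fc_min in either direction (over- or under-expression).

    results : list of (name, p_value, fold_change) tuples.
    """
    return [name for name, p, fc in results
            if p < p_max and (fc >= fc_min or fc <= 1 / fc_min)]
```

Treating fold changes below 1/1.5 as significant captures underexpressed candidates such as the two circRNAs the study validates.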
A Different Web-Based Geocoding Service Using Fuzzy Techniques
NASA Astrophysics Data System (ADS)
Pahlavani, P.; Abbaspour, R. A.; Zare Zadiny, A.
2015-12-01
Geocoding - the process of finding a position from descriptive data such as an address or postal code - is one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps offer geocoding as a basic capability. Despite the diversity of geocoding services, users usually face limitations when using them. In existing geocoding services, the concepts of proximity and nearness are not modelled appropriately, and these services search only by address matching on descriptive data. There are also limitations in how search results are displayed. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes integrating fuzzy techniques with the geocoding process to address them. To implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions, and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method offers users several capabilities, such as searching multi-part addresses, searching for places based on their location, non-point representation of results, and displaying search results ranked by priority.
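The nearness modelling and fuzzy overlay described above can be sketched as follows (a sigmoid membership with illustrative parameters and a minimum-operator overlay; the paper's actual membership functions may differ):

```python
import math

def nearness(d, d_half=500.0, steep=0.01):
    """Fuzzy 'near' membership for a distance d in metres: 1 at the
    place, 0.5 at d_half, falling smoothly with distance (sigmoid).
    d_half and steep are illustrative parameters."""
    return 1.0 / (1.0 + math.exp(steep * (d - d_half)))

def fuzzy_overlay(memberships):
    """AND-style overlay of several fuzzy distance maps: a location is
    'near everything' only as much as its weakest membership (minimum)."""
    return min(memberships)
```

Applied cell by cell over several distance maps, the minimum overlay yields a combined suitability surface whose peaks are candidate locations near all query places at once.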
NASA Astrophysics Data System (ADS)
Akoglu, E.; Libralato, S.; Salihoglu, B.; Oguz, T.; Solidoro, C.
2015-08-01
Societal and scientific challenges foster the implementation of the ecosystem approach to marine ecosystem analysis and management, which is a comprehensive means of integrating the direct and indirect effects of multiple stressors on the different components of ecosystems, from physical to chemical and biological and from viruses to fishes and marine mammals. Ecopath with Ecosim (EwE) is a widely used software package, which offers capability for a dynamic description of the multiple interactions occurring within a food web, and, potentially, a crucial component of an integrated platform supporting the ecosystem approach. However, being written for the Microsoft .NET framework, seamless integration of this code with Fortran-based physical and/or biogeochemical oceanographic models is technically not straightforward. In this work we release a re-coding of EwE in Fortran (EwE-F). We believe that the availability of a Fortran version of EwE is an important step towards setting up coupled/integrated modelling schemes utilising this widely adopted software because it (i) increases portability of the EwE models and (ii) provides additional flexibility towards integrating EwE with Fortran-based modelling schemes. Furthermore, EwE-F might help modellers using the Fortran programming language to get close to the EwE approach. In the present work, first fundamentals of EwE-F are introduced, followed by validation of EwE-F against standard EwE utilising sample models. Afterwards, an end-to-end (E2E) ecological representation of the Gulf of Trieste (northern Adriatic Sea) ecosystem is presented as an example of online two-way coupling between an EwE-F food web model and a biogeochemical model. Finally, the possibilities that having EwE-F opens up are discussed.
EwE-F 1.0: an implementation of Ecopath with Ecosim in Fortran 95/2003 for coupling
NASA Astrophysics Data System (ADS)
Akoglu, E.; Libralato, S.; Salihoglu, B.; Oguz, T.; Solidoro, C.
2015-02-01
Societal and scientific challenges foster the implementation of the ecosystem approach to marine ecosystem analysis and management, which is a comprehensive means of integrating the direct and indirect effects of multiple stressors on the different components of ecosystems, from physical to chemical and biological and from viruses to fishes and marine mammals. Ecopath with Ecosim (EwE) is a widely used software package, which offers great capability for a dynamic description of the multiple interactions occurring within a food web, and is potentially a crucial component of an integrated platform supporting the ecosystem approach. However, being written for the Microsoft .NET framework, seamless integration of this code with Fortran-based physical oceanographic and/or biogeochemical models is technically not straightforward. In this work we release a re-coding of EwE in Fortran (EwE-F). We believe that the availability of a Fortran version of EwE is an important step towards setting up integrated end-to-end (E2E) modelling schemes utilising this widely adopted software because it (i) increases portability of the EwE models and (ii) provides greater flexibility towards integrating EwE with Fortran-based modelling schemes. Furthermore, EwE-F might help modellers using the Fortran programming language to get close to the EwE approach. In the present work, first the fundamentals of EwE-F are introduced, followed by validation of EwE-F against standard EwE utilising sample models. Afterwards, an E2E ecological representation of the Gulf of Trieste (northern Adriatic Sea) ecosystem is presented as an example of online two-way coupling between an EwE-F food web model and a biogeochemical model. Finally, the possibilities that having EwE-F opens up are discussed.
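Ecopath's static core, which EwE-F re-implements in Fortran, balances each group i through the master equation B_i (P/B)_i EE_i = Σ_j B_j (Q/B)_j DC_ji + Y_i (exports and biomass accumulation omitted here for brevity). A sketch solving that balance for the ecotrophic efficiencies EE, with illustrative two-group inputs:

```python
def ecotrophic_efficiency(B, PB, QB, DC, Y):
    """Solve the simplified Ecopath master equation for each group:
        B_i * (P/B)_i * EE_i = sum_j B_j * (Q/B)_j * DC_ji + Y_i
    B  : biomasses; PB : production/biomass ratios;
    QB : consumption/biomass ratios; Y : fishery catches;
    DC[j][i] : fraction of predator j's diet made up of prey i.
    """
    n = len(B)
    EE = []
    for i in range(n):
        predation = sum(B[j] * QB[j] * DC[j][i] for j in range(n))
        EE.append((predation + Y[i]) / (B[i] * PB[i]))
    return EE
```

In a full Ecopath parameterisation any one of B, P/B, Q/B, or EE may be the unknown per group; solving for EE as above is the simplest of those cases.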
Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN
NASA Technical Reports Server (NTRS)
Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.
1996-01-01
A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, description of the code, user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.
Portable LQCD Monte Carlo code using OpenACC
NASA Astrophysics Data System (ADS)
Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele
2018-03-01
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
Vatne, Torun M; Finset, Arnstein; Ørnes, Knut; Ruland, Cornelia M
2010-09-01
Adult patients present concerns as defined in the Verona Coding Definitions of Emotional Sequences (VR-CoDES), but we do not know how children express their concerns during medical consultations. This study aimed to evaluate the applicability of VR-CoDES to pediatric oncology consultations. Twenty-eight pediatric consultations were coded with the VR-CoDES, and the material was also qualitatively analyzed for descriptive purposes. Five consultations were randomly selected for reliability testing, and descriptive statistics were computed. Perfect inter-rater reliability for concerns and moderate reliability for cues were obtained. Cues and/or concerns were present in over half of the consultations. Cues were more frequent than concerns, with the majority of cues being verbal hints at hidden concerns or non-verbal cues. Intensity of expressions, limitations in vocabulary, commonality of statements, and complexity of the setting complicated the use of VR-CoDES. Child-specific cues were observed: use of the imperative, cues about past experiences, and use of onomatopoeia. Children with cancer express concerns during medical consultations. VR-CoDES is a reliable tool for coding concerns in pediatric data sets. For future applications in pediatric settings, an appendix should be developed to incorporate the child-specific traits.
Patient experience in the emergency department: inconsistencies in the ethic and duty of care.
Moss, Cheryle; Nelson, Katherine; Connor, Margaret; Wensley, Cynthia; McKinlay, Eileen; Boulton, Amohia
2015-01-01
To understand how people who present on multiple occasions to the emergency department experience their health professionals' moral comportment (ethic of care and duty of care); and to understand the consequences of this for 'people who present on multiple occasions' ongoing choices in care. People (n = 34) with chronic illness who had multiple presentations were interviewed about the role that emergency departments played within their lives and health-illness journey. Unprompted, all participants shared views about the appropriateness or inappropriateness of the care they received from the health professionals in the emergency departments they had attended. These responses raised the imperative for specific analysis of the data regarding the need for and experience of an ethic of care. Qualitative description of interview data (stage 3 of a multimethod study). The methods included further analysis of existing interviews, exploration of relevant literature, use of Tronto's ethic of care as a theoretical framework for analysis, thematic analysis of people who present on multiple occasions' texts and explication of health professionals' moral positions in relation to present on multiple occasions' experiences. Four moral comportment positions attributed by the people who present on multiple occasions to the health professionals in emergency department were identified: 'sustained and enmeshed ethic and duty of care', 'consistent duty of care', 'interrupted or mixed duty and ethic of care', and 'care in breach of both the ethic and duty of care'. People who present on multiple occasions are an important group of consumers who attend the emergency department. Tronto's phases/moral elements in an ethic of care are useful as a framework for coding qualitative texts. Investigation into the bases, outcomes and contextual circumstances that stimulate the different modes of moral comportment is needed. 
Findings carry implications for emergency department care of people who present on multiple occasions and for emergency department health professionals to increase awareness of their moral comportment in care.
Multiple Access Schemes for Lunar Missions
NASA Technical Reports Server (NTRS)
Deutsch, Leslie; Hamkins, Jon; Stocklin, Frank J.
2010-01-01
Two years ago, the NASA Coding, Modulation, and Link Protocol (CMLP) study was completed. The study, led by the authors of this paper, recommended codes, modulation schemes, and desired attributes of link protocols for all space communication links in NASA's future space architecture. Portions of the NASA CMLP team were reassembled to resolve one open issue: the use of multiple access (MA) communication from the lunar surface. The CMLP-MA team analyzed and simulated two candidate multiple access schemes that were identified in the original CMLP study: Code Division MA (CDMA) and Frequency Division MA (FDMA) based on a bandwidth-efficient Continuous Phase Modulation (CPM) with a superimposed Pseudo-Noise (PN) ranging signal (CPM/PN). This paper summarizes the results of the analysis and simulation of the CMLP-MA study and describes the final recommendations.
Turbulence modeling for hypersonic flight
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.
1992-01-01
The objective of the present work is to develop, verify, and incorporate two equation turbulence models which account for the effect of compressibility at high speeds into a three dimensional Reynolds averaged Navier-Stokes code and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments is listed: (1) Four codes have been tested and evaluated against a flat plate boundary layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the K-omega two equation turbulence model has been implemented into the base code; (5) a 24 degree laminar compression corner flow has been simulated and compared to other numerical simulations; and (6) work is in progress in writing the numerical method of the base code including the turbulence model.
CFD Modeling of Free-Piston Stirling Engines
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.
2001-01-01
NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.
2014-12-15
The procedure for recovering pin-by-pin energy-release fields for the BIPR-8 code is briefly described, together with the BIPR-8 algorithm used in nodal computation of the reactor core, on which the recovery of pin-by-pin energy-release fields is based. A description and results of the verification, performed using the pin-by-pin energy-release recovery module and the TVS-M program, are given.
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
NASA Technical Reports Server (NTRS)
Fuller, L. C.; Stovall, T. K.
1979-01-01
Standard turbine cycles for baseload power plants and cycles with such additional features as process steam extraction and induction and feedwater heating by external heat sources may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.
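The heat-rate bookkeeping the abstract describes (mechanical and generator losses subtracted from gross output, then heat input divided by net output) reduces to something like the following; PRESTO's actual accounting of expansion-line efficiencies, exhaust loss, and leakages is far more detailed, and the figures below are illustrative:

```python
def heat_rate(heat_input_btu_per_hr, gross_output_kw, losses_kw):
    """Cycle heat rate in Btu/kWh: heat supplied to the cycle divided
    by net generator output after mechanical and generator losses."""
    net_kw = gross_output_kw - losses_kw
    return heat_input_btu_per_hr / net_kw
```

A lower heat rate means a more efficient cycle; 3412.14 Btu/kWh would correspond to 100% thermal efficiency, so a value near 10 000 Btu/kWh implies roughly 34% efficiency.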
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCune, W.; Shumsky, O.
2000-02-04
IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.
NASA Technical Reports Server (NTRS)
Kindall, S. M.
1980-01-01
The computer code for the trajectory processor (#TRAJ) of the high fidelity relative motion program is described. The #TRAJ processor is a 12-degrees-of-freedom trajectory integrator (6 degrees of freedom for each of two vehicles) which can be used to generate digital and graphical data describing the relative motion of the Space Shuttle Orbiter and a free-flying cylindrical payload. A listing of the code, coding standards and conventions, detailed flow charts, and discussions of the computational logic are included.
Integration of a supersonic unsteady aerodynamic code into the NASA FASTEX system
NASA Technical Reports Server (NTRS)
Appa, Kari; Smith, Michael J. C.
1987-01-01
A supersonic unsteady aerodynamic loads prediction method based on the constant pressure method was integrated into the NASA FASTEX system. The updated FASTEX code can be employed for aeroelastic analyses in subsonic and supersonic flow regimes. A brief description of the supersonic constant pressure panel method, as applied to lifting surfaces and body configurations, is followed by a documentation of updates required to incorporate this method in the FASTEX code. Test cases showing correlations of predicted pressure distributions, flutter solutions, and stability derivatives with available data are reported.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murata, K.K.; Williams, D.C.; Griffith, R.O.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Betz, J.W.; Blanco, M.A.; Cahn, C.R.; Dafesh, P.A.; Hegarty, C.J.; Hudnut, K.W.; Kasemsri, V.; Keegan, R.; Kovach, K.; Lenahan, L.S.; Ma, H.H.; Rushanan, J.J.; Sklar, D.; Stansell, T.A.; Wang, C.C.; Yi, S.K.
2006-01-01
Detailed design of the modernized L1 civil signal (L1C) has been completed, and the resulting draft Interface Specification IS-GPS-800 was released in Spring 2006. The novel characteristics of the optimized L1C signal design provide advanced capabilities while offering receiver designers considerable flexibility in how to use these capabilities. L1C provides a number of advanced features, including: 75% of power in a pilot component for enhanced signal tracking, advanced Weil-based spreading codes, an overlay code on the pilot that provides data message synchronization, support for improved reading of clock and ephemeris by combining message symbols across messages, advanced forward error control coding, and data symbol interleaving to combat fading. The resulting design offers receiver designers the opportunity to obtain unmatched performance in many ways. This paper describes the design of L1C. A summary of L1C's background and history is provided. The signal description then proceeds with the overall signal structure consisting of a pilot component and a data component. The new L1C spreading code family is described, along with the logic used for generating these spreading codes. Overlay codes on the pilot channel are also described, as is the logic used for generating the overlay codes. Spreading modulation characteristics are summarized. The data message structure is also presented, showing the format for providing time, ephemeris, and system data to users, along with features that enable receivers to perform code combining. Encoding of rapidly changing time bits is described, as are the Low Density Parity Check codes used for forward error control of slowly changing time bits, clock, ephemeris, and system data. The structure of the interleaver is also presented. A summary of L1C's unique features and their benefits is provided, along with a discussion of the plan for L1C implementation.
Airport-Noise Levels and Annoyance Model (ALAMO) system's reference manual
NASA Technical Reports Server (NTRS)
Deloach, R.; Donaldson, J. L.; Johnson, M. J.
1986-01-01
The airport-noise levels and annoyance model (ALAMO) is described in terms of its constituent modules, the ALAMO procedure files necessary for system execution, and the source code documentation associated with code development at Langley Research Center. The modules constituting ALAMO are presented both in flow graph form and through a description of the subroutines and functions that comprise them.
Analysis of hybrid subcarrier multiplexing of OCDMA based on single photodiode detection
NASA Astrophysics Data System (ADS)
Ahmad, N. A. A.; Junita, M. N.; Aljunid, S. A.; Rashidi, C. B. M.; Endut, R.
2017-11-01
This paper analyzes the performance of subcarrier multiplexing (SCM) of spectral amplitude coding optical code division multiple access (SAC-OCDMA) by applying a Recursive Combinatorial (RC) code based on single photodiode detection (SPD). SPD is used in the receiver to reduce the effect of multiple access interference (MAI), which is the dominant noise in incoherent SAC-OCDMA systems. Results indicate that SCM-OCDMA network performance could be improved by using lower data rates and higher code weights. The total number of users can also be increased by using lower data rates and a higher number of subcarriers.
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
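The leading-eigenpair computation at the heart of such quasispecies models can be sketched with a plain power iteration. This is an illustrative solver, not CMCpy's own implementation, and the 2x2 matrix in the usage note is invented for the example:

```python
def leading_eigenpair(matrix, iters=10000, tol=1e-12):
    """Power iteration for the dominant eigenpair of a non-negative
    quasispecies matrix W = Q * diag(f) (mutation times fitness)."""
    n = len(matrix)
    v = [1.0 / n] * n          # start from the uniform distribution
    value = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        new_value = sum(w)     # with v normalized in L1, sum(W v) -> eigenvalue
        v = [x / new_value for x in w]
        if abs(new_value - value) < tol:
            break
        value = new_value
    return new_value, v
```

For the symmetric toy matrix [[2, 1], [1, 2]] the iteration converges to eigenvalue 3 with the uniform equilibrium vector [0.5, 0.5]; CMCpy's own solvers (and the perturbation and homotopy methods mentioned above) handle the degenerate cases this naive scheme cannot.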
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport
NASA Technical Reports Server (NTRS)
Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.
2008-01-01
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1981-10-29
This volume is the software description for the National Utility Regulatory Model (NUREG). This is the third of three volumes provided by ICF under contract number DEAC-01-79EI-10579. These three volumes are: a manual describing the NUREG methodology; a users guide; and a description of the software. This manual describes the software which has been developed for NUREG. This includes a listing of the source modules. All computer code has been written in FORTRAN.
Woodman, Jenny; Allister, Janice; Rafi, Imran; de Lusignan, Simon; Belsey, Jonathan; Petersen, Irene; Gilbert, Ruth
2012-01-01
Background Information is lacking on how concerns about child maltreatment are recorded in primary care records. Aim To determine how the recording of child maltreatment concerns can be improved. Design and setting Development of a quality improvement intervention involving: clinical audit, a descriptive survey, telephone interviews, a workshop, database analyses, and consensus development in UK general practice. Method Descriptive analyses and incidence estimates were carried out based on 11 study practices and 442 practices in The Health Improvement Network (THIN). Telephone interviews, a workshop, and a consensus development meeting were conducted with lead GPs from 11 study practices. Results The rate of children with at least one maltreatment-related code was 8.4/1000 child years (11 study practices, 2009–2010), and 8.0/1000 child years (THIN, 2009–2010). Of 25 patients with known maltreatment, six had no maltreatment-related codes recorded, but all had relevant free text, scanned documents, or codes. When stating their reasons for undercoding maltreatment concerns, GPs cited damage to the patient relationship, uncertainty about which codes to use, and having concerns about recording information on other family members in the child’s records. Consensus recommendations are to record the code ‘child is cause for concern’ as a red flag whenever maltreatment is considered, and to use a list of codes arranged around four clinical concepts, with an option for a templated short data entry form. Conclusion GPs under-record maltreatment-related concerns in children’s electronic medical records. As failure to use codes makes it impossible to search or audit these cases, an approach designed to be simple and feasible to implement in UK general practice was recommended. PMID:22781996
Rennie, Michael J; Watsford, Mark L; Spurrs, Robert W; Kelly, Stephen J; Pine, Matthew J
2018-06-01
To examine the frequency and time spent in the phases of Australian Football (AF) match-play and to assess the intra-assessor reliability of coding these phases of match-play. Observational, intra-reliability assessment. Video footage of 10 random quarters of AF match-play were coded by a single researcher. Phases of offence, defence, contested play, umpire stoppage, set shot and goal reset were coded using a set of operational definitions. Descriptive statistics were provided for all phases of match-play. Following a 6-month washout period, intra-coder reliability was assessed using typical error of measurement (TEM) and intra-class correlation coefficients (ICC). A quarter of AF match-play involved 128±20 different phases of match-play. The highest proportion of match-play involved contested play (25%), followed by offence (18%), defence (18%) and umpire stoppages (18%). The mean duration of offence, defence, contested play, umpire stoppage, set shot and goal reset were 14, 14, 10, 11, 28 and 47s, respectively. No differences were found between the two coding assessments (p>0.05). ICCs for coding the phases of play demonstrated very high reliability (r=0.902-0.992). TEM of the total time spent in each phase of play represented moderate to good reliability (TEM=1.8-9.3%). Coding of offence, defence and contested play tended to display slightly poorer TEMs than umpire stoppages, set shots and goal resets (TEM=8.1 vs 4.5%). Researchers can reliably code the phases of AF match-play which may permit the analysis of specific elements of competition. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Adaptive Transmission and Channel Modeling for Frequency Hopping Communications
2009-09-21
The proposed adaptive transmission method has much greater system capacity than a conventional non-adaptive multicarrier direct-sequence (DS) CDMA system. ... several mobile radio systems. First, a new improved allocation algorithm was proposed for the multicarrier code-division multiple access (MC-CDMA) system. ... The multicarrier code-division multiple access (MC-CDMA) system with adaptive frequency hopping (AFH) has attracted the attention of researchers due to its
ERIC Educational Resources Information Center
Haro, Elizabeth K.; Haro, Luis S.
2014-01-01
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
Adaptive Precoded MIMO for LTE Wireless Communication
NASA Astrophysics Data System (ADS)
Nabilla, A. F.; Tiong, T. C.
2015-04-01
Long-Term Evolution (LTE) and Long Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates over wide spectrum bandwidths and high spectral efficiencies. Technically, pre-coding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-plus-noise ratio (SINR) across the multiple receiver terminals. However, fixed pre-coding is not reliable enough to fully utilize the information transfer rate to fit the channel condition according to the bandwidth size. Thus, adaptive pre-coding is proposed. It applies pre-coding matrix indicator (PMI) channel-state feedback, making it possible to change the pre-coding codebook accordingly, thus achieving a higher data rate than fixed pre-coding.
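The PMI selection step can be illustrated for a rank-1, two-antenna case: the receiver measures the channel vector h and reports the index of the codebook entry that maximizes received power |h·w|². The four-entry codebook below mirrors the style of an LTE two-antenna rank-1 codebook but is used here purely as an illustration:

```python
S = 2 ** -0.5  # normalization so each precoding vector has unit energy

# Illustrative rank-1 codebook for two transmit antennas (assumed entries)
CODEBOOK = [
    (S * 1, S * 1),
    (S * 1, -S * 1),
    (S * 1, S * 1j),
    (S * 1, -S * 1j),
]

def select_pmi(h, codebook=CODEBOOK):
    """Return the index of the precoding vector maximizing |h^H w|^2."""
    def rx_power(w):
        return abs(sum(hi.conjugate() * wi for hi, wi in zip(h, w))) ** 2
    return max(range(len(codebook)), key=lambda i: rx_power(codebook[i]))
```

For a channel h = [1, 1] the in-phase entry wins; flipping the sign or phase of the second channel tap selects the matching entry instead, which is exactly the adaptation to channel state the abstract describes.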
What to do with a Dead Research Code
NASA Astrophysics Data System (ADS)
Nemiroff, Robert J.
2016-01-01
The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states for the life of research codes are reviewed. Historically, codes are typically left dormant in an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third-party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of a code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
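The "no cycle of length 4" property claimed for the construction is mechanically checkable on the circulant-shift (exponent) matrix: a 4-cycle exists exactly when the alternating sum of shifts around some 2x2 submatrix vanishes modulo the lift size. A sketch of that check (the matrices in the test are toy examples, not the proposed (3780, 3540) code):

```python
def has_four_cycle(exponents, lift):
    """True if the QC-LDPC code defined by this circulant-shift exponent
    matrix and lift (circulant) size contains a length-4 Tanner-graph cycle.
    Condition: E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] == 0 (mod lift)
    for some pair of rows (i1, i2) and pair of columns (j1, j2)."""
    rows, cols = len(exponents), len(exponents[0])
    for i1 in range(rows):
        for i2 in range(i1 + 1, rows):
            for j1 in range(cols):
                for j2 in range(j1 + 1, cols):
                    delta = (exponents[i1][j1] - exponents[i1][j2]
                             + exponents[i2][j2] - exponents[i2][j1])
                    if delta % lift == 0:
                        return True
    return False
```

An all-zero exponent matrix always fails this test (every 2x2 submatrix sums to zero), while shift assignments derived from distinct powers of a primitive field element can be chosen so that no submatrix does, which is the girth-6 guarantee the abstract refers to.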
Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan
2017-09-01
It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, which is solved by alternately applying a hard thresholding operation and singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, and multiple fault features with similar oscillation morphology can naturally be adaptively decoupled. Secondly, the tight frame dictionary directly learned from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames.
Thirdly, a satisfactory complete signal space description property is guaranteed and thus weak feature leakage problem is avoided compared to typical learning methods.
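The hard thresholding operation used in the alternating optimization keeps only the largest-magnitude transform coefficients. A minimal stand-alone sketch of that k-sparse projection (not the authors' full solver):

```python
def hard_threshold(coeffs, k):
    """Project a coefficient vector onto k-sparse vectors: keep the k
    largest-magnitude entries, zero the rest (ties broken by position)."""
    if k <= 0:
        return [0.0] * len(coeffs)
    cutoff = sorted((abs(c) for c in coeffs), reverse=True)[min(k, len(coeffs)) - 1]
    out, kept = [], 0
    for c in coeffs:
        if abs(c) >= cutoff and kept < k:
            out.append(c)       # among the k largest magnitudes: keep
            kept += 1
        else:
            out.append(0.0)     # below the cutoff (or quota filled): zero
    return out
```

In the framework above this operator would be applied to the tight-frame analysis coefficients of each training signal, alternating with an SVD-based update of the frame itself.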
Kash, Bita A; Spaulding, Aaron; Gamm, Larry; Johnson, Christopher E
2013-01-01
The dimensions of absorptive capacity (ACAP) are defined, and the importance of ACAP is established in the management literature, but the concept has not been applied to health care organizations attempting to implement multiple strategic initiatives. The aim of this study was to test the utility of ACAP by analyzing health care administrators' experiences with multiple strategic initiatives within two health systems. Results are drawn from administrators' assessments of multiple initiatives within two health systems using in-depth personal interviews with a total of 61 health care administrators. Data analysis was performed following deductive qualitative analysis guidelines. Interview transcripts were coded based on the four dimensions of ACAP: acquiring, assimilating, internalizing/transforming, and exploiting knowledge. Furthermore, we link results related to utilization of management resources, including number of key personnel involved and time consumption, to dimensions of ACAP. Participants' description of multiple strategic change initiatives confirmed the importance of the four ACAP dimensions. ACAP can be a useful framework to assess organizational capacity with respect to the organization's ability to concurrently implement multiple strategic initiatives. This capacity specifically revolves around human capital requirements from upper management based on the initiatives' location or stage within the ACAP framework. Strategic change initiatives in health care can be usefully viewed from an ACAP perspective. There is a tendency for those strategic initiatives ranking higher in priority and time consumption to reflect more advanced dimensions of ACAP (assimilate and transform), whereas few initiatives were identified in the ACAP "exploit" dimension. 
This may suggest that health care leaders tend to no longer identify as strategic initiatives those innovations that have moved to the exploitation stage or that less attention is given to the exploitation elements of a strategic initiative than to the earlier stages.
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
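The coverage and skew statistics reported here are straightforward to compute once each prescription record carries a product code. A sketch with invented codes (not actual NDCs):

```python
from collections import Counter

def coverage_percent(prescription_codes, dkb_codes):
    """Percent of total prescription volume whose product code the DKB maps."""
    covered = sum(1 for code in prescription_codes if code in dkb_codes)
    return 100.0 * covered / len(prescription_codes)

def percent_codes_for_share(prescription_codes, share):
    """Percent of distinct codes needed to account for `share` of the
    message volume, taking codes in descending order of frequency."""
    counts = Counter(prescription_codes)
    total = len(prescription_codes)
    running = needed = 0
    for _, n in counts.most_common():
        running += n
        needed += 1
        if running / total >= share:
            break
    return 100.0 * needed / len(counts)
```

Volume-weighted coverage (counting every record, not every distinct code) is what makes the skew matter: mapping the handful of high-frequency codes already captures most of the message volume.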
A hadron-nucleus collision event generator for simulations at intermediate energies
NASA Astrophysics Data System (ADS)
Ackerstaff, K.; Bisplinghoff, J.; Bollmann, R.; Cloth, P.; Diehl, O.; Dohrmann, F.; Drüke, V.; Eisenhardt, S.; Engelhardt, H. P.; Ernst, J.; Eversheim, P. D.; Filges, D.; Fritz, S.; Gasthuber, M.; Gebel, R.; Greiff, J.; Gross, A.; Gross-Hardt, R.; Hinterberger, F.; Jahn, R.; Lahr, U.; Langkau, R.; Lippert, G.; Maschuw, R.; Mayer-Kuckuk, T.; Mertler, G.; Metsch, B.; Mosel, F.; Paetz gen. Schieck, H.; Petry, H. R.; Prasuhn, D.; von Przewoski, B.; Rohdjeß, H.; Rosendaal, D.; Roß, U.; von Rossen, P.; Scheid, H.; Schirm, N.; Schulz-Rojahn, M.; Schwandt, F.; Scobel, W.; Sterzenbach, G.; Theis, D.; Weber, J.; Wellinghausen, A.; Wiedmann, W.; Woller, K.; Ziegler, R.; EDDA-Collaboration
2002-10-01
Several available codes for hadronic event generation and shower simulation are discussed and their predictions are compared to experimental data in order to obtain a satisfactory description of hadronic processes in Monte Carlo studies of detector systems for medium energy experiments. The most reasonable description is found for the intra-nuclear-cascade (INC) model of Bertini, which employs a microscopic description of the INC, taking into account elastic and inelastic pion-nucleon and nucleon-nucleon scattering. The isobar model of Sternheimer and Lindenbaum is used to simulate the inelastic elementary collisions inside the nucleus via formation and decay of the Δ33-resonance, which, however, limits the model at higher energies. To overcome this limitation, the INC model has been extended by using the resonance model of the HADRIN code, considering all resonances in elementary collisions contributing more than 2% to the total cross-section up to kinetic energies of 5 GeV. In addition, angular distributions based on phase shift analysis are used for elastic nucleon-nucleon as well as elastic and charge exchange pion-nucleon scattering. Kaons and antinucleons can also be treated as projectiles. Good agreement with experimental data is found predominantly for lower projectile energies, i.e. in the regime of the Bertini code. The original as well as the extended Bertini model have been implemented as shower codes in the high energy detector simulation package GEANT-3.14, now allowing its use in full Monte Carlo studies of detector systems at intermediate energies. GEANT-3.14 has been used here mainly for its powerful geometry and analysis packages, owing to the complexity of the EDDA detector system.
ERIC Educational Resources Information Center
Litman, Cindy; Marple, Stacy; Greenleaf, Cynthia; Charney-Sirott, Irisa; Bolz, Michael J.; Richardson, Lisa K.; Hall, Allison H.; George, MariAnne; Goldman, Susan R.
2017-01-01
This study presents a descriptive analysis of 71 videotaped lessons taught by 34 highly regarded secondary English language arts, history, and science teachers, collected to inform an intervention focused on evidence-based argumentation from multiple text sources. Studying the practices of highly regarded teachers is valuable for identifying…
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where, in the first stage, the system generates the equation set associating the physiological model variable values at a certain time t with the values at t + Δt. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
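The two-stage idea — first build the t to t + Δt update equations for a chosen scheme, then emit runnable simulation code — can be sketched for explicit Euler. The FitzHugh-Nagumo right-hand sides and parameter values below are a standard illustrative form, not the CellML Compiler's actual output:

```python
def generate_euler_step(states, rhs_exprs):
    """Both stages in miniature: from derivative expressions, emit Python
    source for one explicit-Euler update y(t + dt) = y(t) + dt * f(y, t)."""
    args = ", ".join(states)
    lines = [f"def step({args}, t, dt):"]
    for name in states:                   # stage 1: the update equations
        lines.append(f"    d_{name} = {rhs_exprs[name]}")
    for name in states:                   # stage 2: apply the solving scheme
        lines.append(f"    {name} = {name} + dt * d_{name}")
    lines.append(f"    return {args}")
    return "\n".join(lines)

# FitzHugh-Nagumo model with illustrative parameter values
fhn_rhs = {
    "v": "v - v**3 / 3 - w + 0.5",
    "w": "0.08 * (v + 0.7 - 0.8 * w)",
}
namespace = {}
exec(generate_euler_step(["v", "w"], fhn_rhs), namespace)
step = namespace["step"]  # the generated simulation function
```

Swapping the second loop for a different scheme template (e.g. Runge-Kutta stages) is exactly the flexibility the scheme description language provides.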
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R
2016-09-19
PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into place to create multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable for PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e. mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid.
Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
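At its core, the merging step a tool like MT_RadialLattice.x performs is concatenating node lists and re-indexing element connectivity. A minimal sketch of that bookkeeping on generic (nodes, elements) pairs — not the PROTEUS file format, and omitting the welding of coincident boundary nodes that a real merge also needs:

```python
def merge_meshes(mesh_a, mesh_b):
    """Concatenate two (nodes, elements) meshes, shifting the second mesh's
    element connectivity by the first mesh's node count."""
    nodes_a, elems_a = mesh_a
    nodes_b, elems_b = mesh_b
    offset = len(nodes_a)
    shifted = [tuple(n + offset for n in elem) for elem in elems_b]
    return nodes_a + nodes_b, elems_a + shifted
```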
Major trauma: the unseen financial burden to trauma centres, a descriptive multicentre analysis.
Curtis, Kate; Lam, Mary; Mitchell, Rebecca; Dickson, Cara; McDonnell, Karon
2014-02-01
This research examines the existing funding model for in-hospital trauma patient episodes in New South Wales (NSW), Australia and identifies factors that cause above-average treatment costs. Accurate information on the treatment costs of injury is needed to guide health-funding strategy and prevent inadvertent underfunding of specialist trauma centres, which treat a high trauma casemix. Admitted trauma patient data provided by 12 trauma centres were linked with financial data for 2008-09. Actual costs incurred by each hospital were compared with state-wide Australian Refined Diagnostic Related Groups (AR-DRG) average costs. Patient episodes where the actual cost was higher than the AR-DRG cost allocation were examined. There were 16,693 patients at a total cost of AU$178.7 million. The total costs incurred by trauma centres were $14.7 million above the NSW peer-group average cost estimates. There were 10 AR-DRGs where the total cost variance was greater than $500,000. The AR-DRGs with the largest proportion of patients were the upper limb injury categories, many of whom had multiple body regions injured and/or a traumatic brain injury (P<0.001). AR-DRG classifications do not adequately describe the trauma patient episode and are not commensurate with the expense of trauma treatment. A revision of the AR-DRGs used for trauma is needed. WHAT IS KNOWN ABOUT THIS TOPIC? Severely injured trauma patients often have multiple injuries, in more than one body region, and the determination of the appropriate AR-DRG can be difficult. Pilot research suggests that the AR-DRGs do not accurately represent the care that is required for these patients. WHAT DOES THIS PAPER ADD? This is the first multicentre analysis of treatment costs and coding variance for major trauma in Australia. This research identifies the limitations of the current AR-DRGs and those that are particularly problematic. The value of linking trauma registry and financial data within each trauma centre is demonstrated.
WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? Further work should be conducted between trauma services, clinical coding and finance departments to improve the accuracy of clinical coding, review funding models and ensure that AR-DRG allocation is commensurate with the expense of trauma treatment.
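The episode-level comparison against AR-DRG average allocations reduces to summing signed variances per group and flagging the large ones. A sketch with invented DRG labels and dollar figures (the $500,000 threshold is the one used in the analysis above):

```python
def cost_variance_by_drg(episodes, drg_average):
    """Sum (actual cost - AR-DRG average allocation) per DRG.
    Positive totals indicate episodes costing more than the funding model pays."""
    variance = {}
    for drg, actual_cost in episodes:
        variance[drg] = variance.get(drg, 0.0) + (actual_cost - drg_average[drg])
    return variance

def flag_underfunded(variance, threshold=500_000):
    """DRGs whose total cost variance exceeds the reporting threshold."""
    return sorted(drg for drg, v in variance.items() if v > threshold)
```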
Older Adult Spouses with Multiple Chronic Conditions: Challenges, Rewards, and Coping Strategies.
Peacock, Shelley; Sethi, Bharati; Williams, Allison; Duggleby, Wendy; Bayly, Melanie; Swindle, Jenny; Ploeg, Jenny; Markle-Reid, Maureen
2017-06-01
There is a paucity of research exploring how spouses to older adults with multiple chronic conditions make meaning of their caregiving experience. For this study, we asked: What is the experience of spousal caregivers to persons with multiple chronic conditions? We applied Thorne's interpretive description approach, interviewing 18 spouses who provided a rich description of their caregiving experience; interviews were transcribed verbatim and thematically analysed. Themes were categorized according to challenges encountered, rewards gleaned, and sustaining strategies employed by participants in caregiving to their spouse with multiple chronic conditions. Unique findings relate to the challenges inherent in decision-making within the context of multiple chronic conditions. This article begins to address the gap in the literature regarding the caregiving experience within the context of multiple chronic conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belousov, V. I.; Ezhela, V. V.; Kuyanov, Yu. V., E-mail: Yu.Kuyanov@gmail.com
The experience of using the dynamic atlas of experimental data and the mathematical models describing them in problems of adjusting parametric models of observables that depend on kinematic variables is presented. The capability of visualizing a large body of experimental data together with the models describing them is shown by examples of data and models of observables determined by the amplitudes of elastic hadron scattering. The Internet implementation of the interactive tool DaMoScope and its interface with the experimental data and with the codes of adjusted parametric models, with the parameters of the best description of the data, is shown schematically. The DaMoScope codes are freely available.
A meta-analysis of research on science teacher education practices associated with inquiry strategy
NASA Astrophysics Data System (ADS)
Sweitzer, Gary L.; Anderson, Ronald D.
A meta-analysis was conducted of studies of teacher education having as measured outcomes one or more variables associated with inquiry teaching. Inquiry addresses those teacher behaviors that facilitate student acquisition of concepts and processes through strategies such as problem solving, uses of evidence, logical and analytical reasoning, clarification of values, and decision making. Studies that contained sufficient data for the calculation of an effect size were coded for 114 variables. These variables were divided into the following six major categories: study information and design characteristics, teacher and teacher trainee characteristics, student characteristics, treatment description, outcome description, and effect size calculation. A total of 68 studies resulting in 177 effect size calculations were coded. Mean effect sizes broken down across selected variables were calculated.
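The effect-size aggregation described in this abstract can be sketched with a few lines of arithmetic. This is a minimal illustration assuming the standardized mean difference (Cohen's d with a pooled standard deviation) as the effect-size metric; the study data below are hypothetical, and the paper's exact formulas are not given in the abstract.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (treatment vs. control) using the
    pooled standard deviation -- one common effect-size definition."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study results: (mean_t, mean_c, sd_t, sd_c, n_t, n_c)
studies = [(12.0, 10.0, 4.0, 4.0, 30, 30),
           (8.5, 8.0, 2.0, 2.0, 25, 25)]

effects = [cohens_d(*s) for s in studies]
mean_effect = sum(effects) / len(effects)   # unweighted mean effect size
```

Real meta-analyses usually weight each effect by its sampling variance rather than taking the unweighted mean shown here.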
Distributed magnetic field positioning system using code division multiple access
NASA Technical Reports Server (NTRS)
Prigge, Eric A. (Inventor)
2003-01-01
An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large "building-sized" coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
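The decomposition step described in this abstract, measuring a single sum field and separating it into per-beacon components, can be sketched with orthogonal spreading codes. A minimal illustration only: the Walsh codes, per-beacon amplitudes, and noiseless field model below are hypothetical and not taken from the patent.

```python
import numpy as np

def walsh_codes(order):
    """Build a 2**order x 2**order Hadamard matrix whose rows serve as
    mutually orthogonal spreading codes, one per beacon."""
    H = np.array([[1.0]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(2)               # 4 chips, 4 orthogonal codes
amplitudes = [0.7, -1.2, 0.4, 2.0]   # per-beacon field strength at the sensor

# The sensor observes only the sum of all coded beacon fields.
sum_field = sum(a * c for a, c in zip(amplitudes, codes))

# Despreading: correlate with each code and normalize by the chip count
# to recover each beacon's contribution from the single measurement.
recovered = codes @ sum_field / codes.shape[1]
```

With orthogonal codes the correlation isolates each beacon exactly; a real system would then fit a dipole field model to the recovered amplitudes to solve for position and attitude.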
Measurement of neutron spectra in the AWE workplace using a Bonner sphere spectrometer.
Danyluk, Peter
2010-12-01
A Bonner sphere spectrometer has been used to measure the neutron spectra in eight different workplace areas at AWE (Atomic Weapons Establishment). The spectra were analysed by the National Physical Laboratory using their principal unfolding code STAY'SL, and the results were also analysed by AWE using a bespoke parametrised unfolding code. The bespoke code was designed specifically for the AWE workplace and is very simple to use. Both codes gave results in good agreement. It was found that the measured fluence rate varied from 2 to 70 neutrons cm⁻² s⁻¹ (± 10%) and the ambient dose equivalent H*(10) varied from 0.5 to 57 µSv h⁻¹ (± 20%). A detailed description of the development and use of the bespoke code is presented.
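The unfolding problem underlying both codes, sphere count rates equal to a response matrix times the group fluences, can be sketched as a least-squares inversion. A toy illustration only: the response matrix below is invented, and STAY'SL itself performs a covariance-weighted adjustment rather than this bare unweighted fit.

```python
import numpy as np

# Toy response matrix R: counts registered by each of four Bonner
# spheres per unit fluence in each of three energy groups
# (the values are illustrative, not real sphere responses).
R = np.array([[0.9, 0.3, 0.1],
              [0.4, 0.8, 0.3],
              [0.1, 0.4, 0.9],
              [0.2, 0.6, 0.5]])

true_spectrum = np.array([2.0, 5.0, 1.0])   # group fluence rates
counts = R @ true_spectrum                  # ideal, noise-free readings

# Unfold: least-squares solution of R @ spectrum = counts.
spectrum, *_ = np.linalg.lstsq(R, counts, rcond=None)
```

Production unfolding codes also enforce non-negativity of the spectrum and propagate measurement and cross-section covariances into the result.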
TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.
Potential flow theory and operation guide for the panel code PMARC
NASA Technical Reports Server (NTRS)
Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steve K.; Browne, Lindsey; Katz, Joseph
1991-01-01
The theoretical basis for PMARC, a low-order potential-flow panel code for modeling complex three-dimensional geometries, is outlined. Several of the advanced features currently included in the code, such as internal flow modeling, a simple jet model, and a time-stepping wake model, are discussed in some detail. The code is written using adjustable size arrays so that it can be easily redimensioned for the size problem being solved and the computer hardware being used. An overview of the program input is presented, with a detailed description of the input available in the appendices. Finally, PMARC results for a generic wing/body configuration are compared with experimental data to demonstrate the accuracy of the code. The input file for this test case is given in the appendices.
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months effort in the development of a user oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.
EBT reactor systems analysis and cost code: description and users guide (Version 1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santoro, R.T.; Uckan, N.A.; Barnes, J.M.
1984-06-01
An ELMO Bumpy Torus (EBT) reactor systems analysis and cost code that incorporates the most recent advances in EBT physics has been written. The code determines a set of reactors that fall within an allowed operating window determined from the coupling of ring and core plasma properties and the self-consistent treatment of the coupled ring-core stability and power balance requirements. The essential elements of the systems analysis and cost code are described, along with the calculational sequences leading to the specification of the reactor options and their associated costs. The input parameters, the constraints imposed upon them, and the operating range over which the code provides valid results are discussed. A sample problem and the interpretation of the results are also presented.
Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system
NASA Astrophysics Data System (ADS)
Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.
2017-11-01
Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for high capacity and speed in optical communication networks, because the high efficiency achievable with the OCDMA technique allows the fibre bandwidth to be fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with an AND detection technique, using the Optisystem design tool. The results revealed that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), the Phase Induced Intensity Noise (PIIN), and the orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes; this enhances the system by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ achieves a good BER performance of 10⁻²⁵ at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps, and 2 Gbps bit rates. From the plotted graphs, a bit rate of 155 Mbps is fast enough for FTTH and LAN networks. These conclusions are based on the superior performance of the SeQ code; thus, these codes offer the OCDMA system an opportunity for better quality of service in optical access networks for future generations.
Distributed reservation-based code division multiple access
NASA Astrophysics Data System (ADS)
Wieselthier, J. E.; Ephremides, A.
1984-11-01
The use of spread spectrum signaling, motivated primarily by its antijamming capabilities in military applications, leads naturally to the use of Code Division Multiple Access (CDMA) techniques that permit the successful simultaneous transmission by a number of users over a wideband channel. In this paper we address some of the major issues that are associated with the design of multiple access protocols for spread spectrum networks. We then propose, analyze, and evaluate a distributed reservation-based multiple access protocol that does in fact exploit CDMA properties. Especially significant is the fact that no acknowledgment or feedback information from the destination is required (thus facilitating communication in a radio-silent mode), nor is any form of coordination among the users necessary.
A Counterexample Guided Abstraction Refinement Framework for Verifying Concurrent C Programs
2005-05-24
source code are routinely executed. The source code is written in languages ranging from C/C++/Java to ML/OCaml. These languages differ not only in... from the difficulty of modeling computer programs—due to the complexity of programming languages as compared to hardware description languages—to... intermediate specification language lying between high-level Statechart-like formalisms and transition systems. Actions are encoded as changes in
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. The governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models are described in detail.
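The fully-coupled ADI time-marching described above factors each implicit time step into sweeps of one-dimensional tridiagonal solves. The following is a minimal sketch of that idea applied to the 2-D heat equation (Peaceman-Rachford splitting with a Thomas solver), not to the Navier-Stokes system that Proteus actually solves; the grid size and coefficients are arbitrary.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square
    grid with zero Dirichlet boundaries; r = dt / (2*h*h)."""
    n = u.shape[0]
    a = np.full(n - 2, -r); b = np.full(n - 2, 1 + 2 * r); c = np.full(n - 2, -r)
    a[0] = c[-1] = 0.0
    half = u.copy()
    # First half-step: implicit in x, explicit in y (interior only).
    for j in range(1, n - 1):
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    out = half.copy()
    # Second half-step: implicit in y, explicit in x.
    for i in range(1, n - 1):
        rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
        out[i, 1:-1] = thomas(a, b, c, rhs)
    return out

u = np.zeros((17, 17)); u[8, 8] = 1.0   # initial hot spot in the middle
for _ in range(10):
    u = adi_step(u, r=0.25)             # heat spreads symmetrically
```

The payoff of the splitting is that each sweep costs only a set of O(n) tridiagonal solves while remaining unconditionally stable, which is why ADI-type factorizations are attractive for implicit flow solvers.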
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D has been developed to solve the three dimensional, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort has been to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation have been emphasized. The governing equations are solved in generalized non-orthogonal body-fitted coordinates by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the Analysis Description, and presents the equations and solution procedure. It describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow; in 2011, the ASCL added on average 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Mobile Code: The Future of the Internet
1999-01-01
code (mobile agents) to multiple proxies or servers. "Customization" (e.g., re-formatting, filtering, metasearch); information overload; diversified... Mobile code is necessary, rather than client-side code, since many customization features (such as information monitoring) do not work if the... economic foundation for Web sites; many Web sites earn money solely from advertisements. If these sites allow mobile agents to easily access the content
Park, Seong C; Finnell, John T
2012-01-01
In 2009, Indianapolis launched an electronic medical record system within their ambulances and started to exchange patient data with the Indiana Network for Patient Care (INPC). This unique system allows EMS personnel to get important information prior to the patient's arrival at the hospital. In this descriptive study, we found EMS personnel requested patient data on 14% of all transports, with a "success" match rate of 46% and a match "failure" rate of 17%. The three major factors causing match "failure" were ZIP code (55%), patient name (22%), and birth date (12%). We conclude that the ZIP code matching process needs to be improved by limiting matching to the 5-digit ZIP code instead of using the ZIP+4 code. Non-ZIP code identifiers may be a better choice due to inaccuracies and changes of the ZIP code in a patient's record.
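The fix proposed in this abstract, matching on the 5-digit ZIP rather than ZIP+4, amounts to normalizing the field before comparison. A minimal sketch; the function name and input formats are hypothetical, not from the study's system.

```python
import re

def normalize_zip(zip_code):
    """Reduce a ZIP or ZIP+4 string to its 5-digit form for matching;
    return None if no 5-digit prefix can be found."""
    m = re.match(r"\s*(\d{5})(?:-?\d{4})?\s*$", str(zip_code))
    return m.group(1) if m else None

# The same patient recorded with ZIP+4 in one system and plain ZIP in
# another now matches on the normalized value.
assert normalize_zip("46202-5167") == normalize_zip("46202") == "46202"
```

Normalizing both records to five digits removes one common source of false non-matches, at the cost of slightly coarser geographic resolution.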
On the evolution of primitive genetic codes.
Weberndorfer, Günter; Hofacker, Ivo L; Stadler, Peter F
2003-10-01
The primordial genetic code was probably a drastically simplified ancestor of the canonical code that is used by contemporary cells. In order to understand how the present-day code came about, we first need to explain how the language of the building plan can change without destroying the encoded information. In this work we introduce a minimal organism model that is based on biophysically reasonable descriptions of RNA and protein, namely secondary structure folding and knowledge-based potentials. The evolution of a population of such organisms under competition for a common resource is simulated explicitly at the level of individual replication events. Starting with very simple codes, and hence greatly reduced amino acid alphabets, we observe a diversification of the codes in most simulation runs. The driving force behind this effect is the possibility of producing fitter proteins when the repertoire of amino acids is enlarged.
Astrophysics Source Code Library: Incite to Cite!
NASA Astrophysics Data System (ADS)
DuPrie, K.; Allen, A.; Berriman, B.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P.; Wallen, J. F.
2014-05-01
The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an on-line registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.
High Order Modulation Protograph Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods are presented for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
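The circulant lifting in the second stage can be sketched as replacing each edge of the protograph with a cyclically shifted identity block. The base matrix, shift values, and lifting size below are hypothetical, and only the circulant stage is shown, not the intermediate lifting described in the abstract.

```python
import numpy as np

def circulant_lift(proto, shifts, Z):
    """Lift a protograph base matrix: each 1 in `proto` becomes a Z x Z
    identity cyclically shifted by the matching entry of `shifts`;
    each 0 becomes a Z x Z zero block."""
    m, n = proto.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if proto[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
    return H

# Hypothetical 2 x 4 protograph and shift values (not from the patent).
proto  = np.array([[1, 1, 1, 0],
                   [0, 1, 1, 1]])
shifts = np.array([[0, 1, 2, 0],
                   [0, 0, 1, 2]])
H = circulant_lift(proto, shifts, Z=4)   # 8 x 16 parity-check matrix
```

The lifted matrix preserves the protograph's row and column weights, which is why the degree distribution (and much of the decoding behavior) of the LDPC code can be designed on the small base graph.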
DPADL: An Action Language for Data Processing Domains
NASA Technical Reports Server (NTRS)
Golden, Keith; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper presents DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which will provide automation for an ecosystem forecasting system called TOPS.
XPOSE: the Exxon Nuclear revised LEOPARD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skogen, F.B.
1975-04-01
Main differences between XPOSE and LEOPARD codes used to generate fast and thermal neutron spectra and cross sections are presented. Models used for fast and thermal spectrum calculations as well as the depletion calculations considering U-238 chain, U-235 chain, xenon and samarium, fission products and boron-10 are described. A detailed description of the input required to run XPOSE and a description of the output are included. (FS)
Single-shot secure quantum network coding on butterfly network with free public communication
NASA Astrophysics Data System (ADS)
Owari, Masaki; Kato, Go; Hayashi, Masahito
2018-01-01
Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple unicast setting under restricted eavesdropper's power. This protocol reliably transmits quantum states when there is no attack. We also show the secrecy, with shared randomness as an additional resource, when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through public classical communication. Our protocol does not require a verification process, which ensures single-shot security.
Hu, Long; Xu, Zhiyu; Hu, Boqin; Lu, Zhi John
2017-01-09
Recent genomic studies suggest that novel long non-coding RNAs (lncRNAs) are specifically expressed and far outnumber annotated lncRNA sequences. To identify and characterize novel lncRNAs in RNA sequencing data from new samples, we have developed COME, a coding potential calculation tool based on multiple features. It integrates multiple sequence-derived and experiment-based features using a decompose-compose method, which makes it more accurate and robust than other well-known tools. We also showed that COME was able to substantially improve the consistency of prediction results from other coding potential calculators. Moreover, COME annotates and characterizes each predicted lncRNA transcript with multiple lines of supporting evidence, which are not provided by other tools. Remarkably, we found that one subgroup of lncRNAs classified by such supporting features (i.e. conserved local RNA secondary structure) was highly enriched in a well-validated database (lncRNAdb). We further found that the conserved structural domains on lncRNAs had a better chance than other RNA regions to interact with RNA binding proteins, based on the recent eCLIP-seq data in human, indicating their potential regulatory roles. Overall, we present COME as an accurate, robust and multiple-feature supported method for the identification and characterization of novel lncRNAs. The software implementation is available at https://github.com/lulab/COME. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Progress toward the development of an aircraft icing analysis capability
NASA Technical Reports Server (NTRS)
Shaw, R. J.
1984-01-01
An overview of the NASA efforts to develop an aircraft icing analysis capability is presented. Discussions are included of the overall and long term objectives of the program as well as current capabilities and limitations of the various computer codes being developed. Descriptions are given of codes being developed to analyze two and three dimensional trajectories of water droplets, airfoil ice accretion, aerodynamic performance degradation of components and complete aircraft configurations, electrothermal deicer, and fluid freezing point depressant deicer. The need for benchmark and verification data to support the code development is also discussed.
A numerical simulation of the flow in the diffuser of the NASA Lewis icing research tunnel
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.; Keith, Theo G., Jr.
1990-01-01
The flow in the diffuser section of the Icing Research Tunnel at the NASA Lewis Research Center is numerically investigated. To accomplish this, an existing computer code is utilized. The code, known as PARC3D, is based on the Beam-Warming algorithm applied to the strong conservation law form of the complete Navier-Stokes equations. The first portion of the paper consists of a brief description of the diffuser and its current flow characteristics. A brief discussion of how the code works follows. Predicted velocity patterns are then compared with the measured values.
NASA Technical Reports Server (NTRS)
Suhs, Norman E.; Dietz, William E.; Rogers, Stuart E.; Nash, Steven M.; Onufer, Jeffrey T.
2000-01-01
PEGASUS 5.1 is the latest version of the PEGASUS series of mesh interpolation codes. It is a fully three-dimensional code. The main purpose for the development of this latest version was to significantly decrease the number of user inputs required and to allow for easier operation of the code. This guide is to be used with the user's manual for version 4 of PEGASUS. A basic description of methods used in both versions is described in the Version 4 manual. A complete list of all user inputs used in version 5.1 is given in this guide.
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
Advances in Computational Capabilities for Hypersonic Flows
NASA Technical Reports Server (NTRS)
Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip
1997-01-01
The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980s to the present day. The current status of code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance accuracy, reliability, efficiency, and robustness of computational codes are discussed.
Rapid Assessment of Agility for Conceptual Design Synthesis
NASA Technical Reports Server (NTRS)
Biezad, Daniel J.
1996-01-01
This project consists of designing and implementing a real-time graphical interface for a workstation-based flight simulator. It is capable of creating a three-dimensional out-the-window scene of the aircraft's flying environment, with extensive information about the aircraft's state displayed in the form of a heads-up-display (HUD) overlay. The code, written in the C programming language, makes calls to Silicon Graphics' Graphics Library (GL) to draw the graphics primitives. Included in this report is a detailed description of the capabilities of the code, including graphical examples, as well as a printout of the code itself.
NASA Astrophysics Data System (ADS)
Cherubin, S.; Agosta, G.
2018-01-01
We present LIBVERSIONINGCOMPILER, a C++ library designed to support the dynamic generation of multiple versions of the same compute kernel in a HPC scenario. It can be used to provide continuous optimization, code specialization based on the input data or on workload changes, or otherwise to dynamically adjust the application, without the burden of a full dynamic compiler. The library supports multiple underlying compilers but specifically targets the LLVM framework. We also provide examples of use, showing the overhead of the library, and providing guidelines for its efficient use.
Quantum internet using code division multiple access
Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco
2013-01-01
A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488
Tagiyeva, Nara; Semple, Sean; Devereux, Graham; Sherriff, Andrea; Henderson, John; Elias, Peter; Ayres, Jon G
2011-06-01
Most of the evidence on agreement between self- and proxy-reported occupational data comes from interview-based studies. The authors aimed to examine agreement between women's reports of their partner's occupation and their partner's own description using questionnaire-based data collected as a part of the prospective, population-based Avon Longitudinal Study of Parents and Children. Information on present occupation was self-reported by women's partners and proxy-reported by women through questionnaires administered at 8 and 21 months after the birth of a child. Job titles were coded to the Standard Occupational Classification (SOC2000) using software developed by the University of Warwick (Computer-Assisted Structured Coding Tool). The accuracy of proxy-report was expressed as percentage agreement and kappa coefficients for four-, three- and two-digit SOC2000 codes obtained in automatic and semiautomatic (manually improved) coding modes. Data from 6016 couples at 8 months and 5232 couples at 21 months postnatally were included in the analyses. The agreement between men's self-reported occupation and women's report of their partner's occupation in fully automatic coding mode at four-, three- and two-digit code level was 65%, 71% and 77% at 8 months and 68%, 73% and 76% at 21 months. The accuracy of agreement was slightly improved by semiautomatic coding of occupations: 73%/73%, 78%/77% and 83%/80% at 8/21 months respectively. While this suggests that women's description of their partners' occupation can be used as a valuable tool in epidemiological research where data from partners are not available, this study revealed no agreement between these young women and their partners at the two-digit level of SOC2000 coding in approximately one in five cases. Proxy reporting of occupation introduces a statistically significant degree of error in classification. 
The effects of occupational misclassification by proxy reporting in retrospective occupational epidemiological studies based on questionnaire data should be considered.
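The agreement statistics this study reports, percentage agreement and kappa coefficients, can be computed as follows. This is a minimal sketch with invented two-digit SOC-style codes; the study's actual data, coding software, and any weighting are not reproduced.

```python
from collections import Counter

def kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters corrected
    for the agreement expected by chance from each rater's marginal
    category frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Toy two-digit SOC-style codes: partner's own report vs. proxy report.
self_report  = ["11", "11", "21", "21", "53", "53", "53", "91"]
proxy_report = ["11", "11", "21", "53", "53", "53", "91", "91"]
agreement = kappa(self_report, proxy_report)
```

Percentage agreement alone overstates reliability when a few occupation codes dominate; kappa discounts the matches that two raters would produce by chance.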
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
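One plausible reading of the quoted "slightly more than 136 years" is a free-running 32-bit seconds counter, since 2^32 seconds is about 136.1 years, versus the daily wrap of conventional 24-hour timecode. The abstract does not state the counter width, so this is an assumption; the check below only verifies the arithmetic.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, in seconds

smpte_wrap_hours = 24   # conventional video timecode repeats every day
counter_bits = 32       # hypothetical 1 Hz free-running counter
wrap_seconds = 2 ** counter_bits
wrap_years = wrap_seconds / SECONDS_PER_YEAR
print(f"A 32-bit seconds counter wraps after {wrap_years:.1f} years")
```

The result, roughly 136.1 years, matches the non-repeat interval claimed for the Geo-TimeCode system.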
Transient dynamics capability at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.
1993-01-01
A brief overview of the transient dynamics capabilities at Sandia National Laboratories, with an emphasis on recent new developments and current research, is presented. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS runs on several different systems at SNL, including Cray Unicos, Hewlett-Packard HP-UX, Digital Equipment Ultrix, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, is presented. Abstracts and references for the codes are listed at the end of the report.
Geometric descriptions of entangled states by auxiliary varieties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holweck, Frederic; Luque, Jean-Gabriel; Thibon, Jean-Yves
2012-10-15
The aim of the paper is to propose geometric descriptions of multipartite entangled states using algebraic geometry. In the context of this paper, geometric means each stratum of the Hilbert space, corresponding to an entangled state, is an open subset of an algebraic variety built by classical geometric constructions (tangent lines, secant lines) from the set of separable states. In this setting, we describe well-known classifications of multipartite entanglement such as 2 × 2 × (n+1), for n ≥ 1, quantum systems and a new description with the 2 × 3 × 3 quantum system. Our results complete the approach of Miyake and make stronger connections with recent work of algebraic geometers. Moreover, for the quantum systems detailed in this paper, we propose an algorithm, based on the classical theory of invariants, to decide to which subvariety of the Hilbert space a given state belongs.
Canada's international response to HIV during times of global transition: a qualitative inquiry.
Nixon, Stephanie
2011-04-01
Canada's international response to HIV may be under threat given CIDA's new aid priorities that appear to exclude health. Drivers of this recent priority shift have been the influence of global aid trends among public sector donors and changes within the global HIV milieu itself. However, this is not the first time Canada has shifted in response to these two global trends. The era from 2000-2004 also witnessed dramatic changes in both the HIV field and in global thinking around international aid. As such, this article presents an evaluation of the Government of Canada's international response to HIV during the first era of transition (2000-2004) in order to derive lessons for decision-making around HIV in the current climate of change. In-depth, semi-structured interviews were conducted with 23 key informants with expertise regarding Canada's international response to HIV over time. Analysis involved multiple readings of transcripts to identify descriptive codes and establish intimacy with the data. Descriptive codes were then collapsed into thematic categories using a process of inductive reasoning. Canada's international response to HIV was perceived to be exemplary at times (e.g. seminal funding to WHO's "3-by-5" strategy), but also inconsistent (e.g., underutilized technical assistance capacity) and non-strategic (e.g., contradiction between investing in training health providers while poaching professionals to bolster Canada's workforce). Lessons from the 2000-2004 era of transition focus on strategic investments, the inextricable connection between HIV and development and strategy coherence. 
These results highlight that it is more constructive to ensure that Canadian development responses in all areas engage with both the upstream drivers of HIV as well as the impacts of the epidemic itself in order to achieve the greatest results from international investment and the most effective contributions to the lives of the people that these endeavours seek to support.
Variation of SNOMED CT coding of clinical research concepts among coding experts.
Andrews, James E; Richesson, Rachel L; Krischer, Jeffrey
2007-01-01
To compare consistency of coding among professional SNOMED CT coders representing three commercial providers of coding services when coding clinical research concepts with SNOMED CT. A sample of clinical research questions from case report forms (CRFs) generated by the NIH-funded Rare Disease Clinical Research Network (RDCRN) were sent to three coding companies with instructions to code the core concepts using SNOMED CT. The sample consisted of 319 question/answer pairs from 15 separate studies. The companies were asked to select SNOMED CT concepts (in any form, including post-coordinated) that capture the core concept(s) reflected in the question. Also, they were asked to state their level of certainty, as well as how precise they felt their coding was. Basic frequencies were calculated to determine raw agreement levels among the companies and other descriptive information. Krippendorff's alpha was used to determine a statistical measure of agreement among the coding companies for several measures (semantic, certainty, and precision). No significant level of agreement among the experts was found. There is little semantic agreement in coding of clinical research data items across coders from three professional coding services, even using a very liberal definition of agreement.
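The agreement measure used in this study, Krippendorff's alpha, can be sketched for nominal data using the standard coincidence-matrix formulation. This is a generic illustration, not the study's analysis pipeline; the unit/code lists in the test are hypothetical.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of lists; each inner list holds the codes the available
    coders assigned to one item (at least 2 codes to be pairable).
    """
    coincidence = Counter()
    for values in units:
        m = len(values)
        if m < 2:
            continue  # unpairable units contribute nothing
        for a, b in permutations(values, 2):  # ordered pairs within a unit
            coincidence[(a, b)] += 1 / (m - 1)
    n_c = Counter()
    for (a, _b), w in coincidence.items():
        n_c[a] += w
    n = sum(n_c.values())
    d_obs = sum(w for (a, b), w in coincidence.items() if a != b)
    d_exp = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2)) / (n - 1)
    return 1.0 if d_exp == 0 else 1 - d_obs / d_exp
```

Perfect agreement yields alpha = 1, while observed disagreement equal to chance-expected disagreement yields alpha = 0.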
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error equal to an integer multiple of the phase-shifted fringe period, i.e., a period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns, whose period is an odd multiple of the original phase-shifted fringe period, is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
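The correction step, as we understand it from the abstract, can be sketched numerically: a reliable low-frequency absolute phase re-derives the fringe order and overrides period-jump errors in the Gray-code order. The period ratio, noise-free data, and array sizes below are hypothetical.

```python
import numpy as np

TWO_PI = 2 * np.pi
r = 3  # low-frequency period is an odd multiple (r) of the high-frequency one

# Ground-truth high-frequency absolute phase across one camera row
phi_abs = np.linspace(0.3, 7.6 * TWO_PI, 200)

# Wrapped phase recovered by the phase-shifting step
phi_wrapped = np.mod(phi_abs, TWO_PI)

# Gray-code fringe order, with a simulated period jump error of +1 period
order = np.floor(phi_abs / TWO_PI)
order[50:60] += 1
phi_naive = phi_wrapped + TWO_PI * order  # gross error where the jump occurred

# Low-frequency absolute phase, assumed free of period jumps
phi_low = phi_abs / r

# Correction: re-derive the fringe order from the low-frequency measurement
order_corr = np.round((r * phi_low - phi_wrapped) / TWO_PI)
phi_corrected = phi_wrapped + TWO_PI * order_corr
```

The naive unwrapping inherits a full-period (2π) error over the corrupted pixels, while the corrected order removes it.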
ERIC Educational Resources Information Center
Green, Crystal D.
2010-01-01
This action research study investigated the perceptions that student participants had on the development of a career exploration model and a career exploration project. The Holland code theory was the primary assessment used for this research study, in addition to the Multiple Intelligences theory and the identification of a role model for the…
The VLSI design of a Reed-Solomon encoder using Berlekamp's bit-serial multiplier algorithm
NASA Technical Reports Server (NTRS)
Truong, T. K.; Deutsch, L. J.; Reed, I. S.; Hsu, I. S.; Wang, K.; Yeh, C. S.
1982-01-01
Realization of a bit-serial multiplication algorithm for the encoding of Reed-Solomon (RS) codes on a single VLSI chip using NMOS technology is demonstrated to be feasible. A (255, 223) RS code over the Galois field GF(2^8), represented in a dual basis, is used. The conventional RS encoder for long codes often requires look-up tables to perform the multiplication of two field elements. Berlekamp's algorithm requires only shifting and exclusive-OR operations.
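Berlekamp's dual-basis multiplier itself uses a different field representation, but the look-up-free, shift-and-XOR style of GF(2^8) multiplication it replaces tables with can be illustrated in the conventional polynomial basis. The field polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is an assumption for illustration; the 1982 chip's polynomial is not stated in the abstract.

```python
def gf_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) elements using only shifts and XORs."""
    r = 0
    while b:
        if b & 1:          # conditionally accumulate the current shift of a
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:      # reduce modulo the field polynomial
            a ^= poly
    return r
```

For example, multiplying the field elements x (0x02) and x+1 (0x03) gives x^2 + x (0x06) with no table look-ups.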
2011-09-01
… tectonically active regions such as the Middle East. For example, we previously applied the code to determine the crust and upper mantle structure … Multi-Objective Optimization (MOO) for Multiple Datasets: The primary goal of our current project is to develop a tool for estimating crustal structure that … be used to obtain crustal velocity structures by modeling broadband waveform, receiver function, and surface wave dispersion data. The code has been …
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single- or multi-amplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. Its structure is shown to consist of a combination of envelope, multiple differential, and coherent detectors, whose outputs are jointly processed by an algorithm presented in recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system demonstrate that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure yields bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while remaining reasonably simple to implement.
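The multiple-differential-detection idea underlying such receivers can be sketched in its simplest block form: choose the information sequence whose differentially encoded symbols best align with the received block, using the phase-invariant metric |Σ r_k s_k*|. This toy sketch is not the paper's receiver (no fading memory, no noise, no trellis coding is modeled); the block length and channel phase are hypothetical.

```python
import cmath
from itertools import product

QPSK_PHASES = [0.0, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]

def differential_encode(info_phases):
    s = [1 + 0j]                      # reference symbol
    for d in info_phases:
        s.append(s[-1] * cmath.exp(1j * d))
    return s

def msdd(received):
    """Multiple-symbol differential detection: maximize |sum r_k conj(s_k)|,
    which is ML for an unknown constant channel phase."""
    best, best_metric = None, -1.0
    for cand in product(QPSK_PHASES, repeat=len(received) - 1):
        s = differential_encode(cand)
        metric = abs(sum(r * z.conjugate() for r, z in zip(received, s)))
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Hypothetical DQPSK block through an unknown constant phase rotation
info = (cmath.pi / 2, cmath.pi, 0.0)
tx = differential_encode(info)
rx = [x * cmath.exp(1j * 1.234) for x in tx]
```

Despite the arbitrary phase offset, the block metric recovers the transmitted differential phases, which is the key advantage over symbol-by-symbol differential detection.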
Multiple sclerosis lesion segmentation using dictionary learning and sparse coding.
Weiss, Nick; Rueckert, Daniel; Rao, Anil
2013-01-01
The segmentation of lesions in the brain during the development of multiple sclerosis is part of the diagnostic assessment for this disease and gives information on its current severity. This laborious process is still carried out manually or semiautomatically by clinicians because published automatic approaches have not been universal enough to be widely employed in clinical practice. Thus, multiple sclerosis lesion segmentation remains an open problem. In this paper we present a new unsupervised approach addressing this problem with dictionary learning and sparse coding methods. We show its general applicability to the problem of lesion segmentation by evaluating our approach on synthetic and clinical image data and comparing it to state-of-the-art methods. Furthermore, the potential of using dictionary learning and sparse coding for such segmentation tasks is investigated and various possibilities for further experiments are discussed.
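The sparse-coding step in approaches like this one is typically a pursuit algorithm: each image patch is approximated by a few dictionary atoms, and a high reconstruction residual can flag patches the dictionary explains poorly (lesion candidates). A minimal orthogonal matching pursuit sketch, using a hypothetical orthonormal toy dictionary rather than a learned one:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse code of x over columns of D."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef           # re-fit on the support
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code, residual

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # orthonormal toy dictionary
x = 2.0 * D[:, 3] - 0.5 * D[:, 7]                   # a 2-sparse "patch"
code, residual = omp(D, x, n_nonzero=2)
```

For this noiseless, orthonormal example the true 2-sparse code is recovered exactly; real patch dictionaries are overcomplete and learned from data, so recovery is only approximate.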
Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues
NASA Astrophysics Data System (ADS)
Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.
Next-generation wireless communication networks are expected to achieve ever-increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable over the MIMO channel with the benefits of space-division multiple access. In MU-MIMO systems, the base station transmits signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for implementing the analyzed technique in a possible FPGA-based software-defined radio.
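Zero-forcing pre-coding, the technique named in the title, inverts the multi-user channel at the transmitter so each user sees only its own stream. A minimal sketch with a hypothetical 4-user, 4-antenna flat-fading downlink (power normalization and noise are omitted for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_tx = 4, 4

# Hypothetical flat-fading MU-MIMO downlink channel: one row per user
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# The effective channel seen by the users is the identity:
# each user receives only its own symbol, with no inter-user interference
H_eff = H @ W
```

In practice the precoder columns are renormalized to meet a transmit-power constraint, which is where the capacity trade-offs analyzed in the paper arise.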
The effect of multiple internal representations on context-rich instruction
NASA Astrophysics Data System (ADS)
Lasry, Nathaniel; Aulls, Mark W.
2007-11-01
We discuss n-coding, a theoretical model of multiple internal mental representations. The n-coding construct is developed from a review of cognitive and imaging data that demonstrates the independence of information processed along different modalities, such as verbal, visual, kinesthetic, logico-mathematical, and social modalities. A study testing the effectiveness of the n-coding construct in classrooms is presented. Four sections differing in the level of n-coding opportunities were compared. Apart from a traditional-instruction section used as a control group, each of the remaining three sections was given context-rich problems, which differed in the level of n-coding opportunities designed into their laboratory environment. To measure the effectiveness of the construct, problem-solving skills were assessed alongside conceptual learning using the force concept inventory. We also developed several new measures that take students' confidence in concepts into account. Our results show that the n-coding construct is useful in designing context-rich environments and can be used to increase learning gains in problem solving, conceptual knowledge, and concept confidence. Specifically, when using props in designing context-rich problems, we find n-coding to be a useful construct in guiding which additional dimensions need to be attended to.
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor using the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor, and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by the MCNPX code is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal, and SAS2H/KENO-V code systems. It is found that the k-eff value obtained by MCNPX is closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained using the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of Boron-10 in mg/cm at different stages of burnup of the unit cell. The results of this part are compared with published results obtained with the HELIOS code.
Mashnik, Stepan Georgievich; Kerby, Leslie Marie; Gudima, Konstantin K.; ...
2017-03-23
We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for the possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM, and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Lastly, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.
Vuković Rodríguez, Jadranka; Juričić, Živka
2018-05-01
Formal training in pharmacy ethics is relatively new in Croatia, and the professional code of ethics is more than 20 years old. Very little is known about how practicing pharmacists implement ethical considerations and relevant professional guidelines in their work. This study aimed to provide the first description of the perceptions and attitudes of Croatian community pharmacists toward ethics in pharmacy practice, how often they face certain ethical dilemmas, and how they resolve them. A cross-sectional survey of 252 respondents, comprising community pharmacists and pre-licensing trainees, was conducted in Zagreb, Croatia. This group accounts for 18% of licensed pharmacists in Croatia. The survey questions included four sections: general sociodemographic information, multiple-choice questions, pre-defined ethical scenarios, and ethical scenarios filled in by respondents. More than half of pharmacists (62.7%) face ethical dilemmas in everyday work. Nearly all (94.4%) are familiar with the current professional code of ethics in Croatia, but only 47.6% think that the code reflects the changes that the pharmacy profession faces today. Most pharmacists (83.3%) solve ethical dilemmas on their own, while nearly the same proportion (75.4%) think that they are not adequately trained to deal with ethical dilemmas. The pre-defined ethical scenarios experienced by the largest proportion of pharmacists are being asked to dispense a drug to someone other than the patient (93.3%), an unnecessary over-the-counter medicine (84.3%), a generic medicine clinically equivalent to the prescribed one (79.4%), or hormonal contraception over the counter (70.4%). The results demonstrate a need to improve formal pharmacy ethics education and training in how to assess ethical issues and make appropriate decisions, which implies the need for stronger collaboration between pharmacists and their professional association.
Our results also highlight an urgent need to revise and update the Croatian code of ethics for pharmacists.
Seppälä, Tuija; Hankonen, Nelli; Korkiakangas, Eveliina; Ruusuvuori, Johanna; Laitinen, Jaana
2017-08-02
Health policy papers disseminate recommendations and guidelines for the development and implementation of health promotion interventions. Such documents have rarely been investigated with regard to their assumed mechanisms of action for changing behaviour. The Theoretical Domains Framework (TDF) and the Behaviour Change Technique (BCT) Taxonomy have been used to code behaviour change intervention descriptions, but to our knowledge such "retrofitting" of policy papers has not previously been reported. This study aims first to identify targets, mediators, and change strategies for physical activity (PA) and nutrition behaviour change in Finnish policy papers on workplace health promotion, and second to assess the suitability of the Behaviour Change Wheel (BCW) approach for this purpose. We searched all national-level health policy papers in effect in Finland in August 2016 that focus on the promotion of PA and/or healthy nutrition in the workplace context (n = 6). Policy recommendations targeting employees' nutrition and PA, including sedentary behaviour (SB), were coded using the BCW, TDF, and BCT Taxonomy. A total of 125 recommendations were coded in the six policy papers and in two additional documents referenced by them. Psychological capability, physical opportunity, and social opportunity were frequently identified (22%, 31%, and 24%, respectively), whereas physical capability was almost completely absent (1%). Three TDF domains (knowledge, skills, and social influence) were observed in all papers. Multiple intervention functions and BCTs were identified in all papers, but several recommendations were too vague to be coded reliably. Influencing individuals (46%) and changing the physical environment (44%) were recommended more frequently than influencing the social environment (10%). The BCW approach appeared to be useful for analysing the content of health policy papers.
Paying more attention to underlying assumptions regarding behavioural change processes may help to identify neglected aspects in current policy, and to develop interventions based on recommendations, thus helping to increase the impact of policy papers.
The CRONOS Code for Astrophysical Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Kissmann, R.; Kleimann, J.; Krebl, B.; Wiengarten, T.
2018-06-01
We describe the magnetohydrodynamics (MHD) code CRONOS, which has been used in astrophysics and space-physics studies in recent years. CRONOS has been designed to be easily adaptable to the problem in hand, where the user can expand or exchange core modules or add new functionality to the code. This modularity comes about through its implementation using a C++ class structure. The core components of the code include solvers for both hydrodynamical (HD) and MHD problems. These problems are solved on different rectangular grids, which currently support Cartesian, spherical, and cylindrical coordinates. CRONOS uses a finite-volume description with different approximate Riemann solvers that can be chosen at runtime. Here, we describe the implementation of the code with a view toward its ongoing development. We illustrate the code’s potential through several (M)HD test problems and some astrophysical applications.
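The modularity described for CRONOS, a finite-volume update with interchangeable Riemann solvers, can be illustrated with a 1D toy problem. This sketch is not CRONOS code (which is C++ and solves MHD): it advances linear advection on a periodic grid with a pluggable interface flux, here a Rusanov (local Lax-Friedrichs) solver alongside a simple upwind alternative.

```python
import numpy as np

A = 1.0  # advection speed for u_t + A u_x = 0

def flux(u):
    return A * u

def upwind(uL, uR):
    # Interface flux taken from the upwind side
    return flux(uL) if A >= 0 else flux(uR)

def rusanov(uL, uR):
    # Central flux plus dissipation scaled by the fastest local wave speed
    smax = abs(A)
    return 0.5 * (flux(uL) + flux(uR)) - 0.5 * smax * (uR - uL)

def step(u, dx, dt, riemann):
    """One conservative finite-volume step on a periodic grid.
    F[i] is the flux at the interface between cells i-1 and i."""
    uL, uR = np.roll(u, 1), u
    F = np.array([riemann(a, b) for a, b in zip(uL, uR)])
    return u - dt / dx * (np.roll(F, -1) - F)

n = 64
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u0 = np.exp(-100 * (x - 0.5) ** 2)   # smooth initial pulse
dt = 0.5 * dx / abs(A)               # CFL-limited time step
u = u0.copy()
for _ in range(32):
    u = step(u, dx, dt, riemann=rusanov)  # swap in `upwind` to change solver
```

Because the update is in conservation form with periodic boundaries, the total "mass" of the solution is preserved to round-off regardless of which solver is plugged in, which is the property finite-volume codes like CRONOS are built around.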